In accordance with one aspect of the invention, a semiconductor processing method of treating a semiconductor wafer provides a wafer within a volume of liquid in a chamber. The wafer has some electrically conductive material formed thereover. The volume of liquid within the chamber with the wafer therein is established at a pressure of greater than 1 atmosphere and at a temperature of at least 200° C. and below and within 10% of the melting point of the electrically conductive material. In accordance with another aspect, the volume of liquid within the chamber with the wafer therein is established at a pressure of greater than 1 atmosphere. After establishing the pressure of greater than 1 atmosphere, the pressure of the volume of liquid is lowered to a point effective to vaporize said liquid, and the vapor is withdrawn from the chamber. In accordance with still another aspect, a semiconductor processing method of increasing planarity of an outer surface on a substrate comprises exposing the outer surface to a volume of liquid at a pressure of greater than about 200 atmospheres. The invention has particular utility in more completely filling contact openings with electrically conductive material, and in increasing substrate planarity. A typical preferred treatment is expected to last anywhere from seconds up to ten minutes or more.
What is claimed is:

1. A semiconductor processing method of filling a contact opening comprising: forming a dielectric layer over a substrate; forming a contact opening into the dielectric layer; depositing an electrically conductive material to within the contact opening; providing the substrate having the electrically conductive material received in the contact opening within a liquid bath; and establishing the liquid bath with the substrate therein at a pressure of greater than or equal to about 200 psi and at a temperature of at least 200° C. and below and within 10% of the melting point of the electrically conductive material.

2. The semiconductor processing method of claim 1 wherein the pressure is greater than or equal to about 100 atmospheres.

3. The semiconductor processing method of claim 1 wherein the pressure is greater than or equal to about 500 atmospheres.

4. The semiconductor processing method of claim 1 wherein the liquid is selected from the group consisting of ethylene glycol, molten indium, a mineral based hydraulic fluid, a perfluorinated ether and a perfluorinated alkane, or mixtures thereof.

5. The semiconductor processing method of claim 1 wherein the electrically conductive material comprises aluminum, copper or a mixture of aluminum and copper.

6. The semiconductor processing method of claim 1 wherein the electrically conductive material comprises titanium, gold, silver, solder or mixtures thereof.

7. A semiconductor processing method of filling a contact opening comprising: forming a dielectric layer over a substrate; forming a contact opening into the dielectric layer; depositing aluminum, copper or a mixture thereof to within the contact opening; providing the substrate having the aluminum, copper or a mixture thereof received in the contact opening within a liquid bath selected from the group consisting of ethylene glycol, molten indium, a mineral based hydraulic fluid, a perfluorinated ether and a perfluorinated alkane, or mixtures thereof; and establishing the liquid bath with the substrate therein at a pressure of greater than or equal to about 500 atmospheres and at a temperature of at least 200° C. and below and within 10% of the melting point of the aluminum, copper or mixture thereof.

8. The semiconductor processing method of claim 7 further comprising, after said establishing, lowering the pressure of the liquid bath to a point effective to vaporize said liquid bath away from the substrate.

9. The semiconductor processing method of claim 1 wherein the liquid bath comprises ethylene glycol.

10. The semiconductor processing method of claim 1 wherein the liquid bath comprises molten indium.

11. The semiconductor processing method of claim 1 wherein the liquid bath comprises a mineral based hydraulic fluid.

12. The semiconductor processing method of claim 1 wherein the liquid bath comprises a perfluorinated ether.

13. The semiconductor processing method of claim 1 wherein the liquid bath comprises a perfluorinated alkane.

14. The semiconductor processing method of claim 7 wherein the liquid bath comprises ethylene glycol.

15. The semiconductor processing method of claim 7 wherein the liquid bath comprises molten indium.

16. The semiconductor processing method of claim 7 wherein the liquid bath comprises a mineral based hydraulic fluid.

17. The semiconductor processing method of claim 7 wherein the liquid bath comprises a perfluorinated ether.

18. The semiconductor processing method of claim 7 wherein the liquid bath comprises a perfluorinated alkane.

19. The semiconductor processing method of claim 1 wherein the electrically conductive material comprises titanium.

20. The semiconductor processing method of claim 1 wherein the electrically conductive material comprises gold.

21. The semiconductor processing method of claim 1 wherein the electrically conductive material comprises silver.

22. The semiconductor processing method of claim 1 wherein the electrically conductive material comprises solder.

23. The semiconductor processing method of claim 1 further comprising, after said establishing, lowering the pressure of the liquid bath to a point effective to vaporize said liquid bath away from the substrate.
RELATED PATENT DATA

This patent resulted from a divisional application of U.S. patent application Ser. No. 09/146,116, which was filed on Sep. 2, 1998.

TECHNICAL FIELD

This invention relates to semiconductor processing methods of filling contact and other openings with electrically conductive material, and to semiconductor processing planarizing and other techniques.

BACKGROUND OF THE INVENTION

The invention primarily grew out of needs for making highly reliable, high density dynamic random access memory (DRAM) and other electrical contacts. Advanced semiconductor fabrication is employing increasing vertical circuit integration as designers continue to strive for circuit density maximization. Such integration typically includes multi-level metallization and interconnect schemes.

Electrical interconnect techniques typically require making electrical connection between metal or other conductive layers, or regions, which are present at different elevations within the substrate. Such interconnecting is typically conducted, in part, by etching a contact opening through insulating material to the lower elevation layer or conductive region. The significant increase in density of memory cells and vertical integration places very stringent requirements on contact fabrication technology. The increase in circuit density has resulted in narrower and deeper electrical contact openings between layers within the substrate, something commonly referred to as increasing aspect ratios. Such ratios currently range from 1.5 to 5 and are expected to increase. Adequate contact coverage by the electrically conductive materials ultimately placed within these deep and narrow contacts continues to challenge the designer in assuring adequate electrical connection between different elevation areas within the substrate. As contact openings become narrower and deeper, it becomes more difficult for the artisan to completely fill them.
An example of the problem is best understood with reference to the accompanying FIGS. 1 and 2. There illustrated is a semiconductor wafer fragment 10 comprised of a bulk substrate 12 and an overlying silicon dioxide layer 14, such as borophosphosilicate glass (BPSG). Bulk substrate 12 includes a dopant diffusion region 16 to which electrical connection is to be made. A contact opening 18 is provided through BPSG layer 14 to active area 16.

A thin layer 20 of titanium is deposited atop the wafer to within contact opening 18. Titanium layer 20 is provided to function as a silicide formation layer at the base of contact 18 for reducing resistance. An undesired oxide layer (not shown) also typically forms atop diffusion region 16. The deposited elemental titanium also functions to break up this undesired oxide and thereafter form a titanium silicide with the silicon of substrate 12 to reduce contact resistance between active area 16 and the subsequently deposited plug-filling tungsten. Additionally, titanium layer 20 functions as an adhesion/nucleation layer for the subsequently deposited conductive material, for example tungsten. Tungsten does not readily deposit over silicon dioxide and exposed silicon substrate, and the intervening titanium layer 20 facilitates deposition and adhesion of tungsten thereto.

Titanium layer 20 is typically deposited by sputter deposition, which undesirably results in formation of contact-projecting cusps 22. This results in a back or re-entrant angle 24 being formed relative to contact opening 18. A layer 26 of tungsten is subsequently deposited with the intent of completely filling the remaining volume of contact opening 18. Unfortunately, an undesired keyhole 28 typically forms, leaving a void within contact 18.

Referring to FIG. 2, layers 26 and 20 are subsequently etched back by dry etch or chemical-mechanical polishing to form a contact-filling plug 30. Undesirably, this typically opens up the upper end of keyhole 28.
This undesirably creates a thin void which is difficult to clean and rinse during processing. Also, in the final construction, the outer surface area of plug 30 is reduced due to the void created by keyhole 28. This counters the desired goal of maximizing electrical contact between plug 30 and a subsequent layer for ultimately making electrical connection with active area 16. Further, the etch back typically conducted to produce plug 30 undesirably over-etches titanium layer 20, forming edge "fangs" 32. Even where a desired overlying metal line and the plug-filling material constitute the same material deposited in a common step, undesired voids typically form within the contacts.

Prior art techniques have been developed which desirably cause some degree of reflow of the contact-filling materials and/or overlying metal conductive lines to facilitate filling of contacts and eliminating voids. One such prior art method subjects the substrate to an extremely high pressure gas phase treatment within a sealed vessel. An example gas phase pressure is around 700 atmospheres, and an example temperature is around 400° C. Such conditions apparently cause extrusion of the metal such that it reflows to a slight degree to completely fill contacts, yet without melting to a point of completely losing its previously patterned shape outside of the contacts. One industry process of doing so is referred to as a "force fill" process.

However, such extreme gas pressures and treatment vessels create considerable safety problems for all those working in the vicinity of such vessels. Specifically, if a gas leak or crack were to develop in the reactor vessel, the rapidly expanding gas flowing through such a crack could cause the reactor to completely blow apart much like a bomb, or alternately turn the reactor into a lethal projectile. It would be desirable to overcome these and other problems associated with formation of electrically conductive contact plugs.
Although the invention principally arose out of concerns specific to contact filling, the artisan will appreciate that the invention has other applicability in semiconductor processing, with the invention only being limited by the accompanying claims appropriately interpreted in accordance with the Doctrine of Equivalents.

SUMMARY OF INVENTION

In accordance with one aspect of the invention, a semiconductor processing method of treating a semiconductor wafer provides a wafer within a volume of liquid in a chamber. The wafer has some electrically conductive material formed thereover. The volume of liquid within the chamber with the wafer therein is established at a pressure of greater than 1 atmosphere and at a temperature of at least 200° C. and below and within 10% of the melting point of the electrically conductive material. In accordance with another aspect, the volume of liquid within the chamber with the wafer therein is established at a pressure of greater than 1 atmosphere. After establishing the pressure of greater than 1 atmosphere, the pressure of the volume of liquid is lowered to a point effective to vaporize said liquid, and the vapor is withdrawn from the chamber. In accordance with still another aspect, a semiconductor processing method of increasing planarity of an outer surface on a substrate comprises exposing the outer surface to a volume of liquid at a pressure of greater than about 200 atmospheres.

BRIEF DESCRIPTION OF THE DRAWINGS

Preferred embodiments of the invention are described below with reference to the following accompanying drawings.

FIG. 1 is a diagrammatic sectional view of a prior art semiconductor wafer fragment, and is discussed in the "Background" section above.

FIG. 2 is a view of the FIG. 1 wafer taken at a prior art processing step subsequent to that shown by FIG. 1.

FIG. 3 is a diagrammatic representation of wafer processing in accordance with the invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8).

In accordance with one aspect of the invention, the FIG. 1 or FIG. 2 wafer having a deposited electrically conductive material is exposed to a liquid at a pressure greater than 1 atmosphere and at a temperature of at least 200° C. and below and within 10% of the melting point of the electrically conductive material. Example and preferred electrically conductive materials for treatment include aluminum, copper, titanium, gold, silver, solder or mixtures/alloys thereof. A goal of such treatment is to extrude the electrically conductive material within the contact opening to assure more substantially complete filling thereof, and preferably to remove any voids therein. A preferred pressure for the liquid is greater than or equal to about 100 atmospheres, with greater than or equal to about 200 atmospheres being more preferred. Most preferred is a pressure greater than or equal to 500 atmospheres. A typical preferred treatment is expected to last anywhere from seconds up to ten minutes or more.

The liquid is ideally chosen to be a material which is substantially inert to the wafer material. The liquid is also preferably a material which can be reasonably easily cleaned from the wafer. Examples include ethylene glycol, molten indium, mineral based hydraulic fluids, perfluorinated ethers, and perfluorinated alkanes.

FIG. 3 diagrammatically illustrates an example process in accordance with the invention. Such includes a treatment chamber 50 having a wafer 52 positioned therein.
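The temperature window described above (at least 200° C., below the melting point of the conductive material, and within 10% of it) can be expressed as a simple check. The sketch below is illustrative only: the melting-point values are approximate handbook figures, and interpreting "within 10%" on the Celsius scale is an assumption, not something the text specifies.

```python
# Sketch of the claimed treatment-temperature window: at least 200 deg C,
# below the melting point of the conductive material, and within 10% of it.
# Melting points are approximate; the Celsius-scale reading of "within 10%"
# is an assumption made here for illustration.

MELTING_POINTS_C = {  # approximate bulk melting points, deg C
    "aluminum": 660.3,
    "copper": 1084.6,
    "gold": 1064.2,
    "silver": 961.8,
    "titanium": 1668.0,
}

def in_treatment_window(temp_c: float, material: str) -> bool:
    """True if temp_c satisfies all three conditions of the claimed window."""
    tm = MELTING_POINTS_C[material]
    return temp_c >= 200.0 and 0.9 * tm <= temp_c < tm
```

Under this reading, the window for aluminum runs from roughly 594° C. up to, but not including, about 660° C.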
A suitable liquid inlet 54 is provided for filling chamber 50 with liquid, and for forcing suitable additional liquid thereto to effectively provide the liquid under the desired pressure. An outlet 56 is provided for pumping or otherwise evacuating the liquid from the chamber after treatment. One preferred manner of evacuating the liquid after the high pressure exposure treatment is to lower the pressure of the liquid within the chamber to a point effective to essentially vaporize the liquid away from the substrate and chamber. Temperature of the liquid could also be lowered in conjunction with the pressure lowering. As an example for ethylene glycol, if temperature were established at 150° C. after treatment, pressure could be lowered to the ethylene glycol vapor pressure at that temperature of 20.2 kPa (0.2 atm) to achieve vaporization. If temperature were established at 25° C. after treatment, pressure could be lowered to the ethylene glycol vapor pressure at that temperature of 0.010 kPa (9.87×10⁻⁵ atm, or 75 mTorr) to achieve vaporization.

The invention is understood to have several significant potential advantages over the prior art high pressure gas phase treatment. For example, liquid surface tension exists between a liquid and a solid surface and is essentially nonexistent between a gas and a solid surface. Accordingly, liquid/solid systems inherently seek to minimize surface area such that surface tension is reduced. Whether attempting void-filling reflow with high pressure gas or now with high pressure liquid in accordance with the invention, the exposed surface area of the treated material will be less at the conclusion of the subject treatment as compared to before the treatment.
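The equivalent pressures quoted above can be cross-checked with straightforward unit conversions (1 atm = 101.325 kPa; 760 Torr = 1 atm). A minimal sketch:

```python
# Cross-check of the ethylene glycol vapor-pressure figures quoted above,
# using standard conversion factors (1 atm = 101.325 kPa = 760 Torr).
KPA_PER_ATM = 101.325
TORR_PER_ATM = 760.0

def kpa_to_atm(p_kpa: float) -> float:
    return p_kpa / KPA_PER_ATM

def kpa_to_mtorr(p_kpa: float) -> float:
    return p_kpa / KPA_PER_ATM * TORR_PER_ATM * 1000.0

# 20.2 kPa (150 deg C) -> about 0.2 atm
# 0.010 kPa (25 deg C) -> about 9.87e-5 atm, i.e. about 75 mTorr
```

Both figures in the text check out: 20.2 kPa is 0.199 atm, and 0.010 kPa is 9.87×10⁻⁵ atm, or 75 mTorr.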
Therefore, the surface tension minimizing driving features inherent in a liquid/solid system will facilitate or drive greater desired planarizing or reflow in a liquid system than in a gas system.

Also, establishing the temperature with elevated pressure to within 10% of the melting point of the electrically conductive material facilitates reflow and planarization considerably more so than at lower temperatures.

Further, the subject liquid system is considerably safer than the high pressure gas phase system. This is principally due to the essential non-compressibility of liquids as compared to gases. There is no practical risk of uncontrollable or violent liquid expansion upon inadvertent release from the reactor with a liquid system, as there has been no fundamental volume compression of the subject molecules in the first place. Accordingly, an inadvertent leak in a substantially isostatic liquid system, while causing initial spraying of liquid, will not cause the explosive expansion of the prior art high pressure gas systems.

Further, a high pressure liquid treatment system in accordance with the invention is expected to put fewer particles or other contaminants into the system. For example, with respect to the prior art high pressure gas treatment, gas is continuously fed into the reactor system until the desired pressure is achieved. This is not required in the liquid treatment system: the chamber is initially filled with liquid, and then very little additional liquid must be added to raise the pressure well above one atmosphere, due to the non-compressible nature of liquids as compared to gases.
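The safety argument above can be made quantitative with a rough order-of-magnitude estimate: the releasable energy stored in an isothermally compressed ideal gas versus the elastic energy stored in a nearly incompressible liquid at the same pressure. The 700 atm figure comes from the prior art example in the text; the 10 L vessel volume and the liquid compressibility are assumptions made here purely for illustration.

```python
# Rough comparison of stored energy in a pressurized vessel: an ideal gas
# compressed isothermally to 700 atm versus a liquid elastically raised to
# the same pressure. Vessel volume (10 L) and liquid compressibility
# (~5e-10 1/Pa, typical of common liquids) are illustrative assumptions.
import math

P_ATM = 101_325.0          # Pa per atmosphere
P = 700 * P_ATM            # example treatment pressure from the text, Pa
V = 0.010                  # assumed vessel volume, m^3 (10 L)
KAPPA = 5.0e-10            # assumed liquid compressibility, 1/Pa

# Isothermal compression work for an ideal gas ending at volume V:
gas_energy_j = P * V * math.log(P / P_ATM)

# Elastic energy of a liquid at pressure P: E = kappa * P^2 * V / 2
liquid_energy_j = 0.5 * KAPPA * P**2 * V
```

With these assumed numbers the gas stores a few megajoules while the liquid stores on the order of ten kilojoules, a factor of several hundred, which is the quantitative content of the "bomb versus spray" contrast drawn above.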
Further, these features are expected to facilitate achieving a higher throughput of wafers for treatment than with the prior art high pressure gas treatments.

Although the invention was motivated in the context of contact filling associated with aluminum or other lines, the artisan will appreciate other utility of the invention. For example, the above described process can be utilized to increase the planarity of an outer surface on a substrate (e.g., an insulating oxide or other layer, or other conductive layer, or combination thereof, etc.). Pressures lower than 200 atmospheres might also be usable in some systems.

In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents.
A system and method for displaying an interactive screen, such as an end-user license agreement or verification form, on the graphic display (13) of a wireless device (12) when the wireless device connects to a network server (26) on a wireless network (14) and attempts to access or download software applications and data. The user of the wireless device (12) must then affirmatively interact with the interactive screen in order to access or download a software application or data from the network server. The interactive screen can be transmitted from the network server (26) from which the wireless device seeks to access or download an application or data, or can be transmitted from a separate server to the wireless device (12). Records of the wireless device-server interactions can be stored on a network server or other data stores on the wireless network.
CLAIMS WHAT IS CLAIMED IS: [cl] 1. A system for displaying an interactive screen on the graphic display of a wireless device communicating with network server prior to download of data to the wireless device, the system comprising: one or more user-interactive wireless devices, each wireless device including a computer platform and a graphic display thereon, and each wireless device in selective communication to a wireless network; and one or more network servers in selective communication to the wireless network, and each network server selectively in communication with the one or more wireless devices and selectively downloading applications and data to wireless devices thereto, wherein, upon a wireless device attempting to download data from a network server across the wireless network, the system transmitting an interactive screen to the computer platform of the wireless device across the wireless network prior to downloading the requested data to the wireless device, and the wireless device displaying the interactive screen on the graphic display thereof. [c2] 2. The system of claim 1, wherein, upon the user of the wireless device interacting with the interactive screen displayed on the graphic screen of the wireless device, the wireless device sending a signal to the network server indicating the interaction, and the network server downloading the requested data to the computer platform of the wireless device. [c3] 3. The system of claim 1, wherein upon a wireless device attempting to download a software application from an network server across the wireless network, the system transmitting an interactive screen to the computer platform of the wireless device across the wireless network prior to downloading the requested software application to the wireless device, and the wireless device displaying the interactive screen on the graphic display thereof. <Desc/Clms Page number 18> [c4] 4. 
The system of claim 1, wherein the interactive screen is transmitted to the wireless device from a first network server that the wireless device attempted to download data from. [c5] 5. The system of claim 4, wherein, upon a wireless device attempting to download data from a first network server across the wireless network, the interactive screen is transmitted to the wireless device from a second network server across the wireless network. [c6] 6. The system of claim 5, wherein, upon the user of the wireless device interacting with the interactive screen displayed on the graphic screen of the wireless device, the wireless device sending a signal to the second network server indicating the interaction, the second network server sending a signal to the first network server indicating the interaction at the wireless device, and the first network server downloading the requested data to the computer platform of the wireless device. [c7] 7. The system of claim 5, wherein the interactive screen allows user input of data at the wireless device, and upon the user of the wireless device inputting data on the interactive screen displayed on the graphic screen of the wireless device, the wireless device sending the input data to the second network server, the second network server sending a signal to the first network server indicating the input of data at the wireless device, and the first network server downloading the requested data to the computer platform of the wireless device. [c8] 8. The system of claim 1, wherein the interactive screen allows user input of data at the wireless device, and upon the user of the wireless device inputting data on the interactive screen displayed on the graphic screen of the wireless device, the wireless device sending the inputted data to the network server, and the network server processing the input data and selectively downloading the requested data to the computer platform of the wireless device. [c9] 9. 
A system for displaying an interactive screen on the graphic display of a wireless device communicating with a network server prior to allowing the wireless <Desc/Clms Page number 19> device to access data and applications resident on the network server, the system comprising : one or more user-interactive wireless devices, each wireless device including a computer platform and a graphic display thereon, and each wireless device in selective communication to a wireless network; and one or more network servers in selective communication to the wireless network, and each network server selectively in communication with the one or more wireless devices and providing access data and applications resident thereon to wireless devices, wherein, upon a wireless device attempting to access data or applications on a network server across the wireless network, the system transmitting a interactive screen to the computer platform of the wireless device across the wireless network prior to granting access to the wireless device, and the wireless device displaying the interactive screen on the graphic display thereof. [c10] 10. 
A system for displaying a user-interactive screen across a wireless network, comprising: a wireless communication means for selectively communicating to a wireless network, the wireless communication means being user-interactive and including a computer platform and a graphic display thereon; and a download means for selectively downloading applications and data to the wireless communication means across the wireless network, wherein, upon the wireless communication means attempting to download data from the download means across the wireless network, the system transmitting a interactive screen to the computer platform of the wireless communication means across the wireless network prior to downloading the requested data to the wireless communication means, and the wireless communication means displaying the interactive screen on the graphic display thereof. [cll] 11. A method for displaying an interactive screen on the graphic display of a user-interactive wireless devices including a computer platform, the wireless device selectively communicating with a network server and downloading applications and data therefrom, the method comprising: <Desc/Clms Page number 20> attempting to download data to the wireless device from the network server across the wireless network; transmitting a interactive screen to the computer platform of the wireless device across the wireless network prior to downloading the requested data to the wireless device; and displaying the interactive screen on the graphic display of the wireless device. [c12] 12. The method of claim 11, wherein attempting to download data to the wireless device from the network server across the wireless network includes attempting to download a software application to the wireless device from a network server across the wireless network. [c13] 13. 
The method of claim 11, further comprising: interacting with the interactive screen at the wireless device; sending a signal to the network server from the wireless device indicating the interaction; and downloading the requested data from the network server to the computer platform of the wireless device. [c14] 14. The method of claim 11, wherein transmitting an interactive screen to the wireless device across the wireless network includes transmitting an interactive screen from the network server to the wireless device across the wireless network. [c15] 15. The method of claim 11, wherein the interactive screen allows user input of data at the wireless device, and further comprising: inputting data on the interactive screen displayed on the graphic screen of the wireless device; sending the inputted data from the wireless device to the network server; processing the input data at the network server; and selectively downloading the requested data from the network server to the computer platform of the wireless device. [c 16] 16. The method of claim 11, wherein: <Desc/Clms Page number 21> attempting to download data to the wireless device from the network server across the wireless network includes attempting to download data to the wireless device from a first network server across the wireless network; and transmitting an interactive screen to the wireless device across the wireless network includes transmitting an interactive screen to the wireless device from a second network server across the wireless network. [c17] 17. 
The method of claim 16, further comprising: interacting with the interactive screen displayed on the graphic screen of the wireless device; sending a signal from the wireless device to the second network server indicating the interaction; sending a signal from the second server to the first network server indicating the interaction at the wireless device; and downloading the requested data from the first network server to the computer platform of the wireless device. [c18] 18. The method of claim 16, wherein the interactive screen allows user input of data at the wireless device, and further comprising: inputting data on the interactive screen displayed on the graphic screen of the wireless device; sending the input data from the wireless device to the second network server; sending a signal from the second network server to the first network server indicating the input of data at the wireless device; and downloading the requested data from the first network server to the computer platform of the wireless device. [cl9] 19. A method for displaying an interactive screen on the graphic display of a user-interactive wireless devices including a computer platform, the wireless device selectively communicating with a network server and downloading applications and data resident on the network server, the method comprising the steps of: a download attempt step for attempting to download data to the wireless device from a network server across the wireless network; <Desc/Clms Page number 22> an interactive screen transmission step for transmitting a interactive screen to the computer platform of the wireless device across the wireless network prior to downloading the requested data to the wireless device; and a interactive screen display step for displaying the interactive screen on the graphic display of the wireless device. [c20] 20. 
A method for displaying an interactive screen on the graphic display of a user-interactive wireless devices including a computer platform, the wireless device selectively communicating with a network server and accessing applications and data resident on the network server, the method comprising: attempting to access from a wireless device applications and data resident on a network server across a wireless network; transmitting a interactive screen to the computer platform of the wireless device across the wireless network prior to allowing the wireless device the requested access; and displaying the interactive screen on the graphic display of the wireless device. [c21] 21. A wireless device including a computer platform and a graphic display thereon, comprising: the wireless device in selective communication to one or more network servers across a wireless network, each network server selectively downloading data and applications to the wireless device, and upon the wireless device attempting to download data from an application download server across the wireless network, the computer platform of the wireless device receiving an interactive screen transmitted across the wireless network, and the wireless device displaying the transmitted interactive screen on the graphic display thereof. [c22] 22. The wireless device of claim 21, wherein the interactive screen allows user input of data at the wireless device, and the wireless device allowing the user to input data on the interactive screen displayed on the graphic screen of the wireless device, and the wireless device sending the inputted data to the network server. <Desc/Clms Page number 23> [c23] 23. 
In a computer readable medium, a program that, when executed, directs a wireless device having a computer platform and a graphic display, the wireless device selectively downloading applications and data from a network server across a wireless network, to perform the method comprising: attempting to download data from a network server across the wireless network; receiving a transmitted interactive screen at the computer platform of the wireless device, the interactive screen transmitted in response to the data download attempt; and displaying the transmitted interactive screen on the graphic display of the wireless device. [c24] 24. The program of claim 23, further directing the wireless device to perform the method comprising: allowing the user to interact with the interactive screen at the wireless device; and sending a signal to the network server from the wireless device indicating the user interaction. [c25] 25. The program of claim 23, further directing the wireless device to perform the method comprising: allowing the user to input data with the interactive screen at the wireless device; and sending the input data to the network server.
SYSTEM AND METHOD FOR PROVIDING AN INTERACTIVE SCREEN ON A WIRELESS DEVICE INTERACTING WITH A SERVER BACKGROUND OF THE INVENTION I. Field of the Invention [0001] The present invention generally relates to wireless networks and computer communications across wireless networks. More particularly, the invention relates to the provision of an interactive screen on the display of a wireless device when the wireless device attempts to access or download a software application or data from a network server, wherein the user of the wireless device must interact with the interactive screen in order to access or download the requested application or data. II. Description of the Related Art [0002] Wireless devices, such as cellular telephones, communicate packets including voice and data over a wireless network. Cellular telephones themselves are being manufactured with increased computing capabilities and are becoming tantamount to personal computers and hand-held personal digital assistants ("PDAs"). Some wireless devices, such as select cellular telephones, may have an installed application programming computer platform that allows software developers to create software applications that operate on the wireless device. On the Internet and other open networks, it is known to provide a user of a computer an interactive form when the user seeks to download or access software applications or data, such as an end-user license agreement (EULA), release, or verification form as to age, location or non-commercial status, prior to letting the user download the application. The user then must interact with the form, which sends a confirming signal back to the application download server, and the user is then given access to download the desired application.
However, the Internet and most LAN or WAN networks are wire-based or otherwise have inexpensive data connectivity such that bandwidth is readily available to provide interactivity between the browsing computer and the application download server. Thus, the transmission of the end-user license agreement or other verification forms and the return of the confirming data does not take up significant network resources. Conversely, in a wireless network environment such as cellular telecommunications, any network connection for data transfer is expensive, and the use of a user-interactive form that must traverse the network prior to application download has traditionally been prohibitively expensive. Consequently, it is desirable to provide an interactive mechanism to a wireless device with which the user of the wireless device must interact prior to accessing data over a network. Such a mechanism needs to account for the limited bandwidth and other characteristics associated with the wireless network. SUMMARY OF THE INVENTION [0005] One aspect of the present invention includes one or more wireless devices where each wireless device has a computer platform and a graphic display, and the graphic display is operated by the resident driver of the computer platform, which can be hardware, firmware, or software. Examples of the wireless device include cellular telephones, text pagers, personal digital assistants (PDAs), or other computer platforms with a wireless link to selectively communicate with a wireless network. The system also includes one or more network servers, such as specific application download servers, that are on the wireless network, and each network server is selectively in communication with the one or more wireless devices and selectively downloads data thereto, such as software applications, graphics, and text.
[0006] If an interactive screen is sent to the wireless device from the second network server that requires the user to input data at the wireless device, then upon the user of the wireless device inputting data on the interactive screen displayed on the graphic screen of the wireless device, the wireless device sends the input data to the second network server, the second network server sends a signal to the first network server indicating the input of data at the wireless device, and the first network server downloads the requested data to the computer platform of the wireless device. Any user input data can be processed by the receiving network server to determine if the requested download or access is permitted, such as an age verification or other consumer information. [0007] The present invention also provides a method for displaying an interactive screen on the graphic display of a user-interactive wireless device, including attempting to download or access data or applications on the network server from a wireless device across the wireless network, transmitting an interactive screen to the computer platform of the wireless device across the wireless network prior to downloading or accessing the requested data, and displaying the interactive screen on the graphic display of the wireless device. Attempting to download data to the wireless device from the network server across the wireless network can include attempting to download a specific software application to the wireless device, or can include downloading simple data. The method can further include interacting with the interactive screen at the wireless device, sending a signal to the network server from the wireless device indicating the interaction, and downloading or accessing the requested data on the network server.
Transmitting an interactive screen to the wireless device across the wireless network can include transmitting an interactive screen from a first network server that the wireless device requested the application or data from, or can include transmitting an interactive screen from a second network server to the wireless device across the wireless network. [0009] If the system is embodied with an interactive screen that allows user input of data at the wireless device, the method further includes inputting data on the interactive screen displayed on the graphic screen of the wireless device, sending the input data from the wireless device to the network server, processing the input data at the network server, and selectively downloading or accessing the requested data on the network server. And if the system has a second network server transmitting the interactive screen to the wireless device, the method further includes interacting with the interactive screen displayed on the graphic screen of the wireless device, sending a signal from the wireless device to the second network server indicating the interaction, sending a signal from the second network server to the first network server indicating the interaction at the wireless device, and downloading or accessing the requested data on the first network server. [0010] An embodiment also includes a wireless device that can perform the above functions in providing an interactive screen to the wireless device, and interacting with the network server(s) to access or download the applications or data made available to the wireless device. Because the inventive method is executable on the computer platform of the wireless device, the invention includes a program, in a computer readable medium, that directs a wireless device having a computer platform and a graphic display to perform the steps of the method.
[0011] Objects, advantages, and features of the present invention will become apparent after review of the hereinafter set forth Brief Description of the Drawings, Detailed Description of the Invention, and the Claims. BRIEF DESCRIPTION OF THE DRAWINGS [0012] Fig. 1 is a representative diagram of a wireless network and the computer hardware and wireless devices that can be used within the system to provide an interactive screen to the wireless devices. [0013] Fig. 2 is a block diagram of the hardware components of the wireless network providing communication between different wireless devices, an application download server, a separate interactive screen server, and their respective databases. [0014] Fig. 3A is a perspective view of the graphic display of a cellular telephone displaying a EULA to the user upon the user seeking to download an application. [0015] Fig. 3B is a perspective view of the graphic display of a cellular telephone displaying an age-verification form to the user upon the end-user seeking to download an age-restricted application, where the user is requested to enter their age on the form. [0016] Fig. 4 is a flowchart illustrating the process executing on the wireless device computer platform to attempt to download an application from a network server, receive and display an interactive screen to the user, transmit the interaction data to the network server, and download the application. [0017] Fig. 5 is a flowchart illustrating the process executing on the application download server upon receiving a download request from the wireless device in Fig. 4, transmitting an interactive screen to the wireless device, and awaiting the user to properly interact with the interactive screen before allowing the wireless device to download the requested application. DETAILED DESCRIPTION OF THE INVENTION Introduction [0018] Systems and methods are anticipated that provide for the downloading of software applications to a wireless device.
Software applications can come pre-loaded at the time the wireless device is manufactured, or the user may later request that additional programs be downloaded over cellular telecommunication carrier networks, where the programs are executable on the wireless device. As a result, users of wireless devices can customize their wireless devices with programs, such as games, printed media, stock updates, news, or any other type of information or program available for download from application download servers through the wireless network. In one scenario, if the user of the wireless device desires to download and use a software application using the wireless network, the user will typically either call a service provider or contact the service provider through other means, such as through Internet access, and the service provider will either transmit the application to the wireless device across the wireless network or allow the user to access a network site where the application is downloadable or accessible. To connect to the application download server, the wireless device bridges a communication connection to the wireless network, such as a cellular network, and then attempts to contact an application download server where the desired software application is resident. Once the wireless device contacts the application download server, an initial connection is made and the application download server determines what applications are available to the wireless device and sends the appropriate information, such as a menu, for display on the wireless device so the user can learn of the available applications. Once access is provided to the downloadable applications, the user of the wireless device can download any of the available applications.
The present invention provides systems and methods for providing an interactive screen on the graphic display of a wireless device when the wireless device attempts to download or access data or applications on a network server, such as an application download server, across a wireless network. The interactive screen allows the limited access of individual applications and data on the network server. The interactive screen can be transmitted to the wireless device from a first network server that the wireless device computer platform is attempting to navigate, or the interactive screen can be transmitted to the wireless device from a second server on the wireless network. The interactive screen can include graphics, text, multimedia components, data entry fields, or hyperlinks, all of which are displayable and interactive on the graphic display of the wireless device, and the system requires the end-user to properly interact with the screen in order to download or access the requested applications or data. [0021] Examples of the interactive screen are EULAs, which require the end-user to agree to certain terms before being allowed to download a software application, or a verification form that requires the end-user to input data in order to have the requested access to the applications or data on the network server. Once the user of the wireless device interacts with the interactive screen displayed on the graphic display of the wireless device in the proper predefined manner, the wireless device sends a signal to the first or second network server indicating the proper interaction, and the first network server will then allow the access or download of the requested application or data to the computer platform of the wireless device.
When a second network server has provided the interactive screen to the wireless device, the second network server can also receive the interaction data from the wireless device and relay the interaction data to the first network server, whereby the first network server then allows the download of the requested data to the computer platform of the wireless device. [0022] It is therefore one object of the present inventive system and method to provide an interactive screen that can be displayed to the user of a wireless device seeking to download or access a specific application and data on a network server, such as an application download server. The interactive screen gives the operator of the network server the ability to selectively control the access the user of the wireless device has to the network server resident applications, and to require users to execute EULAs or input data before being allowed to access the applications and data. With the use of a separate network server that can provide the interactive screen to the wireless device and store the interaction records, the system can conserve bandwidth and resources of the wireless network while controlling access to the applications and data of other network servers. The present invention thus provides an advantage in that it gives an operator of a network server the ability to have wireless device users execute agreements or verify facts prior to granting the user the ability to download or access applications and data resident on the application download or network server, without significant use of the bandwidth and resources of the wireless network and network servers. Exemplary Embodiments of the Present Invention [0023] With reference to the figures, in which like numerals represent like elements throughout, Fig.
1 illustrates an embodiment of a system 10 for providing subscribed software applications to one or more wireless devices, such as cellular telephone 12, in communication across a wireless network 14 with at least one network server, such as application download server 16, that selectively downloads or provides access to software applications or other data to the wireless devices across a wireless communication portal or other data access to the wireless network 14. As shown here, the wireless device can be a cellular telephone 12 with a graphics display 13, a personal digital assistant 18 with PDA screen 19, a pager 20 with a graphics display 21, which is shown here as a two-way text pager, or even a separate computer platform 22 that has a wireless communication portal and a display 23, and may otherwise have a wired connection 24 to a network or the Internet. The system 10 can thus be performed on any form of remote computer module including a wireless communication portal, including without limitation wireless modems, PCMCIA cards, access terminals, personal computers, telephones without a display or keypad, or any combination or sub-combination thereof. The application download server 16 is shown here on a local server-side network 26 with other computer elements in communication with the wireless network 14, such as a stored applications database 28 that contains software applications and data that are accessible and downloadable to the wireless devices 12, 18, 20, 22. There is also shown a second network server, an interactive screen server 32, with a stored interaction records database 30.
In such an embodiment, the interactive screen server 32 transmits the interactive screen to the wireless devices 12, 18, 20, 22 as described below, and the stored interaction records database 30, which can be resident on the interactive screen server 32, stores the individual records for the interactions with the wireless devices to which the interactive screen was provided, the data input by the end-user, and any other interaction-related data. Through the separate interactive screen server 32 and stored interaction records database 30, many other network servers, such as application download server 16, can have the system 10 provide the interactive screens to control access to network server resident applications and data without significant use of the network server resources. However, the interactive screen server 32 and stored interaction records database 30 are not necessary, as the server-side functions can be performed on one server, such as application download server 16. Further, a server-side computer platform can provide separate services and processes to the wireless devices 12, 18, 20, 22 across the wireless network 14. Fig. 2 is a block diagram that more fully illustrates the components of the wireless network 14 and the interrelation of the elements of the system 10. The wireless network 14 is merely exemplary and can include any system whereby remote modules, such as wireless devices 12, 18, 20, 22, communicate over-the-air between and among each other and/or between and among components of a wireless network 14, including, without limitation, wireless network carriers and/or servers, as well as a non-wireless network alone or in combination with a wireless network.
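The role of the stored interaction records database 30 can be illustrated with a short sketch. This is not code from the patent; the class and field names (`InteractionRecord`, `InteractionStore`, `device_id`, `screen_id`) are assumptions chosen for illustration, and a real database 30 would of course be persistent rather than in-memory.

```python
# Illustrative sketch (not from the patent) of the kind of record the
# stored interaction records database 30 might keep: one entry per
# wireless-device interaction, so a one-time EULA need not be re-sent.
from dataclasses import dataclass, field
import time


@dataclass
class InteractionRecord:
    device_id: str          # identifies the wireless device 12, 18, 20, 22
    screen_id: str          # which interactive screen (e.g. a EULA) was shown
    user_input: str         # any data the end-user entered on the screen
    timestamp: float = field(default_factory=time.time)


class InteractionStore:
    """In-memory stand-in for stored interaction records database 30."""

    def __init__(self):
        self._records = []

    def record(self, rec: InteractionRecord) -> None:
        self._records.append(rec)

    def has_executed(self, device_id: str, screen_id: str) -> bool:
        # Lets a later request skip the interactive screen if this
        # device already executed it (the EULA-reuse case below).
        return any(r.device_id == device_id and r.screen_id == screen_id
                   for r in self._records)
```

A lookup of this kind is what would allow a network server to decide that a wireless device which already executed a one-time EULA may download without being shown the screen again.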
The application download server 16 and the stored applications database 28, interactive screen server 32, and stored interaction records database 30, will be present on the cellular data network with any other components that are needed to provide cellular telecommunication services. The application download server 16, interactive screen server 32, and/or other screen servers communicate with a carrier network 40, through a data link, such as the Internet, a secure LAN, WAN, or other network. The carrier network 40 controls messages (generally being data packets) sent to a messaging service controller ("MSC") 42. The carrier network 40 communicates with the MSC 42 by a network, the Internet and/or POTS ("plain ordinary telephone system"). Typically, the network or Internet connection between the carrier network 40 and the MSC 42 transfers data, and the POTS transfers voice information. The MSC 42 is connected to multiple base stations ("BTS") 44. In a similar manner to the carrier network, the MSC 42 is typically connected to the BTS 44 by both the network and/or Internet for data transfer and POTS for voice information. The BTS 44 ultimately broadcasts messages wirelessly to the wireless devices, such as cellular telephone 12, by short messaging service ("SMS"), or other over-the-air methods known in the art. The wireless device, such as cellular telephone 12, has a computer platform 50 that can receive and execute software applications and display data transmitted from the application download server 16. The computer platform 50 also allows the wireless device to interact with data and applications resident on network servers. The computer platform 50 may include, among other components, a display driver 52 that drives the graphics display 13 and renders images on the graphics display 13 based upon graphics data received at the computer platform 50. 
The computer platform 50 also includes an application-specific integrated circuit ("ASIC") 54, or other processor, microprocessor, logic circuit, or other data processing device. The ASIC 54 or other processor executes the application programming interface ("API") layer 56 that interfaces with any resident programs in the memory 58 of the wireless device. The memory can be comprised of read-only or random-access memory (RAM and ROM), EPROM, EEPROM, flash cards, or any memory common to computer platforms. The computer platform 50 also includes a local database 60 that can hold the software applications not actively used in memory 58, such as the software applications downloaded from the application download server 16. The local database 60 is typically comprised of one or more flash memory cells, but can be any secondary or tertiary storage device as known in the art, such as magnetic media, EPROM, EEPROM, optical media, tape, or soft or hard disk. The wireless device, such as cellular telephone 12, can access and download many types of applications, such as games and stock monitors, or simply data such as news and sports-related data. The downloaded data can be immediately displayed on the display or stored in the local database 60 when not in use. The software applications can be treated as a regular software application resident on the wireless device 12, 18, 20, 22, and the user of the wireless device can selectively upload stored resident applications from the local database 60 to memory 58 for execution on the API 56. The end-user of the wireless device 12, 18, 20, 22 can also selectively delete a software application from the local database 60. As shown in Figs.
3A and 3B, the system 10 displays an interactive screen 15, 17 on the graphic display 13 of a wireless device, such as cellular telephone 12, upon the wireless device attempting to access or download data from a network server, such as application download server 16, across the wireless network 14. The system 10 transmits an interactive screen to the computer platform 50 of the wireless device, either from the server containing the requested application or data or from a second server, such as interactive screen server 32. The interactive screen 15, 17 will appear to the user on the graphic display prior to the network server downloading or allowing access to the requested data or application. The operator of the network server can thus control the access of the wireless device 12, 18, 20, 22 to individual applications and data through use of the interactive screen 15, 17. As an example, in Fig. 3A, an end-user license agreement (EULA) interactive screen 15 is displayed to the end-user on the graphic display 13 when the user seeks to download a software application from application download server 16. The user must indicate agreement with the EULA in order to download the application, and can interact with the EULA 15 through the API 56 of the computer platform. Typical APIs provide a movable cursor on the display that can activate icons as is well known in the art, and other graphic-user interfaces can be used, such as a touch screen and stylus that is common in PDA interfaces. Whatever the end-user inputs in response to the EULA is signaled back to the application download server 16, either directly from the cellular telephone 12 or indirectly from a signal sent by interactive screen server 32 indicating the user interaction. [0029] As another example of an interactive screen, Fig. 3B illustrates an age verification form 17 that requires the user to input their age prior to being granted access to age-restricted material on the network server.
The user thus enters his/her age in response to the screen, and the cellular telephone transmits the input data to an appropriate network server, such as application download server 16 or interactive screen server 32. Some processing can occur either at the server where the data is requested or at the interactive screen server 32 to determine if the input age meets the criteria. If the interactive screen server 32 processes the data, it can transmit an affirmative or negative signal to the requested-data server to authorize the access of the wireless device. [0030] While the interactive screen 15, 17 can be transmitted to the wireless device 12, 18, 20, 22 from the network server that the wireless device attempted to access or download data from, one embodiment includes the use of another network server, such as interactive screen server 32 and an associated stored interaction records database 30, to conserve the resources on pure application servers such as application download server 16. Thus, upon a wireless device 12, 18, 20, 22 attempting to download or access data or an application on a first network server across the wireless network 14, such as application download server 16, the interactive screen is transmitted to the wireless device 12, 18, 20, 22 from a second network server, such as interactive screen server 32, across the wireless network 14.
In such an embodiment, once the user of the wireless device 12, 18, 20, 22 interacts with the interactive screen displayed on the graphic display 13, 19, 21, 23 of the wireless device 12, 18, 20, 22, the wireless device sends the signal indicating the interaction to the second network server (interactive screen server 32), and the second network server sends a signal to the first network server (application download server 16) indicating the interaction at the wireless device, such that the first network server is now allowed to provide access or download the requested data or application to the computer platform 50 of the wireless device 12, 18, 20, 22. If the interactive screen requires input of data, such as verification form 17 in Fig. 3B, once the end-user of the wireless device 12, 18, 20, 22 inputs data on the interactive screen, the wireless device 12, 18, 20, 22 sends the input data to the second network server (interactive screen server 32), the second network server again sends a signal to the first network server (application download server 16) indicating the input of data at the wireless device 12, 18, 20, 22, and the first network server can then provide access to or download the requested data or application. The use of the interactive screen server 32 as a second network server allows faster provision of the interactive screen and storage of interaction records, especially with a stored interaction records database 30, than would be possible with all functionality occurring on a single network server, such as application download server 16. The increase in system 10 speed translates to decreased data transfer time across the wireless network 14, which conserves the expensive bandwidth of the wireless network 14. The interactive screen can be provided to the wireless device 12, 18, 20, 22 at any interval during the wireless device-network server interaction.
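The second-server relay just described can be sketched as follows. This is an illustrative sketch only, not an implementation from the patent: the class name, the `notify_download_server` callback, the screen identifiers, and the age-18 criterion are all assumptions made for the example.

```python
# Hedged sketch of the relay role of interactive screen server 32:
# it receives the user's interaction from the wireless device,
# processes it, and forwards an affirmative or negative signal to the
# first network server (application download server 16).
class InteractiveScreenServer:
    def __init__(self, notify_download_server):
        # notify_download_server: callable(device_id, approved) that
        # stands in for the signal sent to application download server 16.
        self._notify = notify_download_server

    def on_interaction(self, device_id: str, screen_id: str,
                       user_input: str) -> bool:
        # Example processing: an age-verification screen (Fig. 3B)
        # approves only if the entered age meets an assumed criterion.
        if screen_id == "age-check":
            approved = user_input.isdigit() and int(user_input) >= 18
        else:
            # e.g. a EULA screen (Fig. 3A) approves on agreement.
            approved = user_input == "AGREE"
        # Relay the affirmative or negative signal to the first server.
        self._notify(device_id, approved)
        return approved
```

The point of the design is visible in the sketch: the application download server never handles the screen or the user input itself; it only receives the final approve/deny signal, which is what conserves its resources.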
The system 10 can transmit the interactive screen to the wireless device to block access to a specific application, a dataset, or even a file level on the network server (data may be held on a network server in a file structure such as in Windows, UNIX, and LINUX). Further, the transmission of the interactive screen can occur at any time an application or data is sought to be accessed or downloaded by a wireless device, or, if only a one-time EULA is necessary, a record of the execution of the EULA by the wireless device 12, 18, 20, 22 can be stored, for example on stored interaction records database 30. A comparison can be made by a network server, such as the interactive screen server 32, when a wireless device seeks to download an application or data, and if the wireless device has an executed EULA stored, then the interactive screen provision is unnecessary and the system 10 can let the download proceed. [0032] In one exemplary embodiment, the process executed on the computer platform 50 of the wireless device 12, 18, 20, 22 is shown in the flowchart of Fig. 4. The wireless device, such as cellular telephone 12, bridges a connection to the wireless network 14, such as a cellular network, and then connects to a network server, such as application download server 16, as shown at step 72. At some point while connected to the application download server 16, the wireless device will request to download an application, as shown at step 74, or will seek to otherwise access data that has limited access. Thus, after the request is made at step 74, a decision is made as to whether an interactive screen 15, 17 has been received at the computer platform 50 of the wireless device, as shown at decision 76. If an interactive screen 15, 17 has not been received, then the process proceeds to determine if the requested application has been received at decision 84.
Otherwise, if the interactive screen 15, 17 has been received at decision 76, the interactive screen 15, 17 is displayed on the graphic display 13, 19, 21, 23 of the wireless device 12, 18, 20, 22, as shown at step 78. After the interactive screen 15, 17 is displayed, a decision is made as to whether the user has interacted with the interactive screen 15, 17, as shown at decision 80; in other words, the wireless device waits until the user interacts with the interactive screen 15, 17 so it can send a signal and/or data back to the network server that transmitted the interactive screen, such as application download server 16 or interactive screen server 32. If the user has not interacted with the interactive screen 15, 17 at decision 80, the process reenters decision 80 in a wait-state until the user does interact with the interactive screen 15, 17 or exits the download request. If the user has interacted with the interactive screen 15, 17 at decision 80, then the interaction data or a signal is transmitted from the wireless device 12, 18, 20, 22 to the appropriate network server, as shown at step 82. A decision is then made as to whether the requested application has been received at the computer platform 50 of the wireless device 12, 18, 20, 22 (or that the requested access has been granted), as shown at decision 84. If the application has not been received (or access has not been granted) at decision 84, then the process is terminated, as the download (or access) was unsuccessful. If the application was successfully received at decision 84, the application is installed at the wireless device 12, 18, 20, 22, as shown at step 86. If the request was for access to data or applications on the network server, then the wireless device 12, 18, 20, 22 will have access to the data or applications. With reference to Fig. 5, an exemplary embodiment of the process executing on the application download server 16 (or other type of network server) is shown in a flowchart.
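The device-side flow of Fig. 4 can be condensed into a short sketch. The function and the `server`/`screen_ui` interfaces are assumptions invented for illustration; only the step and decision numbers in the comments come from the description above.

```python
# Minimal device-side sketch of the Fig. 4 flow, under assumed
# interfaces: `server` answers download requests and interactions with
# dicts, and `screen_ui` displays a screen and returns the user input.
def device_download(server, screen_ui, app_id: str):
    response = server.request_download(app_id)            # step 74
    if response.get("screen") is not None:                # decision 76
        user_input = screen_ui(response["screen"])        # steps 78-80
        # step 82: send the interaction back to the server
        response = server.send_interaction(app_id, user_input)
    if response.get("application") is not None:           # decision 84
        return ("installed", response["application"])     # step 86
    return ("refused", None)  # download/access was unsuccessful
```

Note that the wait-state of decision 80 is hidden inside `screen_ui`, which blocks until the user interacts with the displayed screen.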
A connection with the wireless device 12, 18, 20, 22 is entered, as shown at step 90. At some point, the application download server 16 will receive a request from the wireless device 12, 18, 20, 22 to download an application or access resident data, as shown at step 92. Once the request is received at the application download server 16, it is determined whether interaction with the wireless device user is required, as shown at decision 94. The determination can be made based upon any criteria that the operator of the network server chooses, such as the owner of the wireless device or the nature of the subject matter of the application or data sought to be accessed or downloaded. If an interaction with the wireless device user is not required at decision 94, then the requested application is downloaded to the wireless device 12, 18, 20, 22, as shown at step 102. If interaction is required at decision 94, then an interactive screen 15, 17 is transmitted to the wireless device 12, 18, 20, 22, necessitating that the user interact with the interactive screen 15, 17 before the requested download (or access) is permitted. A determination is then made as to whether the wireless device user has properly interacted with the interactive screen 15, 17, such as by affirmatively entering a EULA (Fig. 3A) or entering a correct age (Fig. 3B), as shown at decision 98. The interaction signal or data can be sent either directly from the wireless device 12, 18, 20, 22 requesting the application and displaying the interactive screen 15, 17, or can be a signal or data sent from a second network server, such as interactive screen server 32, which originally received the interaction signal or data from the wireless device.
If the wireless device user has not properly interacted with the interactive screen 15, 17 at the wireless device 12, 18, 20, 22 at decision 98, a notice of refusal to download the application to the wireless device 12, 18, 20, 22 (or denial of access) is returned to the requesting wireless device, as shown at step 100. If the wireless device user has properly interacted with the interactive screen 15, 17 at the wireless device 12, 18, 20, 22 at decision 98, then the requested application is downloaded to the wireless device (or access to the requested application is granted), as shown at step 102. The system 10 thus provides a method for displaying an interactive screen 15, 17 on the graphic display 13, 19, 21, 23 of a user-interactive wireless device 12, 18, 20, 22, including attempting to download or access data on a network server, such as application download server 16, across the wireless network 14, transmitting an interactive screen 15, 17 to the computer platform 50 of the wireless device 12, 18, 20, 22 across the wireless network 14 prior to downloading or accessing the requested data or application, and displaying the interactive screen 15, 17 on the graphic display 13, 19, 21, 23 of the wireless device 12, 18, 20, 22. The method can also include interacting with the interactive screen 15, 17 at the wireless device 12, 18, 20, 22, sending a signal to the network server (application download server 16 or interactive screen server 32) from the wireless device 12, 18, 20, 22 indicating the interaction, and downloading or accessing the requested data or application at the network server with the computer platform 50 of the wireless device 12, 18, 20, 22.
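The server-side flow of Fig. 5 (decision 94, decision 98, and steps 100 and 102) can likewise be sketched. This is an illustrative sketch under stated assumptions, not the patent's implementation; the parameter names are hypothetical, and the policy and validation logic are passed in as callables because the disclosure leaves those criteria to the network server operator.

```python
# Hypothetical sketch of the Fig. 5 server-side flow. `screen_for` decides
# whether user interaction is required (decision 94) and which screen to send;
# `send_screen` transmits the screen and returns the interaction signal/data;
# `is_proper` checks whether the interaction was proper (decision 98).

def server_download_flow(request, screen_for, send_screen, is_proper):
    screen = screen_for(request)              # decision 94
    if screen is None:
        return "download"                     # step 102: no interaction required
    interaction = send_screen(screen)         # transmit screen, await signal/data
    if is_proper(screen, interaction):        # decision 98
        return "download"                     # step 102: interaction was proper
    return "refused"                          # step 100: notice of refusal
```

For instance, a policy could require a EULA screen only for certain applications, so that benign requests download immediately while restricted ones depend on the user's response.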
If the interactive screen 15, 17 allows wireless device user input of data, the method further comprises the steps of inputting data on the interactive screen 15, 17 displayed on the graphic display 13, 19, 21, 23 of the wireless device 12, 18, 20, 22, sending the inputted data from the wireless device 12, 18, 20, 22 to the network server, processing the input data at the network server, and selectively downloading or granting access to the requested data or application at the network server. Transmitting an interactive screen 15, 17 to the wireless device 12, 18, 20, 22 across the wireless network 14 can include transmitting an interactive screen 15, 17 to the wireless device 12, 18, 20, 22 from a first network server (such as application download server 16) from which the wireless device requested to download or access an application or data, or transmitting an interactive screen 15, 17 from a second network server (such as an interactive screen server 32) across the wireless network 14. If the interactive screen server 32 is used to provide the interactive screen 15, 17 to the wireless device, the method can include the steps of interacting with the interactive screen 15, 17 displayed on the graphic display 13, 19, 21, 23 of the wireless device, sending a signal from the wireless device to the second network server indicating the interaction, sending a signal from the second network server (such as interactive screen server 32) to the first network server (such as application download server 16) indicating the interaction at the wireless device 12, 18, 20, 22, and downloading or accessing the requested data or application resident at the first network server to the computer platform 50 of the wireless device 12, 18, 20, 22. And if the interactive screen 15, 17 allows wireless device user input of data at the wireless device 12, 18, 20, 22, the method can further include inputting data on the interactive screen (such as verification form 17 in Fig.
3B) displayed on the graphic display 13, 19, 21, 23 of the wireless device 12, 18, 20, 22, sending the input data from the wireless device to the second network server (such as interactive screen server 32), sending a signal from the second network server to the first network server (such as application download server 16) indicating the input of data at the wireless device 12, 18, 20, 22, and downloading or accessing the requested data or application at the network server to the computer platform 50 of the wireless device 12, 18, 20, 22. [0039] The invention further includes a wireless device 12, 18, 20, 22 including a computer platform 50 and a graphic display 13, 19, 21, 23 thereon, where the wireless device 12, 18, 20, 22 is in selective communication with one or more network servers across a wireless network 14, with each network server selectively downloading data and applications to the wireless device 12, 18, 20, 22. Upon the wireless device 12, 18, 20, 22 attempting to download or access data on a network server across the wireless network 14, the computer platform 50 of the wireless device receives an interactive screen 15, 17 transmitted across the wireless network 14, and the wireless device 12, 18, 20, 22 displays the transmitted interactive screen 15, 17 on the graphic display 13, 19, 21, 23 thereof. If the interactive screen 15, 17 allows user input of data at the wireless device 12, 18, 20, 22, the wireless device then allows the user to input data on the interactive screen 15, 17 displayed on the graphic display 13, 19, 21, 23 of the wireless device 12, 18, 20, 22, and the wireless device sends the inputted data to the appropriate network server (such as application download server 16 or interactive screen server 32). Another embodiment includes a program resident in a computer readable medium, where the program directs a wireless device having a computer platform to perform the inventive steps of the method.
The computer readable medium can be the memory 58 of the computer platform 50 of the cellular telephone 12, or other wireless device, or can be in a local database, such as local database 60 of the cellular telephone 12. Further, the computer readable medium can be in a secondary storage media that is loadable onto a wireless device computer platform, such as a magnetic disk or tape, optical disk, hard disk, flash memory, or other storage media as is known in the art. In the context of Figs. 4 and 5, the method may be implemented, for example, by operating portion(s) of the wireless network 14, such as the wireless platform 50, the application download server 16, and the interactive screen server 32, to execute a sequence of machine-readable instructions. The instructions can reside in various types of signal-bearing or data storage primary, secondary, or tertiary media. The media may comprise, for example, RAM (not shown) accessible by, or residing within, the components of the wireless network 14. Whether contained in RAM, a diskette, or other secondary storage media, the instructions may be stored on a variety of machine-readable data storage media, such as DASD storage (e.g., a conventional "hard drive" or a RAID array), magnetic tape, electronic read-only memory (e.g., ROM, EPROM, or EEPROM), flash memory cards, an optical storage device (e.g., CD-ROM, WORM, DVD, digital optical tape), paper "punch" cards, or other suitable data storage media including digital and analog transmission media. While the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
A fabrication system utilizes a protocol for removing germanium from a top surface of a wafer. Exposure to a gas, such as a gas containing hydrochloric acid, can remove germanium from the top surface. The protocol can allow shared equipment to be used in both Flash product fabrication lines and strained silicon (SMOS) fabrication lines. The protocol also allows better silicidation in SMOS devices.
1. A method of manufacturing an integrated circuit in an SMOS process, the method comprising: providing a substrate, the substrate including a layer including germanium and a strained silicon layer; providing a gate structure above the strained silicon layer; providing a hydrochloric acid ambient; and annealing the substrate in the hydrochloric acid ambient at a temperature of between 650° C. and 750° C. to deplete germanium from a top surface of the strained silicon layer. 2. The method of claim 1, wherein the steps of providing a hydrochloric acid ambient and annealing are performed before the gate structure is provided. 3. A method of manufacturing an integrated circuit in an SMOS process, the method comprising: providing a substrate, the substrate including a layer including germanium and a strained silicon layer; providing a gate structure above the strained silicon layer; providing a hydrochloric acid ambient; and annealing the substrate to deplete germanium from a top surface of the strained silicon layer; wherein the steps of providing a hydrochloric acid ambient and annealing are performed after a source and drain are implanted into the strained silicon layer. 4. The method of claim 3, further comprising providing a layer of silicide material above the strained silicon layer after the steps of providing a hydrochloric acid ambient and annealing are performed. 5. The method of claim 1, wherein the strained silicon layer is approximately 500 Angstroms thick. 6. The method of claim 1, further comprising providing a silicide layer after the annealing step. 7. A method of depleting germanium from a top surface of an IC substrate in a chamber, the method comprising: providing a hydrochloric acid ambient in the chamber; and annealing the IC substrate in the chamber at a temperature between 650° C. and 750° C. to cause the hydrochloric acid to react with the germanium. 8.
A method of depleting germanium from a top surface of an IC substrate in a chamber, the method comprising: providing a hydrochloric acid ambient in the chamber; and annealing the IC substrate in the chamber to cause the hydrochloric acid to react with the germanium; wherein the providing and annealing steps are performed after a gate structure is formed on the IC substrate. 9. The method of claim 7, wherein the providing and annealing steps are performed before a gate structure is formed on the IC substrate. 10. The method of claim 8, wherein the providing and annealing steps are performed a second time after the gate is formed on the IC substrate. 11. The method of claim 10, wherein the IC substrate includes a silicon-germanium layer and a strained silicon layer at the top surface. 12. A method of depleting germanium from a top surface of an IC substrate in a chamber, the method comprising: providing a hydrochloric acid ambient in the chamber; annealing the IC substrate in the chamber to cause the hydrochloric acid to react with the germanium; and providing a silicide layer after the annealing step. 13. The method of claim 7, further comprising evacuating the chamber. 14. The method of claim 7, wherein the chamber includes a vacuum. 15. The method of claim 7, wherein the germanium reacts to form germanium chloride. 16. The method of claim 7, wherein the chamber is part of an etching device, and further comprising etching a dielectric material and a conductive material to form a gate structure. 17. A method of manufacturing a transistor on an integrated circuit in an SMOS process, the method comprising: providing a gate structure on a top surface of a strained silicon layer above a silicon germanium layer; providing a gas including HCl; and annealing in the gas including HCl at a temperature to remove germanium from the top surface. 18. The method of claim 17, wherein the temperature is approximately 700° C. 19.
The method of claim 18, wherein the annealing is a laser annealing step. 20. The method of claim 19, wherein the method is utilized in a Flash device production process. 21. The method of claim 17, wherein the step of annealing at a temperature to remove germanium from the top surface is performed at a temperature of between 650° C. and 750° C.
FIELD OF THE INVENTION

The present invention relates generally to integrated circuit (IC) fabrication. More particularly, the present invention relates to a system for and a method of depleting a top surface of an IC substrate.

BACKGROUND OF THE INVENTION

SMOS processes are utilized to increase transistor (MOSFET) performance by increasing the carrier mobility of silicon, thereby reducing resistance and power consumption and increasing drive current, frequency response and operating speed. Strained silicon is typically formed by growing a layer of silicon on a silicon germanium substrate or layer. Germanium can also be implanted, deposited, or otherwise provided to silicon layers to change the lattice structure of the silicon and increase carrier mobility.

The silicon germanium lattice associated with the germanium substrate is generally more widely spaced than a pure silicon lattice, with spacing becoming wider with a higher percentage of germanium. Because the silicon lattice aligns with the larger silicon germanium lattice, a tensile strain is created in the silicon layer. The silicon atoms are essentially pulled apart from one another. Relaxed silicon has a conduction band that contains six equal valence bands. The application of tensile strain to the silicon causes four of the valence bands to increase in energy and two of the valence bands to decrease in energy. As a result of quantum effects, electrons effectively weigh 30 percent less when passing through the lower energy bands. Thus, the lower energy bands offer less resistance to electron flow.

In addition, electrons meet with less vibrational energy from the nucleus of the silicon atom, which causes them to scatter at a rate of 500 to 1,000 times less than in relaxed silicon. As a result, carrier mobility is dramatically increased in strained silicon compared to relaxed silicon, providing an increase in mobility of 80 percent or more for electrons and 20 percent or more for holes.
The increase in mobility has been found to persist for current fields up to 1.5 megavolts/centimeter. These factors are believed to enable a device speed increase of 35 percent without further reduction of device size, or a 25 percent reduction in power consumption without reduction in performance.

The use of germanium in SMOS processes can cause germanium contamination problems for IC structures, layers and equipment. In particular, germanium outgassing or outdiffusion can contaminate various components associated with the fabrication equipment and integrated circuit structures associated with the processed wafer. Germanium outgassing can be particularly problematic at the very high temperatures and ambient environments associated with integrated circuit fabrication. For example, conventional IC fabrication processes can utilize temperatures of approximately 1000° C., which enhance germanium outgassing. Germanium outgassing can also negatively affect the formation of thin films. In addition, germanium outdiffusion can cause germanium accumulation or "pile up" at the interface of layers.

High levels of germanium at the surface of a wafer can adversely affect the formation of silicide layers. In particular, a high concentration of germanium in a top surface of a substrate can adversely affect the formation of silicide layers above the source and drain regions. The germanium concentration at the top surface can be exacerbated by the fabrication steps associated with source and drain regions and gate structures.

Germanium contamination of IC equipment is becoming a more serious issue as IC fabrication processes explore the advantages of the higher carrier mobility of strained silicon (SMOS) devices. IC fabrication equipment that tends to become contaminated with germanium can include deposition chambers, furnaces, diffusion equipment, etching tools, etc.
The quartzware associated with such equipment is particularly susceptible to germanium contamination.

Germanium contamination is particularly problematic when equipment is used in both non-germanium and germanium fabrication lines. Shared equipment must be purged of germanium contamination before it is used in non-germanium processes, because such contamination is particularly damaging to metals used during conventional IC fabrication. Further, high levels of germanium contamination can be problematic even for strained silicon (SMOS) processes.

Flash devices are particularly sensitive to low level germanium contamination, because Flash technology uses IC structures and processes that are incompatible with germanium. For example, germanium contamination may cause data retention problems for the Flash memory cell. It is nevertheless desirable to use equipment associated with the Flash fabrication line with germanium containing products (e.g., SMOS products).

Thus, there is a need for an efficient process for decontaminating a wafer surface. Further, there is a need for a system and a method which reduce germanium contamination. Even further, there is a need for a method of removing germanium from a strained silicon layer. Yet further, there is a need for a process which reduces the adverse effects of germanium on silicidation processes. Further, there is a need for a decontamination process that allows shared equipment to be used in both a Flash production line and a germanium production line.

SUMMARY OF THE INVENTION

An exemplary embodiment relates to a method of manufacturing an integrated circuit in an SMOS process. The method includes providing a substrate which includes a layer including germanium and a strained silicon layer. The method also includes providing a gate structure above the strained silicon layer and providing a hydrochloric acid ambient.
The method also includes annealing the substrate to deplete a top surface of the strained silicon layer of the germanium.

Another exemplary embodiment relates to a method of depleting germanium from a top surface of an IC substrate in a chamber. The method includes providing a hydrochloric acid ambient in the chamber and annealing the IC substrate in the chamber to cause the hydrochloric acid to react with the germanium.

Yet another exemplary embodiment relates to a method of manufacturing a transistor on an integrated circuit in an SMOS process. The method includes providing a gate structure on a top surface of a strained silicon layer above a silicon germanium layer, providing a gas including HCl, and annealing at a temperature. In one embodiment the temperature is approximately 700° C.

BRIEF DESCRIPTION OF THE DRAWINGS

Exemplary embodiments will hereafter be described with reference to the accompanying drawings, wherein like numerals denote like elements, and:

FIG. 1 is a general schematic block diagram of a fabrication system including a chamber and an IC substrate;

FIG. 2 is a flow diagram showing a depletion process for the fabrication system illustrated in FIG. 1 in accordance with an exemplary embodiment;

FIG. 3 is a cross-sectional view schematic drawing of a portion of an IC substrate illustrated in FIG. 1, the IC substrate including a strained silicon layer above a silicon germanium substrate;

FIG. 4 is a cross-sectional view of the portion illustrated in FIG. 3, showing a depletion step;

FIG. 5 is a cross-sectional view of the portion illustrated in FIG. 4, showing a lithographic exposure step for a photoresist layer above a gate conductor layer and a gate dielectric layer;

FIG. 6 is a cross-sectional view of the portion illustrated in FIG. 5, showing a selective patterning step for the photoresist layer;

FIG. 7 is a cross-sectional view of the portion illustrated in FIG.
6, showing a selective etching step for the gate conductor layer and the gate dielectric layer;

FIG. 8 is a cross-sectional view of the portion illustrated in FIG. 7, showing another depletion step; and

FIG. 9 is a cross-sectional view of the portion illustrated in FIG. 8, showing a silicidation step.

DETAILED DESCRIPTION OF PREFERRED EXEMPLARY EMBODIMENTS

FIGS. 1 through 9 illustrate a method of manufacturing an integrated circuit (IC) in accordance with an exemplary embodiment. The method illustrated in FIGS. 1 through 9 reduces germanium outgassing and outdiffusion problems associated with silicon germanium layers on IC structures. The process includes at least one germanium depletion step and can be used as a part of any process utilizing germanium or another substance prone to outgassing at high temperatures. Advantageously, germanium is depleted from a top surface of the IC substrate or layers above the IC substrate.

With reference to FIG. 1, fabrication system or equipment 20 is preferably a fabrication tool or fabrication equipment associated with a germanium fabrication process, such as an SMOS process. In one embodiment, system 20 can be etching equipment including a dry etching source 30. In another embodiment, fabrication system 20 can be a deposition chamber, a diffusion chamber, an annealing furnace, or another device for processing a substrate associated with a portion 12 of an integrated circuit. Quartzware associated with system 20 is particularly susceptible to germanium contamination.

System 20 can include a chamber within which portion 12 is provided. The chamber can generally include a stage 35 or a pedestal for holding portion 12.

In one embodiment, system 20 can be utilized in a fabrication line associated with both a germanium process and a non-germanium process. During operation in the germanium process, system 20 can become contaminated with germanium and should be decontaminated before use in the non-germanium process.

With reference to FIG.
2, a process 100 can be utilized to deplete portion 12 (e.g., the substrate associated with portion 12) of germanium. Preferably, process 100 depletes germanium from a top surface of the substrate associated with portion 12 in a step 52.

After the surface is depleted in step 52, process 100 forms gate structures above the top surface of the substrate associated with portion 12 in a step 54. In a step 56, the surface of the substrate associated with portion 12 is again depleted to remove germanium. In a step 58, silicide layers can be formed. The silicide layers are preferably formed above source and drain regions on either side of the gate structures formed in step 54. Depletion of germanium at steps 52 and 56 allows suitable silicide layers to be formed.

Steps 52 and 56 of process 100 can be performed to convert germanium on or near the top surface of the substrate for portion 12 to germanium oxide or germanium chloride. Germanium oxide and germanium chloride are volatile molecules which can be more easily removed from the chamber. Removing germanium from the substrate by process 100 can reduce germanium contamination associated with SMOS processes.

In one embodiment, process 100 utilizes depletion step 52 before gate formation and depletion step 56 after gate formation. Alternatively, only one of steps 52 or 56 can be performed without departing from the scope of the invention.

At step 52, the chamber associated with system 20 is provided with a gaseous media. In one embodiment, a hydrochloric acid (HCl) ambient is provided in the chamber and portion 12 is subjected to a furnace anneal at a temperature of 700° C. (e.g., in a range of 650° C. to 750° C.). Preferably, the HCl atmosphere getters the germanium from the top surface to form a gas of germanium chloride which can be evacuated from the chamber. Preferably, the chamber is a vacuum chamber. In one embodiment, HCl is provided at a temperature of approximately 700° C.
and a pressure of 100 millitorr.

In another alternative, a laser technology anneal rather than a furnace anneal is utilized. The laser technology anneal is preferably performed at a temperature of 700° C. at 0.19 joules/cm² of radiant fluence for between approximately 10 and 100 nanoseconds.

In yet another embodiment, a mixture of hydrochloric acid (HCl) gas and oxygen (O2) gas is provided to the chamber of system 20 in step 52. Step 56 can utilize the same parameters as step 52. In one embodiment, an HCl gas is used in one of steps 52 and 56 and an HCl and O2 gas is used in the other of steps 52 and 56.

Referring to FIGS. 3 through 9, a cross-sectional view of a portion 12 of an integrated circuit (IC) is illustrated. Portion 12 is subjected to process 100 (FIG. 2) to form an IC. The IC can include a transistor with a gate structure and silicided source and drain regions as explained below. Portion 12 includes a strained silicon layer 16 provided over a semiconductor substrate 14 or a germanium containing layer or substrate. Substrate 14 can be provided above a substrate 13.

Substrate 13 is optional, and portion 12 can be provided with substrate 14 as the bottom-most layer. Substrate 13 can be the same material or a different material than substrate 14. In one embodiment, substrate 13 is a semiconductor substrate such as a silicon substrate upon which substrate 14 has been grown.

Portion 12 can be any type of semiconductor device, or portion thereof, made from any of the various semiconductor processes such as a complementary metal oxide semiconductor (CMOS) process, a bipolar process, or any other semiconductor process. Portion 12 may be an entire IC or a portion of an IC and may include a multitude of electronic components.

Substrate 14 is preferably silicon germanium or another semiconductor material including germanium, and can be doped with P-type dopants or N-type dopants.
Substrate 14 can be an epitaxial layer provided on a semiconductor or an insulative base, such as substrate 13. Furthermore, substrate 14 is preferably a composition of silicon germanium (Si1-xGex, where X is approximately 0.2 and is more generally in the range of 0.1-0.4). Substrate 14 can be grown or deposited.

In one embodiment, substrate 14 is grown above substrate 13 by chemical vapor deposition (CVD) using disilane (Si2H6) and germane (GeH4) as source gases with a substrate temperature of approximately 650° C., a disilane partial pressure of approximately 30 mPa and a germane partial pressure of approximately 60 mPa. Growth of the silicon germanium material may be initiated using these ratios, or, alternatively, the partial pressure of germane may be gradually increased beginning from a lower pressure or zero pressure to form a gradient composition. Alternatively, a silicon layer can be doped by ion implantation with germanium, or by another process, to form substrate 14. Preferably, substrate 14 is grown by epitaxy to a thickness of less than approximately 5000 Angstroms (and preferably between approximately 1500 and 4000 Angstroms).

A strained silicon layer 16 is formed above substrate 14 by an epitaxial process. Preferably, layer 16 is grown by CVD at a temperature of approximately 600° C. Layer 16 can be a pure silicon layer and have a thickness of approximately 500 Angstroms. According to alternative embodiments, layer 16 has a thickness of between approximately 50 and 150 Angstroms.

With reference to FIGS. 1-9, process 100 is described with respect to portion 12. At step 52, portion 12 is depleted, and the removal of germanium from a top surface of layer 16 is represented by arrows 19. Preferably, arrows 19 represent the changing of germanium to a gas state which is evacuated from the chamber in FIG. 4.

In FIGS. 5-7, portion 12 is subjected to a gate formation process to form gate structures in accordance with step 54.
A gate dielectric layer 18 is provided below a gate conductor layer 22. Preferably, gate dielectric layer 18 is a silicon dioxide layer, such as a 5-20 Angstrom thermally grown silicon dioxide layer, and layer 22 is a polysilicon layer, which may be either doped or undoped. Alternative materials for layers 18 and 22 are possible, including any of a variety of known semiconductor, metal, high-k gate dielectric, and other IC materials.

A photoresist layer 24 provided above layer 22 is lithographically patterned in accordance with a mask 28. In FIG. 6, photoresist layer 24 is selectively etched to leave a feature 34 representative of a gate structure. In FIG. 7, layers 18 and 22 are etched to leave a gate structure 38 associated with feature 34. Any removal process can be utilized to form gate structure 38.

In FIG. 8, portion 12 is subjected to a second depletion step 56. Depletion step 56 is performed after gate structure 38 is formed. In this way, germanium which has traveled to the top surface of layer 16 can be depleted. Germanium can travel to the top surface of layer 16 during fabrication steps associated with gate structure 38. For example, activation steps associated with the source and drain regions and gate structure 38 can cause germanium to diffuse to the top surface of portion 12. Preferably, layer 16 is depleted to a level approximately 100 to 400 Angstroms below a top surface of layer 16 in steps 52 and 56.

In FIG. 9, a silicide layer 46 is provided above layer 16. Silicide layer 46 may be tungsten silicide, cobalt silicide, nickel silicide, titanium silicide, or any of a variety of other silicide materials. According to an exemplary embodiment, silicide layer 46 is provided by depositing a layer of metal (e.g., a refractory metal) and heating at an elevated temperature (e.g., between approximately 550 and 650° C.) to form a silicide material. Other silicidation methods may be used in alternative embodiments.

Referring to FIG.
1, a substrate associated with a portion 12 of an integrated circuit is shown in a fabrication system 20 that is preferably used in both a Flash production line and in an SMOS production line. System 20 can be exposed to germanium during SMOS processes associated with the SMOS production line. The exposure to germanium can be due to germanium outgassing, germanium deposition, germanium implantation, or other germanium-based processes or techniques.

The substrate can be a semiconductor substrate such as silicon, gallium arsenide, germanium, or other substrate material. The substrate can include one or more layers of material and/or features such as lines, interconnects, vias, doped portions, etc., and can further include devices such as transistors, microactuators, microsensors, capacitors, resistors, diodes, etc. The substrate can be an entire IC wafer or part of an IC wafer. The substrate can be part of an integrated circuit such as a memory, a processing unit, an input/output device, etc.

Steps 52 and 56 can be performed a number of times or cycled to ensure depletion of germanium. In one embodiment, the temperature associated with the annealing can be cycled from a low temperature to a high temperature to ensure depletion of portion 12 and the conversion of germanium to germanium chloride or germanium oxide.

It is understood that although the detailed drawings, specific examples, and particular values given provide exemplary embodiments of the present invention, the exemplary embodiments are for the purpose of illustration only. The method and apparatus in the aforementioned embodiments are not limited to the precise details and descriptions disclosed. For example, although particular IC structures are described, other types of structures can also be depleted. Various changes may be made to the details disclosed without departing from the scope of the invention which is defined by the following claims.
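The step sequence of process 100 (deplete at step 52, form gates at step 54, deplete again at step 56, silicide at step 58) and the disclosed anneal window (650° C. to 750° C. in an HCl ambient) can be summarized as a small recipe-check sketch. This is purely illustrative and not part of the disclosure; the function and dictionary key names are hypothetical.

```python
# Hypothetical sketch of sequencing process 100 and validating each
# depletion anneal against the disclosed HCl window (650-750 C).

DEPLETION_WINDOW_C = (650, 750)  # disclosed furnace anneal range

def check_depletion_recipe(ambient, temp_c):
    """True if the recipe has an HCl ambient and a temperature in the window."""
    return "HCl" in ambient and DEPLETION_WINDOW_C[0] <= temp_c <= DEPLETION_WINDOW_C[1]

def run_process_100(recipe):
    """Walk the four steps of process 100, validating each depletion anneal."""
    performed = []
    for step, action in ((52, "deplete"), (54, "form_gates"),
                         (56, "deplete"), (58, "silicide")):
        if action == "deplete" and not check_depletion_recipe(
                recipe["ambient"], recipe["temp_c"]):
            raise ValueError(f"step {step}: anneal outside disclosed window")
        performed.append(step)
    return performed
```

For example, a recipe of an HCl/O2 ambient at 700° C. passes both depletion steps, while a nitrogen-only ambient would be rejected at step 52.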
Embodiments of techniques and systems for biometric-data-based media encryption are described. In embodiments, an encryption key may be created for a recipient user based at least in part on biometric data of the recipient user. This encryption key may be maintained on a key maintenance component and used by a sharing user to encrypt a media file for access by the recipient user. One or more access policies associated with the recipient user may be encrypted in the encrypted media file as well. In embodiments, the media file may be encrypted for use by multiple recipient users. When a recipient user desires to access the encrypted media file, a decryption key may be generated in real time based on contemporaneously captured biometric data and used to provide access to the encrypted media file. Other embodiments may be described and claimed.
Claims What is claimed is: 1. A method for decrypting an encrypted media file, comprising: receiving a request for a decryption key to decrypt an encrypted media file, wherein the request is generated in response to a user's request to access the encrypted media file, and wherein the media file is encrypted using an encryption key generated based on previously provided biometric data of the user; generating, in response to the request, the decryption key based at least in part on real-time contemporaneously captured biometric data of the user; and providing the decryption key for use to decrypt the encrypted media file. 2. The method of claim 1, further comprising: decrypting the encrypted media file using the provided decryption key. 3. The method of claim 2, wherein decrypting the media file comprises: decrypting metadata associated with the encrypted media file using the decryption key; and decrypting media data from the media file based at least in part on the decrypted metadata. 4. The method of claim 3, wherein: decrypting metadata comprises decrypting a symmetric media encryption key; and decrypting media data comprises decrypting media data using the symmetric media encryption key. 5. The method of claim 4, wherein: the metadata associated with the encrypted media file comprises a first encrypted symmetric media encryption key encrypted with the encryption key generated based on previously provided biometric data of the user; and the media file further comprises one or more other encrypted symmetric media encryption keys that are respectively encrypted with encryption keys generated based on previously provided biometric data of other users. 6. The method of claim 3, wherein: the decrypted metadata comprises an access policy associated with the user; and decrypting media data comprises selectively allowing access to media data based at least in part on the access policy associated with the user. 7.
The method of claim 1, further comprising: performing real-time contemporaneous capture of biometric data of the user. 8. The method of claim 1, wherein the decryption and encryption keys form a private/public key pair. 9. The method of claim 8, further comprising: capturing biometric data from the user for use as the previously provided biometric data; and generating the public/private key pair at least in part based on the previously provided biometric data. 10. An apparatus for decrypting an encrypted media file, the apparatus comprising: one or more computer processors; and a decryption key generation component configured to be operated by the one or more computer processors to: receive a request for a decryption key to decrypt an encrypted media file, wherein the request is generated in response to a user's request to access the encrypted media file, and wherein the media file is encrypted using an encryption key generated based on previously provided biometric data of the user; generate, in response to the request, a decryption key based at least in part on real-time contemporaneously captured biometric data of the user; and provide the decryption key for use to decrypt the encrypted media file. 11. The apparatus of claim 10, further comprising a media decryption component configured to be operated by the one or more computer processors to decrypt the encrypted media file using the provided decryption key. 12. The apparatus of claim 10, wherein the decryption key and encryption keys form a private/public key pair. 13. The apparatus of any of claims 10-12, further comprising a biometric data capture component configured to capture biometric data of the user. 14.
A method for encrypting a media file, comprising: obtaining an encryption key generated based on previously provided biometric data of a user; encrypting the media file to produce an encrypted media file such that the encrypted media file may be decrypted using a decryption key generated based on contemporaneously captured biometric data of the user; and provisioning the encrypted media file to be accessed by the user. 15. The method of claim 14, wherein encrypting the media file comprises encrypting the media file using a public encryption key that is part of a public/private key pair generated based on previously provided biometric data of the user. 16. The method of claim 15, wherein encrypting the media file comprises: encrypting media data using a symmetric media encryption key; encrypting the symmetric media encryption key using the public encryption key; and including the encrypted symmetric media encryption key in the encrypted media file. 17. The method of claim 15, wherein: the public encryption key comprises a first public encryption key; the encrypted symmetric media encryption key comprises a first encrypted symmetric media encryption key; and encrypting the media file further comprises: encrypting the symmetric media encryption key using a second public encryption key generated based on previously provided biometric data of an other user to produce a second encrypted symmetric media encryption key, and including the second encrypted symmetric media encryption key in the encrypted media file. 18. The method of claim 15, wherein encrypting the media file comprises: encrypting an access policy associated with the user using the public encryption key; and including the access policy associated with the user in the encrypted media file. 19.
The method of claim 14, wherein provisioning the media file to be accessed by the user comprises provisioning the media file to be accessed on a media sharing service or transmitting the media file to the user. 20. An apparatus for encrypting a media file, the apparatus comprising: one or more computer processors; and a media encryption component configured to be operated by the one or more computer processors to: obtain an encryption key generated based on previously provided biometric data of a user; encrypt the media file to produce an encrypted media file such that the encrypted media file may be decrypted using a decryption key generated based on contemporaneously captured biometric data of the user; and provision the encrypted media file to be accessed by the user. 21. The apparatus of claim 20, wherein encrypt the media file comprises: encrypt the media data using a symmetric media encryption key; encrypt the symmetric media encryption key using a public encryption key that is part of a public/private key pair generated based on previously provided biometric data of the user; and include the encrypted symmetric media encryption key in the encrypted media file. 22. The apparatus of any of claims 20 or 21, wherein encrypt the media file comprises: encrypt an access policy associated with the user using a public encryption key that is part of a public/private key pair generated based on previously provided biometric data of the user; and include the access policy associated with the user in the encrypted media file. 23. The apparatus of any of claims 20 or 21, wherein obtain an encryption key comprises obtain an encryption key from a key maintenance component. 24. One or more computer readable media having instructions thereon that, when executed by one or more processing devices of a computing device, cause the computing device to perform the method of any of claims 1-9 or 14-19. 25.
An apparatus comprising means for performing the method of any of claims 1-9 or 14-19. |
MEDIA ENCRYPTION BASED ON BIOMETRIC DATA Cross Reference to Related Application The present application claims priority to U.S. Patent Application No. 13/562,046, filed July 30, 2012, entitled "MEDIA ENCRYPTION BASED ON BIOMETRIC DATA," the entire contents of which is hereby incorporated by reference in its entirety. Background Online sharing of images, and other media files, continues to provide difficulties for content creators and consumers. In particular, it is difficult for users to share images online and feel confident that they remain secure. For example, many images shared in conventional techniques can be copied indefinitely by users. Additionally, many image-sharing sites must be trusted to not abuse the access they have to the images they host. In some techniques, images and other media files may be protected using passwords. However, these passwords may be hard to remember for users and can require manual setup and encoding for multiple users. Brief Description of the Drawings Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings. Figure 1 is a block diagram illustrating an example biometric-data-based media-sharing system, in accordance with various embodiments. Figure 2 illustrates an example biometric-data-based media sharing process of the biometric-data-based media-sharing system, in accordance with various embodiments. Figure 3 illustrates an example encryption and decryption key generation process of the biometric-data-based media-sharing system, in accordance with various embodiments. Figure 4 illustrates an example biometric data capture process of the biometric-data-based media-sharing system, in accordance with various embodiments.
Figure 5 illustrates an example media sharing process of the biometric-data-based media-sharing system, in accordance with various embodiments. Figure 6 illustrates an example media access process of the biometric-data-based media-sharing system, in accordance with various embodiments. Figure 7 illustrates an example computing environment suitable for practicing the disclosed embodiments, in accordance with various embodiments. Detailed Description Embodiments of techniques and systems for biometric-data-based media encryption are described herein. In embodiments, an encryption key may be created for a recipient user based at least in part on biometric data of the recipient user. This encryption key may be maintained on a key maintenance component and used by a sharing user to encrypt a media file for access by the recipient user. One or more access policies associated with the recipient user may be encrypted in the encrypted media file as well. In embodiments, the media file may be encrypted for use by multiple recipient users. When a recipient user desires to access the encrypted media file, a decryption key may be generated in real time based on contemporaneously captured biometric data and used to provide access to the encrypted media file. Other embodiments are also described. In the following detailed description, reference is made to the accompanying drawings which form a part hereof wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.
Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments. For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. As may be used herein, the term "module" may refer to, be part of, or include an Application Specific Integrated Circuit ("ASIC"), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. Referring now to Figure 1, embodiments of a biometric-data-based media-sharing system 100 ("BMS 100") are illustrated. In various embodiments, the BMS 100 may be configured to facilitate a sharing user 120 in sharing a media file with a recipient user 110. In various embodiments, the BMS 100 may facilitate the sharing of the media file using at least encryption keys that are based on biometric data obtained from the recipient user 110.
By doing so, in various embodiments the BMS 100 may facilitate secured sharing of media files between the sharing user 120 and the recipient user 110. In various embodiments, the recipient user, wanting to receive access to protected media, may perform a key-generation process where he or she has biometric data captured. The BMS 100 may then generate an encryption key based at least in part on the captured biometric data. Later, when the sharing user 120 wants to share a media file, he or she can use the biometric-based encryption key to encrypt the media file. The encrypted media file may then be uploaded to a media sharing service, such as a media sharing website or social network. Later, when the recipient user 110 wishes to access the media file, he or she may, in various embodiments, allow the BMS 100 to capture biometric data contemporaneously with his or her attempt to access the encrypted media file. In various embodiments, a decryption key may then be generated based on this contemporaneously captured biometric data and used to decrypt the media file. In various embodiments, the contemporaneous capture of biometric data and generation of the decryption key may allow the recipient user to access the protected media while lessening the need for memorizing or storing passwords. In various embodiments, once used, the decryption key may be discarded. In alternate embodiments, the sharing user 120 may encrypt the media file for access by multiple recipient users 110, using one encryption key that is in turn encrypted into multiple versions using corresponding biometric encryption keys of the recipient users 110. Such an encrypted media file may further include per-user access policies. In various embodiments, regardless of whether the encrypted media file is for single or multiple users, the BMS 100 may include user access components 115, which may be configured to be operated on a computing device accessed by or under control of a recipient user 110.
In various embodiments, the user access components 115 may include one or more components configured to operate in software and/or hardware in order to facilitate access of shared media by the recipient user 110 based on biometric data of the recipient user 110. In one example, the user access components 115 may include a biometric data capture component 130 that may be configured to capture biometric data from a recipient user 110. In various embodiments, the biometric data capture component may be configured to capture biometric data from an image of a recipient user 110. For example, in various embodiments, the biometric data capture component 130 may be configured to receive (or cause to be obtained) an image of a recipient user 110's face. The biometric data capture component 130 may then, in various embodiments, extract biometric feature data from the image, such as the size, location, and/or orientation of various facial features. In another embodiment, the biometric data capture component 130 may be configured to receive (or cause to be obtained) fingerprint data from a recipient user 110. In various embodiments, the biometric data capture component 130 may then provide this biometric data to other components of the user access components 115 of the BMS 100 to facilitate sharing of media files. In various embodiments, a key generation component 140 may be configured to receive biometric data from the biometric data capture component 130 and use the biometric data to generate encryption and/or decryption keys for use by the BMS 100 in facilitating sharing of media files. In various embodiments, the key generation component 140 may generate one or more private/public key pairs based on biometric data obtained from the biometric data capture component 130.
In various embodiments, the key generation component 140 may be configured to determine if the key generation component 140 has received sufficient biometric data from the biometric data capture component 130. In some embodiments, if the key generation component 140 has not received sufficient biometric data, the key generation component 140 may request additional biometric data from the biometric data capture component before generating public/private key pairs. In some embodiments, private/public key pairs may be generated based on techniques developed by Rivest, Shamir and Adleman, also known as "RSA" techniques. In other embodiments, other key generation techniques may be used. In various embodiments, the key generation component 140 may be configured to provide the public key of the private/public key pair to other components to be used for encryption and/or to use the private key of the private/public key pair as a decryption key. In various embodiments, however, the key generation component 140 may also be configured to not release the private key of the private/public key pair to users in order to protect the key. In some embodiments, the key generation component 140 may be configured to keep the private key secret even from the recipient user 110. In various embodiments, one or more symmetric keys may be generated by the key generation component 140 instead of public/private key pairs. In various embodiments, the key generation component 140 may be configured to send an encryption key associated with the recipient user 110 to a key maintenance component 150. In various embodiments, the key generation component 140 may be configured to send the public key of a private/public key pair to the key maintenance component 150 as the encryption key.
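By way of illustration only, the biometric-based key derivation performed by a component such as key generation component 140 can be sketched as follows. This is a simplified stand-in, not the disclosed implementation: a production system would use a fuzzy extractor to tolerate capture-to-capture noise and a real key-generation algorithm such as RSA, whereas here quantized feature values are simply hashed into deterministic key material. All function and variable names are hypothetical.

```python
import hashlib

def quantize_features(features, step=8):
    """Coarsely bucket raw measurements so small capture-to-capture
    noise maps to the same value (a stand-in for a fuzzy extractor)."""
    return tuple(int(round(f / step)) for f in features)

def derive_key_material(features, min_pieces=4):
    """Derive deterministic key material from quantized biometric pieces.
    Raises if too few pieces were captured (cf. decision operation 425)."""
    pieces = quantize_features(features)
    if len(pieces) < min_pieces:
        raise ValueError("insufficient biometric data; capture more pieces")
    # Hash the quantized pieces into a stable 256-bit seed.
    encoded = ",".join(str(p) for p in pieces).encode()
    return hashlib.sha256(encoded).digest()

# Two captures of the same face with slight measurement noise...
enrollment = [31.7, 120.2, 64.9, 88.1]  # e.g. eye spacing, feature sizes
later_scan = [32.1, 119.8, 65.3, 87.6]

# ...yield identical key material, so an enrollment-time encryption key
# and an access-time decryption key can match without storing the
# biometric data itself.
assert derive_key_material(enrollment) == derive_key_material(later_scan)
```

The quantization step is what makes real-time regeneration possible: the contemporaneously captured scan never matches the enrollment scan bit-for-bit, so raw measurements must first be mapped into stable values before hashing.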
In various embodiments, the key generation component 140 may be configured to send only the public key of the private/public key pair to the key maintenance component 150, avoiding knowledge of the private key by the key maintenance component 150. In various embodiments, the key maintenance component 150 may include, for example, a server, database, and/or other storage to store the received encryption key and to provide it for later use, such as when the sharing user 120 seeks to share a media file. In various embodiments, the key maintenance component 150 may be configured to maintain and provide multiple encryption keys to sharing user 120 for multiple recipient users 110. In some embodiments, the key maintenance component 150 may be associated with a media sharing service, such as the illustrated media sharing service 170. Particular embodiments of the media sharing service 170 are described below. In various embodiments, a media encryption component 160 may be configured to be operated under control of the sharing user 120 to encrypt media files for protected access by the recipient user 110. Thus, in various embodiments, the media encryption component 160 may be configured to obtain an encryption key associated with the recipient user 110 from the key maintenance component 150. In various embodiments, the media encryption component 160 may also be configured to receive a media file for encryption. In various embodiments, the received media file may include one or more of, for example, an image, an audio file, a video file, a MIDI file, a PDF, and/or other types of media files. In various embodiments, the media encryption component 160 may also be configured to receive one or more access policies associated with the recipient user 110. In various embodiments, as described earlier, the media encryption component 160 may be configured to encrypt a media file such that it may be accessed by multiple recipient users 110.
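The role of the key maintenance component described above (storing only public keys and handing them to a sharing user on demand) can be sketched, by way of illustration, as a minimal key store. The class and method names are hypothetical; a real deployment would be a server or database with authentication, not an in-memory dictionary.

```python
class KeyMaintenanceComponent:
    """Minimal store mapping recipient user IDs to their public encryption
    keys. Only public keys are ever registered here; private keys never
    leave the recipient's key generation component."""

    def __init__(self):
        self._keys = {}

    def register(self, user_id, public_key):
        # Called by a recipient's key generation component after enrollment.
        self._keys[user_id] = public_key

    def encryption_key_for(self, user_id):
        # Called by a sharing user's media encryption component.
        # Raises KeyError if the recipient never enrolled.
        return self._keys[user_id]

store = KeyMaintenanceComponent()
store.register("alice", b"alice-public-key")
assert store.encryption_key_for("alice") == b"alice-public-key"
```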
In various embodiments, the media encryption component 160 may be configured to include access policies for multiple recipient users 110 in the media file. In various embodiments, the media encryption component 160 may be configured to encrypt the media file received from the sharing user 120 using a (user agnostic) symmetric media encryption key. The media encryption component 160 may be configured to then encrypt this symmetric media encryption key and include the symmetric media encryption key, in encrypted form, in the encrypted media file for decryption by the recipient user 110. In various embodiments, different encrypted versions of the symmetric media encryption key may be generated using the encryption keys of the recipient users 110 received from the key maintenance component 150. In various embodiments, in order to provide multiple recipient users 110 with access to a media file, the media encryption component 160 may encrypt the symmetric media encryption key multiple times with multiple encryption keys obtained from the key maintenance component 150. Thus, any one recipient user 110 may, if he or she can provide the correct biometric-data-based decryption key, decrypt and recover the symmetric media encryption key and thus be able to obtain access to the media file, using the recovered symmetric media encryption key. In various embodiments, this access may be mediated by access policies associated with the user that are included in the encrypted media file. In various embodiments, after encrypting the media file, the sharing user 120 may share the encrypted media file on a media sharing service 170. In various embodiments, the media sharing service 170 may include a social network; in other embodiments, the media sharing service 170 may include a media sharing website, or another website. In various embodiments, the sharing user 120 may cause the media encryption component 160 to send the encrypted media file to the media sharing service 170.
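The envelope-encryption arrangement described above (one symmetric media key encrypting the media once, with that key and a per-user policy wrapped separately for each recipient) can be sketched as follows. This is an illustrative toy, not the disclosed implementation: the XOR keystream below is an insecure stand-in for real primitives such as AES-GCM and RSA-OAEP, and all names are hypothetical.

```python
import hashlib
import os

def keystream_xor(key, data):
    """Toy stream cipher: XOR data with a SHA-256 counter keystream.
    Encryption and decryption are the same operation. NOT secure; a
    placeholder for a real cipher."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_for_recipients(media, recipient_keys, policies):
    """One symmetric media key encrypts the media once; the key and each
    per-user access policy are wrapped per recipient (cf. process 500)."""
    media_key = os.urandom(32)
    return {
        "media": keystream_xor(media_key, media),
        "wrapped_keys": {uid: keystream_xor(k, media_key)
                         for uid, k in recipient_keys.items()},
        "policies": {uid: keystream_xor(k, policies[uid].encode())
                     for uid, k in recipient_keys.items()},
    }

def decrypt_as(envelope, user_id, user_key):
    """Unwrap the media key with the user's key, then recover the media
    and the user's own access policy."""
    media_key = keystream_xor(user_key, envelope["wrapped_keys"][user_id])
    policy = keystream_xor(user_key, envelope["policies"][user_id]).decode()
    return keystream_xor(media_key, envelope["media"]), policy

keys = {"alice": b"A" * 32, "bob": b"B" * 32}
env = encrypt_for_recipients(b"family photo bytes", keys,
                             {"alice": "view,share", "bob": "view"})
assert decrypt_as(env, "bob", keys["bob"]) == (b"family photo bytes", "view")
```

Because only the small media key is wrapped once per recipient, the (potentially large) media data is encrypted a single time regardless of how many recipients are added.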
In various embodiments, the sharing user 120 may obtain the encrypted media file from the media encryption component 160 and may then send the encrypted media file to the media sharing service 170 themselves. As discussed above, in various embodiments, the recipient user 110 may later desire access to the encrypted media file. The recipient user 110 may then cause the media decryption component 180 of the user access components 115 to obtain the encrypted media file. In various embodiments, the media decryption component 180 may directly obtain the encrypted media file from the media sharing service. In other embodiments, the recipient user 110 may obtain the encrypted media file from the media sharing service 170 and may provide the encrypted media file to the media decryption component themselves. In yet other embodiments, the recipient user 110 may obtain the encrypted media file via another conduit, such as by being sent the encrypted media file directly from the sharing user 120. In various embodiments, the media decryption component 180 may be configured to decrypt the received encrypted media file, using a contemporaneously obtained biometric-based decryption key. In various embodiments, the media decryption component 180 may contemporaneously obtain the biometric-based decryption key from the key generation component 140 of the user access components 115. In various embodiments, the key generation component 140 may be configured to generate, in real time, a decryption key based at least in part on contemporaneously captured biometric data of the recipient user 110. In various embodiments, the biometric data capture component 130 may be configured to perform this contemporaneous capture of biometric data and to provide the captured biometric data to the key generation component 140 for real-time generation of the biometric-based decryption key.
In various embodiments, the media decryption component 180 may also be configured to check one or more access policies included in the received encrypted media file to determine if the recipient user may access media encrypted in the encrypted media file. In various embodiments, the media decryption component 180 may be configured to allow or deny particular requested accesses to the encrypted media file by the recipient user 110 based on the access policies. The media decryption component 180 may thus, in various embodiments, be configured to provide a decrypted media file to the recipient user 110 after decrypting the encrypted media file. In various embodiments, user access components 115 may be provided to corresponding computing devices (not shown) of recipient users 110. In some embodiments, user access components 115 may be provided to a shared computing device (not shown) for use by multiple recipient users 110. In various embodiments, both single- and multi-user arrangements may be provided. While the foregoing embodiments have been described with the encryption keys and media files being provided to the sharing user 120 and recipient users 110 through key maintenance component 150 and media sharing service 170 respectively, in alternate embodiments, the encryption keys and/or the media files may be exchanged between the sharing user 120 and the recipient users 110 directly. Figure 2 illustrates an example biometric-data-based media sharing process 200 of the biometric-data-based media-sharing system, in accordance with various embodiments. It may be recognized that, while the operations of process 200 are arranged in a particular order and illustrated once each, in various embodiments, one or more of the operations may be repeated, omitted, or performed out of order.
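The policy mediation described for the media decryption component 180, which allows or denies a particular requested access based on the policy recovered from the encrypted file, can be sketched as a simple allow-list check. The comma-separated policy format and the function name are illustrative assumptions, not the disclosed format.

```python
def allow_access(policy, requested_action):
    """Mediate a requested action ("view", "copy", "share", ...) against
    the user's decrypted access policy, modeled here as a comma-separated
    allow-list such as "view,share"."""
    allowed = {p.strip() for p in policy.split(",") if p.strip()}
    return requested_action in allowed

# A recipient whose policy grants viewing and sharing may share the media,
# but a view-only recipient is denied a copy request.
assert allow_access("view,share", "share") is True
assert allow_access("view", "copy") is False
```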
The process may begin at operation 210, where, in various embodiments, the BMS 100 may facilitate generation of encryption and/or decryption keys for sharing media files with the recipient user 110. Particular embodiments of operation 210 are described below with reference to process 300 of Figure 3. Next, at operation 220, the sharing user 120 may, in various embodiments, share encrypted media, such as with the recipient user 110. Particular embodiments of operation 220 are described below with reference to process 500 of Figure 5. Next, at operation 230, the recipient user may, in various embodiments, attempt to access the shared encrypted media. Particular embodiments of operation 230 are described below with reference to process 600 of Figure 6. The process may then end. Figure 3 illustrates an example encryption and/or decryption key generation process 300 of the biometric-data-based media-sharing system, in accordance with various embodiments. In various embodiments, process 300 may include one or more embodiments of operation 210 of process 200. It may be recognized that, while the operations of process 300 are arranged in a particular order and illustrated once each, in various embodiments, one or more of the operations may be repeated, omitted, or performed out of order. The process may begin at operation 310, where, in various embodiments, the biometric data capture component 130 may capture biometric data from the recipient user 110 to be used to generate encryption and decryption keys. Particular embodiments of operation 310 are described below with reference to process 400 of Figure 4. Next, at operation 320, the key generation component 140 may generate encryption and/or decryption keys based at least in part on the biometric data captured at operation 310. In various embodiments, the key generation component 140 may generate a private/public key pair at operation 320.
In some embodiments, the private/public key pair may be generated at operation 320 using RSA techniques, as described above. In other embodiments, the key generation component 140 may generate a symmetric key rather than a private/public key pair, or other types of encryption and/or decryption keys. In various embodiments where a private/public key pair is generated, the public key may be used as the encryption key, and/or the private key may be used as the decryption key. Next, at operation 330, the key generation component 140 may provide the encryption key generated at operation 320 to the key maintenance component 150. The process may then end. Figure 4 illustrates an example biometric data capture process 400 of the biometric-data-based media-sharing system, in accordance with various embodiments. In various embodiments, process 400 may include one or more embodiments of operation 310 of process 300. It may be recognized that, while the operations of process 400 are arranged in a particular order and illustrated once each, in various embodiments, one or more of the operations may be repeated, omitted, or performed out of order. The process may begin at operation 410, where the biometric data capture component 130 may receive a biometric data source. In some embodiments, the biometric data source may include an image of the recipient user 110. For example, in such an embodiment, the biometric data capture component 130 may direct a camera to capture an image of the recipient user. In other embodiments, the biometric data source may include a different source, such as, for example, a fingerprint image, a retinal image, an iris image, video of movement of the user, a silhouette, etc. Next, at operation 420, the biometric data capture component 130 may retrieve first pieces of biometric data from the received biometric data source. 
In various embodiments, the types of biometric data retrieved may be based, at least in part, on the type of the received biometric data source. For example, in some embodiments, when the biometric data source includes an image of a face, the pieces of biometric data may include data representing size, orientation, spacing, and/or location of one or more facial features which may be identified in the image. In another example, in some embodiments, when the biometric data source includes a fingerprint image, the pieces of biometric data may include data representing size, orientation, spacing, and/or location of one or more fingerprint ridge features which may be identified in the image. Next, at decision operation 425, the biometric data capture component 130 may determine if there are sufficient pieces of biometric data retrieved to generate encryption and/or decryption keys. In various embodiments, the biometric data capture component 130 may communicate with the key generation component 140 in order to determine if sufficient pieces of biometric data have been received. If sufficient pieces have not been retrieved, then at operation 430, an additional piece of biometric data may be retrieved and the biometric data capture component may return to decision operation 425 to determine if there are now sufficient pieces of biometric data retrieved to generate encryption and/or decryption keys. However, if sufficient pieces have been retrieved, then, in various embodiments, at operation 440, the pieces of biometric data may be provided for key generation. In various embodiments, the pieces may thus be stored for retrieval by the key generation component 140 or may be provided directly to the key generation component 140. The process may then end. Figure 5 illustrates an example media sharing process 500 of the biometric-data- based media-sharing system, in accordance with various embodiments. 
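The retrieval loop of process 400 above, which pulls pieces of biometric data from the source until the sufficiency check at decision operation 425 is satisfied, can be sketched as follows. The sufficiency threshold and feature names are illustrative assumptions only.

```python
def capture_biometric_pieces(source_features, required=5):
    """Mirror of process 400: retrieve pieces from the biometric source
    one at a time (operations 420/430) until there are enough to generate
    keys (decision operation 425), then provide them (operation 440)."""
    pieces = []
    it = iter(source_features)
    while len(pieces) < required:       # decision operation 425
        try:
            pieces.append(next(it))     # operations 420/430
        except StopIteration:
            raise ValueError("biometric source exhausted before "
                             "sufficient pieces were retrieved")
    return pieces                       # operation 440

# Hypothetical facial-feature measurements extracted from one image.
face_features = [("eye_spacing", 32), ("nose_width", 18),
                 ("mouth_width", 41), ("jaw_angle", 97),
                 ("brow_height", 12)]
assert len(capture_biometric_pieces(face_features)) == 5
```

In practice the sufficiency decision would come from the key generation component 140 (e.g., enough entropy for key generation) rather than a fixed count.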
In various embodiments, process 500 may include one or more embodiments of operation 220 of process 200. It may be recognized that, while the operations of process 500 are arranged in a particular order and illustrated once each, in various embodiments, one or more of the operations may be repeated, omitted, or performed out of order. The process may begin at operation 510, where the media encryption component 160 may receive a media file to be encrypted, such as from the sharing user 120. As discussed above, in various embodiments, the received media file may include one or more of, for example, an image, an audio file, a video file, a MIDI file, a PDF, and/or other types of media files. Next, at operation 520, the media encryption component 160 may encrypt the received media file with a symmetric encryption key to create encrypted media data. In various embodiments, the symmetric encryption key may or may not be associated with one or more of the sharing user 120, the received media file, and/or the receiving user 110. Next, at operation 530, the media encryption component 160 may determine an access policy for the media file after encryption. In various embodiments, the access policy may be associated with one or more of, for example: the received media file, the sharing user 120, the receiving user 110, the type of media being encrypted, rights provided by a creator of the media, and/or other considerations. In various embodiments, the access policy may direct access for one or more of, for example, viewing the media, listening to the media, sharing the media, storing the media, copying the media, editing the media, etc. At operation 540, the media encryption component 160 may then obtain an encryption key associated with the recipient user 110. As discussed above, in various embodiments, the encryption key may be a public key of a private/public key pair generated at operation 320 of process 300.
In various embodiments, the encryption key may be obtained from the key maintenance component 150. Next, at operation 550, in various embodiments, the media encryption component 160 may encrypt the symmetric encryption key used to encrypt the media file at operation 520 with the encryption key obtained from the key maintenance component 150. Additionally, in various embodiments, at operation 550 the media encryption component 160 may encrypt the access policy for the recipient user 110 with the encryption key obtained from the key maintenance component 150. Thus, the media encryption component 160 may generate encrypted metadata, in particular the encrypted symmetric media encryption key and the encrypted access policies, which may be used to decrypt the encrypted media data. This encrypted metadata may then be included in the encrypted media file for provisioning to the media sharing service 170. In various embodiments, instead of encrypting the media file with the symmetric media encryption key and encrypting the symmetric media encryption key with the encryption key received from the key maintenance component 150, the media encryption component 160 may encrypt the media file and/or the access policy/policies directly with the encryption key received from the key maintenance component 150. Next, at decision operation 555, the media encryption component 160 may determine whether there are additional recipient users 110 with which the sharing user 120 wishes to share the received media file. If so, the process may repeat at operation 530. If not, then at operation 560, the media encryption component 160 may provide the encrypted media file to the media sharing service 170 for later sharing with the recipient user 110. In other embodiments, the media encryption component 160 may provide the encrypted media file to another component, such as a storage device, or may provide the encrypted media file directly to the recipient user 110.
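Process 500's hybrid scheme (operations 520–550) encrypts the media with a symmetric key, then encrypts that key and the access policy with the recipient's biometric-derived key. The sketch below shows only the envelope structure: the SHA-256 XOR keystream is an insecure toy stand-in for a real cipher (and for the public-key step, which the disclosure notes may also be symmetric), and all names are hypothetical.

```python
import hashlib
import json
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream derived from SHA-256 counters -- NOT secure,
    used here only so the envelope structure is runnable."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def encrypt_for_recipient(media: bytes, policy: dict, recipient_key: bytes) -> dict:
    """Envelope-encrypt: symmetric key for the bulk media data, the
    recipient's biometric-derived key for the metadata (key + policy)."""
    media_key = os.urandom(32)                        # operation 520's key
    return {
        "media": _keystream_xor(media_key, media),    # operation 520
        "metadata": {                                 # operation 550
            "enc_media_key": _keystream_xor(recipient_key, media_key),
            "enc_policy": _keystream_xor(
                recipient_key, json.dumps(policy).encode("utf-8")),
        },
    }

def decrypt_as_recipient(bundle: dict, recipient_key: bytes):
    """Reverse the envelope: recover the media key and policy first."""
    meta = bundle["metadata"]
    media_key = _keystream_xor(recipient_key, meta["enc_media_key"])
    policy = json.loads(_keystream_xor(recipient_key, meta["enc_policy"]))
    return _keystream_xor(media_key, bundle["media"]), policy

key = b"k" * 32  # stands in for the biometric-derived recipient key
bundle = encrypt_for_recipient(b"holiday photo bytes", {"view": True}, key)
media, policy = decrypt_as_recipient(bundle, key)
assert media == b"holiday photo bytes" and policy == {"view": True}
```

The design point this illustrates is that only the small metadata needs the recipient-specific key, so the bulk media ciphertext can be shared once with many recipients.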
In some embodiments, the media encryption component may modify a form of the encrypted media file before providing it. For example, the encrypted media file may be printed as a photo in an encoded form which may be unintelligible to the recipient user without decryption. This form may allow the recipient user to scan the printed photo into an encrypted digital file and then access the encrypted media file such as described herein. The process may then end. Figure 6 illustrates an example media access process 600 of the biometric-data-based media-sharing system, in accordance with various embodiments. In various embodiments, process 600 may include one or more embodiments of operation 230 of process 200. It may be recognized that, while the operations of process 600 are arranged in a particular order and illustrated once each, in various embodiments, one or more of the operations may be repeated, omitted, or performed out of order. The process may begin at operation 610, where the media decryption component 180 of the user access components 115 may receive the encrypted media file. In some embodiments, at operation 610, the encrypted media file may be converted from a different form (e.g., scanning the printed encoded photo described above) in order to receive the encrypted media file. In various embodiments, the media decryption component 180 may also receive a type of access (such as viewing, editing, storing, etc.) desired by the recipient user 110 at operation 610. Next, at operation 620, the biometric data capture component 130 may contemporaneously capture biometric data from the recipient user 110 to use in generating in real-time a decryption key. Particular embodiments of operation 620 are described above with reference to process 400 of Figure 4. Next, at operation 630, the key generation component 140 may compute a decryption key using the captured biometric data.
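The encoded printable form described above can be illustrated with a simple text-encoding round trip; base64 here is only an assumed stand-in for whatever visual encoding (e.g., a barcode-like pattern) an implementation might actually print and scan.

```python
import base64

def to_printable_form(encrypted_media: bytes) -> str:
    """Render encrypted bytes as printable text that could appear on a
    photo print; unintelligible without the decryption step."""
    return base64.b64encode(encrypted_media).decode("ascii")

def from_scanned_form(printed: str) -> bytes:
    """Recover the encrypted digital file from the scanned form."""
    return base64.b64decode(printed.encode("ascii"))

blob = b"\x00\x17encrypted-bytes\xff"
assert from_scanned_form(to_printable_form(blob)) == blob
```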
In various embodiments, the key generation component 140 may generate a private/public key pair at operation 630 and use the private key as the decryption key. In some embodiments, the private/public key pair may be generated at operation 630 using RSA techniques, as described above. In various embodiments, the private key generated at operation 630 is identical to the private key generated at operation 320 of process 300. Next, at operation 640, the media decryption component 180 may decrypt one or more access policies and/or a symmetric media encryption key using the decryption key generated at operation 630. At operation 650, in various embodiments, the decrypted policy may be reviewed to determine if the access requested by the recipient user 110 is permitted according to the one or more decrypted access policies. At operation 655, in various embodiments, the media decryption component may determine whether the requested access is allowed. If the access is allowed, then at operation 660, the media decryption component 180 may decrypt the media data in the encrypted media file and provide access to the media. If not, then at operation 670, the media decryption component may deny access to the media. In other embodiments, where media data is encrypted directly with the encryption key received from the key maintenance component 150, then at operation 640 the media data may be decrypted using the decryption key determined at operation 630. In such embodiments, the media decryption component 180 may still determine if access is allowed and provide selective access at operations 650, 655, 660, and 670. The process may then end. In various embodiments, as described earlier, once used, the decryption key may be discarded. Figure 7 illustrates, for one embodiment, an example computing device 700 suitable for practicing embodiments of the present disclosure.
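Operations 640–670 of process 600, plus the discard-after-use behavior, can be sketched as an access gate; `derive_decryption_key` and `decrypt` are hypothetical callables standing in for the key generation and media decryption components.

```python
def request_access(encrypted_file, requested_access,
                   derive_decryption_key, decrypt):
    """Gate media access on the decrypted policy (operations 640-670).

    `derive_decryption_key` stands in for real-time biometric capture
    plus key generation (operations 620/630); `decrypt` stands in for
    the media decryption component. Returns the media, or None to deny.
    """
    key = derive_decryption_key()                         # 620/630
    try:
        policy = decrypt(encrypted_file["enc_policy"], key)   # 640
        if not policy.get(requested_access, False):           # 650/655
            return None                                       # 670: deny
        return decrypt(encrypted_file["enc_media"], key)      # 660
    finally:
        del key  # once used, the decryption key is discarded

# Toy usage with an identity "decrypt" so the gate logic is visible.
toy_decrypt = lambda blob, key: blob
f = {"enc_policy": {"view": True, "edit": False}, "enc_media": b"pic"}
assert request_access(f, "view", lambda: b"k", toy_decrypt) == b"pic"
assert request_access(f, "edit", lambda: b"k", toy_decrypt) is None
```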
As illustrated, example computing device 700 may include system control logic 708 coupled to at least one of the processor(s) 704, system memory 712 coupled to system control logic 708, non-volatile memory (NVM)/storage 716 coupled to system control logic 708, and one or more communications interface(s) 720 coupled to system control logic 708. In various embodiments, each of the one or more processors 704 may be a processor core. System control logic 708 for one embodiment may include any suitable interface controllers to provide for any suitable interface to at least one of the processor(s) 704 and/or to any suitable device or component in communication with system control logic 708. System control logic 708 may also interoperate with a display 706 for display of information, such as to a user. In various embodiments, the display may include one of various display formats and forms, such as, for example, liquid-crystal displays, cathode-ray tube displays, and e-ink displays. In various embodiments, the display may include a touch screen. System control logic 708 for one embodiment may include one or more memory controller(s) to provide an interface to system memory 712. System memory 712 may be used to load and store data and/or instructions, for example, for system 700. In one embodiment, system memory 712 may include any suitable volatile memory, such as suitable dynamic random access memory ("DRAM"), for example. System control logic 708, in one embodiment, may include one or more input/output ("I/O") controller(s) to provide an interface to NVM/storage 716 and communications interface(s) 720. NVM/storage 716 may be used to store data and/or instructions, for example.
NVM/storage 716 may include any suitable non-volatile memory, such as flash memory, for example, and/or may include any suitable non-volatile storage device(s), such as one or more hard disk drive(s) ("HDD(s)"), one or more solid-state drive(s), one or more compact disc ("CD") drive(s), and/or one or more digital versatile disc ("DVD") drive(s), for example. The NVM/storage 716 may include a storage resource physically part of a device on which the system 700 is installed or it may be accessible by, but not necessarily a part of, the device. For example, the NVM/storage 716 may be accessed over a network via the communications interface(s) 720. System memory 712, NVM/storage 716, and system control logic 708 may include, in particular, temporal and persistent copies of biometric-data-based media sharing logic 724. The biometric-data-based media sharing logic 724 may include instructions that when executed by at least one of the processor(s) 704 result in the system 700 practicing one or more aspects of the user access components 115, key maintenance service 150, and/or media sharing service 170, described above. Communications interface(s) 720 may provide an interface for system 700 to communicate over one or more network(s) and/or with any other suitable device. Communications interface(s) 720 may include any suitable hardware and/or firmware, such as a network adapter, one or more antennas, a wireless interface 722, and so forth. In various embodiments, communication interface(s) 720 may include an interface for system 700 to use NFC, optical communications (e.g., barcodes), Bluetooth, or other similar technologies to communicate directly (e.g., without an intermediary) with another device. In various embodiments, the wireless interface 722 may interoperate with radio communications technologies such as, for example, WCDMA, GSM, LTE, and the like.
Depending on whether computing device 700 is employed to host user access components 115, key maintenance service 150, and/or media sharing service 170, the capabilities and/or performance characteristics of processors 704, memory 712, and so forth may vary. In various embodiments, when used to host user access components 115, computing device 700 may be, but is not limited to, a smartphone, a computing tablet, an ultrabook, an e-reader, a laptop computer, a desktop computer, a set-top box, a game console, or a server. In various embodiments, when used to host key maintenance service 150 and/or media sharing service 170, computing device 700 may be, but is not limited to, one or more servers known in the art. For one embodiment, at least one of the processor(s) 704 may be packaged together with system control logic 708 and/or biometric-data-based media sharing logic 724. For one embodiment, at least one of the processor(s) 704 may be packaged together with system control logic 708 and/or biometric-data-based media sharing logic 724 to form a System in Package ("SiP"). For one embodiment, at least one of the processor(s) 704 may be integrated on the same die with system control logic 708 and/or biometric-data-based media sharing logic 724. For one embodiment, at least one of the processor(s) 704 may be integrated on the same die with system control logic 708 and/or biometric-data-based media sharing logic 724 to form a System on Chip ("SoC"). The following paragraphs describe examples of various embodiments. In various embodiments, an apparatus for decrypting an encrypted media file may include one or more computer processors. The apparatus may also include a decryption key generation component configured to be operated by the one or more computer processors. The decryption key generation component may be configured to receive a request for a decryption key to decrypt an encrypted media file.
The request may be generated in response to a user's request to access the encrypted media file. The media file may be encrypted using an encryption key generated based on previously provided biometric data of the user. The decryption key generation component may also be configured to generate, in response to the request, a decryption key based at least in part on real-time contemporaneously captured biometric data of the user. The decryption key generation component may also be configured to provide the decryption key for use to decrypt the encrypted media file. In various embodiments, the apparatus may further include a media decryption component configured to be operated by the one or more computer processors to decrypt the encrypted media file using the provided decryption key. In various embodiments, the decryption key and the encryption key may form a private/public key pair. In various embodiments, the apparatus may further include a biometric data capture component configured to capture biometric data of the user. In various embodiments, the biometric data capture component may include an image capture component. In various embodiments, the image capture component may be configured to be operated to capture biometric data from an image of the user's face. In various embodiments, the biometric data capture component may include a fingerprint capture component. In various embodiments, an apparatus for encrypting a media file may include one or more computer processors. The apparatus may include a media encryption component configured to be operated by the one or more computer processors to obtain an encryption key generated based on previously provided biometric data of a user. The media encryption component may also be configured to encrypt the media file to produce an encrypted media file such that the encrypted media file may be decrypted using a decryption key generated based on contemporaneously captured biometric data of the user.
The media encryption component may also be configured to provision the encrypted media file to be accessed by the user. In various embodiments, the media encryption component may encrypt the media file through encryption of the media data using a symmetric media encryption key, encryption of the symmetric media encryption key using a public encryption key that is part of a public/private key pair generated based on previously provided biometric data of the user, and inclusion of the encrypted symmetric media encryption key in the encrypted media file. In various embodiments, the media encryption component may encrypt the media file through encryption of an access policy associated with the user using a public encryption key that is part of a public/private key pair generated based on previously provided biometric data of the user and inclusion of the encrypted access policy associated with the user in the encrypted media file. In various embodiments, the media encryption component may obtain an encryption key from a key maintenance component. Computer-readable media (including non-transitory computer-readable media), methods, systems and devices for performing the above-described techniques are illustrative examples of embodiments disclosed herein. Additionally, other devices in the above-described interactions may be configured to perform various disclosed techniques. Although certain embodiments have been illustrated and described herein for purposes of description, a wide variety of alternate and/or equivalent embodiments or implementations calculated to achieve the same purposes may be substituted for the embodiments shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that embodiments described herein be limited only by the claims.
Where the disclosure recites "a" or "a first" element or the equivalent thereof, such disclosure includes one or more such elements, neither requiring nor excluding two or more such elements. Further, ordinal indicators (e.g., first, second or third) for identified elements are used to distinguish between the elements, and do not indicate or imply a required or limited number of such elements, nor do they indicate a particular position or order of such elements unless otherwise specifically stated. |
Particular embodiments described herein provide for a wearable electronic device. One particular implementation of a wearable electronic device may include a plurality of touch display screens in which each touch display screen is configured to display one or more images and includes a touch input device configured to receive a user interaction. The wearable electronic device may further include a control module in communication with the plurality of touch display screens. The control module includes a processor configured to receive a first interaction from a first touch display screen of the plurality of display screens, and send a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.
CLAIMS:

1. A wearable electronic device, comprising:
a plurality of touch display screens, each touch display screen configured to display one or more images and including a touch input device configured to receive a user interaction; and
a control module in communication with the plurality of touch display screens, the control module including a processor configured to:
receive a first interaction from a first touch display screen of the plurality of display screens; and
send a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.

2. The wearable electronic device of Claim 1, further including a strap portion, wherein the plurality of touch display screens are at least partially disposed upon the strap portion.

3. The wearable electronic device of any of Claims 1-2, wherein the first message further includes a first device identifier associated with the wearable electronic device.

4. The wearable electronic device of any of Claims 1-2, wherein the second electronic device includes a second wearable electronic device.

5. The wearable electronic device of any of Claims 1-2, wherein the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device.

6. The wearable electronic device of Claim 5, wherein the first touch display screen of the wearable electronic device is associated with the first display screen of the second electronic device.

7.
The wearable electronic device of any of Claims 1-2, wherein the processor is further configured to:
receive a second interaction from a second touch display screen of the plurality of display screens; and
send a second message including second information indicative of the second interaction and a second display screen identifier associated with the second touch display screen to the second electronic device.

8. The wearable electronic device of Claim 7, wherein the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device, and provide a second representation of the second interaction using a second display screen of the second electronic device.

9. The wearable electronic device of any of Claims 1-2, wherein the first interaction includes a pattern of interactions provided to a plurality of the touch display screens of the wearable electronic device, and wherein the first information of the first message is indicative of the pattern of touch inputs.

10. The wearable electronic device of any of Claims 1-2, wherein the processor is further configured to:
receive a third message including third information indicative of a third interaction provided to a third display screen of the second electronic device, and a third display screen identifier associated with the third display screen of the second electronic device.

11. The wearable electronic device of Claim 10, wherein the processor is further configured to:
provide a third presentation of the third interaction using a third touch display screen of the plurality of touch display screens, wherein the third touch display screen of the wearable electronic device is associated with the third display screen of the second electronic device.

12.
At least one computer readable storage medium comprising instructions, wherein the instructions when executed by at least one processor cause the at least one processor to:
receive a first interaction from a first touch display screen of a plurality of display screens of a wearable electronic device, wherein each touch display screen is configured to display one or more images and includes a touch input device configured to receive a user interaction; and
send a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.

13. The medium of Claim 12, wherein the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device.

14. The medium of Claim 13, wherein the first touch display screen of the wearable electronic device is associated with the first display screen of the second electronic device.

15. The medium of Claim 12, wherein the first interaction includes a pattern of interactions provided to a plurality of the touch display screens of the wearable electronic device, and wherein the first information of the first message is indicative of the pattern of touch inputs.

16. The medium of any of Claims 12-15, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to:
receive a second interaction from a second touch display screen of the plurality of display screens; and
send a second message including second information indicative of the second interaction and a second display screen identifier associated with the second touch display screen to the second electronic device.

17.
The medium of any of Claims 12-15, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to receive a third message including third information indicative of a third interaction provided to a third display screen of the second electronic device, and a third display screen identifier associated with the third display screen of the second electronic device.

18. The medium of Claim 17, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to provide a third presentation of the third interaction using a third touch display screen of the plurality of touch display screens, wherein the third touch display screen of the wearable electronic device is associated with the third display screen of the second electronic device.

19. A method comprising:
receiving a first interaction from a first touch display screen of a plurality of display screens of a wearable electronic device, wherein each touch display screen is configured to display one or more images and includes a touch input device configured to receive a user interaction; and
sending a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.

20. The method of Claim 19, wherein the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device.

21. The method of Claim 20, wherein the first touch display screen of the wearable electronic device is associated with the first display screen of the second electronic device.

22.
The method of Claim 19, wherein the first interaction includes a pattern of interactions provided to a plurality of the touch display screens of the wearable electronic device, and wherein the first information of the first message is indicative of the pattern of touch inputs.

23. The method of any of Claims 19-22, further comprising:
receiving a second interaction from a second touch display screen of the plurality of display screens; and
sending a second message including second information indicative of the second interaction and a second display screen identifier associated with the second touch display screen to the second electronic device.

24. The method of Claim 23, further comprising receiving a third message including third information indicative of a third interaction provided to a third display screen of the second electronic device, and a third display screen identifier associated with the third display screen of the second electronic device.

25. The method of Claim 24, further comprising providing a third presentation of the third interaction using a third touch display screen of the plurality of touch display screens, wherein the third touch display screen of the wearable electronic device is associated with the third display screen of the second electronic device.
MULTI-SCREEN WEARABLE ELECTRONIC DEVICE FOR WIRELESS COMMUNICATION

TECHNICAL FIELD

[0001] Embodiments described herein generally relate to a multi-screen wearable electronic device for wireless communication.

BACKGROUND

[0002] End users have more electronic device choices than ever before. A number of prominent technological trends are currently afoot (e.g., mobile electronic devices, smaller electronic devices, increased user connectivity, etc.), and these trends are changing the electronic device landscape. One of the technological trends currently afoot is electronic devices that can be worn by users, sometimes referred to as wearable electronic devices. Wearable electronic devices can be worn on a user's wrist, arm, ankle, etc. Electronic devices such as mobile phones provide features for typing and sending messages; however, this often requires the user to tediously type messages using a small interactive keyboard on the mobile phone. Although wearable electronic devices are quickly becoming a member of the technological ecosystem, interactions between device and user have yet to become streamlined and generally suffer from the same limitations as mobile phones for communicating messages.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] Embodiments are illustrated by way of example and not by way of limitation in the FIGURES of the accompanying drawings, in which like references indicate similar elements and in which:

[0004] FIGURES 1A-1C are simplified views illustrating a wearable electronic device for multi-screen communication in accordance with one embodiment of the present disclosure;

[0005] FIGURE 2 illustrates an embodiment of an example procedure for multiscreen communication using wearable electronic device;

[0006] FIGURE 3 is a simplified block diagram illustrating example logic that may be used to execute activities associated with wearable electronic device 10 in accordance with one embodiment;

[0007] FIGURE 4 is a simplified block diagram illustrating an
embodiment of a communication system for wireless communication between a first wearable electronic device and a second wearable electronic device;

[0008] FIGURES 5A-5C are simplified views illustrating a wearable electronic device for multi-screen communication in accordance with another embodiment of the present disclosure;

[0009] FIGURE 6 illustrates an embodiment of an example procedure for multiscreen communication using the wearable electronic device of FIGURES 5A-5C;

[0010] FIGURES 7A-7E illustrate example interactions of a user of the wearable electronic device in accordance with various embodiments; and

[0011] FIGURE 8 is a simplified flow diagram illustrating potential operations for the wearable electronic device in accordance with one embodiment of the present disclosure.

[0012] The FIGURES of the drawings are not necessarily drawn to scale, as their dimensions can be varied considerably without departing from the scope of the present disclosure.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

OVERVIEW

[0013] Example embodiments described herein provide for a wearable electronic device, such as an electronic bracelet, that includes a circuit board coupled to a plurality of electronic components (which may include any type of components, elements, circuitry, etc.). One particular implementation of a wearable electronic device may include a plurality of touch display screens in which each touch display screen is configured to display one or more images and includes a touch input device configured to receive a user interaction. The wearable electronic device may further include a control module in communication with the plurality of touch display screens.
The control module includes a processor configured to receive a first interaction from a first touch display screen of the plurality of display screens, and send a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.

[0014] In a particular embodiment, the wearable electronic device includes a strap portion, wherein the plurality of touch display screens are at least partially disposed upon the strap portion. In another embodiment, the first message further includes a first device identifier associated with the wearable electronic device. In still another embodiment, the second electronic device includes a second wearable electronic device. In another embodiment, the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device.

[0015] In another embodiment, the first touch display screen of the wearable electronic device is associated with the first display screen of the second electronic device. In still another embodiment, the processor is further configured to receive a second interaction from a second touch display screen of the plurality of display screens, and send a second message including second information indicative of the second interaction and a second display screen identifier associated with the second touch display screen to the second electronic device.

[0016] In still another embodiment, the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device, and provide a second representation of the second interaction using a second display screen of the second electronic device.
In another embodiment, the first interaction includes a pattern of interactions provided to a plurality of the touch display screens of the wearable electronic device, and the first information of the first message is indicative of the pattern of touch inputs.

[0017] In another embodiment, the processor is further configured to receive a third message including third information indicative of a third interaction provided to a third display screen of the second electronic device, and a third display screen identifier associated with the third display screen of the second electronic device. In still another embodiment, the processor is further configured to provide a third presentation of the third interaction using a third touch display screen of the plurality of touch display screens, wherein the third touch display screen of the wearable electronic device is associated with the third display screen of the second electronic device.

[0018] Another particular implementation of a wearable electronic device includes a plurality of touch display screens in which each touch display screen is configured to display one or more images and includes a touch input device configured to receive a user interaction. The wearable electronic device further includes a control module in communication with the plurality of touch display screens.
The control module includes logic, at least a portion of which is implemented in hardware, the logic configured to receive a first interaction from a first touch display screen of the plurality of display screens, and send a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.

[0019] A particular implementation of at least one computer readable storage medium comprises instructions, wherein the instructions when executed by at least one processor cause the at least one processor to receive a first interaction from a first touch display screen of a plurality of display screens of a wearable electronic device, wherein each touch display screen is configured to display one or more images and includes a touch input device configured to receive a user interaction, and send a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.

EXAMPLE EMBODIMENTS

[0020] The following detailed description sets forth example embodiments of apparatuses, methods, and systems relating to configurations for a wearable electronic device for multi-screen communication. Features such as structure(s), function(s), and/or characteristic(s), for example, are described with reference to one embodiment as a matter of convenience; various embodiments may be implemented with any suitable one or more of the described features.

[0021] FIGURE 1A is a simplified orthographic view illustrating a wearable electronic device 10 for multi-screen communication in accordance with one embodiment of the present disclosure. Wearable electronic device 10 can include a strap portion 12 having a first touch display screen 14a and a second touch display screen 14b disposed at least partially on an upper surface of strap portion 12.
Wearable electronic device 10 further includes a control module 16 disposed at least partially within or upon a surface of strap portion 12. Control module 16 is in communication with each of first touch display screen 14a and second touch display screen 14b.

[0022] In at least one embodiment, strap portion 12 may be of a semi-rigid construction to allow strap portion 12 to be wrapped around a wrist of a user. In still other embodiments, strap portion 12 may include clasp portions at opposing ends of strap portion 12 that are configured to be coupled together to allow wearable electronic device 10 to be worn around a wrist of a user. In one or more embodiments, one or more of strap portion 12, control module 16, first touch display screen 14a and second touch display screen 14b are composed of a flexible or semi-flexible material to allow bending to facilitate wearing of wearable electronic device 10 around the wrist or other body portion of the user.

[0023] In one or more embodiments, strap portion 12 may be of a solid unibody construction (as shown in FIGURES 1A-1C) or may include links, chains, cables, weaves, combinations thereof or the like. The ornamental design and material construction of strap portion 12 can be adjusted in any manner to suit any designer, manufacturer and/or vendor without departing from the scope of the embodiments described in the present disclosure.

[0024] In one or more embodiments, each of first touch display screen 14a and second touch display screen 14b can be a liquid crystal display (LCD) screen, transparent LCD screen, light-emitting diode (LED) display screen, transparent LED display screen, organic light-emitting diode (OLED) display screen, transparent OLED display screen or any other suitable display screen system.
In one or more embodiments, one or more of first touch display screen 14a and second touch display screen 14b include a touch input device, which may include a capacitive or resistive touchscreen layer over the screen of first touch display screen 14a and/or second touch display screen 14b.

[0025] FIGURE 1B is a simplified top plan view of wearable electronic device 10 in which first touch display screen 14a and second touch display screen 14b are shown disposed on the top surface of strap portion 12 so that they may be visible when wearable electronic device 10 is being worn by the user. FIGURE 1C illustrates a simplified bottom view of wearable electronic device 10 showing a bottom surface of strap portion 12.

[0026] In one or more embodiments, control module 16 of wearable electronic device 10 may further include a wireless communication module configured to communicate interactions of first touch display screen 14a and/or second touch display screen 14b by a user of wearable electronic device 10 with other wireless electronic devices such as another wearable electronic device associated with another user as will be further described herein.

[0027] FIGURE 2 illustrates an embodiment of an example procedure for multi-screen communication using wearable electronic device 10. In the embodiment illustrated in FIGURE 2, wearable electronic device 10 is worn upon a wrist 20 of a user. In a particular embodiment, first touch display screen 14a is configured as a screen with a larger viewing area than that of second touch display screen 14b. In particular embodiments, the larger first touch display screen 14a may be more suitable for viewing information and for interaction than second touch display screen 14b due to the larger area, but may have a higher power consumption than that of second touch display screen 14b.
In a particular embodiment, first touch display screen 14a and second touch display screen 14b may be constructed of either the same or different screen technologies. For example, in one embodiment, first touch display screen 14a may be an OLED screen and second touch display screen 14b may be an "e-ink" display. In accordance with various embodiments, first touch display screen 14a and second touch display screen 14b may operate as independent displays and/or mirrored displays to display information to the user. In a particular embodiment, the information displayed on second touch display screen 14b may include a subset of the information displayed on first touch display screen 14a.

[0028] In accordance with a particular embodiment, the user may move content from one display to another utilizing a swipe motion across one of the displays. For example, when information is currently being displayed by first touch display screen 14a, the user may swipe across first touch display screen 14a towards second touch display screen 14b. In response to receiving the touch input indicative of the swipe motion across first touch display screen 14a, control module 16 may cause the information currently being displayed by first touch display screen 14a or a subset of the information currently being displayed by first touch display screen 14a to be displayed by second touch display screen 14b. Further, in particular embodiments, control module 16 may cause first touch display screen 14a to display different information than previously displayed or to deactivate first touch display screen 14a to conserve power consumption.

[0029] Similarly, when information is currently being displayed by second touch display screen 14b, the user may swipe across second touch display screen 14b towards first touch display screen 14a.
In response to receiving the touch input indicative of the swipe motion across second touch display screen 14b, control module 16 may cause the information currently being displayed by second touch display screen 14b or information in addition to that currently being displayed by second touch display screen 14b to be displayed by first touch display screen 14a. Further, in particular embodiments, control module 16 may cause second touch display screen 14b to display different information than previously displayed or to deactivate second touch display screen 14b to conserve power consumption.

[0030] Accordingly, in one or more embodiments the user is able to view content on wearable electronic device 10 by utilizing the smaller second touch display screen 14b for quick viewing and/or interaction with displayed information and more efficient power consumption, and utilizing the larger first touch display screen 14a for better visibility and interaction experience.

[0031] In various embodiments, control module 16 includes a communication module configured to communicate with other wireless electronic devices such as another multi-screen wearable electronic device associated with a second user. In an example operation according to one embodiment, a first user of wearable electronic device 10 may interact with first touch display screen 14a to generate a message to a second user associated with a second wearable electronic device.
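The swipe behavior described above, moving displayed content (or a subset of it) from one screen to the other and deactivating the source screen to conserve power, might be sketched as follows. The screen names and the dictionary-based screen state are assumptions for illustration, not the actual control-module implementation.

```python
def handle_swipe(screens: dict, source: str, target: str, subset=None):
    """Move content from the `source` screen to the `target` screen.

    If `subset` is given, only that subset is shown on the target
    (e.g. when swiping from the larger screen to the smaller one).
    The source screen is deactivated to reduce power consumption.
    """
    content = screens[source]["content"]
    screens[target]["content"] = subset if subset is not None else content
    screens[target]["active"] = True
    screens[source]["active"] = False  # conserve power on the source screen
    return screens

screens = {
    "14a": {"content": ["time", "weather", "messages"], "active": True},
    "14b": {"content": None, "active": False},
}
handle_swipe(screens, "14a", "14b", subset=["time"])  # show only a subset on 14b
```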
Upon receiving the message, the second wearable electronic device may be configured to activate a display screen on the second wearable electronic device corresponding to first touch display screen 14a and present the message to the second user using the active display screen of the second wearable device. In still other embodiments, the first user may input a pattern of interactions using one or more of first touch display screen 14a and second touch display screen 14b and send a message representative of the pattern of interactions to the second wearable electronic device. The second wearable electronic device may be further configured to replay or reproduce the pattern of interactions using corresponding displays of the second wearable electronic device.

[0032] FIGURE 3 is a simplified block diagram illustrating example logic that may be used to execute activities associated with wearable electronic device 10 in accordance with one embodiment. In at least one example embodiment, wearable electronic device 10 can include a touch input device 302, a touch controller 304, a system memory 306, a nonvolatile memory and/or storage 308, a power management controller 310, processor(s) 312, display controller 314, and wireless communication module 316, each of which is coupled to system control logic 318. Display controller 314 is in further communication with first touch display screen 14a and second touch display screen 14b. In one or more embodiments, touch input device 302, touch controller 304, system memory 306, nonvolatile memory and/or storage 308, power management controller 310, processor(s) 312, display controller 314, first touch display screen 14a, second touch display screen 14b, wireless communication module 316, and system control logic 318 may be disposed at least partially within or upon a surface of housing 16.

[0033] Hence, the basic building blocks of any wearable electronic device system (e.g., processor, controller, memory, I/O, display, etc.)
can be used in conjunction with the teachings of the present disclosure. Certain components could be discrete or integrated into a System on Chip (SoC). In alternate implementations, certain embodiments may apply to mobile phones, tablet devices, etc., instead of wearable electronic devices.

[0034] System control logic 318, in at least one embodiment, can include any suitable interface controllers to provide for any suitable interface to at least one processor 312 and/or to any suitable device or component in communication with system control logic 318. System control logic 318, in at least one embodiment, can include one or more memory controllers to provide an interface to system memory 306. System memory 306 may be used to load and store data and/or instructions, for example, for wearable electronic device 10. System memory 306, in at least one embodiment, can include any suitable volatile memory, such as suitable dynamic random access memory (DRAM) for example. System memory 306 may store suitable software 320.

[0035] Non-volatile memory and/or storage device(s) 308 may be used to store data and/or instructions, for example within software 322. Non-volatile memory and/or storage device(s) 308 may include any suitable non-volatile memory, such as flash memory for example, and/or may include any suitable non-volatile storage device(s), such as one or more hard disc drives (HDDs), solid state drives (SSDs), etc. for example. In various embodiments, non-volatile memory and/or storage 308 includes a device identifier 324 associated with wearable electronic device 10 to uniquely identify wearable electronic device 10 from among other devices that may be associated with other users.

[0036] Power management controller 310 may include power management logic 326 configured to control various power management and/or power saving functions.
In at least one example embodiment, power management controller 310 is configured to reduce the power consumption of components or devices of wearable electronic device 10 that may either be operated at reduced power or turned off when one or more components of wearable electronic device 10 is in an inactive state (e.g., not being accessed, etc.). For example, in at least one embodiment, when one or more components of wearable electronic device 10 are in an inactive state, power management controller 310 may perform one or more of the following: power down the unused portion of touch input device 302; allow one or more of processor(s) 312 to go to a lower power state if less computing power is required during times of inactivity; power down one or more of first touch display screen 14a and second touch display screen 14b; and shut down any devices and/or components that may be unused when wearable electronic device 10 is in an inactive state. System control logic 318, in at least one embodiment, can include one or more I/O controllers to provide an interface to any suitable input/output device(s).

[0037] For at least one embodiment, at least one processor 312 may be packaged together with logic for one or more controllers of system control logic 318. In at least one embodiment, at least one processor 312 may be packaged together with logic for one or more controllers of system control logic 318 to form a System in Package (SiP). In at least one embodiment, at least one processor 312 may be integrated on the same die with logic for one or more controllers of system control logic 318.
For at least one embodiment, at least one processor 312 may be integrated on the same die with logic for one or more controllers of system control logic 318 to form a System on Chip (SoC).

[0038] For touch input, touch controller 304 may include touch sensor interface circuitry 328 coupled to one or more touch sensor(s) 330 to detect touch input(s) from the user upon first touch display screen 14a and second touch display screen 14b. Touch sensor interface circuitry 328 may include any suitable circuitry that may depend, for example, at least in part on the touch-sensitive technology used for touch input device 302.

[0039] Further for touch control, touch control logic 332 may be coupled to touch sensor interface circuitry 328 to help control touch sensor interface circuitry 328 in any suitable manner to detect touch input from the user. For touch control, touch control logic 332 for at least one example embodiment may also be coupled to system control logic 318 to output in any suitable manner digital touch input data corresponding to one or more touch inputs detected by touch sensor interface circuitry 328. Touch control logic 332 may be implemented using any suitable logic, including any suitable hardware, firmware, and/or software logic (e.g., non-transitory tangible media), that may depend, for example, at least in part on the circuitry used for touch sensor interface circuitry 328.

[0040] At least one processor 312 for at least one embodiment may execute any suitable software to process digital touch input data output from touch control logic 332. Suitable software may include, for example, any suitable driver software and/or any suitable application software.
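The touch path described in the preceding paragraphs, where touch sensor interface circuitry detects a raw touch, touch control logic converts it into digital touch input data, and processor software consumes that data, might be sketched as a simple three-stage pipeline. All function names and data shapes are assumptions for illustration.

```python
def sense_touch(raw_event):
    """Stand-in for touch sensor interface circuitry 328: detect a raw touch."""
    x, y, screen = raw_event
    return {"x": x, "y": y, "screen": screen}

def touch_control_logic(sensed):
    """Stand-in for touch control logic 332: emit digital touch input data."""
    return {"screen_id": sensed["screen"], "pos": (sensed["x"], sensed["y"])}

def process_touch(digital):
    """Stand-in for driver/application software executed by processor 312."""
    return f"touch on {digital['screen_id']} at {digital['pos']}"

# Run one touch event through the pipeline.
result = process_touch(touch_control_logic(sense_touch((12, 30, "14a"))))
```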
Display controller 314 is configured to control the display functions of first touch display screen 14a and second touch display screen 14b.

[0041] In one or more embodiments, wearable electronic device 10 can include wireless communication module 316 (e.g., Wi-Fi module, Bluetooth™ module, near field communication (NFC) module, or other wireless communication circuitry) to allow wearable electronic device 10 to communicate with one or more other electronic devices (wearable or not wearable) on a network through a wireless connection. The wireless connection may be any 3G/4G/LTE cellular wireless connection, WiFi/WiMAX connection, Bluetooth™ connection, or some other similar wireless connection. In one or more embodiments, the wireless communication circuitry can be configured to provide for two-way radio communications with another two-way radio capable device. In an embodiment, a plurality of antennas can be provisioned in conjunction with wearable electronic device 10, which may be associated with wireless connection activities. The antennas are reflective of electrical components that can convert electric currents into radio waves or radio signals. Wearable electronic device 10 may include logic to determine a best mode of communication using various signal measurement techniques, including, but not limited to, wireless beacons (to locate one or more Wi-Fi networks), received signal strength indicator (RSSI), link quality indicator (LQI), measurement reports for one or more 3G/4G/LTE cellular wireless connections, combinations thereof or the like.

[0042] In one or more embodiments, wearable electronic device 10 may be configured to operate using a replaceable battery, or in some cases, may be configured to operate using a rechargeable battery, each of which may be housed in housing portion 16.
In some embodiments, wearable electronic device 10 may include charging contacts, which can be used in combination with a charging device to facilitate charging a rechargeable battery within wearable electronic device 10. Virtually any means may be used to provide power and/or charging for wearable electronic device 10, and, thus, are clearly within the scope of the present disclosure.

[0043] Referring now to FIGURE 4, FIGURE 4 is a simplified block diagram illustrating an embodiment of a communication system 400 for wireless communication between a first wearable electronic device 10a and a second wearable electronic device 10b. Communication system 400 includes wearable electronic device 10a, one or more networks 402, a server 404, and second wearable electronic device 10b. In the embodiment illustrated in FIGURE 4, first wearable electronic device 10a includes first touch display screen 14a and second touch display screen 14b, and second wearable electronic device 10b includes a third touch display screen 14a' and a fourth touch display screen 14b'. In accordance with one or more embodiments, first touch display screen 14a of first wearable electronic device 10a is associated with and corresponds to third touch display screen 14a', and second touch display screen 14b of first wearable electronic device 10a is associated with and corresponds to fourth touch display screen 14b'. In at least one embodiment, first wearable electronic device 10a is in communication with network(s) 402 via a first wireless connection, and second wearable electronic device 10b is in communication with network(s) 402 via a second wireless connection. In particular embodiments, one or more of the first wireless connection and second wireless connection may be any 3G/4G/LTE cellular wireless, WiFi/WiMAX connection, Bluetooth™ or some other similar wireless connection.
In one or more embodiments, first wearable electronic device 10a is associated with a first user, and second wearable electronic device 10b is associated with a second user.

[0044] Network(s) 402 may be a series of points or nodes of interconnected communication paths for receiving and transmitting packets of information that propagate through network(s) 402. Network(s) 402 offers a communicative interface and may include any local area network (LAN), wireless local area network (WLAN), metropolitan area network (MAN), Intranet, Extranet, WAN, virtual private network (VPN), cellular network or any other appropriate architecture or system that facilitates communications in a network environment. Network(s) 402 can comprise any number of hardware or software elements coupled to (and in communication with) each other through a communications medium.

[0045] Server 404 is in communication with network(s) 402. In one or more embodiments, server 404 is configured to receive one or more messages transmitted by first wearable electronic device 10a indicative of a touch interaction with one or more of first touch display screen 14a and second touch display screen 14b, and send the one or more messages to second wearable electronic device 10b. Similarly, server 404 may be configured to receive one or more messages from second wearable electronic device 10b indicative of a touch interaction with one or more of third touch display screen 14a' and fourth touch display screen 14b'.

[0046] In example operations associated with FIGURE 4, the first user may interact with one or more of first touch display screen 14a and second touch display screen 14b of first wearable electronic device 10a. For example, the first user may type characters or input a sequence of touch inputs to one or more of first touch display screen 14a and second touch display screen 14b.
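Server 404's relay role described above can be sketched as a simple store-and-forward service: it accepts a message from one wearable device and delivers it to the device it is addressed to. The class name, method names, and message fields are assumptions for illustration.

```python
class RelayServer:
    """Minimal sketch of a message relay between wearable devices."""

    def __init__(self):
        self.inboxes = {}  # device_id -> list of delivered messages

    def register(self, device_id):
        """Make a device known to the server so it can receive messages."""
        self.inboxes[device_id] = []

    def relay(self, message):
        """Forward `message` to the inbox of the destination device."""
        self.inboxes[message["to"]].append(message)

server = RelayServer()
server.register("10a")
server.register("10b")
# Device 10a reports a touch interaction on its screen 14a, addressed to 10b.
server.relay({"to": "10b", "from": "10a", "screen_id": "14a", "interaction": "tap"})
```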
First wearable electronic device 10a may then send a first message to second wearable electronic device 10b including information indicative of the interaction input and a display screen identifier for each of first touch display screen 14a and second touch display screen 14b that received an interaction input from the first user. In a particular embodiment, the first message is sent from first wearable electronic device 10a to second wearable electronic device 10b via server 404.

[0047] In response to receiving the first message from first wearable electronic device 10a, third touch display screen 14a' of second wearable electronic device 10b may present a first representation of the input interactions provided to first touch display screen 14a of first wearable electronic device 10a. In addition, fourth touch display screen 14b' of second wearable electronic device 10b may present a second representation of the input interactions provided to second touch display screen 14b of first wearable electronic device 10a.

[0048] In accordance with various embodiments, the second user of second wearable electronic device 10b may provide one or more input interactions to one or more of third touch display screen 14a' and fourth touch display screen 14b', and second wearable electronic device 10b may send a second message to first wearable electronic device 10a including information indicative of the interaction input and a display screen identifier for each of third touch display screen 14a' and fourth touch display screen 14b' that received an interaction input from the second user.

[0049] In response to receiving the second message from second wearable electronic device 10b, first touch display screen 14a of first wearable electronic device 10a may present a third representation of the input interactions provided to third touch display screen 14a' of second wearable electronic device 10b.
In addition, second touch display screen 14b of first wearable electronic device 10a may present a fourth representation of the input interactions provided to fourth touch display screen 14b' of second wearable electronic device 10b.

[0050] FIGURE 5A is a simplified orthographic view illustrating a wearable electronic device 50 for multi-screen communication in accordance with another embodiment of the present disclosure. Wearable electronic device 50 can include a strap portion 52 having a plurality of touch display screens 54a-54l disposed at least partially on an upper surface of strap portion 52. Wearable electronic device 50 further includes a control module 56 disposed at least partially within or upon a surface of strap portion 52. Control module 56 is in communication with each of the plurality of touch display screens 54a-54l.

[0051] In at least one embodiment, strap portion 52 may be of a semi-rigid construction to allow strap portion 52 to be wrapped around a wrist of a user. In still other embodiments, strap portion 52 may include clasp portions at opposing ends of strap portion 52 that are configured to be coupled together to allow wearable electronic device 50 to be worn around a wrist of a user. In one or more embodiments, one or more of strap portion 52, control module 56, and touch display screens 54a-54l are composed of a flexible or semi-flexible material to allow bending to facilitate wearing of wearable electronic device 50 around the wrist or other body portion of the user.

[0052] In one or more embodiments, strap portion 52 may be of a solid unibody construction (as shown in FIGURES 5A-5C) or may include links, chains, cables, weaves, combinations thereof or the like.
The ornamental design and material construction of strap portion 52 can be adjusted in any manner to suit any designer, manufacturer and/or vendor without departing from the scope of the embodiments described in the present disclosure.

[0053] In one or more embodiments, each of touch display screens 54a-54l is a screen that can be a liquid crystal display (LCD) screen, transparent LCD screen, light-emitting diode (LED) display screen, transparent LED display screen, organic light-emitting diode (OLED) display screen, transparent OLED display screen or any other suitable display screen system. In one or more embodiments, one or more of touch display screens 54a-54l include a touch input device, which may include a capacitive or resistive touch screen layer over the screen of touch display screens 54a-54l.

[0054] Although the embodiment illustrated in FIGURE 5A shows touch display screens 54a-54l having a relatively random pattern of screen sizes and placements, in other embodiments the touch display screens may be of a uniform size and uniform grid pattern placement. In still other embodiments the sizes and/or placements of the touch display screens may be partially random and partially uniform. Additionally, although the widths of the touch display screens are shown as uniform in FIGURE 5A, in some embodiments the widths of the touch display screens may include, for example, single width touch display screens adjacent to double width touch display screens, triple width touch display screens, or any other desired widths. In still other embodiments, the touch display screens may be arranged in single columns, double columns, three columns, etc.
according to the desired size and/or capabilities of wearable electronic device 50.

[0055] FIGURE 5B is a simplified top plan view of wearable electronic device 50 in which touch display screens 54a-54l are shown disposed on the top surface of strap portion 52 so that they may be visible when wearable electronic device 50 is being worn by the user. FIGURE 5C illustrates a simplified bottom view of wearable electronic device 50 showing a bottom surface of strap portion 52.

[0056] In one or more embodiments, control module 56 of wearable electronic device 50 may further include a wireless communication module configured to communicate interactions of touch display screens 54a-54l by a user of wearable electronic device 50 with other wireless electronic devices such as another wearable electronic device associated with another user as will be further described herein.

[0057] FIGURE 6 illustrates an embodiment of an example procedure for multi-screen communication using wearable electronic device 50. In the embodiment illustrated in FIGURE 6, wearable electronic device 50 is worn upon a wrist 60 of a user. In various embodiments, control module 56 includes a communication module configured to communicate with other wireless electronic devices such as another multi-screen wearable electronic device associated with a second user.
In at least one embodiment, example logic that may be used to execute activities associated with wearable electronic device 50 may be similar to that described with respect to FIGURE 3 except that first touch display screen 14a and second touch display screen 14b may be replaced with touch display screens 54a-54l.

[0058] In an example operation according to one embodiment, a first user of wearable electronic device 50 may interact with one or more of touch display screens 54a-54l to create, send, view and/or reply to abstracted messages from or to another user, which may activate only certain screens of the transmitting and receiving device instead of all screens of the device. In one or more embodiments, the first user of wearable electronic device 50 may interact with one or more of touch display screens 54a-54l, for example to create a single or multi-screen message, pattern, and/or design. Wearable electronic device 50 may then send a first message indicative of the message, pattern, or design to a second wearable electronic device associated with a second user having a plurality of touch display screens corresponding to touch display screens 54a-54l of wearable electronic device 50. In a particular embodiment, the second wearable electronic device includes touch display screens that correspond to and are associated with touch display screens 54a-54l. In response to receiving the first message, the second wearable electronic device may activate only the screens of the second wearable electronic device that correspond to those of touch display screens 54a-54l that were used to generate the message, pattern, and/or design.

[0059] Similarly, wearable electronic device 50 may be configured to receive, from the second wearable device, a message indicative of one or more touch inputs from the second user to the touch display screens of the second wearable device.
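The selective-activation behavior in this example operation, activating only the receiving device's screens that correspond to the screens used on the sending device, might be sketched as follows. The screen-correspondence map and screen identifiers are assumptions for illustration.

```python
def activate_for_message(all_screens, used_screens, screen_map):
    """Return which local screens to activate for an incoming message.

    `used_screens` lists the sender's screens that produced the message;
    `screen_map` maps each sender screen to the corresponding local screen.
    Screens with no corresponding entry stay inactive.
    """
    targets = {screen_map[s] for s in used_screens}
    return {screen: (screen in targets) for screen in all_screens}

screen_map = {"54b": "54b'", "54c": "54c'", "54d": "54d'"}
active = activate_for_message(
    all_screens=["54b'", "54c'", "54d'", "54e'"],
    used_screens=["54b", "54d"],  # screens used to draw the message
    screen_map=screen_map,
)
```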
In response, wearable electronic device 50 may be configured to activate and present a representation of the interactions of the second user using the corresponding touch display screens 54a-54l of wearable electronic device 50 in order to replay or reproduce the interactions of the second user. The interaction with targeted screens instead of all of the screens of wearable electronic device 50 may increase the efficiency of use and may allow for a more creative and enjoyable user experience.

[0060] FIGURES 7A-7E illustrate example interactions of a user of wearable electronic device 50 in accordance with various embodiments. In FIGURE 7A, a finger 70 of a user presses touch display screen 54d of wearable electronic device 50 to cause powering on of wearable electronic device 50. In FIGURE 7B, finger 70 of the user draws a first design 72 of a first color across touch display screen 54d, touch display screen 54b, and touch display screen 54c. In FIGURE 7C, finger 70 of the user selects touch display screen 54e currently displaying a second color in order to designate the second color as the current color. In FIGURE 7D, finger 70 of the user draws a second design 74 of the second color across touch display screen 54c. During the drawing of first design 72 and second design 74, wearable electronic device 50 may send one or more messages indicative of the designs to a second wearable electronic device that may be configured to replay the designs using corresponding touch display screens of the second wearable electronic device. In a particular embodiment, the second wearable electronic device may display a representation of the first design 72 and second design 74 almost instantaneously with the drawing of the first design 72 and second design 74 by the user of wearable electronic device 50.

[0061] FIGURE 7E illustrates an example interaction between a first wearable electronic device 50 in communication with a second wearable electronic device.
FIGURE 7E illustrates a strap portion 52' and touch display screens 54b'-54f' of the second wearable electronic device. FIGURE 7E shows first design 72 and second design 74 in the process of being drawn on touch display screens 54b-54d of the first wearable electronic device and communicated to the second wearable electronic device. Upon receiving the communication, the second wearable electronic device displays a representation of first design 72 as a first representation 72' and a representation of second design 74 as a second representation 74' upon touch display screens 54b'-54d' as first design 72 and second design 74 are being drawn. In an alternative embodiment, the first wearable electronic device may send the message indicative of first design 72 and second design 74 after completion of one or more of first design 72 and second design 74.[0062] In at least one embodiment, the first wearable electronic device may communicate directly with the second wearable electronic device. In another embodiment, the first wearable electronic device may communicate with the second wearable electronic device via a server. In another embodiment, the first wearable electronic device may be tethered to a first communication device such as a first smartphone, and the second wearable electronic device may be tethered to a second communication device such as a second smartphone. In such an embodiment, the first wearable electronic device may communicate with the first communication device, the first communication device may communicate with the second communication device, and the second communication device may communicate with the second wearable electronic device.
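The screen-targeted messaging described above can be sketched as a simple data structure: a message carries the sending device's identifier, the identifiers of only the screens that were actually touched, and the stroke data, so the receiving device activates just the corresponding screens. This is only an illustrative sketch; the names (`Message`, `ScreenStroke`, `replay_targets`) and field layout are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ScreenStroke:
    screen_id: str   # e.g. "54b" -- which touch display screen was drawn on
    points: list     # (x, y) samples of the finger path on that screen
    color: str = "red"

@dataclass
class Message:
    device_id: str   # identifier of the sending wearable device
    strokes: list = field(default_factory=list)

def replay_targets(msg):
    """Return only the screens the receiver should activate (the mirrors of
    the screens the sender actually used); all other screens stay off."""
    return sorted({s.screen_id for s in msg.strokes})

msg = Message(device_id="wearable-50", strokes=[
    ScreenStroke("54d", [(0, 0), (5, 2)]),
    ScreenStroke("54b", [(1, 1)]),
    ScreenStroke("54c", [(2, 3)], color="blue"),
])
print(replay_targets(msg))  # only the three screens used by the sender
```

A receiver holding twelve screens would thus light up only three of them, which is the efficiency gain the passage describes.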
In still another embodiment, the first communication device may communicate with the second communication device via a server.[0063] Referring now to FIGURE 8, FIGURE 8 is a simplified flow diagram 800 illustrating potential operations for wearable electronic device 10/50 in accordance with one embodiment of the present disclosure. In 802, control module 16 receives a first interaction from a first touch display screen of the plurality of touch display screens of the wearable electronic device. Each touch display screen is configured to display one or more images and includes a touch input device configured to receive an interaction from a first user associated with the wearable electronic device. In 804, control module 16 sends, to an electronic device associated with a second user, a first message including first information indicative of the first interaction and, optionally, a first display screen identifier associated with the first touch display screen. In a particular embodiment, the wearable electronic device 10/50 may include a strap portion, wherein the plurality of touch display screens are at least partially disposed upon the strap portion. In particular embodiments, the first message further includes a first device identifier associated with the wearable electronic device. In other particular embodiments, the electronic device associated with the second user includes another wearable electronic device. [0064] In accordance with various embodiments, the electronic device associated with the second user is configured to provide a first presentation of the first interaction using a first display screen of the electronic device associated with the second user.
In still other embodiments, the first touch display screen of the wearable electronic device is associated with the first display screen of the electronic device associated with the second user.[0065] In 806, control module 16 receives a second interaction from a second touch display screen of the plurality of display screens. In 808, control module 16 sends a second message including second information indicative of the second interaction and a second display screen identifier associated with the second touch display screen to an electronic device associated with the second user. In accordance with various embodiments, the electronic device associated with the second user is configured to provide a first presentation of the first interaction using a first display screen of the electronic device associated with the second user, and provide a second representation of the second interaction using a second display screen of the electronic device associated with the second user. In at least one embodiment, the first interaction includes a pattern of interactions provided to a plurality of the touch display screens of the wearable electronic device, and the first information of the first message is indicative of the pattern of touch inputs.[0066] In 810, control module 16 receives a third message including third information indicative of a third interaction provided to a third display screen of the electronic device associated with the second user, and may include a third display screen identifier associated with the third display screen of the electronic device associated with the second user. In 812, the wearable electronic device provides a third presentation of the third interaction using a third touch display screen of the plurality of touch display screens and the operations end. 
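The operations of flow diagram 800 (802 through 812) amount to: forward each local touch interaction together with its screen identifier, and present each remote interaction on the associated local screen. A minimal sketch of that control-module logic follows; the class and method names and the transport interface are assumptions made for illustration only.

```python
class ControlModule:
    """Illustrative sketch of flow 800; names are assumptions."""

    def __init__(self, device_id, transport):
        self.device_id = device_id
        self.transport = transport  # object exposing send(msg)
        self.displayed = {}         # screen_id -> last interaction shown

    def on_touch(self, screen_id, interaction):
        # 802/806: an interaction arrives from one of the touch display screens.
        # 804/808: forward it with the screen and device identifiers.
        self.transport.send({
            "device_id": self.device_id,
            "screen_id": screen_id,
            "interaction": interaction,
        })

    def on_remote_message(self, msg):
        # 810/812: present the remote interaction on the associated local screen.
        self.displayed[msg["screen_id"]] = msg["interaction"]

class LoopbackTransport:
    """Test double that records sent messages instead of transmitting them."""
    def __init__(self):
        self.sent = []
    def send(self, msg):
        self.sent.append(msg)

t = LoopbackTransport()
cm = ControlModule("wearable-10", t)
cm.on_touch("screen-1", "swipe-right")
cm.on_remote_message({"device_id": "peer", "screen_id": "screen-3",
                      "interaction": "draw-circle"})
print(t.sent[0]["screen_id"], cm.displayed["screen-3"])
```

The screen identifier in each outgoing message is what lets the peer map the interaction onto its corresponding display, as in operations 804 and 808.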
In one or more embodiments, the third touch display screen of the wearable electronic device is associated with the third display screen of the device associated with the second user.[0067] The example means and methods described above are only a few of the many means and methods that may be used to communicate using wearable communication devices 10 and 50. Virtually any other means could be used and, thus, are clearly within the scope of the present disclosure.[0068] Note that in some example implementations, the functions outlined herein may be implemented in conjunction with logic that is encoded in one or more tangible, non-transitory media (e.g., embedded logic provided in an application-specific integrated circuit (ASIC), in digital signal processor (DSP) instructions, software [potentially inclusive of object code and source code] to be executed by a processor or other similar machine, etc.). In some of these instances, memory elements can store data used for the operations described herein. This can include the memory elements being able to store software, logic, code, or processor instructions that are executed to carry out the activities described herein. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein. In one example, the processors could transform an element or an article (e.g., data) from one state or thing to another state or thing.
In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), a DSP, an erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) or an ASIC that can include digital logic, software, code, electronic instructions, or any suitable combination thereof.[0069] Program instructions may be used to cause a general-purpose or special-purpose processing system that is programmed with the instructions to perform the operations described herein. Alternatively, the operations may be performed by specific hardware components that contain hardwired logic for performing the operations, or by any combination of programmed computer components and custom hardware components. The methods described herein may be provided as a computer program product that may include one or more non-transitory, tangible, machine readable media having stored thereon instructions that may be used to program a processing system or other electronic device to perform the methods. The term "machine readable medium" used herein shall include any medium that is capable of storing or encoding a sequence of instructions for execution by the machine and that cause the machine to perform any one of the methods described herein. The term "non-transitory machine readable medium" shall accordingly include, but not be limited to, memories such as solid-state memories, optical and magnetic disks. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, logic, and so on) as taking an action or causing a result.
Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action or produce a result.[0070] It is imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., width, length, thickness, materials, etc.) have been offered for purposes of example and teaching only. Each of these may be varied considerably without departing from the spirit of the present disclosure or the scope of the appended claims. The specifications apply only to one non-limiting example and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.[0071] Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words "means for" or "step for" are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.
EXAMPLE EMBODIMENT IMPLEMENTATIONS[0072] The following examples pertain to embodiments in accordance with this Specification. Note that all optional features of the apparatuses and systems described above may also be implemented with respect to the method or process described herein and specifics in the examples may be used anywhere in one or more embodiments.[0073] Example 1 is a wearable electronic device, comprising: a plurality of touch display screens, each touch display screen configured to display one or more images and including a touch input device configured to receive a user interaction; and a control module in communication with the plurality of touch display screens, the control module including a processor configured to: receive a first interaction from a first touch display screen of the plurality of display screens; and send a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.[0074] In Example 2, the subject matter of Example 1 can optionally include a strap portion, wherein the plurality of touch display screens are at least partially disposed upon the strap portion.[0075] In Example 3, the subject matter of any of Examples 1-2 can optionally include wherein the first message further includes a first device identifier associated with the wearable electronic device.[0076] In Example 4, the subject matter of any of Examples 1-3 can optionally include wherein the second electronic device includes a second wearable electronic device.[0077] In Example 5, the subject matter of any of Examples 1-4 can optionally include wherein the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device.[0078] In Example 6, the subject matter of Example 5 can optionally include wherein the first touch display screen of the wearable
electronic device is associated with the first display screen of the second electronic device.[0079] In Example 7, the subject matter of any of Examples 1-6 can optionally include wherein the processor is further configured to: receive a second interaction from a second touch display screen of the plurality of display screens; and send a second message including second information indicative of the second interaction and a second display screen identifier associated with the second touch display screen to the second electronic device.[0080] In Example 8, the subject matter of Example 7 can optionally include wherein the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device, and provide a second representation of the second interaction using a second display screen of the second electronic device.[0081] In Example 9, the subject matter of any of Examples 1-8 can optionally include wherein the first interaction includes a pattern of interactions provided to a plurality of the touch display screens of the wearable electronic device, and wherein the first information of the first message is indicative of the pattern of touch inputs.[0082] In Example 10, the subject matter of any of Examples 1-9 can optionally include wherein the processor is further configured to: receive a third message including third information indicative of a third interaction provided to a third display screen of the second electronic device, and a third display screen identifier associated with the third display screen of the second electronic device.[0083] In Example 11, the subject matter of Example 10 can optionally include wherein the processor is further configured to: provide a third presentation of the third interaction using a third touch display screen of the plurality of touch display screens, wherein the third touch display screen of the wearable electronic device is associated with 
the third display screen of the second electronic device.[0084] Example 12 is a wearable electronic device comprising a plurality of touch display screens, each touch display screen configured to display one or more images and including a touch input device configured to receive a user interaction, and a control module in communication with the plurality of touch display screens, the control module including logic, at least a portion of which is partially implemented in hardware, the logic configured to: receive a first interaction from a first touch display screen of the plurality of display screens; and send a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.[0085] In Example 13, the subject matter of Example 12 can optionally include wherein the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device.[0086] In Example 14, the subject matter of any of Examples 12-13 can optionally include wherein the first touch display screen of the wearable electronic device is associated with the first display screen of the second electronic device.[0087] In Example 15, the subject matter of any of Examples 12-14 can optionally include wherein the logic is further configured to: receive a second interaction from a second touch display screen of the plurality of display screens; and send a second message including second information indicative of the second interaction and a second display screen identifier associated with the second touch display screen to the second electronic device.[0088] In Example 16, the subject matter of Example 15 can optionally include wherein the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device, and 
provide a second representation of the second interaction using a second display screen of the second electronic device.[0089] In Example 17, the subject matter of any of Examples 12-16 can optionally include wherein the first interaction includes a pattern of interactions provided to a plurality of the touch display screens of the wearable electronic device, and wherein the first information of the first message is indicative of the pattern of touch inputs.[0090] In Example 18, the subject matter of any of Examples 12-17 can optionally include wherein the logic is further configured to: receive a third message including third information indicative of a third interaction provided to a third display screen of the second electronic device, and a third display screen identifier associated with the third display screen of the second electronic device.[0091] In Example 19, the subject matter of Example 18 can optionally include wherein the logic is further configured to: provide a third presentation of the third interaction using a third touch display screen of the plurality of touch display screens, wherein the third touch display screen of the wearable electronic device is associated with the third display screen of the second electronic device.[0092] Example 20 is at least one computer readable storage medium comprising instructions, wherein the instructions when executed by at least one processor cause the at least one processor to: receive a first interaction from a first touch display screen of a plurality of display screens of a wearable electronic device, wherein each touch display screen is configured to display one or more images and includes a touch input device configured to receive a user interaction; and send a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.[0093] In Example 21, the subject matter of Example 20
can optionally include wherein the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device.[0094] In Example 22, the subject matter of Example 21 can optionally include wherein the first touch display screen of the wearable electronic device is associated with the first display screen of the second electronic device.[0095] In Example 23, the subject matter of any of Examples 20-22 can optionally include wherein the instructions, when executed by the at least one processor, further cause the at least one processor to: receive a second interaction from a second touch display screen of the plurality of display screens; and send a second message including second information indicative of the second interaction and a second display screen identifier associated with the second touch display screen to the second electronic device.[0096] In Example 24, the subject matter of any of Examples 20-23 can optionally include wherein the instructions, when executed by the at least one processor, further cause the at least one processor to receive a third message including third information indicative of a third interaction provided to a third display screen of the second electronic device, and a third display screen identifier associated with the third display screen of the second electronic device. 
[0097] In Example 25, the subject matter of Example 24 can optionally include wherein the instructions, when executed by the at least one processor, further cause the at least one processor to provide a third presentation of the third interaction using a third touch display screen of the plurality of touch display screens, wherein the third touch display screen of the wearable electronic device is associated with the third display screen of the second electronic device.[0098] Example 26 is a method comprising: receiving a first interaction from a first touch display screen of a plurality of display screens of a wearable electronic device, wherein each touch display screen is configured to display one or more images and includes a touch input device configured to receive a user interaction; and sending a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.[0099] In Example 27, the subject matter of Example 26 can optionally include wherein the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device.[0100] In Example 28, the subject matter of Example 27 can optionally include wherein the first touch display screen of the wearable electronic device is associated with the first display screen of the second electronic device.[0101] In Example 29, the subject matter of any of Examples 26-28 can optionally include receiving a second interaction from a second touch display screen of the plurality of display screens; and sending a second message including second information indicative of the second interaction and a second display screen identifier associated with the second touch display screen to the second electronic device.[0102] In Example 30, the subject matter of any of Examples 26-29 can optionally include receiving a third
message including third information indicative of a third interaction provided to a third display screen of the second electronic device, and a third display screen identifier associated with the third display screen of the second electronic device. [0103] In Example 31, the subject matter of Example 30 can optionally include providing a third presentation of the third interaction using a third touch display screen of the plurality of touch display screens, wherein the third touch display screen of the wearable electronic device is associated with the third display screen of the second electronic device.[0104] Example 32 is an apparatus comprising means for performing the method of any one of Examples 26-31.[0105] In Example 33, the subject matter of Example 32 can optionally include wherein the means for performing the method comprise a processor and a memory.[0106] In Example 34, the subject matter of Example 33 can optionally include wherein the memory comprises machine readable instructions that, when executed, cause the apparatus to perform the method of any one of Examples 26-31.[0107] In Example 35, the subject matter of any one of Examples 32-34 can optionally include wherein the apparatus is a computing system.[0108] Example 36 is at least one computer readable medium comprising instructions that, when executed, implement a method or realize an apparatus as described in any one of Examples 1-19 or 26-31.[0109] Example 37 is an apparatus comprising: means for receiving a first interaction from a first touch display screen of a plurality of display screens of a wearable electronic device, wherein each touch display screen is configured to display one or more images and includes a touch input device configured to receive a user interaction; and means for sending a first message including first information indicative of the first interaction and a first display screen identifier associated with the first touch display screen to a second electronic device.[0110]
In Example 38, the subject matter of Example 37 can optionally include wherein the second electronic device is configured to provide a first presentation of the first interaction using a first display screen of the second electronic device.[0111] In Example 39, the subject matter of Example 38 can optionally include wherein the first touch display screen of the wearable electronic device is associated with the first display screen of the second electronic device. [0112] In Example 40, the subject matter of any of Examples 37-39 can optionally include means for receiving a second interaction from a second touch display screen of the plurality of display screens; and means for sending a second message including second information indicative of the second interaction and a second display screen identifier associated with the second touch display screen to the second electronic device.[0113] In Example 41, the subject matter of Example 40 can optionally include means for receiving a third message including third information indicative of a third interaction provided to a third display screen of the second electronic device, and a third display screen identifier associated with the third display screen of the second electronic device.[0114] In Example 42, the subject matter of Example 41 can optionally include means for providing a third presentation of the third interaction using a third touch display screen of the plurality of touch display screens, wherein the third touch display screen of the wearable electronic device is associated with the third display screen of the second electronic device.
A memory sub-system configured to dynamically determine input/output sizes of write commands based on a media physical layout of the memory sub-system. The memory sub-system can identify, dynamically in response to write commands being selected for execution in media units of the memory sub-system, a portion of a media layout that maps from logical addresses identified by the write commands in the logical address space to physical addresses of memory units in the media units. Based on the media layout, an input/output size for a next write command is identified and transmitted to the host system in a response. The host system generates the next write command and configures the amount of data to be written through the next write command based on the input/output size identified in the response.
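The round trip the abstract describes can be sketched in a few lines: the host issues a write command; the subsystem's response carries the input/output size the host should use for its next write (derived here from the smallest atomically programmable page, per the claims below); and the host sizes its next command accordingly. Everything here, including the page-size figures and the initial default, is an illustrative assumption, not the patent's implementation.

```python
class Subsystem:
    """Toy memory subsystem that suggests the next write's I/O size."""

    def __init__(self, page_sizes):
        self.page_sizes = list(page_sizes)  # atomic page size per media unit

    def handle_write(self, data):
        # (The data would be written per the dynamically generated layout.)
        # The response carries the size the host should use next: the
        # smallest next available atomically programmable page.
        return {"status": "ok", "next_io_size": min(self.page_sizes)}

class Host:
    def __init__(self, subsystem):
        self.subsystem = subsystem
        self.io_size = 4096  # initial default, an assumption

    def write(self, payload):
        chunk = payload[: self.io_size]       # size the command as suggested
        resp = self.subsystem.handle_write(chunk)
        self.io_size = resp["next_io_size"]   # adopt for the next command
        return chunk

sub = Subsystem(page_sizes=[16384, 8192])
host = Host(sub)
first = host.write(b"x" * 100000)   # sized by the initial default
second = host.write(b"x" * 100000)  # sized by the subsystem's suggestion
print(len(first), len(second))
```

The point of the feedback loop is that each write lands in a whole atomically programmable page, avoiding partial-page programming on the media.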
1.A method comprising:receiving a write command from the host system in the memory subsystem;identifying a first input/output size for a next write command from the host system based on the physical layout of the media;A response configured to identify at least the first input/output size is transmitted from the memory subsystem to the host system, wherein the host system is configured to be based on the first input/output identified in the response output size to generate the next write command; andThe next write command is received in the memory subsystem, instructing the memory subsystem to write into the memory subsystem an amount of data configured according to the first input/output size.2.The method of claim 1, further comprising:dynamically generating and storing a portion of the physical layout of the media in response to selecting the write command for execution in a media unit of the memory subsystem, the portion from the logical address space created by the write command the identified logical address is mapped to the physical address of the memory unit in the media unit;wherein the response is configured to include a status of a first write command processed in the memory subsystem.3.The method of claim 2, further comprising:It is determined that the first write command has a second input/output size that is different from the first input/output size, wherein the response is configured to indicate that the second input/output size is incorrect.4.3. The method of claim 3, wherein the next write command is transmitted from the host system to the memory subsystem to replace the first write command.5.2. The method of claim 2, wherein the first input/output size is determined as a size of data that can be written to one of the media units in an atomic write operation based on the media physical layout .6.2. 
The method of claim 2, wherein the first input/output size is determined as a minimum size of next available memory pages, each of which may be based on the media physical layout One of the media units is written in an atomic write operation.7.6. The method of claim 6, wherein the minimum size is based on a pattern of programming data in a next available memory page atomically programmable in one of the media units.8.7. The method of claim 7, wherein the mode is one of a plurality of modes supported in the memory subsystem; and the plurality of modes comprise:Single Level Cell (SLC) mode;Multilevel Cell (MLC) mode;Three Level Cell (TLC) mode; andFour-level cell (QLC) mode.9.9. The method of claim 8, wherein the next available memory page is a NAND flash memory page programmable via a multi-pass programming technique.10.9. The method of claim 9, wherein the NAND flash memory page includes a plurality of planes of NAND memory cells.11.The method of claim 10, wherein the portion of the media physical layout includes a mapping between logical block addressing (LBA) addresses in a namespace and blocks of NAND memory in separate integrated circuit dies; And the input/output size is determined based on an entry of a page map that identifies the pattern of the next available page in a block of NAND memory cells.12.A non-transitory computer storage medium storing instructions that, when executed in a memory subsystem, cause the memory subsystem to perform a method, the method comprising:receiving a write command from a host system in the memory subsystem;identifying a first input/output size for a next write command from the host based on the physical layout of the media;A response configured to identify at least the first input/output size is transmitted from the memory subsystem to the host system, wherein the host system is configured to be based on the first input/output identified in the response output size to generate the next write command; andThe next write command 
is received in the memory subsystem, instructing the memory subsystem to write into the memory subsystem an amount of data configured according to the first input/output size.13.The non-transitory computer storage medium of claim 12, wherein the method further comprises:dynamically generating and storing a portion of the physical layout of the media in response to selecting the write command for execution in a media unit of the memory subsystem, the portion from the logical address space created by the write command the identified logical address is mapped to the physical address of the memory unit in the media unit;wherein the logical address space is defined in a namespace of the memory subsystem; the namespace is configured with a plurality of regions; and the write command is configured to write in the plurality of regions simultaneously.14.A memory subsystem includes:multiple media units capable of simultaneously writing data; andat least one processing device configured to:receiving a first write command from the host system in the memory subsystem;identifying a first input/output size for a second write command from the host based on the physical layout of the media;transmitting a response to the first write command from the memory subsystem to the host system, wherein the response is configured to identify at least the first input/output size, and wherein the host system is configured to generate the second write command based on the first input/output size identified in the response; andThe second write command is received in the memory subsystem, instructing the memory subsystem to write to the memory subsystem an amount of data configured according to the first input/output size.15.15. 
The memory subsystem of claim 14, wherein the response is configured to include a status of the first write command processed in the memory subsystem; and the processing device is further configured to:dynamically generating and storing a portion of the physical layout of the media in response to selecting the first write command for execution in a media unit of the memory subsystem, the portion starting at a logical address by the first command the logical addresses identified in the space are mapped to the physical addresses of the memory units in the media unit;determining that the first write command has a second input/output size that is different from the first input/output size; andThe response is configured to indicate that the second input/output size is incorrect.16.15. The memory subsystem of claim 14, wherein the first input/output size is determined as data writable in one of the media units in an atomic write operation based on the media physical layout the size of.17.15. The memory subsystem of claim 14, wherein the first input/output size is determined as a minimum size of next available memory pages, each of which can be based on the media The physical layout is written in one of the media units in an atomic write operation.18.18. The memory subsystem of claim 17, wherein the minimum size is based on a pattern of programming data in the next available page atomically programmable in one of the media units; the pattern is at one of a plurality of modes supported in the memory subsystem; and the plurality of modes include:Single Level Cell (SLC) mode;Multilevel Cell (MLC) mode;Three Level Cell (TLC) mode; andFour-level cell (QLC) mode.19.18. The memory subsystem of claim 17, wherein the next available page is a NAND flash memory page programmable via a multi-pass programming technique; and the NAND flash memory page includes multiple planes of NAND memory cells.20.19. 
The memory subsystem of claim 19, wherein the portion of the media physical layout comprises a mapping between a logical block addressing (LBA) address in a namespace and a block of NAND memory cells in an integrated circuit die; and the input/output size is determined based on an entry of a page map that identifies the mode of the next available page in the block of NAND memory cells.
INPUT/OUTPUT SIZE CONTROL BETWEEN A HOST SYSTEM AND A MEMORY SUB-SYSTEM

RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application No. 62/844,067, filed on May 6, 2019 and entitled "Input/Output Size Control between a Host System and a Memory Sub-System," and to U.S. Patent Application Serial No. 16/865,247, filed on May 1, 2020 and entitled "Input/Output Size Control between a Host System and a Memory Sub-System," the entire disclosures of which are hereby incorporated by reference herein.

TECHNICAL FIELD

At least some embodiments disclosed herein relate generally to memory systems and, more specifically but not by way of limitation, to input/output size control between a host system and a memory subsystem.

BACKGROUND

A memory subsystem can include one or more memory devices that store data. The memory devices can be, for example, non-volatile memory devices and volatile memory devices. In general, a host system can utilize a memory subsystem to store data at the memory devices and to retrieve data from the memory devices.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements.

FIG. 1 illustrates an example computing system that includes a memory subsystem in accordance with some embodiments of the present disclosure.

FIG. 2 shows an input/output size manager that controls the granularity of input/output between a host system and a memory subsystem.

FIG. 3 shows an example of a memory subsystem having dynamic data placement and input/output size control.

FIG. 4 illustrates an example of data structures configured to support dynamic data placement and input/output size control.

FIG. 5 shows a method of input/output size control.

FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure can operate.

DETAILED DESCRIPTION

At least some aspects of the present disclosure are directed to input/output size control for a host system to write data into a memory
subsystem. The memory subsystem can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system can utilize a memory subsystem that includes one or more components, such as memory devices, that store data. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.

Traditionally, a host system can send write commands to a memory subsystem to write data at a fixed, predetermined size or granularity. For example, the data to be stored into the memory subsystem via each write command from the host system has the same fixed, predetermined amount/size. However, in some situations a fixed input/output size can result in considerable performance loss, an increased time for which data is buffered in the memory subsystem, and/or programming of data into the memory subsystem using alternative, less efficient methods.

At least some aspects of the present disclosure address the above and other deficiencies via an input/output size control mechanism implemented between the host system and the memory subsystem. For example, based on the current state of the media layout used to place data in the media of the memory subsystem, the input/output size manager can determine a preferred size of input/output for the next write command. The preferred size is equal to the amount of data that the memory subsystem can program into a media unit in a single atomic operation. For example, the memory subsystem can have NAND flash memory. Using a single-pass programming technique, an atomic write operation in a NAND device can program/store data into a single-plane page, a dual-plane page, a quad-plane page, or a multi-plane page.
Using a multi-pass programming technique, an atomic write operation in a NAND device can program/store data into a page in SLC (single level cell) mode, a page in MLC (multi-level cell) mode, a page in TLC (triple level cell) mode, or a page in QLC (quad-level cell) mode. Pages programmed in atomic write operations can have different sizes in different modes. For example, using a multi-pass programming method, an SLC page may have a size of 64 kilobytes (KB), a TLC page may have a size of 128 KB, and a QLC page may have a size of 64 KB. When data pages for different write streams of different programming modes are interleaved in a NAND device, the host system may not be able to predict the size of the next write command that will fit the write stream. The memory subsystem can determine the preferred input/output size based on the state of the media layout and communicate the size to the host system (e.g., via a status field in the response to the current command). The input/output size provided in the response can be used to configure the next write command. In some situations, when the input/output size of a write command from the host system is not preferred (e.g., does not match the preferred size for the next write operation), the memory subsystem can communicate an error status with the preferred size to the host system, to cause the host system to adjust its write commands to the preferred size.

FIG. 1 illustrates an example computing system 100 that includes a memory subsystem 110 in accordance with some embodiments of the present disclosure. Memory subsystem 110 can include media, such as one or more volatile memory devices (e.g., memory device 102), one or more non-volatile memory devices (e.g., memory device 104), or a combination of these.

Memory subsystem 110 can be a storage device, a memory module, or a hybrid of a storage device and memory module.
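The mode-dependent atomic page sizes described above can be sketched minimally as follows. This is an illustration only: the SLC/TLC/QLC values come from the example in the text, the MLC value is an assumption added for completeness, and real firmware would derive all of these from the media physical layout (e.g., a page map).

```python
KB = 1024

# Atomic page size per programming mode. The SLC, TLC, and QLC values follow
# the example in the text above; the MLC value is a hypothetical placeholder.
PAGE_SIZE_BY_MODE = {
    "SLC": 64 * KB,
    "MLC": 128 * KB,   # assumption for illustration
    "TLC": 128 * KB,
    "QLC": 64 * KB,    # as given in the example above
}

def preferred_io_size(next_page_mode: str) -> int:
    """Amount of data programmable into the next page in one atomic write."""
    return PAGE_SIZE_BY_MODE[next_page_mode]
```

A host that receives this size in a response would issue its next write command with exactly `preferred_io_size(mode)` bytes of data, so that each command fills one atomically programmable page.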
Examples of storage devices include a solid state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded multi-media controller (eMMC) drive, a universal flash storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory modules (NVDIMMs).

Computing system 100 can be a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., an airplane, a drone, a train, an automobile, or another conveyance), an Internet of Things (IoT) enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such a computing device that includes memory and a processing device.

Computing system 100 can include a host system 120 that is coupled to one or more memory subsystems 110. FIG. 1 illustrates one example of a host system 120 coupled to one memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, and the like.

Host system 120 can include a processor chipset (e.g., processing device 118) and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., controller 116) (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a Peripheral Component Interconnect Express (PCIe) controller, a Serial Advanced Technology Attachment (SATA) controller).
Host system 120 uses memory subsystem 110, for example, to write data to memory subsystem 110 and to read data from memory subsystem 110.

Host system 120 can be coupled to memory subsystem 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, a Fibre Channel, a serial attached SCSI (SAS) interface, a double data rate (DDR) memory bus, a small computer system interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., a DIMM socket interface that supports double data rate (DDR)), an open NAND flash interface (ONFI), double data rate (DDR), low power double data rate (LPDDR), or any other interface. The physical host interface can be used to transmit data between host system 120 and memory subsystem 110. When memory subsystem 110 is coupled with host system 120 via the PCIe interface, host system 120 can further utilize an NVM Express (NVMe) interface to access components (e.g., memory device 104). The physical host interface can provide an interface for passing control, address, data, and other signals between memory subsystem 110 and host system 120. FIG. 1 illustrates a memory subsystem 110 as an example. In general, host system 120 can access multiple memory subsystems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.

Processing device 118 of host system 120 can be, for example, a microprocessor, a central processing unit (CPU), a processing core of a processor, an execution unit, etc. In some instances, controller 116 can be referred to as a memory controller, a memory management unit, and/or an initiator. In one example, controller 116 controls the communications over a bus coupled between host system 120 and memory subsystem 110.
In general, controller 116 can send commands or requests to memory subsystem 110 for desired access to memory devices 102, 104. Controller 116 can further include interface circuitry to communicate with memory subsystem 110. The interface circuitry can convert responses received from memory subsystem 110 into information for host system 120.

Controller 116 of host system 120 can communicate with controller 115 of memory subsystem 110 to perform operations such as reading data, writing data, or erasing data at memory devices 102, 104, and other such operations. In some instances, controller 116 is integrated within the same package as processing device 118. In other instances, controller 116 is separate from the package of processing device 118. Controller 116 and/or processing device 118 can include hardware such as one or more integrated circuits (ICs) and/or discrete components, a buffer memory, a cache memory, or a combination thereof. Controller 116 and/or processing device 118 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

Memory devices 102, 104 can include any combination of the different types of non-volatile memory components and/or volatile memory components. Volatile memory devices (e.g., memory device 102) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).

Some examples of non-volatile memory components include "NAND" type flash memory and write-in-place memory, such as three-dimensional cross-point ("3D cross-point") memory. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array.
Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).

Each of the memory devices 104 can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs), can store multiple bits per cell. In some embodiments, each of the memory devices 104 can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of such arrays of memory cells. In some embodiments, a particular memory device can include an SLC portion, and an MLC portion, a TLC portion, or a QLC portion of memory cells. The memory cells of memory device 104 can be grouped as pages, which can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks.

Although non-volatile memory devices such as 3D cross-point and NAND type memory (e.g., 2D NAND, 3D NAND) are described, memory device 104 can be based on any other type of non-volatile memory, such as read only memory (ROM
), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), spin transfer torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), NOR flash memory, and electrically erasable programmable read-only memory (EEPROM).

Memory subsystem controller 115 (or controller 115, for simplicity) can communicate with memory devices 104 to perform operations such as reading data, writing data, or erasing data at memory devices 104 and other such operations (e.g., in response to commands scheduled on a command bus by controller 116). Controller 115 can include hardware such as one or more integrated circuits (ICs) and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. Controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

Controller 115 can include a processing device 117 (processor) configured to execute instructions stored in a local memory 119. In the illustrated example, local memory 119 of memory subsystem controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of memory subsystem 110, including handling communications between memory subsystem 110 and host system 120.

In some embodiments, local memory 119 can include memory registers storing memory pointers, fetched data, etc. Local memory 119 can also include read-only memory (ROM) for storing micro-code. Although the example memory subsystem 110 in FIG.
1 has been illustrated as including memory subsystem controller 115, in another embodiment of the present disclosure, a memory subsystem 110 does not include a controller 115 and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, controller 115 can receive commands or operations from host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to memory devices 104. Controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., a physical block address) that are associated with memory devices 104. Controller 115 can further include host interface circuitry to communicate with host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access memory devices 104, as well as convert responses associated with memory devices 104 into information for host system 120.

Memory subsystem 110 can also include additional circuitry or components that are not illustrated. In some embodiments, memory subsystem 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from controller 115 and decode the address to access memory devices 104.

In some embodiments, memory devices 104 include local media controllers 105 that operate in conjunction with memory subsystem controller 115 to execute operations on one or more memory cells of memory devices 104.
An external controller (e.g., memory subsystem controller 115) can externally manage memory device 104 (e.g., perform media management operations on memory device 104). In some embodiments, memory device 104 is a managed memory device, which is a raw memory device combined with a local controller (e.g., local controller 105) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device.

Computing system 100 includes an input/output size manager 113 in memory subsystem 110 that determines the preferred input/output sizes for storing/programming/committing/writing data atomically into the media of memory subsystem 110. In some embodiments, controller 115 in memory subsystem 110 includes at least a portion of input/output size manager 113. In other embodiments, or in combination, controller 116 and/or processing device 118 in host system 120 includes at least a portion of input/output size manager 113. For example, controller 115, controller 116, and/or processing device 118 can include logic circuitry implementing input/output size manager 113. For example, controller 115, or processing device 118 (processor) of host system 120, can be configured to execute instructions stored in memory for performing the operations of input/output size manager 113 described herein. In some embodiments, input/output size manager 113 is implemented in an integrated circuit chip disposed in memory subsystem 110. In other embodiments, input/output size manager 113 is part of an operating system of host system 120, a device driver, or an application.

Input/output size manager 113 can determine the preferred size for the next write command from the host system from the media physical layout that maps logical addresses into the media units/memory devices 102-104.
For example, input/output size manager 113 can determine the preferred size to be 64 KB or 128 KB based on whether the next page is to be programmed in SLC mode, MLC mode, TLC mode, or QLC mode. In general, there can be many causes of non-uniformity in the page sizes available for atomic write operations. The disclosed techniques for addressing non-uniformity are not limited to specific causes of non-uniformity of memory pages available for atomic write operations. Input/output size manager 113 can provide the preferred size to host system 120 in a response to a completed command. In response, host system 120 sizes the next write command issued to memory subsystem 110 accordingly. Further details regarding the operations of input/output size manager 113 are described below.

FIG. 2 shows an input/output size manager 113 that controls the granularity of input/output between a host system 120 and a memory subsystem 110. For example, the input/output size control technique of FIG. 2 can be implemented in computer system 100 of FIG. 1.

In FIG. 2, host system 120 sends commands 121, 123, ... to store data into media 203 of memory subsystem 110. A command (e.g., 121 or 123) contains the size (e.g., 141 or 143) of the data to be written into media 203 and the logical address (e.g., 142 or 144) at which the data is to be stored in media 203.

Memory subsystem 110 has a media layout 130 that specifies the mapping between the addresses (e.g., 142 and 144) used in the commands (e.g., 121 and 123) received in memory subsystem 110 from host system 120 and the physical memory locations in memory media 203 of the memory subsystem.

In some implementations, media layout 130 is dynamically generated in response to the write commands from host system 120. For example, media 203 can have multiple media units 109A-109N (e.g., memory devices 102 and/or 104 illustrated in FIG. 1) that are capable of writing data in parallel.
At least some of the write commands in parallel streams from host system 120 can be executed in memory subsystem 110 in parallel when committing data into memory media 203 of memory subsystem 110. However, a media unit can support one write operation at a time. Thus, if two write commands are mapped by media layout 130 to operate on the same media unit (e.g., 109A or 109N), an access conflict occurs. Each conflict increases the time the corresponding data is buffered in the memory subsystem before the data can be written into media 203. To avoid conflicts, media layout 130 can be determined dynamically at the time media units (e.g., 109A and 109N) are determined to be available for the execution of write commands.

For example, the determination of the portion of media layout 130 for the logical address (e.g., 142) used in an incoming write command (e.g., 121) can be postponed until the write command (e.g., 121) can be executed without conflict. When memory media 203 is configured on integrated circuit dies (e.g., as NAND memory cells), the media layout determination can be based on the identification of the integrated circuit dies that are available for performing write operations at the time of input/output scheduling. Media layout 130 is determined such that the logical addresses of commands to be executed in parallel are mapped to different integrated circuit dies that are available for concurrent/parallel operations without conflict. Thus, media access conflicts among write commands from different active streams can be avoided entirely.

In general, a write stream includes a set of commands to write, trim, and rewrite a data set together as a group. In the group, the data can be written in the logical space sequentially, randomly, or pseudo-sequentially. Preferably, the data in the group is written into an erase block set, where memory cells in the erase block set store data for the stream but not data from other streams.
The erase block set can be erased to remove the data of the stream without erasing the data of other streams. In some instances, when the logical addresses of different streams are mapped into the same erase block set, the data of the different streams cannot be erased individually, and conflicts can occur. Such conflicts can also be avoided through the dynamic media layout technique.

Different write streams can be configured to store data in media 203 in different modes. For example, one write stream can store data in the memory cells of media 203 in SLC mode or MLC mode; and another write stream can store data in the memory cells of media 203 in TLC mode or QLC mode. As a result, host system 120 may not be able to predict the preferred size or granularity of data for configuring write commands.

Memory subsystem 110 has an input/output size manager 113 configured to determine the preferred size or granularity of data for write commands. Input/output size manager 113 is configured to communicate the preferred size to host system 120 via a response (e.g., 131 or 133) transmitted from memory subsystem 110 to host system 120.

For example, after the execution/processing of command 121, response 131 is transmitted from memory subsystem 110 to host system 120. Response 131 is configured to include the preferred size 143 for the next command 123. After receiving response 131, host system 120 can configure the next command 123 to have the preferred size 143.
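The response-based size negotiation just described can be sketched as a small simulation. All class and field names here are hypothetical; in practice the preferred size would travel in a status/completion field of the storage protocol, and the sequence of preferred sizes would come from the media layout rather than a fixed list.

```python
# Hypothetical sketch of the preferred-size handshake: the memory subsystem
# answers each write command with a response carrying the preferred size for
# the next command, and the host uses that size for its next write.

from dataclasses import dataclass

@dataclass
class WriteCommand:
    logical_address: int
    size: int

@dataclass
class Response:
    ok: bool               # False indicates an input/output size error
    preferred_size: int    # preferred size for the next write command

class MemorySubsystem:
    def __init__(self, preferred_sizes):
        # Stand-in for sizes derived from the media layout (e.g., page map).
        self._sizes = iter(preferred_sizes)
        self._next = next(self._sizes)

    def execute(self, cmd: WriteCommand) -> Response:
        ok = cmd.size == self._next          # matches the atomic page size?
        if ok:
            self._next = next(self._sizes, self._next)
        return Response(ok=ok, preferred_size=self._next)

class Host:
    def __init__(self, subsystem, first_size):
        self.subsystem = subsystem
        self.next_size = first_size

    def write(self, lba: int) -> bool:
        resp = self.subsystem.execute(WriteCommand(lba, self.next_size))
        self.next_size = resp.preferred_size  # adjust subsequent commands
        return resp.ok
```

With preferred sizes of, say, 64, 128, and 64 units, each host write succeeds because the host resizes every subsequent command from the previous response.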
After the execution/processing of command 123, input/output size manager 113 can provide the preferred size 145 for the next command in the response 133 to command 123 transmitted from memory subsystem 110 to host system 120.

In some embodiments, when a command (e.g., 121) received in memory subsystem 110 has an input/output size (e.g., 141) that differs from the preferred size (e.g., 143) determined from media layout 130, input/output size manager 113 can generate a response (e.g., 131) to the command (e.g., 121) to indicate an error in the input/output size of the command (e.g., 121) and to provide the correct input/output size (e.g., 143). In view of the response (e.g., 131), host system 120 can modify the command (e.g., 121) and generate a replacement command (e.g., 123) of the correct size (e.g., 143).

In alternative embodiments, memory subsystem 110 can execute commands (e.g., 121) of non-preferred sizes (e.g., with degraded performance and/or an extended buffer time for the data of command 121). The response (e.g., 131) allows host system 120 to correct the input/output sizes of subsequent commands (e.g., 123).

FIG. 3 shows an example of a memory subsystem having dynamic data placement and input/output size control. For example, the memory subsystem of FIG. 3 can be implemented in memory subsystem 110 of FIG. 1 with input/output size manager 113 of FIG. 2. However, the techniques of FIGS. 1 and 2 are not limited to the implementation of the memory subsystem illustrated in FIG. 3. For example, the conflict avoidance technique can be implemented in a plain block device, a device that supports namespaces, or a device that supports zoned namespaces (e.g., the memory subsystem illustrated in FIG. 3). Thus, the disclosure presented herein is not limited to the example of FIG. 3.

In FIG. 3, a namespace 201 is configured on the media storage capacity of memory subsystem 110.
Namespace 201 provides a logical block addressing space that can be used by host system 120 to specify memory locations for read or write operations. Namespace 201 can be allocated on a portion of the media storage capacity of memory subsystem 110, or on the entire media storage capacity of memory subsystem 110. In some instances, multiple namespaces can be allocated on separate, non-overlapping portions of the media storage capacity of memory subsystem 110.

In FIG. 3, namespace 201 is configured with a plurality of zones 211, 213, ..., 219. Each zone (e.g., 211) in the namespace allows random read access to the LBA addresses in the zone (e.g., 211) and sequential write access to the LBA addresses in the zone (e.g., 211), but does not allow random write access to random LBA addresses in the zone (e.g., 211). Thus, data is written into a zone (e.g., 211) in a predetermined, sequential order in the LBA address space of namespace 201.

When a zone (e.g., 211) in namespace 201 is configured, it is possible to predetermine the media layout for the zone (e.g., 211) (e.g., for simplicity). The LBA addresses in the zone (e.g., 211) can be pre-mapped into media 203 of memory subsystem 110. However, as discussed above, such a predetermined media layout can cause media access conflicts when there are multiple parallel write streams. Randomizing the mapping from LBA addresses in the zone (e.g., 211) to the memory locations in media 203 can reduce, but not eliminate, the conflicts.

Preferably, a dynamic data placer 153 is configured in memory subsystem 110 to create portions of media layout 130 at the time the write commands are scheduled for execution, to eliminate the conflicts entirely. In some embodiments, dynamic data placer 153 is part of input/output size manager 113.

For example, media 203 of memory subsystem 110 can have multiple integrated circuit dies 205, ..., 207. Each of the integrated circuit dies (e.g., 205) can have multiple planes 221, ..., 223 of memory cells (e.g., NAND memory cells).
Each of the planes (e.g., 221) can have multiple blocks 231, ..., 233 of memory cells (e.g., NAND memory cells). Each of the blocks (e.g., 231) can have multiple pages 241, ..., 243 of memory cells (e.g., NAND memory cells). The memory cells in each page (e.g., 241) are configured to be programmed to store/write/commit data together in an atomic operation; and the memory cells in each block (e.g., 231) are configured to be erased together in an atomic operation.

When a write command (e.g., 121) for storing data in one zone (e.g., 211) and another write command (e.g., 123) for storing data in another zone (e.g., 213) are scheduled for parallel execution, as a result of two integrated circuit dies (e.g., 205 and 207) being available for concurrent operations, dynamic data placer 153 maps the LBA addresses of the write commands (e.g., 121 and 123) into pages located in the different dies (e.g., 205 and 207). Thus, media access conflicts can be avoided.

FIG. 4 illustrates an example of data structures configured to support dynamic data placement and input/output size control. For example, media layout 130 of FIG. 2 or 3 can be implemented using the data structures of FIG. 4.

In FIG. 4, a zone map 301 is configured to provide media layout information for a zone (e.g., 211) in a namespace (e.g., 201). Zone map 301 can have multiple entries. Each entry in zone map 301 identifies information about a zone (e.g., 211), such as a starting LBA address 311 of the zone (e.g., 211), a block set identifier 313 of the zone (e.g., 211), a cursor value 315 of the zone (e.g., 211), a state 317 of the zone (e.g., 211), etc.

Host system 120 starts writing data in a zone (e.g., 211) at the zone starting LBA address 311. Host system 120 writes data in the zone (e.g., 211) sequentially in the LBA space.
After an amount of data has been written into the zone (e.g., 211), the current starting LBA address for writing subsequent data is identified by the cursor value 315. Each write command for the zone moves the cursor value 315 to a new starting LBA address for the next write command for the zone. The state 317 can have a value indicating that the zone (e.g., 211) is empty, full, implicitly open, explicitly open, closed, etc.

In FIG. 4, a logical-to-physical block map 303 is configured to facilitate the translation of LBA addresses (e.g., 331) into physical addresses in the media (e.g., 203).

Logical-to-physical block map 303 can have multiple entries. An LBA address (e.g., 331) can be used as, or converted into, an index for an entry in logical-to-physical block map 303. The index can be used to look up the entry for the LBA address (e.g., 331). Each entry in logical-to-physical block map 303 identifies, for an LBA address (e.g., 331), the physical address of a block of memory in the media (e.g., 203). For example, the physical address of the block of memory in the media (e.g., 203) can include a die identifier 333, a block identifier 335, a page map entry identifier 337, etc.

Die identifier 333 identifies a specific integrated circuit die (e.g., 205 or 207) in media 203 of memory subsystem 110.

Block identifier 335 identifies a specific block of memory (e.g., NAND flash memory) within the integrated circuit die (e.g., 205 or 207) that is identified using die identifier 333.

Page map entry identifier 337 identifies an entry in a page map 305.

Page map 305 can have multiple entries. Each entry in page map 305 can include a page identifier 351 that identifies a page of memory cells within a block of memory cells (e.g., NAND memory cells). For example, page identifier 351 can include a wordline number for the page and a sub-block number for the page in the block of NAND memory cells. Further, the entry for the page can include a programming mode 353 of the page.
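The zone map, logical-to-physical block map, and page map entries described above can be sketched as plain records. This is an illustration only; the field names mirror the reference numerals in the text, and the cursor-advance helper is a hypothetical addition showing how sequential writes move the zone cursor.

```python
from dataclasses import dataclass

@dataclass
class ZoneMapEntry:              # one entry per zone in zone map 301
    zone_starting_lba: int       # 311
    block_set_id: int            # 313
    cursor_value: int            # 315: next LBA to write in the zone
    state: str                   # 317: "empty", "open", "full", "closed", ...

@dataclass
class L2PEntry:                  # entry in logical-to-physical block map 303
    die_id: int                  # 333
    block_id: int                # 335
    page_map_entry_id: int       # 337

@dataclass
class PageMapEntry:              # entry in page map 305
    page_id: int                 # 351: e.g., wordline and sub-block number
    programming_mode: str        # 353: "SLC", "MLC", "TLC", or "QLC"

def advance_cursor(zone: ZoneMapEntry, blocks_written: int) -> None:
    """Move the zone cursor after a sequential write (hypothetical helper)."""
    zone.cursor_value += blocks_written
```

Each write command for a zone would advance `cursor_value` so that the next command starts exactly where the previous one ended, matching the sequential-write constraint of the zone.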
For example, the page can be programmed in SLC mode, MLC mode, TLC mode, or QLC mode. When configured in SLC mode, each memory cell in the page stores one bit of data. When configured in MLC mode, each memory cell in the page stores two bits of data. When configured in TLC mode, each memory cell in the page stores three bits of data. When configured in QLC mode, each memory cell in the page stores four bits of data. Different pages in an integrated circuit die (e.g., 205 or 207) can have different modes for data programming.

In FIG. 4, a block set table 307 stores data controlling aspects of the dynamic media layout for a zone (e.g., 211).

Block set table 307 can have multiple entries. Each entry in block set table 307 identifies a number/count 371 of integrated circuit dies (e.g., 205 and 207) in which data of the zone (e.g., 211) is stored. For each of the integrated circuit dies (e.g., 205 and 207) used for the zone (e.g., 211), the entry of block set table 307 has a die identifier 373, a block identifier 375, a page map entry identifier 377, etc.

Die identifier 373 identifies a specific integrated circuit die (e.g., 205 or 207) in media 203 of memory subsystem 110, on which die (e.g., 205 or 207) subsequent data of the zone (e.g., 211) can be stored.

Block identifier 375 identifies a specific block (e.g., 231 or 233) of memory (e.g., NAND flash memory) within the integrated circuit die (e.g., 205 or 207) that is identified using die identifier 373, in which block (e.g., 231 or 233) the subsequent data of the zone (e.g., 211) can be stored.

Page map entry identifier 377 identifies an entry in page map 305, which identifies a page (e.g., 241 or 243) that can be used to store the subsequent data of the zone (e.g., 211).

FIG. 5 shows a method of input/output size control. The method of FIG.
5 may be performed by processing logic, which may include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method of FIG. 5 is performed, at least in part, by the input/output size manager 113 of FIG. 1 or 2. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes may be modified. Thus, it is to be understood that the illustrated embodiments are examples only, that the illustrated processes may be performed in a different order, and that some processes may be performed in parallel. Additionally, one or more processes may be omitted in various embodiments; thus, not all processes are required in every embodiment. Other process flows are also possible.

At block 401, the memory subsystem 110 receives write commands from the host system 120. For example, the write commands may be received in multiple write streams. For example, each respective stream of the multiple streams is configured, in one embodiment, to write data sequentially in a logical address space; in another embodiment, a stream of the multiple streams is configured to write data in the logical address space pseudo-sequentially or randomly. Each write stream includes a set of commands tagged to write, trim, and overwrite a data set together as a group. Within the group, the data may be written into the logical space sequentially, randomly, or pseudo-sequentially. Preferably, the data in a group is written into a set of erase blocks, where the memory cells in the set of erase blocks store data for the stream but not data from other streams.
The set of erase blocks can be erased to remove the data of the stream without erasing the data of other streams.

For example, the write commands may be provided in multiple write streams. Each of the write streams is permitted to write sequentially at LBA addresses in a zone (e.g., 211) in a namespace (e.g., 201) allocated on the media 203 of the memory subsystem 110, but is prohibited from writing data out of sequential order in the LBA address space.

At block 403, the memory subsystem 110 dynamically identifies a portion of the media layout 130 that maps logical addresses identified in the logical address space to physical addresses of memory units in the media units 109A-109N. For example, in one embodiment the portion of the media layout 130 may be dynamically identified in a way that can cause non-uniformity in the page sizes used for atomic data programming. In other embodiments, the non-uniformity may be caused by the structure and/or the data programming schemes and/or sequences in the integrated circuit dies.

At block 405, the input/output size manager 113 identifies, based on the physical layout of the media (e.g., the page map 305), a first input/output size (e.g., 143) for the next write command (e.g., 123) from the host system 120, where the first input/output size corresponds to an atomic unit of data programming in a media unit.

At block 407, the memory subsystem 110 transmits to the host system 120 a response (e.g., 131) that is configured to identify at least the first input/output size (e.g., 143).
The host system 120 is configured to generate the next write command (e.g., 123) based on the first input/output size (e.g., 143) identified in the response.

At block 409, the memory subsystem 110 receives the next write command (e.g., 123), which is configured to instruct the memory subsystem to write into the memory subsystem an amount of data conforming to the first input/output size (e.g., 143).

For example, the response (e.g., 131) is configured to include the status of a first write command (e.g., 121) processed in the memory subsystem 110. If the input/output size manager 113 determines that the first write command (e.g., 121) has a second input/output size that is different from the first input/output size, the input/output size manager 113 may configure the response (e.g., 131) to indicate that the second input/output size of the first write command (e.g., 121) is incorrect, which can cause the host system 120 to issue the next write command (e.g., 123) to replace the first write command (e.g., 121). Optionally, the memory subsystem 110 may execute the first write command (e.g., 121) in a non-optimal manner, send the response (e.g., 131) to indicate completion of the execution of the first write command (e.g., 121), and provide the preferred size (e.g., 143) in the response (e.g., 131) to cause the host system 120 to size the data of subsequent write commands (e.g., 123) according to the preferred size (e.g., 143).

The preferred input/output size (e.g., 143) can be determined, based on the media layout 130, as the size of the data that can be written into one of the media units 109A-109N in an atomic write operation. When memory cells cannot be programmed individually, a group of memory cells can be programmed atomically. For example, when the memory cells in a page of memory cells (e.g., 241) are programmed in an atomic write operation, the atomic write operation programs all of the memory cells in the page (e.g., 241).
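The size negotiation in blocks 401-409 can be sketched as a simple request/response exchange. The dictionary-based response format and function names below are invented for illustration; they are not the actual command encoding used by the memory subsystem.

```python
# Memory-subsystem side: build a response (cf. 131) for a processed write
# command, reporting whether its size matched the preferred (atomic)
# input/output size and identifying that preferred size (cf. 143).
def check_write(command_size: int, preferred_size: int) -> dict:
    return {
        "size_ok": command_size == preferred_size,
        "preferred_size": preferred_size,
    }

# Host side: size the next write command (cf. 123) from the response.
def next_command_size(response: dict) -> int:
    return response["preferred_size"]

resp = check_write(command_size=8192, preferred_size=16384)
print(resp["size_ok"])          # False: the command used a different size
print(next_command_size(resp))  # 16384
```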
Therefore, the preferred input/output size is the size of the data that can be stored into the entire set of atomically programmable memory cells in a page (e.g., 241). When a write command has an input/output size smaller than the preferred size, the storage capacity of the entire set of atomically programmable memory cells in the page (e.g., 241) is not fully available for the write operation. When a write command has an input/output size larger than the preferred size, the data of the write command is programmed via multiple atomic write operations; as a result, some data of the write command may have to be buffered for a longer period of time while waiting for the next atomic write operation.

In some cases, a page of memory cells (e.g., 241) is a multi-plane page that can be programmed in different modes using a multi-pass programming technique. For example, when in a single-level cell (SLC) mode, each memory cell in the page is programmed to store a single bit of data; when in a multi-level cell (MLC) mode, each memory cell in the page is programmed to store two bits of data; when in a triple-level cell (TLC) mode, each memory cell in the page is programmed to store three bits of data; and when in a quad-level cell (QLC) mode, each memory cell in the page is programmed to store four bits of data. Therefore, the next available multi-plane page may have different capacities to accept/store data depending on its programming mode. The input/output size manager 113 can determine the preferred size from the programming mode information (e.g., 353) in the page map 305 illustrated in FIG. 4.

In some cases, different memory units may each have an available page. Different available pages in different memory units may have different programming modes, and thus different sizes. The input/output size manager 113 may select the smallest size among the next available memory pages as the preferred size (e.g., 143 or 145) communicated to the host system 120.
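The per-mode capacities above (SLC = 1 bit per cell, MLC = 2, TLC = 3, QLC = 4) and the smallest-size selection can be sketched directly. The page size of 131,072 cells is an assumed value for illustration, and `preferred_size` is a hypothetical helper, not the patent's input/output size manager.

```python
# Bits stored per memory cell in each programming mode, as described above.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def page_capacity_bytes(num_cells: int, mode: str) -> int:
    """Data capacity of a page of memory cells programmed in `mode`."""
    return num_cells * BITS_PER_CELL[mode] // 8

def preferred_size(next_available_pages) -> int:
    """Smallest capacity among the next available pages, mirroring the
    selection described for the input/output size manager."""
    return min(page_capacity_bytes(n, m) for n, m in next_available_pages)

# Usage: a hypothetical page of 131,072 cells in each mode, then two
# available pages with different programming modes.
cells = 131072
print([page_capacity_bytes(cells, m) for m in ("SLC", "MLC", "TLC", "QLC")])
# [16384, 32768, 49152, 65536]
print(preferred_size([(cells, "TLC"), (cells, "SLC")]))  # 16384
```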
The reduced preferred size gives the host system 120 an opportunity to construct the write streams at the smallest workable size.

For example, when a first command is scheduled for execution, a second command may be executing on a subset of the memory cells of the media of the memory subsystem 110; that subset of memory cells is therefore unavailable to the first command. After the first command is scheduled and the portion of the media layout for the logical addresses used in the first command is determined, the first command can be executed concurrently in multiple media units and/or concurrently with the progress of the execution of the second command in the remaining media units of the memory subsystem 110.

For example, after identifying the memory units (e.g., integrated circuit dies) available for executing the next command, the input/output size manager 113 can identify, from the block set table 307, the physical addresses that can be used to store the data of the next command. The physical addresses can be used to update the corresponding entries in the logical-to-physical block map 303 for the LBA addresses used in the next command.

For example, when an integrated circuit die (e.g., 205) contains no pending write data, the input/output size manager 113 can determine that memory cells in the integrated circuit die (e.g., 205) can be written/programmed by a command for a zone. From the block set table 307, the input/output size manager 113 locates the entry for the zone (e.g., 211), locates the block identifier 375 and the page map entry identifier 377 associated with the identifier 373 of the integrated circuit die (e.g., 205), and uses the die identifier 373, the block identifier 375, and the page map entry identifier 377 to update the corresponding fields of the entry, in the logical-to-physical block map 303, for the LBA address 331 used in the command for the zone (e.g., 211).
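The bookkeeping step just described (locate the zone's entry in the block set table, then copy its die, block, and page map entry identifiers into the logical-to-physical entry for the command's LBA address) might look like the following sketch. The table contents and dictionary layout are invented for illustration.

```python
# Hypothetical block set table (cf. 307): one entry per zone, carrying a
# die identifier (cf. 373), a block identifier (cf. 375), and a page map
# entry identifier (cf. 377).
block_set_table = {211: {"die_id": 205, "block_id": 231, "page_map_entry": 5}}

l2p_map = {}  # logical-to-physical block map (cf. 303): LBA -> location

def commit_layout(zone: int, lba: int) -> None:
    """Copy the zone's physical-location identifiers from the block set
    table into the logical-to-physical entry for the given LBA address."""
    l2p_map[lba] = dict(block_set_table[zone])

commit_layout(zone=211, lba=331)
print(l2p_map[331])  # {'die_id': 205, 'block_id': 231, 'page_map_entry': 5}
```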
Thus, the command for the zone (e.g., 211) can be executed for the LBA address 331 without a media access collision.

In some implementations, the communication channel between the processing device 118 and the memory subsystem 110 includes a computer network, such as a local area network, a wireless local area network, a wireless personal area network, a cellular communications network, or a broadband, high-speed, always-connected wireless communication connection (e.g., a current or next-generation mobile network link); and the processing device 118 and the memory subsystem can be configured to communicate with each other using data storage management and usage commands similar to those in the NVMe protocol.

A memory subsystem 110 in general can have non-volatile storage media. Examples of non-volatile storage media include memory cells formed in integrated circuits and magnetic material coated on rigid disks. Non-volatile storage media can maintain the data/information stored therein without consuming power. Memory cells can be implemented using various memory/storage technologies, such as NAND logic gates, NOR logic gates, phase-change memory (PCM), magnetic random access memory (MRAM), resistive random access memory, and cross-point storage and memory devices (e.g., 3D XPoint memory). A cross-point memory device uses transistor-less memory elements, each of which has a memory cell and a selector that are stacked together as a column. Columns of memory elements are connected via two layers of perpendicular wires, where one layer is above the columns of memory elements and the other layer is below the columns of memory elements. Each memory element can be individually selected at a cross point of one wire on each of the two layers. Cross-point memory devices are fast and non-volatile, and can be used as a unified memory pool for processing and storage.

A controller (e.g., 115) of a memory subsystem (e.g., 110) can run firmware to perform operations responsive to communications from the processing device 118.
Firmware in general is a type of computer program that provides control, monitoring, and data manipulation of engineered computing devices.

Some embodiments involving the operation of the controller 115 can be implemented using computer instructions executed by the controller 115, such as the firmware of the controller 115. In some instances, hardware circuits can be used to implement at least some of the functions. The firmware can be initially stored in a non-volatile storage medium, or another non-volatile device, and loaded into volatile DRAM and/or the in-processor cache memory for execution by the controller 115.

A non-transitory computer storage medium can be used to store the instructions of the firmware of a memory subsystem (e.g., 110). When the instructions are executed by the controller 115 and/or the processing device 117, the instructions cause the controller 115 and/or the processing device 117 to perform the methods discussed above.

FIG. 6 illustrates an example machine of a computer system 500 within which a set of instructions, for causing the machine to perform any one or more of the methods discussed herein, can be executed. In some embodiments, the computer system 500 can correspond to a host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., the memory subsystem 110 of FIG. 1), or can be used to perform the operations of an input/output size manager 113 (e.g., to execute instructions to perform operations corresponding to the input/output size manager 113 described with reference to FIGS. 1-5). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet.
The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is described, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein.

The example computer system 500 includes a processing device 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), static random access memory (SRAM), etc.), and a data storage system 518, which communicate with each other via a bus 530 (which can include multiple buses).

The processing device 502 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. The processing device 502 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like.
The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein. The computer system 500 can further include a network interface device 508 to communicate over a network 520.

The data storage system 518 can include a machine-readable storage medium 524 (also known as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methods or functions described herein. The instructions 526 can also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500, the main memory 504 and the processing device 502 also constituting machine-readable storage media. The machine-readable storage medium 524, the data storage system 518, and/or the main memory 504 can correspond to the memory subsystem 110 of FIG. 1.

In one embodiment, the instructions 526 include instructions to implement functionality corresponding to an input/output size manager 113 (e.g., the input/output size manager 113 described with reference to FIGS. 1-5). While the machine-readable storage medium 524 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" shall be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methods of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory.
These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or a similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
Such a computer program can be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the described methods. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.

The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium such as a read-only memory ("ROM"), a random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices, etc.

In this description, various functions and operations are described as being performed by or caused by computer instructions to simplify the description.
However, those of ordinary skill in the art will recognize that what is meant by such expressions is that the functions result from execution of the computer instructions by one or more controllers or processors (e.g., microprocessors). Alternatively, or in combination, the functions and operations can be implemented using special-purpose circuitry, with or without software instructions, such as using an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.

In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made to the disclosure without departing from the broader spirit and scope of the embodiments of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
A method, apparatus, and system for an architecture for machine learning acceleration are presented. An apparatus includes a plurality of processing elements, each including a tightly coupled memory (TCM), and a memory system coupled to the processing elements. A global synchronization manager is coupled to the plurality of processing elements and to the memory system. The processing elements do not implement a coherency protocol with respect to the memory system. The processing elements implement direct memory access with respect to the memory system, and the global synchronization manager is configured to synchronize operations of the plurality of processing elements through the TCMs.
What is claimed is:

1. An inference accelerator comprising:
a memory system;
a plurality of processing elements, each processing element:
having a tightly coupled memory (TCM);
coupled to the memory system; and
adapted to access the memory system; and
a global synchronization manager (GSM) module coupled to the plurality of processing elements and to the memory system, the GSM adapted to synchronize operations of the plurality of processing elements and memory system using corresponding synchronization modules of each of the plurality of processing elements.

2. The inference accelerator of claim 1, wherein the processing elements do not implement a coherency protocol with respect to the memory system.

3. The inference accelerator of claim 1, wherein each processing element further comprises:
a vector processor adapted to perform floating point operations;
a scalar processor; and
a matrix processor adapted to perform floating point operations.

4. The inference accelerator of claim 1, wherein:
the plurality of processing elements are interconnected by a first network configured to support multicast operations; and
each of the plurality of processing elements is connected to the memory system by a second network separate from the first network.

5. The inference accelerator of claim 4, further comprising a controller connected to the second network.

6. The inference accelerator of claim 4, wherein:
the GSM is coupled to each of the processing elements via a third network separate from the first network and the second network;
each of the processing elements comprises a local sync manager; and
the GSM is configured to provide configuration information to the local sync manager of each processing element of the plurality of processing elements via the third network.

7. The inference accelerator of claim 1, wherein the first network is configured to implement zero encoding.

8. The inference accelerator of claim 1, wherein:
the synchronization modules of the plurality of processing elements used by the GSM to synchronize operations of the plurality of processing elements are the corresponding TCMs;
each TCM is adapted to store a set of synchronization variables; and
the GSM is adapted to store and adjust the synchronization variables in the TCMs.

9. The inference accelerator of claim 1, wherein the inference accelerator is configured to:
transform a neural network model into a directed acyclic graph;
transform the directed acyclic graph into computation and data movement operations; and
schedule the computation and data movement operations for execution in parallel pipelines by the processing elements, wherein the computation and data movement operations are dispatched using dispatch scaling.

10. The inference accelerator of claim 9, wherein:
the plurality of processing elements is interconnected by a first network configured to perform multicast operations; and
the scheduling of computation and data movement operations includes the replication of data sets using multicast operations on the first network.

11. An apparatus comprising the inference accelerator of claim 1, further comprising a plurality of interconnected additional inference accelerators configured substantially the same as the inference accelerator and connected to the inference accelerator.

12. A method for an inference accelerator having a plurality of processing elements, a memory system coupled to each of the processing elements, and a global synchronization manager (GSM) module coupled to the plurality of processing elements and to the memory system, wherein each processing element comprises a tightly coupled memory (TCM), the method comprising:
accessing, by each processing element, the memory system; and
synchronizing, by the GSM, operations of the plurality of processing elements and memory system using corresponding synchronization modules of each of the plurality of processing elements.

13. The method of claim 12, wherein the processing elements do not implement a coherency protocol with respect to the memory system.

14. The method of claim 12, wherein:
each processing element further comprises a vector processor, a scalar processor, and a matrix processor; and
the method further comprises:
performing floating point operations by the vector processor; and
performing floating point operations by the matrix processor.

15. The method of claim 12, wherein:
the plurality of processing elements are interconnected by a first network;
the method further comprises performing multicast operations by the first network; and
each of the plurality of processing elements is connected to the memory system by a second network separate from the first network.

16. The method of claim 15, wherein:
the GSM is coupled to each of the processing elements via a third network separate from the first network and the second network;
each of the processing elements comprises a local sync manager; and
the method further comprises providing, by the GSM, configuration information to the local sync manager of each processing element of the plurality of processing elements via the third network.

17. The method of claim 12, further comprising implementing zero encoding by the first network.

18. The method of claim 12, wherein:
the synchronization modules of the plurality of processing elements used by the GSM to synchronize operations of the plurality of processing elements are the corresponding TCMs;
each TCM is adapted to store a set of synchronization variables; and
the method further comprises storing and adjusting a set of synchronization variables of the TCM of one of the plurality of processing elements.

19. The method of claim 12, further comprising:
transforming a neural network into a directed acyclic graph;
transforming the directed acyclic graph into computation and data movement operations; and
scheduling the computation and data movement operations for execution in parallel pipelines by the processing elements, wherein the computation and data movement operations are dispatched using dispatch scaling.

20. The method of claim 19, wherein:
the plurality of processing elements is interconnected by a first network configured to perform multicast operations; and
the scheduling of computation and data movement operations includes replicating data sets using multicast operations on the first network.

21. An apparatus including a means for inference acceleration, the inference-acceleration means comprising:
a means for memory storage and retrieval;
a plurality of means for processing, each means for processing:
having a means for tightly coupling memory (TCM);
coupled to the means for memory storage and retrieval; and
adapted to access the means for memory storage and retrieval; and
a means for global synchronization management (GSM) coupled to the plurality of means for processing and to the memory means and adapted to synchronize operations of the plurality of means for processing and memory means, using corresponding synchronization modules of each of the plurality of means for processing.
METHOD, APPARATUS, AND SYSTEM FOR AN ARCHITECTURE FOR MACHINE LEARNING ACCELERATION

CLAIM OF PRIORITY UNDER 35 U.S.C. §119

[0001] The present Application claims priority to U.S. Patent Application No. 16/556,094, entitled "METHOD, APPARATUS, AND SYSTEM FOR AN ARCHITECTURE FOR MACHINE LEARNING ACCELERATION," filed on August 29, 2019, and U.S. Provisional Patent Application No. 62/724,051, entitled "METHOD, APPARATUS, AND SYSTEM FOR AN ARCHITECTURE FOR MACHINE LEARNING ACCELERATION," filed August 29, 2018, both assigned to the assignee hereof and hereby expressly incorporated by reference herein.

BACKGROUND

[0002] Artificial Neural Networks (ANNs) are used to perform an increasing number and variety of tasks, such as, for example, object recognition, speech recognition, speech generation, providing recommendations, and predicting user behavior. Performing these tasks may be referred to as inferencing using an ANN model. To provide useful inferences, an ANN model needs to be designed and trained for the particular task. The ANN design establishes parameters such as the number of layers of the ANN model and the characteristics of each layer. The training of the ANN uses training data, inferencing using the ANN model, feedback based on evaluation of the inference, and backpropagation to adjust the weights of the ANN model in response to the feedback. After numerous training cycles of inferencing and backpropagation, the resultant model may provide satisfactory results in response to new input data. Note that many ANNs have multiple hidden layers between an input layer and an output layer and may consequently be referred to as Deep Neural Networks (DNNs).

[0003] To provide a satisfactory user experience, not only do the inference results need to be correct, but they also need to be provided fairly quickly - often within a fraction of a second (response latency within a service level agreement).
To do this, service providers use large arrays of inference accelerators located "in the cloud" - that is, communicatively coupled to, and located remotely from, a client device.

[0004] Client computer devices may include, for example, computers, automobiles, smartphones, smart wearable devices, and internet-of-things (IoT) devices. The so-called cloud may comprise a plurality of interconnected servers located at a data center and may be managed by a cloud provider entity such as, for example, Amazon.com, Inc. of Seattle, WA or Facebook, Inc. of Menlo Park, CA. Each host server comprises a plurality of interconnected inference accelerators, which may be provided by an inference-accelerator provider entity. Each accelerator comprises processor and memory components.

[0005] The cloud may support many millions of neural network applications. A neural network application running on a client computer device communicates with the cloud to receive inference acceleration and/or assistance. For example, a speech-translation neural-network application (NNA) may transmit a raw or encoded audio snippet to the cloud for rapid translation and provision of the translation in response to the NNA. A media-recommendation program that recommends, e.g., songs or videos - where the media may comprise many millions, or even billions, of options hosted by the cloud provider in the cloud - may communicate with the cloud to have the cloud perform an inference to generate a recommendation for provision to a user of the client computer device.

[0006] In the data center context, various heterogeneous architectures have been employed to handle machine learning workloads. For example, cloud compute may use server-class central processing units (CPUs) or graphics processing units (GPUs) and may adapt workloads to those architectures.
However, these architectures may not be tailored to the specific characteristics of machine learning algorithms, with the effect that their performance is not as efficient as desired, and/or they consume more power to achieve a given level of performance than would be desirable. As there may be many millions of NNAs accessing the inference accelerators of the cloud at any one time, efficient inference accelerators would be beneficial for reducing power usage and/or reducing inference time.[0007] Thus, it would be desirable to provide an inference-accelerator computing architecture that is scalable to cloud computing and data center applications, while providing improved performance per watt when compared to existing server-class CPU- and GPU-based solutions. SUMMARY OF THE DISCLOSURE[0008] In one aspect, an apparatus includes a plurality of processing elements, each including a tightly-coupled memory (TCM), and a memory system coupled to the processing elements. A global synchronization manager is coupled to the plurality of the processing elements and to the memory system. The processing elements do not implement a coherency protocol with respect to the memory system. The processing elements implement direct memory access with respect to the memory system, and the global synchronization manager is configured to synchronize operations of the plurality of processing elements through the TCMs.[0009] In another aspect, an apparatus includes a plurality of processing elements and a first network coupling each processing element of the plurality of processing elements to the other processing elements of the plurality, the first network configured to perform multicast operations.
The apparatus further includes a memory system and a second network, separate from the first network, coupling each processing element of the plurality of processing elements to the other processing elements of the plurality of processing elements and to the memory system.[0010] In yet another aspect, a method comprises transforming a neural network into a directed acyclic graph by a compiler and transforming the directed acyclic graph into computation and/or data movement operations by the compiler. The method further comprises statically scheduling the computation and/or data movement operations for execution in parallel pipelines by the compiler. The computation and/or data movement operations may be dispatched in a plurality of portions in accordance with dispatch scaling.[0011] Some advantages of the disclosed aspects may include providing a scalable architecture for cloud computing that provides improved interconnection between processing elements, and a compiler that produces a more efficient mapping of neural network operations onto available hardware.BRIEF DESCRIPTION OF THE FIGURES[0012] FIG. 1 is a block diagram of an exemplary inference accelerator in accordance with an embodiment of the disclosure. [0013] FIG. 1A is a simplified schematic diagram of an exemplary implementation of the processing element of FIG. 1.[0014] FIG. 2 is a flow diagram of an exemplary operation of a compiler for an inference accelerator in accordance with an embodiment of the disclosure.DETAILED DESCRIPTION[0015] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.[0016] FIG. 1 is a simplified schematic diagram of an exemplary inference accelerator 100 in accordance with an embodiment of the disclosure.
The inference accelerator 100 comprises a system-on-chip (SoC) 190 coupled to a first double data rate (DDR) dynamic random-access memory (DRAM) 122 and a second DDR DRAM 126. The SoC 190 comprises a first processing element 102, a second processing element 104, a third processing element 106, and a fourth processing element 108. The processing elements 102, 104, 106, and 108 are coupled together via a compute network-on-chip (NoC) 142. Note that the terms "NoC" and "network" may be used interchangeably herein. Note that inference accelerators in accordance with this disclosure are not limited to any particular number of processing elements and alternative implementations may have more or fewer than four processing elements.[0017] The SoC 190 further comprises a first memory interface 112, a second memory interface 114, a third memory interface 116, a fourth memory interface 118, and a PCI Express (PCIe) block 134, all coupled to each other and to the processing elements 102, 104, 106, and 108 via a system/memory (sys/mem) NoC 144. The PCIe block 134 is the interface used by the inference accelerator 100 to receive the inputs for inferences (e.g., images, videos, audio clips, or other data tensors) received by the host server and to provide results back to the host server. The SoC 190 further comprises a management controller 132, which is coupled to the PCIe block 134, the memory interfaces 112, 114, 116, and 118, and the processing elements 102, 104, 106, and 108 via the system/memory NoC 144. Note that, in some implementations, the compute network 142 may also connect to the PCIe block 134 and/or the memory interfaces 112, 114, 116, and 118.[0018] Further, a global synchronization manager (GSM) module 136 is coupled to the PCIe block 134 and a local sync manager (see FIG. 1A) in each processing element 102, 104, 106, and 108 via a private NoC 146.
It should be noted that in alternative implementations, one or more of the compute NoC 142, sys/mem NoC 144, and private NoC 146 may be replaced by a corresponding simple bus, or other communication fabric, other than a NoC. It should also be noted that in some alternative embodiments, the compute NoC and sys/mem NoC may be combined into a single combined compute/system/memory NoC. It should be further noted that some alternative implementations of the inference accelerator 100 do not include a private NoC and, instead, the GSM 136 communicates with other elements (e.g., the processing elements 102, 104, 106, and 108) via other means (e.g., the sys/mem NoC 144).[0019] The processing elements 102, 104, 106, and 108 may be neural processing units (NPUs), neural signal processors (NSPs), digital signal processors (DSPs), or any other suitable type of processor (e.g., CPUs or GPUs). In some homogeneous embodiments (where the processing elements 102, 104, 106, and 108 are substantially the same), each of the processing elements 102, 104, 106, and 108 may include scalar, vector, and matrix processing capabilities (e.g., multiplication, convolution, point-wise addition, point-wise multiplication), and data-movement capabilities (e.g., load, store, and direct memory access (DMA)). In some alternative embodiments, the scalar, vector, matrix, and data-movement processing capabilities may be distributed across different processing elements (in other words, the processing elements 102, 104, 106, and 108 may be heterogeneous). Additionally, whichever of the processing elements 102, 104, 106, and 108 provide matrix processing capabilities may further include floating-point capabilities as part of the matrix processing capabilities.
Providing these capabilities in each of the processing elements 102, 104, 106, and 108 may enable a compiler for the inference accelerator 100 to more efficiently schedule code on the individual processing elements, as will be explained in greater detail with respect to FIG. 2.[0020] FIG. 1A is a simplified schematic diagram of an exemplary implementation of the processing element 102 of FIG. 1. As noted above, in some embodiments, processing elements 104, 106, and 108 may be configured identically. The processing element 102 comprises tightly-coupled memory (TCM) 150, vector processing module 151, matrix processing module 152, scalar processing (e.g., DSP) module 153, memory processing module 154, and an optional local synchronization manager (LSM) 155. The TCM 150 is directly connected to at least the vector processing module 151, the matrix processing module 152, and the memory processing module 154. The LSM 155 is directly connected to at least the scalar processing module 153. The processing element 102 is connected to NoCs 142, 144, and 146.[0021] In some implementations, each LSM 155 of a processing element is connected to the GSM 136 of FIG. 1, where the processing elements 102, 104, 106, and 108 implement hardware memory synchronization using LSM 155 working with GSM 136 to coordinate and synchronize data transfers among the processing elements 102, 104, 106, and 108 and the DRAMs 122 and 126, by setting and resetting semaphores that allow or prohibit corresponding data operations. In this implementation, the LSM 155 may be referred to as a synchronization module. In some implementations, the GSM 136 works directly with the TCMs 150 of the processing elements 102, 104, 106, and 108 (forgoing LSMs 155) to set and reset values at known locations in the TCMs 150, where those values similarly allow or prohibit corresponding data operations. 
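The semaphore gating described above - a producer setting a flag that a consumer waits on and clears, with no cache coherency involved - can be sketched in software. The following is a toy illustrative model only: the class, method, and flag names are invented, and the real GSM/TCM mechanism is hardware, not Python threads.

```python
import threading

class GlobalSyncManagerModel:
    """Toy software model of semaphore gating (names are invented;
    the actual GSM is a hardware block, not a Python object)."""
    def __init__(self):
        self._cv = threading.Condition()
        self._flags = {}

    def set(self, name):
        # Producer marks a gated data transfer as complete.
        with self._cv:
            self._flags[name] = True
            self._cv.notify_all()

    def wait_and_clear(self, name):
        # Consumer blocks until the gated data is valid, then clears.
        with self._cv:
            self._cv.wait_for(lambda: self._flags.get(name, False))
            self._flags[name] = False

gsm = GlobalSyncManagerModel()
tcm = {}  # stand-in for a known location in a consumer's TCM

def producer():
    tcm["activations"] = [1.0, 2.0, 3.0]  # models a DMA write into the TCM
    gsm.set("activations_ready")          # allow the dependent computation

result = []
def consumer():
    gsm.wait_and_clear("activations_ready")  # prohibit use until data valid
    result.append(sum(tcm["activations"]))

t_c = threading.Thread(target=consumer)
t_p = threading.Thread(target=producer)
t_c.start(); t_p.start()
t_c.join(); t_p.join()
assert result == [6.0]
```

The point of the sketch is that ordering is enforced by the explicit set/wait-and-clear pair, not by any coherent view of memory - which is adequate because, as noted below, the data-movement pattern is known ahead of time.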
In this implementation, the TCM 150 may be referred to as a synchronization module.[0022] The processing element 102 may forgo implementing a memory coherency protocol. Implementing a memory coherency protocol typically includes having a shared cache connected to a plurality of clients and an interconnecting bus with a coherency protocol to ensure that each client is referencing the latest version of corresponding data. Using caches and implementing a coherency protocol are useful when data movement and sharing are not sequential and not deterministic - in other words, what is conventionally referred to as "random." Caches and coherency are also useful where data movements and sharing are relatively fine-grained. Semaphore-based synchronization, on the other hand, uses the setting and modifying of semaphores to gate data movement among a plurality of clients and to gate computations involving the data - without using a cache or a bus implementing a coherency protocol. Neural-network inferencing involves large movements of data, and calculations based on that data, whose pattern is known ahead of time. Consequently, the integrity of that data may be maintained using a relatively simple semaphore mechanism. Since implementing memory coherency protocols requires relatively significant power, substituting hardware synchronization for coherency allows the inference accelerator 100 to maintain the needed level of memory synchronization at a relatively reduced power level.[0023] Returning to FIG. 1, the compute network 142 coupling the processing elements 102, 104, 106, and 108 may be a relatively higher-bandwidth network (as compared to the sys/mem network 144 and the private network 146), and may support multicast operations (i.e., sending data produced by a single processing element to multiple other processing elements of the inference accelerator 100). The processing elements 102, 104, 106, and 108 may each include tightly-coupled memory (e.g., TCM 150 of FIG.
1A), and may interact with the first DRAM 122 and the second DRAM 126 via the sys/mem network 144 and the memory interfaces 112, 114, 116, and 118. Both the compute network 142 and the sys/mem network 144 may support DMA operations from the TCMs (e.g., TCM 150) of each of the processing elements 102, 104, 106, and 108, including read operations, write operations, and, in the case of the compute network 142, multicast operations.[0024] The private network 146 may be a relatively slower and lower-bandwidth network (as compared to the compute network 142 and the sys/mem network 144), as its use may be limited to configuration time (as opposed to run time) and, thus, it does not have a specific performance requirement (as opposed to the compute network 142 and the sys/mem network 144). Having separate networks for these specific purposes allows each of the networks to be designed to match its corresponding expected traffic type and allows each to be individually performance- and power-optimized to match.[0025] For example, since the workloads handled by the inference accelerator 100 may often involve data words that are all zeros (but that must still be transmitted among the processing elements 102, 104, 106, and 108), the compute network 142 may implement a "zero" encoding protocol, where setting a single override bit on the network bus indicates that the value of the corresponding data word is zero, without having to actually set all the bits of the data bus for that data word to zero or read all of the corresponding bits of the data word.
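The override-bit scheme just described can be illustrated with a small software model. This is purely illustrative - the actual "zero" encoding is a hardware bus protocol, and the function names and 32-bit width here are invented:

```python
def encode_word(word, width=32):
    """Model a bus flit as (override_bit, payload): an all-zero data
    word is flagged by the override bit alone, so the payload lines
    need not be driven or read. (Illustrative sketch only.)"""
    if word == 0:
        return (1, None)                       # override set, no payload
    return (0, word & ((1 << width) - 1))      # payload carried normally

def decode_word(flit):
    override, payload = flit
    return 0 if override else payload

words = [0, 0x1234, 0, 0, 7]
flits = [encode_word(w) for w in words]
assert [decode_word(f) for f in flits] == words
# Three of five words transmit only the override bit:
print(sum(f[0] for f in flits))  # 3
```

In this toy model, three of the five words never touch the payload lines at all, which is the source of the direct power savings described next.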
This may reduce power usage both directly and by allowing for the implementation of power-saving operations based on the override bit.[0026] Further, as indicated above, the inference accelerator 100 does not implement a memory coherency protocol, instead managing dependencies that do occur using hardware semaphores and compiler design (as explained later with respect to FIG. 2) in conjunction with the global sync manager 136, which is configured to interact with the processing elements 102, 104, 106, and 108 to provide hardware semaphore support. Essentially, each of the processing elements 102, 104, 106, and 108 may set semaphores in the global sync manager 136, which may be cleared by the other processing elements 102, 104, 106, and 108 to allow for interdependencies in workloads being processed by the processing elements 102, 104, 106, and 108.[0027] The latency involved in communications between the processing elements 102, 104, 106, and 108 and the global sync manager 136 may be important for the overall performance of the inference accelerator 100. Thus, the topology of the private network 146 providing connectivity between the global sync manager 136 and the processing elements 102, 104, 106, and 108 may depend on the relative number of processing elements that will be coupled to the global sync manager 136. In systems with relatively few processing elements, a ring topology may be used instead of the network 146 shown. In systems with larger numbers of processing elements, a star topology may be used. Those having skill in the art will recognize that the choice of topology may be informed by many factors involved in the overall system design, and the teachings of the present disclosure do not depend on the use of a particular topology.[0028] FIG. 2 is a hybrid schematic and flow diagram 200 for exemplary operation of a compiler which may be configured to schedule operations on the inference accelerator 100 of FIG. 1.
A neural network description 210 is provided to the compiler, which, in a first phase, transforms, in step 220, the neural network description 210 into a form that may be represented by directed acyclic graph 230. A directed acyclic graph is a graph that has forward progress, without loopbacks, among its nodes (e.g., a tree structure progressing from the trunk to the leaves). The graph 230 comprises a plurality of tasks represented by graph nodes 231, 232, 233, 234, 235, 236, and 237. Graph 230 shows that task 231 must be performed first, and then task 232, but then any of tasks 233, 234, and 235 may be performed. In addition, graph 230 shows that both tasks 234 and 235 have to be completed before task 236 can be executed (in other words, task 236 is dependent on tasks 234 and 235). Similarly, task 237 is dependent on tasks 233 and 236.[0029] In a second phase, in step 240, the compiler converts the tasks 231-237, shown in graph 230, into command lists 252, 254, 256, and 258 and schedules them for processing on corresponding hardware processing elements such as the scalar, vector, matrix, and data-movement blocks of the processing elements 102, 104, 106, and 108 of FIG. 1. In other words, command lists 252, 254, 256, and 258 may correspond, respectively, to vector processing module 151, matrix processing module 152, scalar processing module 153, and memory processing module 154 of FIG. 1A. The scheduling may be optimized for factors such as, for example, time, power, or resource requirements.[0030] The compiler may be optimized for use with neural networks, and thus it may generate "static" workloads. Specifically, since branching and iteration counts may be known ahead of time, they may be used to generate static workloads, as opposed to, for example, conventional CPU or GPU code, which may have unpredictable branching behavior and iteration counts and, consequently, would require generating dynamic workloads.
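The dependency structure of graph 230 described above, and one dependency-respecting schedule, can be sketched as follows. This is a simplified illustration - the real compiler emits command lists for pipelined hardware blocks rather than the lock-step "waves" shown here:

```python
# Dependencies of graph 230: node -> prerequisite nodes.
deps = {
    231: [], 232: [231],
    233: [232], 234: [232], 235: [232],
    236: [234, 235], 237: [233, 236],
}

def schedule(deps):
    """Greedy list scheduling over a DAG: at each step, every task
    whose prerequisites are complete may run in parallel (one wave)."""
    done, waves = set(), []
    while len(done) < len(deps):
        ready = sorted(t for t in deps
                       if t not in done and all(p in done for p in deps[t]))
        waves.append(ready)
        done.update(ready)
    return waves

print(schedule(deps))  # [[231], [232], [233, 234, 235], [236], [237]]
```

The output matches the text: 231, then 232, then 233/234/235 in parallel, then 236 (after 234 and 235), and finally 237 (after 233 and 236).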
Because these workloads are static, the command lists generated by the compiler may permit workload balancing by dispatching a portion of a total workload to the inference accelerator 100, after which the inference accelerator 100 may wait (and may possibly even enter a low-power state) for further instructions. This workload distribution and balancing is referred to herein as "dispatch scaling." Note that, in generating parallel workloads, the compiler may direct the replication of data sets between processing elements, where the replication may be performed using the multicast capabilities of the compute network 142.[0031] The above is possible because, since the workload is static, dispatching one-fourth of a total workload (e.g., one-fourth of the total operations), for example, will result in one-fourth of the total workload being completed. This contrasts with a conventional CPU/GPU workload, in which it may be essentially impossible to predict ahead of time how much of a total workload may be completed by providing one-fourth of the workload to the computing device, and thus, in order to save power, conventional methods such as frequency and voltage scaling may be used. Further, instead of generating command lists, which would conventionally be interpreted by software running on the processing elements 102, 104, 106, and 108, the compiler may alternatively generate static code which is executed in sequence. Dispatch scaling may be used in either case (command lists or statically generated code).[0032] Although the compiler attempts to generate command lists that are fully parallelizable and do not have interdependencies, sometimes this may not be feasible. In cases where interdependencies exist, since the inference accelerator 100 does not implement coherency, the compiler will insert a synchronization indicator (e.g., a semaphore) that is mapped to a hardware semaphore resource.
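The compiler pass that inserts such a synchronization indicator between two interdependent command lists might look roughly like the following. This is a hypothetical sketch: the command tuples, the `insert_sync` helper, and the semaphore numbering are all invented for illustration.

```python
def insert_sync(producer_cmds, consumer_cmds, sem_id):
    """Toy compiler pass: when a task in one command list depends on
    a task in another, append a semaphore 'set' to the producer list
    and prepend a 'wait' to the consumer list. (Encodings invented.)"""
    producer_cmds.append(("sem_set", sem_id))
    consumer_cmds.insert(0, ("sem_wait", sem_id))

matrix_list = [("matmul", "task_236")]
vector_list = [("pointwise_add", "task_237")]  # task 237 depends on task 236
insert_sync(matrix_list, vector_list, sem_id=3)
print(vector_list[0])  # ('sem_wait', 3)
```

At run time the hardware semaphore, not a coherency protocol, then guarantees that the consumer command list cannot proceed past the wait until the producer has signaled.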
Different processing elements may interact, via, e.g., GSM 136, using the semaphore to guarantee that dependencies are satisfied. The compiler may schedule tasks to command lists based on optimistic estimated completion times and the semaphores may be relied on to guarantee that dependencies are satisfied where actual completion times exceed the estimated completion times.[0033] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer readable medium and executed by a processor or other processing device, or combinations of both. The devices described herein may be employed in any circuit, hardware component, IC, or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. 
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0034] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). [0035] The aspects disclosed herein may be embodied in hardware and in instructions or design data that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. 
In the case of design data, the data may be an electronic representation of a physical design of a circuit, may be readable by integrated circuit fabrication equipment, and may be in a file format such as GDSII, GERBER, or the like. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.[0036] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flowchart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0037] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. 
Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
A system and method for automatically migrating the execution of work units between multiple heterogeneous cores. A computing system includes a first processor core with a single instruction multiple data micro-architecture and a second processor core with a general-purpose micro-architecture. A compiler predicts that the execution of a function call in a program migrates at a given location to a different processor core. The compiler creates a data structure to support moving live values associated with the execution of the function call at the given location. An operating system (OS) scheduler schedules at least code before the given location in program order to the first processor core. In response to receiving an indication that a condition for migration is satisfied, the OS scheduler moves the live values to a location indicated by the data structure for access by the second processor core and schedules code after the given location to the second processor core.
WHAT IS CLAIMED IS: 1. A method comprising: identifying a location within a compute kernel comprising a plurality of instructions at which execution of the compute kernel may migrate during execution of the compute kernel; creating a data structure to maintain and migrate a context of the compute kernel; scheduling code in the compute kernel prior to the location for execution on a first processor core with a first micro-architecture; in response to receiving an indication that a condition for migration is satisfied: moving the context to a location accessible by a second processor core with a second micro-architecture different from the first micro-architecture; and scheduling code in the compute kernel after the location to the second processor core. 2. The method as recited in claim 1, further comprising generating a first version of code for the compute kernel corresponding to the first processor core, and generating a second version of code for the compute kernel corresponding to the second processor core. 3. The method as recited in claim 2, wherein the first micro-architecture is a single instruction multiple data (SIMD) micro-architecture and the second micro-architecture is a general-purpose micro-architecture. 4. The method as recited in claim 2, wherein said identifying is performed based at least on one of the following: profile runtime information and static information. 5. The method as recited in claim 2, further comprising: instrumenting a first version of code for the first processor core with instructions to determine whether the condition for migration is satisfied; and instrumenting a second version of code for the second processor core with instructions to find live values at locations indicated by the data structure and begin execution. 6.
The method as recited in claim 5, wherein to determine that a condition for migration is satisfied, the method further comprises determining that a number of parallel executing iterations of the compute kernel that have reached an exit point is above a given threshold. 7. The method as recited in claim 5, further comprising: splitting the compute kernel into two compute sub-kernels at the location, in response to predicting that a number of later parallel executing iterations of the compute kernel satisfies said condition for migration; scheduling a first compute sub-kernel to the first processor core, wherein the first compute sub-kernel comprises code before the location; and scheduling a second compute sub-kernel to the second processor core, wherein the second compute sub-kernel comprises code after the location. 8. The method as recited in claim 6, wherein the location is immediately prior to a conditional branch instruction. 9. A computing system including a heterogeneous multi-core architecture comprising: a first processor core with a first micro-architecture; a second processor core with a second micro-architecture different from the first micro-architecture; an operating system comprising a scheduler, wherein the scheduler is configured to: schedule code within a compute kernel prior to a location for execution on the first processor core with the first micro-architecture; and in response to receiving an indication that a condition for migration is satisfied: move a context of the compute kernel to a location accessible by the second processor core with the second micro-architecture different from the first micro-architecture; and schedule code in the compute kernel after the location to the second processor core. 10.
The computing system as recited in claim 9, further comprising a compiler configured to: identify the location within a compute kernel comprising a plurality of instructions as a location at which execution of the compute kernel may migrate during execution of the compute kernel; and create a data structure to maintain and migrate a context of the compute kernel. 11. The computing system as recited in claim 10, wherein the first micro-architecture is a single instruction multiple data (SIMD) micro-architecture and the second micro-architecture is a general-purpose micro-architecture. 12. The computing system as recited in claim 10, wherein the compiler is further configured to perform said identifying based at least on one of the following: profile runtime information and static information. 13. The computing system as recited in claim 10, wherein the compiler is further configured to: instrument a first version of code for the first processor core with instructions to determine whether the condition for migration is satisfied; and instrument a second version of code for the second processor core with instructions to find live values at locations indicated by the data structure and begin execution. 14. The computing system as recited in claim 13, wherein to determine that a condition for migration is satisfied, each of the first and the second processor cores is configured to determine that a number of parallel executing iterations of the compute kernel that have reached an exit point is above a given threshold. 15.
The computing system as recited in claim 13, wherein the compiler is further configured to: split the compute kernel into two compute sub-kernels at the location, in response to predicting that a number of later parallel executing iterations of the compute kernel satisfies said condition for migration; schedule a first compute sub-kernel to the first processor core, wherein the first compute sub-kernel comprises code before the location; and schedule a second compute sub-kernel to the second processor core, wherein the second compute sub-kernel comprises code after the location. 16. The computing system as recited in claim 14, wherein the location is immediately prior to a conditional branch instruction. 17. A computer readable storage medium storing program instructions, wherein the program instructions are executable to: identify a location within a compute kernel comprising a plurality of instructions at which execution of the compute kernel may migrate during execution of the compute kernel; create a data structure to maintain and migrate a context of the compute kernel; schedule code in the compute kernel prior to the location for execution on a first processor core with a first micro-architecture; in response to receiving an indication that a condition for migration is satisfied: move the context to a location accessible by a second processor core with a second micro-architecture different from the first micro-architecture; and schedule code in the compute kernel after the location to the second processor core. 18. The computer readable storage medium as recited in claim 17, wherein the program instructions are further executable to generate a first version of code for the compute kernel corresponding to the first processor core, and generate a second version of code for the compute kernel corresponding to the second processor core. 19.
The computer readable storage medium as recited in claim 17, wherein the program instructions are further executable to: instrument a first version of code for the first processor core at the location with instructions to determine whether the condition for migration is satisfied; and instrument a second version of code for the second processor core at the location with instructions to find live values at locations indicated by the data structure and begin execution. 20. The computer readable storage medium as recited in claim 19, wherein to determine that a condition for migration is satisfied, the program instructions are further executable to determine that a number of parallel executing iterations of the compute kernel that have reached an exit point is above a given threshold. |
TITLE: AUTOMATIC KERNEL MIGRATION FOR HETEROGENEOUS CORES BACKGROUND OF THE INVENTION Field of the Invention [0001] This invention relates to computing systems, and more particularly, to automatically migrating the execution of work units between multiple heterogeneous cores. Description of the Relevant Art [0002] The parallelization of tasks is used to increase the throughput of computer systems. To this end, compilers may extract parallelized tasks from program code to execute in parallel on the system hardware. With a single-core architecture, a single core may include deep pipelines configured to perform multi-threading. To further increase parallel execution on the hardware, a multi-core architecture may include multiple general-purpose cores. This type of architecture may be referred to as a homogeneous multi-core architecture, and it may provide higher instruction throughput than a single-core architecture. [0003] Some software applications may not be divided frequently into parallel tasks. In addition, specific tasks may not execute efficiently on a general-purpose core. Particular instructions for a computationally intensive task may consume a disproportionate share of a shared resource, which delays deallocation of the shared resource. Examples of such specific tasks may include cryptography, video graphics rendering and garbage collection. [0004] To overcome the performance limitations of conventional general-purpose cores, a computer system may offload specific tasks to special-purpose hardware. This hardware may include a single instruction multiple data (SIMD) parallel architecture, a field-programmable gate array (FPGA), and other specialized cores. A type of architecture with different types of cores may be referred to as a heterogeneous multi-core architecture. Depending on the scheduling of tasks, this type of architecture may provide higher instruction throughput than a homogeneous multi-core architecture.
[0005] In many cases, particular software applications have data parallelism in which the execution of each work item, or parallel function call, is data dependent within itself. For example, a first work item may be data independent from a second work item, and each of the first and the second work items is scheduled on a separate path within a core with a SIMD micro-architecture. However, the amount of instructions executed within each of the first and the second work items may be data-dependent. A conditional test implemented as a branch instruction may pass for the first work item, but fail for the second work item, depending on the data for each work item. [0006] The efficiency of parallel execution may be reduced as the second work item halts execution and waits while the first work item continues with its ongoing execution. The inefficiency grows when only a few work items continue execution due to passed tests whereas most of the work items are idle due to failed tests. Even after efficient, functionality-matching assignment of the work items by an OS scheduler in a heterogeneous multi-core architecture, system performance may still be reduced due to the data-dependent behavior of particular software applications. SUMMARY OF EMBODIMENTS OF THE INVENTION [0007] Systems and methods for automatically migrating the execution of work units between multiple heterogeneous cores are contemplated. [0008] In one embodiment, a computing system includes a first processor core with a first micro-architecture and a second processor core with a second micro-architecture different from the first micro-architecture. In one embodiment, the first micro-architecture is a single instruction multiple data (SIMD) micro-architecture and the second micro-architecture is a general-purpose micro-architecture. The computing system includes a memory coupled to each of the first and the second processor cores.
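The data-dependent divergence described in paragraphs [0005] and [0006] can be illustrated with a small host-side simulation. This is only a sketch: the lane data, the threshold, and the function name are invented for illustration, and no GPU API is modeled.

```python
# Simulate one wave front of a SIMD core: every lane executes the same
# instruction stream, but a data-dependent conditional test idles some lanes.

def simd_step_utilization(lane_data, test):
    """Fraction of lanes still active after a data-dependent branch."""
    active = [test(record) for record in lane_data]  # lanes that passed
    return sum(active) / len(lane_data)

# Hypothetical records for an 8-lane wave front.
lanes = [3, 17, 4, 255, 9, 1, 128, 2]
# The conditional test passes only for "large" records (made-up threshold).
util = simd_step_utilization(lanes, lambda x: x >= 16)
print(util)  # 0.375: only 3 of 8 lanes keep executing; the rest idle
```

When most lanes fail the test, utilization collapses, which is precisely the inefficiency paragraph [0006] describes.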
The memory stores a computer program comprising one or more compute kernels, or function calls. As a compiler traverses the instructions of a given function call, the compiler is configured to predict that execution of the function call migrates at a given location to a different processor core. The compiler creates a data structure to support moving live values associated with the execution of the function call at the given location. Such live values may be referred to as a "context". [0009] A scheduler within an operating system (OS) schedules at least code before the given location in program order to the first processor core. In response to receiving an indication that a condition for migration is satisfied, the OS scheduler moves the live values to a location indicated by the data structure for access by the second processor core and schedules code after the given location in program order to the second processor core. In order to determine whether a migration condition is satisfied, each of the first and the second processor cores is configured to determine whether a number of parallel executing iterations of the function call that have reached an exit point is above a given threshold. [0010] These and other embodiments will be further appreciated upon reference to the following description and drawings. BRIEF DESCRIPTION OF THE DRAWINGS [0011] FIG. 1 is a generalized block diagram of one embodiment of an exemplary processing node with a heterogeneous multi-core architecture. [0012] FIG. 2 is a generalized block diagram of one embodiment of source code utilizing compute kernels. [0013] FIG. 3 is a generalized block diagram of one embodiment of source code defining compute kernels with conditional statements. [0014] FIG. 4 is a generalized block diagram of one embodiment of scheduled assignments between hardware resources and compute kernels. [0015] FIG.
5 is a generalized block diagram of one embodiment of a logical layout of microarchitectures for two types of processor cores. [0016] FIG. 6 is a generalized block diagram of one embodiment of a general-purpose pipeline execution flow. [0017] FIG. 7A is a generalized block diagram of one embodiment of a SIMD pipeline execution flow. [0018] FIG. 7B is another generalized block diagram of one embodiment of a SIMD pipeline execution flow. [0019] FIG. 8 is a generalized block diagram of one embodiment of program code with a migration tagged branch. [0020] FIG. 9 is a generalized flow diagram illustrating one embodiment of a method for instrumenting code for compute kernel migration. [0021] FIG. 10 is a generalized flow diagram illustrating one embodiment of a method for migrating a compute kernel during program execution. [0022] While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. DETAILED DESCRIPTION [0023] In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, one having ordinary skill in the art should recognize that the invention might be practiced without these specific details. In some instances, well-known circuits, structures, and techniques have not been shown in detail to avoid obscuring the present invention. [0024] Referring to FIG. 1, one embodiment of an exemplary processing node 110 with a heterogeneous multi-core architecture is shown.
Processing node 110 may include one or more processing units 115, which may include one or more processor cores 112 and an associated cache memory subsystem 114. In one embodiment, processor core 112 utilizes a general-purpose micro-architecture. [0025] Processing node 110 may also include one or more processing units 170, which may comprise one or more processor cores 172 and data storage buffers 174. Processor core 172 may not be a mirrored silicon image of processor core 112. Processor core 172 may have a micro-architecture different from the micro-architecture used by processor core 112. In one embodiment, the processor core 172 may be a different generation of a same processor family as processor core 112. In another embodiment, the processor core 172 may be a voltage and/or frequency scaled version of processor core 112. In other words, the processor core 172 is not a silicon copy of the processor core 112 with a same functionality and instruction set architecture (ISA), a same clock frequency, same cache sizes, a same memory model, and so forth. [0026] Continuing with the micro-architecture of processor core 172, in yet another embodiment, the processor core 172 may comprise a micro-architecture that provides high instruction throughput for a computationally intensive task. Processor core 172 may have a parallel architecture. For example, the processor core 172 may be a single instruction multiple data (SIMD) core. Examples of SIMD cores include graphics processing units (GPUs), digital signal processing (DSP) cores, and others. In one embodiment, the processing node 110 comprises a single instruction set architecture (ISA). As is well known in the art, single-ISA multi-core architectures have been shown to provide high power and throughput performance for chip multiprocessors (CMP).
[0027] High instruction throughput on processing node 110 may be achieved with measured power consumption within a given power limit when threads of software applications are efficiently scheduled. The threads may be scheduled on one of processor cores 112 and 172 in a manner such that each thread has the highest instruction throughput based at least in part on the runtime hardware resources of the processor cores 112 and 172. [0028] Continuing with the components in the processing node 110, the processing node 110 may include memory controller 120 and interface logic 140. In one embodiment, the illustrated functionality of processing node 110 is incorporated upon a single integrated circuit. In one embodiment, processor cores 112 include circuitry for executing instructions according to a predefined general-purpose instruction set. For example, the SPARC® instruction set architecture (ISA) may be selected. Alternatively, the x86, x86-64®, Alpha®, PowerPC®, MIPS®, PA-RISC®, or any other instruction set architecture may be selected. Generally, processor cores 112 access the cache memory subsystems 114, respectively, for data and instructions. If the requested block is not found in cache memory subsystem 114 or in shared cache memory subsystem 118, then a read request may be generated and transmitted to the memory controller within the node to which the missing block is mapped. [0029] In one embodiment, processing unit 170 is a graphics processing unit (GPU). Modern GPUs are very efficient at manipulating and displaying computer graphics. The highly parallel structure of GPUs makes them more effective than general-purpose central processing units (CPUs), such as processing unit 115, for a range of complex algorithms. Typically, a GPU executes calculations used for graphics and video, and a CPU executes calculations for many more system processes than graphics alone.
Conventional GPUs utilize very wide single instruction multiple data (SIMD) architectures to achieve high throughput in image-rendering applications. Such applications generally entail executing the same programs, such as vertex shaders or pixel shaders, on large numbers of objects (vertices or pixels). Since each object is processed independently of other objects, but the same sequence of operations is used, a SIMD micro-architecture provides considerable performance enhancement. GPUs have also been considered for non-graphical calculations. [0030] In one embodiment, the GPU 170 may be located on a video card. In another embodiment, the GPU 170 may be integrated on the motherboard. In yet another embodiment, the illustrated functionality of processing node 110 may be incorporated upon a single integrated circuit. In such an embodiment, the CPU 115 and the GPU 170 may be proprietary cores from different design centers. Also, the GPU 170 may now be able to directly access both local memories 114 and 118 and main memory via memory controller 120 from the processing node 110, rather than perform memory accesses off-chip via interface 140. This embodiment may lower latency for memory accesses for the GPU 170, which may translate into higher performance. [0031] Continuing with the components of processing node 110 in FIG. 1, cache subsystems 114 and 118 may comprise high-speed cache memories configured to store blocks of data. Cache memory subsystems 114 may be integrated within respective processor cores 112. Alternatively, cache memory subsystems 114 may be coupled to processor cores 112 in a backside cache configuration or an inline configuration, as desired. Still further, cache memory subsystems 114 may be implemented as a hierarchy of caches. Caches that are located nearer processor cores 112 (within the hierarchy) may be integrated into processor cores 112, if desired.
In one embodiment, cache memory subsystems 114 each represent L2 cache structures, and shared cache subsystem 118 represents an L3 cache structure. Both the cache memory subsystem 114 and the shared cache memory subsystem 118 may include a cache memory coupled to a corresponding cache controller. [0032] Generally, packet processing logic 116 is configured to respond to control packets received on the links to which processing node 110 is coupled, to generate control packets in response to processor cores 112 and/or cache memory subsystems 114, to generate probe commands and response packets in response to transactions selected by memory controller 120 for service, and to route packets for which node 110 is an intermediate node to other nodes through interface logic 140. Interface logic 140 may include logic to receive packets and synchronize the packets to an internal clock used by packet processing logic 116. [0033] Turning now to FIG. 2, one embodiment of source code utilizing compute kernels is shown. OpenCL™ (Open Computing Language) is one example of a low-level application programming interface (API) for heterogeneous computing. OpenCL includes a C-like language that defines execution queues, wherein each queue is associated with an OpenCL device. An OpenCL device may be a CPU, a GPU, or another unit with at least one processor core within the heterogeneous multi-core architecture. A function call may be referred to as an OpenCL kernel, or simply a "compute kernel". The OpenCL framework may improve computing performance for a wide variety of data-parallel applications used in gaming, entertainment, science and medical fields. For a heterogeneous architecture, a computer program typically comprises a collection of compute kernels and internal functions. A software programmer may define the compute kernels, whereas the internal functions may be defined in a given library.
[0034] For a data-parallel software application, an N-Dimensional computation domain may define an organization of an "execution domain". The N-Dimensional computation domain may also be referred to as an N-Dimensional grid or an N-Dimensional Range ("NDRange"). The NDRange may be a one-, two-, or three-dimensional space. Note that some embodiments may allow more than three-dimensional data. This dimensional space may also be referred to as an index space. For example, a software application may perform data processing on a two-dimensional (2D) array of data, such as an image file. The software application may perform an algorithm developed by a software programmer on a pixel-by-pixel basis of a 2D image or an element-by-element basis of a two-dimensional matrix. A given compute kernel may be invoked over the index space (the NDRange). In other embodiments, a software application may include an algorithm that utilizes data-parallel programming for electrostatic potentials mapping on a 3D lattice and direct coulomb summation used in macromolecular modeling. [0035] Typically after compilation, the arguments and parameters of each compute kernel are set. Additionally, associated memory objects and buffers are created. A given instance of the compute kernel may be executed as its own software thread. However, a compute kernel may include control flow transfer instructions that create forks, whereas a fork in a computer program typically creates a software thread, by common definition. A given instance of the compute kernel at a given point in the index space may be referred to as a "work item". A work item may also be referred to as a work unit. A work unit may operate with the one or more instructions in the compute kernel on a record of data corresponding to a given pixel (a given index) of the 2D image. Typically, work units have an associated unique identifier (ID).
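The index space and per-work-unit IDs of paragraphs [0034] and [0035] can be sketched on the host as follows. The helper name and the pixel operation are hypothetical; this merely mimics how a 2D NDRange enumerates one work unit per pixel, each with a unique flat global ID.

```python
def ndrange_2d(width, height):
    """Enumerate a two-dimensional index space (an NDRange).

    Yields (x, y, global_id): the per-dimension indices plus the flat,
    unique work-unit ID derived from them.
    """
    for y in range(height):
        for x in range(width):
            yield x, y, y * width + x

# Each index identifies one work unit; here the "kernel" inverts one pixel.
image = [[0, 64], [128, 255]]
out = [[0, 0], [0, 0]]
ids = []
for x, y, gid in ndrange_2d(2, 2):
    out[y][x] = 255 - image[y][x]  # per-work-unit record of data
    ids.append(gid)

print(ids)  # [0, 1, 2, 3], one unique ID per work unit
print(out)  # [[255, 191], [127, 0]]
```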
In another example, an introductory computer program processing the string "Hello World" may have one work unit for computing each letter in the string. [0036] The NDRange may define a total number of work units that execute in parallel if there is sufficient hardware support. For example, the NDRange may define a number of 280 work units, but a GPU may support the simultaneous execution of 64 work units at any given time. The total number of work units may define a global work size. As is well known to those skilled in the art, the work units may be further grouped into work groups. Each work group may have a unique identifier (ID). The work units within a given work group may be able to communicate with each other and synchronize execution and coordinate memory accesses. A number of work units may be clustered into a wave front for simultaneous execution on a GPU in a SIMD manner. Regarding the example above for 280 total work units, a wave front may include 64 work units. [0037] The OpenCL framework is an open programming standard for various compute devices, or OpenCL devices. A software programmer may avoid writing vendor-specific code, thereby improving code portability. Other frameworks are available and may offer more vendor-specific coding for heterogeneous architectures. For example, NVIDIA offers Compute Unified Device Architecture (CUDA®) and AMD offers ATI Stream®. With a CUDA framework, a compute kernel is typically statically compiled when the computer program is compiled. With an OpenCL framework, a compute kernel is typically compiled with a Just-In-Time (JIT) method. The JIT method may generate an appropriate binary code after obtaining the system configuration. With a JIT compilation method, the compilation time is included with the total execution time. Therefore, compiler optimizations may increase the execution time. In addition, at run time the OpenCL compiler may generate multiple versions of compute kernels.
One version of a compute kernel may be generated for each type of OpenCL device, such as a general-purpose CPU, a SIMD GPU, and so forth. [0038] The two frameworks, OpenCL and CUDA, have a difference in terminology between their respective execution models. For example, a work unit, a work group, a wave front and an NDRange in OpenCL have corresponding terms in CUDA such as a thread, a thread block, a warp and a grid. Throughout the rest of the description, the terms corresponding to OpenCL are used. However, the systems and methods described may apply to CUDA, ATI Stream and other frameworks. [0039] As shown in FIG. 2, code 210 defines two function calls generally titled "doWorkA" and "doWorkB". Each function call may be referred to as a "compute kernel". A compute kernel may be matched with one or more records of data to produce one or more work units of computation. Therefore, two or more work units may utilize the same instructions of the single function call, but operate on different records of data. For example, the function call "Power2" in code 220 may be used to execute 10 work units, one for each data value in the array "INPUT". Here, a record comprises a single data value. In other examples, a record may comprise two or more fields, wherein each field includes a data value. A SIMD micro-architecture may efficiently execute the instructions of the kernel "Power2", calculate the power of 2 for the values in the INPUT array and write the output to the RESULT array. [0040] The OpenCL framework may invoke an instance of a compute kernel multiple times in parallel. Each call to the compute kernel has one associated unique ID (a work unit ID) that may be fetched by calling an internal function named get_global_id(0). Regarding the above example in code 220, the compute kernel "Power2" is invoked once for each data value in the INPUT array. In this case, the compute kernel "Power2" is invoked 10 times. Accordingly, ten unique work unit IDs are fetched.
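The "Power2" example of code 220 can be mimicked on the host. In the sketch below, a plain parameter stands in for OpenCL's get_global_id(0), the array contents are assumed, and the real framework would launch the ten invocations in parallel rather than in a loop.

```python
INPUT = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
RESULT = [0] * len(INPUT)

def power2_kernel(gid):
    """Body of the 'Power2' compute kernel for one work unit.

    gid plays the role of get_global_id(0): the unique work-unit ID that
    selects which record of INPUT this invocation operates on.
    """
    RESULT[gid] = 2 ** INPUT[gid]   # calculate the power of 2 for the record

# The framework invokes the kernel once per data value: 10 work units,
# hence 10 unique work-unit IDs.
for gid in range(len(INPUT)):
    power2_kernel(gid)

print(RESULT)  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
```

Every invocation runs the same instructions but touches a different record, which is exactly what makes the kernel a good fit for a SIMD micro-architecture.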
With a JIT compiling method, these instances are invoked at runtime. The OpenCL framework may differentiate between these different instances by utilizing the unique work unit IDs. The data to be operated on (a record) may also be specified, such as a specific data value in the INPUT array. Therefore, at runtime, a work unit may be scheduled by default to the same OpenCL device as the associated compute kernel is scheduled. [0041] Turning now to FIG. 3, one embodiment of source code defining compute kernels with conditional statements is shown. Similar to code 210, the code 230 shown in FIG. 3 defines two function calls generally titled "doWorkA" and "doWorkB". Again, each function call may be referred to as a "compute kernel". Here, only one of the two compute kernels is executed during runtime. The selection of which compute kernel is executed is based on a conditional test provided by the function call "EvaluateFunction". A result of a given instruction, or whether the given instruction is executed, is data-dependent on the execution of previous instructions and data corresponding to an associated record. If the result of the conditional test is not consistent among a wave front of work units, the benefits of a SIMD micro-architecture may be reduced. For example, a given SIMD core may have 64 parallel computation units available for simultaneous execution of 64 work units. However, if half of the 64 work units pass the conditional test while the other half fails the conditional test, then only half of the parallel computation units are utilized during a given stage of processing. [0042] Turning now to FIG. 4, a generalized block diagram illustrating one embodiment of scheduled assignments 400 between hardware resources and compute kernels is shown. Here, the partitioning of hardware and software resources and their interrelationships and assignments during the execution of one or more software applications 430 is shown.
In one embodiment, an operating system 420 allocates regions of memory for compute kernels 440a-440j and 440k-440q. When applications 430, or computer programs, execute, each application may comprise multiple compute kernels. For example, a first executing application may comprise compute kernels 440a-440j and a second executing application may comprise compute kernels 440k-440q. Each one of the kernels 440a-440q may be used to generate one or more work units by being combined with one or more records of data (not shown). For example, compute kernel 440a may produce work units 442a-442d, compute kernel 440j may produce work units 442e-442h, compute kernel 440k may produce work units 442j-442m and compute kernel 440q may produce work units 442n-442q. A work unit may execute independently of other work units and execute concurrently with other work units. [0043] Each of the compute kernels shown in FIG. 4 may own its own resources such as an image of memory, or an instance of instructions and data before application execution. Each of the compute kernels may also comprise process-specific information such as address space that addresses the code, data, and possibly a heap and a stack; variables in data and control registers such as stack pointers, general and floating-point registers, program counter, and otherwise; operating system descriptors such as stdin, stdout, and otherwise; and security attributes such as a set of permissions. [0044] In one embodiment, hardware computing system 410 incorporates a general-purpose processor core 112 and a SIMD processor core 172, each configured to process one or more work units. In another embodiment, system 410 includes two other heterogeneous processor cores.
In general, for a given application, operating system 420 sets up an address space for the application, loads the application's code into memory, sets up a stack for the program, branches to a given location inside the application, and begins execution of the application. Typically, the portion of the operating system 420 that manages such activities is the operating system (OS) kernel 422. The OS kernel 422 is referred to as "OS kernel" in order not to confuse it with a compute kernel, or a function call. The OS kernel 422 may further determine a course of action when insufficient memory is available for the execution of the application. As stated before, an application may be divided into more than one compute kernel and system 410 may be running more than one application. Therefore, there may be several compute kernels running in parallel. The OS kernel 422 may decide at any time which of the simultaneously executing compute kernels is allocated to the processor cores 112 and 172. The OS kernel 422 may allow a process to run on a core of a processor, which may have one or more cores, for a given amount of time referred to as a time slice. An OS scheduler 424 in the operating system 420 may comprise decision logic for assigning compute kernels to cores. [0045] In one embodiment, only one compute kernel can execute at any time on each of the hardware computation units 412a-412g and 412h-412r. These hardware computation units comprise hardware that can handle the execution of a given instruction of a given work unit with associated data. This hardware may include an arithmetic logic unit that is configured to perform addition, multiplication, zero detect, a bit-wise shift, division, video graphics and multimedia instructions or other operations known to those skilled in the art of processor design. These hardware computation units may include a hardware thread in a multi-threaded processor, a parallel hardware column in a SIMD micro-architecture, and so forth.
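One simple policy an OS scheduler such as scheduler 424 might apply is a round-robin mapping of work units onto hardware computation units. The sketch below is hypothetical (the unit names and counts are invented), and real schedulers may instead weigh core availability or programmer-specified assignments.

```python
def round_robin_assign(work_units, computation_units):
    """Map each work unit to a hardware computation unit, cycling in order."""
    n = len(computation_units)
    return {wu: computation_units[i % n] for i, wu in enumerate(work_units)}

# Hypothetical: six work units scheduled onto four computation units.
schedule = round_robin_assign(
    ["wu0", "wu1", "wu2", "wu3", "wu4", "wu5"],
    ["hw412a", "hw412b", "hw412c", "hw412d"],
)
print(schedule["wu4"])  # hw412a (the assignment wraps around)
```

Such a fixed rotation ignores whether a work unit's behavior matches the target core's micro-architecture, which is the mismatch the description goes on to address.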
[0046] The dashed lines in FIG. 4 denote assignments and do not necessarily denote direct physical connections. Thus, for example, hardware computation unit 412a may be assigned to execute work unit 442d. However, later (e.g., after a context switch), the hardware computation unit 412a may be assigned to execute work unit 442h. In one embodiment, the OS scheduler 424 may schedule the work units 442a-442q to the hardware computation units 412a-412r with a round-robin scheme. Alternatively, the OS scheduler 424 may schedule the work units 442a-442q to the cores 112 and 172 with a round-robin scheme. An assignment of a given work unit to a given hardware computation unit may be performed by an associated processor core. In another embodiment, the OS scheduler 424 may perform the scheduling based on availability of the processor cores 112 and 172. In yet another embodiment, the OS scheduler 424 may perform the scheduling according to assignments created by a programmer utilizing the OpenCL™ API or another similar API. These scheduling schemes may restrict portability and performance when there is a mismatch between the work unit assignments and hardware resources. [0047] Referring to FIG. 5, a generalized block diagram illustrating one embodiment of a logical layout of micro-architectures for two types of processor cores is shown. Although each of a general-purpose core 510 and a single instruction multiple data (SIMD) core 560 is shown, other types of heterogeneous cores are possible and contemplated. Each of the cores 510 and 560 has a dynamic random access memory (DRAM) 550a and 550b for storage of data and instructions. In one embodiment, the cores 510 and 560 share a same DRAM. In another embodiment, a given level of a cache memory subsystem (not shown) is shared in addition to the DRAM. For example, referring again to FIG. 1, the cache memory subsystem 118 is shared by the cores 112 and 172.
[0048] Each of the cores 510 and 560 may include a cache memory subsystem 530. As shown, the general-purpose core 510 logically has the cache memory subsystem 530 separate from the control logic 520 and the arithmetic logic units (ALUs) 540. The data flow within the core 510 may be pipelined, although storage elements, such as pipeline registers, are not shown in order to simplify the illustration. In a given pipeline stage, an ALU may be unused if instructions in this stage do not utilize a certain type of ALU or if another work unit (or another thread for a general-purpose core) consumes the ALUs during this stage. [0049] As shown, the SIMD core 560 has the cache memory subsystem 530 grouped with control logic 520 for each row of computation units 542. The data flow within the core 560 may be pipelined, although storage elements, such as pipeline registers, are not shown in order to simplify the illustration. In a given pipeline stage, a computation unit may be unused if an associated instruction in this stage is not executed based on a previous failed test, such as a not-taken branch. [0050] Referring now to FIG. 6, a generalized block diagram illustrating one embodiment of a general-purpose pipeline execution flow 600 is shown. Instructions 602-608 may be fetched and enter a general-purpose pipeline. Instruction 606 may be a computation intensive instruction. During particular stages of the pipeline execution flow, one or more of the instructions 602-608 consume resources in the general-purpose processor core 112, such as decoder logic, instruction scheduler entries, reorder buffer entries, ALUs, register file entries, branch prediction units, and so forth. [0051] In a balanced scheme, each of the instructions 602-608 consumes an equal amount of resources in each stage. However, typically, a general-purpose core does not replicate resources for each instruction due to semiconductor real-estate cost, power consumption and other design considerations.
Therefore, the workload may become unbalanced. For example, the instruction 606 may consume more resources for one or more pipe stages due to its computation intensive behavior. As shown, the resources 630 consumed by this instruction may become far greater than the resources consumed by other instructions. In fact, the computation intensive instruction may block the usage of hardware resources by other instructions. [0052] Some computation intensive tasks may place pressure on shared resources within the general-purpose core 112 shown in FIG. 1. Thus, throughput losses occur for both the computationally intensive process and other processes waiting for the shared resources. In addition, some instructions may occupy the shared resource and other resources to support the computation being performed on the shared resource. Such a long latency instruction may concurrently block other processes from using several resources during a long latency. [0053] Referring now to FIG. 7A, a generalized block diagram illustrating one embodiment of a SIMD pipeline execution flow 700 is shown. Instructions 702-708 may be fetched and enter a SIMD pipeline with associated data. Instruction 704 may be a control flow transfer instruction, such as a conditional branch. The instruction 706 may be a first instruction in a path executed when the condition is true. The instruction 708 may be a first instruction in a path executed when the condition is false. For example, the branch instruction 704 may be associated with an IF statement in a high-level language program. The instruction 706 may be associated with a THEN statement in the high-level language program. The instruction 708 may be associated with an ELSE statement in the high-level language program. [0054] Each of the computation units within a given row may be a same computation unit. Each of these computation units may operate on a same instruction, but different data associated with a different work unit.
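A SIMD core's handling of the branch laid out for FIG. 7A can be sketched as executing both paths while masking lanes. The data, the condition, and both path functions below are invented; the point is only that every lane steps through Path A and Path B, keeping just the result of the path it actually chose.

```python
def simd_if_then_else(lane_data, cond, then_fn, else_fn):
    """Execute both sides of a branch; each lane keeps only its own result."""
    mask = [cond(x) for x in lane_data]         # per-lane branch outcome
    then_res = [then_fn(x) for x in lane_data]  # all lanes step through Path A
    else_res = [else_fn(x) for x in lane_data]  # ...and then through Path B
    return [t if m else e for m, t, e in zip(mask, then_res, else_res)]

# Hypothetical 4-lane wave front: double small records, negate large ones.
out = simd_if_then_else([1, 10, 2, 20], lambda x: x < 5,
                        lambda x: 2 * x, lambda x: -x)
print(out)  # [2, -10, 4, -20]
```

Lanes masked off during a path correspond to disabled computation units; they occupy pipeline stages without contributing useful work.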
As shown, some of the work units pass the test provided by the conditional branch instruction 704 and other work units fail the test. The SIMD core 172 may execute each of the available paths and selectively disable the execution units, such as the computation units, corresponding to work items that did not choose the current path. For example, during execution of an If-Then-Else construct statement, within each column of a SIMD architecture are execution units configured to execute the "Then" (Path A) and the "Else" (Path B) paths. The efficiency of parallel execution may be reduced as the first and the second work units pause execution and wait as the third work unit continues with its ongoing execution. Therefore, not all of the computation units are active computation units 710 in a given row after execution of the branch instruction 704. As shown, one or more computation units are inactive computation units 711 that have been disabled for execution. If a large number of computation units are inactive during a given pipe stage, the efficiency and throughput of the SIMD core is reduced.

[0055] In one embodiment, an "Else" path is a return for the compute kernel. Execution of the compute kernel ends and the corresponding work unit becomes idle. However, neighboring work units in the SIMD core may continue executing. Referring now to FIG. 7B, a generalized block diagram illustrating another embodiment of a SIMD pipeline execution flow 720 is shown. Similar to execution flow 700, instructions 702-706 may cause one or more computation units to be disabled in a particular row of the SIMD core. Here, each "Else" path may be a return for a compute kernel. Therefore, for a given work unit, a branch resolving in a not-taken direction may cause the given work unit to cease further execution of the compute kernel. In execution flow 720, only one instruction is shown between a first branch instruction 704 and a second branch instruction 712 for ease of illustration.
However, multiple instructions may be between the branch instructions 704 and 712. Regardless of the number of instructions between the branches 704 and 712, work units that resolve the first branch 704 in a not-taken direction may complete execution. Similarly for branch 712, work units that resolve the second branch in a not-taken direction may complete execution. Computation units for later stages of a SIMD core may be disabled for these work units. If a large number of computation units are inactive during a given pipe stage, the efficiency and throughput of the SIMD core is reduced.

[0056] One example of an application that may cause multiple work units to fail a test and cease execution while neighboring work units may continue is face detection. As known to those skilled in the art, face detection as implemented in OpenCV (Open Computer Vision library) is one application of the Viola-Jones object detection algorithm. The Viola-Jones algorithm exhibits a data-dependent execution pattern. A search compute kernel is applied to a record of data, which may include one or more pixels. The search compute kernel searches for faces in a sub-window of a two-dimensional (2D) or a three-dimensional (3D) image. Within the compute kernel, there may be a cascade of tests implemented as control flow transfer instructions, such as branch instructions. In one typical example, a cascade of tests comprises 22 stages, or 22 tests. This cascade of tests may determine whether an input window contains a face.

[0057] The cascade of tests in the Viola-Jones algorithm may be designed to prune unpromising paths quickly. Therefore, most work units may determine the non-existence of a face and finish. The execution of work units continues on the remaining pixels that are likely to contain a face. A small fraction of pixels (i.e., work unit executions) may continue through the 22 stages, whereas most pixels are found not to contain faces after a few initial stage tests.
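The early-exit behavior of the cascade described in paragraph [0057] can be illustrated with a toy model. The threshold tests below stand in for the actual Viola-Jones stage classifiers and are assumptions for illustration only:

```python
def run_cascade(windows, stages):
    """Apply a cascade of tests to each search window.

    A window is pruned at the first failed stage, so only a small
    fraction of windows survives all stages; the per-stage survivor
    counts show how quickly unpromising paths are discarded.
    """
    survivors_per_stage = []
    active = list(windows)
    for test in stages:
        active = [w for w in active if test(w)]
        survivors_per_stage.append(len(active))
    return active, survivors_per_stage

# 22 illustrative stages, each rejecting windows at or below a rising threshold.
stages = [lambda w, t=t: w > t for t in range(22)]
windows = list(range(25))          # toy per-window "face evidence" scores
faces, counts = run_cascade(windows, stages)
```

In this toy run, most windows are rejected in the first few stages while only the highest-scoring windows reach the end, matching the utilization problem the text describes.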
Even with large task parallelism, the presence of a few continuing work units on a wavefront may cause low SIMD core utilization. One method described below utilizes a separate heterogeneous core while releasing the SIMD core for further processing. This method may increase overall computing performance when it is detected that a small amount of SIMD parallelism is present.

[0058] Turning now to FIG. 8, one embodiment of code 800 including a tagged branch to define a migration point is shown. The code 800 comprises a compute kernel generally titled "foo". During execution, a portion of the code 800 may be migrated to a separate heterogeneous core. In the example shown, the outer loop is data dependent. In one embodiment, a compiler informs the SIMD core of the data dependence by using a tag bit in the branch instruction corresponding to the "while" loop test. During execution, when a condition for migration is detected, such as a measured SIMD utilization being below a given threshold, the intermediate local values may be moved to a data structure in memory to be accessed by a separate heterogeneous core. For example, a general-purpose core may continue execution of the compute kernel from the point of the tagged branch migration point. For example, the implicit conditional branch in the while statement is tagged with the label "secondary_entry". The separate heterogeneous core may use a compiler-generated data structure. In another embodiment, this data may be cached, alleviating migration costs. In one example, the live data may include both a local slice of the "tmp" array, as well as a current value of the local temp variable. During migration, this data may be communicated to the runtime environment, which directs continued execution of the compute kernel to the secondary entry point indicated by the label "secondary_entry".

[0059] Turning now to FIG.
9, one embodiment of a method 900 for optimizing parallel execution of multiple work units in a processor by utilizing pre-runtime data information is shown. The components embodied in the processing node 110 and the hardware resource assignments shown in FIG. 4 described above may generally operate in accordance with method 900. For purposes of discussion, the steps in this embodiment and subsequent embodiments of methods described later are shown in sequential order. However, in other embodiments some steps may occur in a different order than shown, some steps may be performed concurrently, some steps may be combined with other steps, and some steps may be absent.

[0060] In block 902, a software program or subroutine may be located and analyzed. This software program may be written for compilation and execution on a heterogeneous multi-core architecture. Program code may refer to any portion of a software application, subroutine, dynamic linked library, or otherwise. A pathname may be entered at a command prompt by a user, a pathname may be read from a given directory location, or otherwise, in order to begin compiling the source code. The program code may be written by a designer in a high-level language such as C, a C-like language such as OpenCL™, and so forth. In one embodiment, the source code is statically compiled. In such an embodiment, during a static front-end compilation, the source code may be translated to an intermediate representation (IR). A back-end compilation step may translate the IR to machine code. The static back-end compilation may perform more transformations and optimizations. In another embodiment, the source code is compiled with a Just-In-Time (JIT) method. The JIT method may generate an appropriate binary code after obtaining the system configuration. With either method, the compiler may identify a compute kernel in the program code.
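As a simplified illustration of the kernel-identification step in block 902, a textual scan for the OpenCL __kernel qualifier is sketched below. A production compiler would identify kernels in its intermediate representation rather than by pattern matching, so the helper and its regular expression are hypothetical:

```python
import re

def find_compute_kernels(source):
    """Return the names of OpenCL kernel functions found in source text.

    Matches the __kernel qualifier, a return type, and a function name;
    this textual scan is for illustration only.
    """
    pattern = re.compile(r"__kernel\s+\w+\s+(\w+)\s*\(")
    return pattern.findall(source)

src = """
__kernel void foo(__global int *a) { a[0] = 1; }
void helper(int x) {}
__kernel void bar(__global float *b) { b[0] = 2.0f; }
"""
kernels = find_compute_kernels(src)
```

Here only "foo" and "bar" are identified as compute kernels; the ordinary helper function is ignored.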
In one embodiment, the compiler, such as the OpenCL compiler, may generate multiple versions of compute kernels. One version of a compute kernel may be generated for each type of OpenCL device, such as a general-purpose CPU, a SIMD GPU, and so forth.

[0061] In block 904, the compiler may read one or more instructions of the compute kernel and analyze them. A conditional statement may be a control flow transfer instruction, such as a branch. Different types of control flow transfer instructions may include forward/backward branches, direct/indirect branches, jumps, and so forth. It may be possible for a compiler or other tool to statically determine a direction of a branch and/or a target of a branch. However, in one embodiment, some processing typically performed during runtime on associated data may be performed during compilation. For example, a simple test to determine a direction (taken, not-taken) of a branch may be performed. Although compilation may be referred to as "static compilation", one or more small dynamic operations may be performed. This compilation may also be referred to as "pre-runtime compilation". Another example of a dynamic step performed at this time is identifying a next instruction to execute in each of the THEN, ELSE IF and ELSE blocks of an If-Then-ElseIf-Else construct. For example, if a conditional branch fails, a return statement may be executed. Therefore, the compiler knows that during execution, a corresponding work unit for this compute kernel may become idle when the branch test fails.

[0062] In block 906, particular lines of code in a compute kernel are selected for creating a migration point. A migration point may be a location in the compute kernel where in-flight execution transfers to a different heterogeneous core.
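The pre-runtime branch test mentioned in paragraph [0061] can be sketched as a constant-folding check: when both operands of a conditional branch are compile-time constants, its direction is known before runtime. The opcode names and two-operand form are illustrative assumptions:

```python
def static_branch_direction(opcode, lhs, rhs):
    """Try to resolve a conditional branch before runtime.

    When both operands are compile-time integer constants the direction
    ("taken" / "not-taken") is determined statically; otherwise the
    direction remains unknown until execution.
    """
    ops = {"beq": lambda a, b: a == b,     # branch if equal
           "bne": lambda a, b: a != b,     # branch if not equal
           "blt": lambda a, b: a < b}      # branch if less than
    if isinstance(lhs, int) and isinstance(rhs, int):
        return "taken" if ops[opcode](lhs, rhs) else "not-taken"
    return "unknown"

resolved = static_branch_direction("beq", 3, 3)      # both operands constant
unresolved = static_branch_direction("bne", "x", 0)  # "x" only known at runtime
```

Branches that resolve to "not-taken" paths ending in a return are exactly the ones the compiler can mark as likely idle points for the corresponding work units.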
In one embodiment, this compute sub-kernel migration may be achieved by a mechanism similar to process migration, wherein an execution state is moved from a first heterogeneous core to a second heterogeneous core with a possibly different micro-architecture than the first core. In another embodiment, this compute sub-kernel migration may be achieved by creating multiple compute sub-kernels that are later dispatched.

[0063] In one embodiment, the compiler may automatically identify migration points. As used herein, migration points may also be referred to as switch points. The compiler may use control flow analysis. Identifying a migration point may include utilizing static control flow analysis to find data-dependent loops leading to a compute kernel exit or return. Rather than identify each branch with a path including an exit or return, the compiler may use a count to reduce a number of migration points. For example, the first five branches found in a compute kernel may not be candidates for tagging as a migration point. Every third branch after the first five branches may be a candidate for tagging as a migration point. Other filtering algorithms based on a count are possible and contemplated.

[0064] In addition, the compiler may use profile input from previous executions to identify migration points. For example, a conditional test associated with a given branch may fail for a number of records of data above a given threshold. Therefore, this branch may be identified as a migration point. Further, programmer annotations to indicate migration points may be added as "pragmas" or as an extension to the OpenCL framework.

[0065] In block 908, the compiler may tag the selected points in the code for each version of the compiled code. Each version may be referred to as a destination compute kernel for a respective OpenCL device.
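The count-based filter of paragraph [0063] (skip the first five branches, then consider every third one) can be sketched as follows. The exact indexing of "every third branch" is one possible reading of the text, and the parameter names are assumptions:

```python
def select_migration_candidates(branch_ids, skip=5, stride=3):
    """Filter data-dependent branches down to migration-point candidates.

    The first `skip` branches found in the compute kernel are never
    candidates; afterwards every `stride`-th branch is considered.
    Other count-based filters are possible, as the text notes.
    """
    candidates = []
    for n, branch in enumerate(branch_ids, start=1):
        if n <= skip:
            continue                      # never tag the first `skip` branches
        if (n - skip) % stride == 0:      # every `stride`-th branch thereafter
            candidates.append(branch)
    return candidates

cands = select_migration_candidates([f"br{i}" for i in range(1, 15)])
```

With fourteen branches br1 through br14, this reading selects br8, br11, and br14 as candidates for tagging.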
Again, the compiler may compile an identified compute kernel to produce two or more versions of compiled code, each capable of running on a respective one of the OpenCL devices. Referring again to code 800 in FIG. 8, the secondary entry point indicated by the label "secondary_entry" is an example of a migration tag for a branch. A code generator within the compiler may insert the tag and insert other code to invoke the live values during migration. Invoking the live values may include transferring the live values to a destination OpenCL device and initializing the values on the destination OpenCL device. The code generating and inserting process may be similar to debugger code being inserted at debug points and instrumentation for measuring dynamic behavior.

[0066] In one embodiment, a compute kernel may be tagged to identify migration points as described above. In another embodiment, the compute kernel may be divided into multiple compute sub-kernels that are scheduled and dispatched independently. Runtime profile information or compiler static estimation may be used to determine pass/fail statistics for conditional tests implemented by branch instructions. A "hot" execution path may comprise a large number of passes above a given threshold of the conditional test for multiple records of data. A "cold" execution path may comprise a small number of passes below a second given threshold of the conditional test for multiple records of data. A compute kernel may be divided into compute sub-kernels based on the "hot" and "cold" execution paths.

[0067] Generation of the corresponding compute sub-kernels may utilize similar runtime code generation mechanisms in addition to creation of a corresponding execution range (NDRange) for those compute sub-kernels, such as the "cold" execution paths, that continue execution on a general-purpose core.
This may be done by creating a potentially sparse array containing the compute sub-kernel identifiers (IDs), which may utilize an OpenCL designation, to be executed on the general-purpose core. A given compute kernel may utilize indirect access to this array to identify a proper compute sub-kernel and later work unit. Alternatively, the compiler may generate a list of these IDs, and a corresponding compute sub-kernel to be invoked and mapped for each of the executing work units.

[0068] After a profile run or a static estimation, a compute sub-kernel corresponding to a "hot" execution path may be compiled for a SIMD core. A compute sub-kernel corresponding to a "cold" execution path may be compiled for a general-purpose core. The early stages of a cascade of tests may have a high probability of passing. Therefore, these execution paths may be implemented in the "hot" compute sub-kernels executed on the SIMD core. After execution of these particular "hot" compute sub-kernels, the associated produced data may be moved in memory. This data movement promotes the local data that is live to global data. The work units corresponding to the "hot" compute sub-kernels may write a bit array based on their work unit IDs to indicate whether an associated "cold" compute sub-kernel subsequently continues execution on a general-purpose core.

[0069] In block 910, the compiler identifies a set of live values at the identified migration points. The live values may include intermediate computation values and local arrays. Referring again to code 800 in FIG. 8, the live data may include both a local slice of the "tmp" array within the code, as well as a current value of the local temp variable. If migration occurs later during execution of an associated work unit, the live values may be transferred and initialized on a destination OpenCL device. As described above, the code generator within the compiler may insert the tag and insert other code to invoke the live values during migration.
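The hot/cold dispatch of paragraphs [0067] and [0068], including the bit array written by the "hot" work units, can be modeled serially as below. The kernel callables, the return convention, and the promotion step are assumptions for illustration:

```python
def run_hot_then_cold(records, hot_kernel, cold_kernel):
    """Run the "hot" sub-kernel for every work unit (SIMD phase, modeled
    serially here), record in a bit array which work units must continue,
    then finish those work units with the "cold" sub-kernel on a
    general-purpose core."""
    continue_bits = []
    hot_results = []
    for rec in records:                       # SIMD phase over all work units
        value, needs_more = hot_kernel(rec)
        hot_results.append(value)
        continue_bits.append(1 if needs_more else 0)
    # Live local results are "promoted" to global data, then the
    # flagged work units continue on the general-purpose core.
    final = list(hot_results)
    for wid, bit in enumerate(continue_bits):
        if bit:
            final[wid] = cold_kernel(hot_results[wid])
    return final, continue_bits

# Hot path: records below 10 finish immediately; larger ones continue.
hot = lambda r: (r * 2, r >= 10)
cold = lambda v: v + 1
out, bits = run_hot_then_cold([1, 12, 3, 15], hot, cold)
```

The bit array lets the runtime release the SIMD wavefront as soon as the hot phase completes, while only the flagged work units occupy the general-purpose core.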
At the destination OpenCL device, code generation for migration entry points initializes data structures containing live values and proceeds with kernel execution. Alternatively, the compiler may create compute sub-kernels to proceed with the execution as described above. In block 912, the compiler completes compilation of the compute kernel for at least two heterogeneous processor cores. Other debug and instrumentation code may be inserted.

[0070] In one embodiment, the compiler generates multiple data structures. Two or more data structures include executable object code for each compute sub-kernel on a given target OpenCL device, such as a general-purpose core and a SIMD core. Another data structure includes the live data to be transferred and accessed at the time of migration. Given a label designated as a potential migration point in a compute kernel, the compiler utilizes data flow analysis to determine live values that may be transferred. Live values that are not defined at that point in the execution, such as being cached in a register, are placed in a location accessible to a runtime environment. Examples of these locations include associated original memory locations and registers that hold contents that are preserved. In one embodiment, a heuristic check may be utilized to determine whether the size of the data transfer allows a profitable change of execution between heterogeneous cores.

[0071] Additionally, the compiler may generate another data structure that is interpreted by the runtime environment to transfer the live data to an associated destination OpenCL device. This data structure may provide the locations and sizes of the live data to be transferred and their locations in an address space of both the source and destination OpenCL devices. Also, the compiler generates a corresponding version of the kernel for the destination device.
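The migration data structure of paragraphs [0070] and [0071], which records the locations and sizes of live values in the source and destination address spaces, might look like the following sketch. The field names and the byte-budget heuristic are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class LiveValue:
    """One live value to transfer at a migration point: where it lives
    in the source device's address space, where it must be initialized
    on the destination device, and its size."""
    name: str
    src_addr: int
    dst_addr: int
    size_bytes: int

@dataclass
class MigrationDescriptor:
    entry_label: str                      # e.g. "secondary_entry"
    live_values: list = field(default_factory=list)

    def transfer_cost(self):
        return sum(v.size_bytes for v in self.live_values)

    def profitable(self, budget_bytes):
        # Heuristic check: migrate only if the payload stays under budget.
        return self.transfer_cost() <= budget_bytes

desc = MigrationDescriptor("secondary_entry", [
    LiveValue("tmp_slice", 0x1000, 0x8000, 256),  # local slice of "tmp"
    LiveValue("temp", 0x2000, 0x8100, 8),         # local temp variable
])
```

The runtime would interpret such a descriptor to copy each live value to its destination address and then resume the kernel at the labeled entry point.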
The respective compiled code for each of the OpenCL devices accesses the live data at the designated locations and begins execution at the migration points.

[0072] Turning now to FIG. 10, one embodiment of a method 1000 for optimizing parallel execution of multiple work units in a processor by utilizing pre-runtime data information is shown. The components embodied in the processing node 110 and the hardware resource assignments shown in FIG. 4 described above may generally operate in accordance with method 1000. For purposes of discussion, the steps in this embodiment and subsequent embodiments of methods described later are shown in sequential order. However, some steps may occur in a different order than shown, some steps may be performed concurrently, some steps may be combined with other steps, and some steps may be absent in another embodiment.

[0073] In block 1002, an associated record of data is assigned to each work unit of a given compute kernel. In block 1004, the OS scheduler 424 schedules the work units to heterogeneous cores. In block 1006, the heterogeneous processor cores execute the corresponding scheduled work units.

[0074] In block 1008, a given tagged migration point is reached. In one embodiment, a measurement of the utilization of a currently used OpenCL device may be performed. If the measurement indicates the utilization or performance is below a given threshold, then the associated compute kernel or compute sub-kernel may be migrated to another OpenCL device, such as a heterogeneous core with a different micro-architecture. In one embodiment, this measurement is a count of a number of currently executing work units on a SIMD core that reached an exit or return within an associated compute kernel or compute sub-kernel. Alternatively, a count of a number of disabled computation units in a wavefront may provide the same number.
If this count is above a given threshold, then the work units that have not yet reached an exit point may be migrated to another heterogeneous core, such as a general-purpose core. Then the wavefront on the SIMD core may be released and is available for other scheduled work units.

[0075] In other embodiments, the above technique may be extended to initiate migrations in any situation in which it is determined that a large fraction of the parallel executing work units in a wavefront on a SIMD core are idle and the remaining work units are expected to continue substantial execution. For example, the generated data structures may be in shared memory and in one or more caches. In a system with virtual memory support, a subset of the work units may hit the cache whereas the remaining work units experience virtual memory misses, which are long latency events. In this case, overall computing performance may be better with continued execution on a general-purpose core since further execution may benefit from prefetching techniques enabled by the current execution.

[0076] If execution efficiency is not determined to be below a given threshold (conditional block 1010), then control flow of method 1000 returns to block 1006 and execution continues. If execution efficiency is determined to be below a given threshold (conditional block 1010), then in block 1012, one or more work units are identified to migrate to a second processor core with a micro-architecture different from a micro-architecture of the first processor core. The identified work units may have caused the above measurement to be below the given threshold. In block 1014, the associated local data produced by the first processor core is promoted to global data. In block 1016, the compiled versions of the migrated work units are scheduled to be executed on the second processor core beginning at the migration tagged point.

[0077] It is noted that the above-described embodiments may comprise software.
In such an embodiment, the program instructions that implement the methods and/or mechanisms may be conveyed or stored on a computer readable medium. Numerous types of media which are configured to store program instructions are available and include hard disks, floppy disks, CD-ROM, DVD, flash memory, Programmable ROMs (PROM), random access memory (RAM), and various other forms of volatile or non-volatile storage. Generally speaking, a computer accessible storage medium may include any storage media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible storage medium may include storage media such as magnetic or optical media, e.g., disk (fixed or removable), tape, CD-ROM or DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-Ray. Storage media may further include volatile or non-volatile memory media such as RAM (e.g., synchronous dynamic RAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM, low-power DDR (LPDDR2, etc.) SDRAM, Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, Flash memory, and non-volatile memory (e.g., Flash memory) accessible via a peripheral interface such as the Universal Serial Bus (USB) interface, etc. Storage media may include microelectromechanical systems (MEMS), as well as storage media accessible via a communication medium such as a network and/or a wireless link.

[0078] Additionally, program instructions may comprise behavioral-level or register-transfer level (RTL) descriptions of the hardware functionality in a high level programming language such as C, or a design language (HDL) such as Verilog or VHDL, or a database format such as GDS II stream format (GDSII). In some cases the description may be read by a synthesis tool which may synthesize the description to produce a netlist comprising a list of gates from a synthesis library. The netlist comprises a set of gates which also represent the functionality of the hardware comprising the system.
The netlist may then be placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks may then be used in various semiconductor fabrication steps to produce a semiconductor circuit or circuits corresponding to the system. Alternatively, the instructions on the computer accessible storage medium may be the netlist (with or without the synthesis library) or the data set, as desired. Additionally, the instructions may be utilized for purposes of emulation by a hardware-based emulator from such vendors as Cadence®, EVE®, and Mentor Graphics®.

[0079] Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
Device simulators and methods for using the same are disclosed. In some embodiments, the device simulators are capable of permitting accurate pixel-to-pixel and inch-to-inch mapping between a simulated display and a display of a target device. Web application development tools utilizing such simulators are also disclosed. In some embodiments, such web application development tools provide a convenient method to convert electronic document source files to interactive document web applications for multiple operating systems and form factors.
1. A system comprising:
a processor; and
a memory having stored thereon device emulator instructions, wherein when executed, the device emulator instructions cause the processor to perform the following operations:
generating a user interface within a web browser, the user interface including at least one hosted frame and at least one scalar, the at least one hosted frame including a simulation of a target device running therein, the target device including at least one display;
converting a position of the scalar to a scaling ratio; and
applying the scaling ratio to the at least one hosted frame;
wherein at least one first location of the scalar is associated with a scaling ratio that enables an inch-to-inch mapping between the at least one hosted frame and the at least one display of the target device.
2. The system of claim 1, wherein at least one second location of the scalar is associated with a scaling ratio that enables a pixel-to-pixel mapping between the at least one hosted frame and the at least one display of the target device.
3. The system of claim 1, wherein the device emulator instructions, when executed, further cause the processor to generate a plurality of device frames adjacent to the at least one hosted frame.
4. The system of claim 1, wherein the plurality of device frames comprise images of a screen of the target device.
5. The system of claim 1, wherein the device emulator instructions, when executed, further cause the processor to refresh the at least one hosted frame in response to an event associated with the scalar.
6. The system of claim 5, wherein the event associated with the scalar is selected from the group consisting of: starting, stopping, moving, or a combination thereof.
7. The system of claim 5, wherein the at least one hosted frame is refreshed in real time in response to the event associated with the scalar.
8. The system of claim 1, wherein the at least one first location is associated with a scaling ratio that causes the at least one hosted frame to be presented on a display of the system with at least one dimension that differs by about 5% or less from a corresponding dimension of the display of the target device.
9. A method comprising:
generating a simulation of a target device in a web browser executed by a processor, the target device including at least one display; and
changing a scaling of the simulation based on a scaling ratio determined from a position of a scalar;
wherein at least one first location of the scalar is associated with a scaling ratio that enables an inch-to-inch mapping between the simulation and the at least one display of the target device.
10. The method of claim 9, wherein at least one second location of the scalar is associated with a scaling ratio that enables a pixel-to-pixel mapping between the simulation and the at least one display of the target device.
11. The method of claim 9, further comprising:
performing the simulation within at least one hosted frame within the web browser; and
displaying an image of a screen of the target device in at least one device frame adjacent to the at least one hosted frame.
12. The method of claim 10, further comprising refreshing the simulation in the web browser in response to a scalar event.
13. The method of claim 12, wherein the scalar event is selected from the group consisting of: starting, stopping, moving, or a combination thereof.
14. The method of claim 12, wherein the simulation is refreshed in real time in response to the scalar event.
15. A method comprising:
displaying a user interface within a web browser executed by a processor, the user interface including a presentation layer and a rendering layer, the presentation layer including hypertext markup language code, the rendering layer including JavaScript code;
performing a simulation of a target device within the presentation layer, the target device including at least one display;
detecting a position of a scalar with the presentation layer;
converting the position of the scalar to a scaling ratio with the rendering layer; and
applying the scaling ratio to the simulation;
wherein at least one first location of the scalar is associated with a scaling ratio that enables an inch-to-inch mapping between the simulation and the at least one display of the target device.
16. The method of claim 15, wherein at least one second location of the scalar is associated with a scaling ratio that enables a pixel-to-pixel mapping between the simulation and the at least one display of the target device.
17. The method of claim 15, further comprising:
performing the simulation within at least one hosted frame within the user interface; and
displaying an image of a screen of the target device in at least one device frame adjacent to the at least one hosted frame.
18. The method of claim 15, further comprising:
refreshing the simulation with the rendering layer in response to a scalar event.
19. The method of claim 18, wherein the scalar event is selected from the group consisting of: starting, stopping, moving, or a combination thereof.
20. The method of claim 18, wherein the simulation is refreshed in real time in response to the scalar event.
Simulation of Web Applications and Auxiliary Devices in Web Browsers, Web Application Development Tools, and Methods of Using Them

Field of the Invention

The present disclosure generally relates to device emulation and web application development for multiple operating systems and form factors.

Background

A web application is a computer software application hosted in a browser-controlled environment (e.g., a Java applet) or a computer software application coded in a browser-supported language such as JavaScript, Hypertext Markup Language (HTML), and the like. This type of application is popular because of the widespread, cross-platform availability of web browsers. Indeed, web browsers are used in many popular operating systems ("OS" or "OSes"), for example, the Windows® OS sold by Microsoft®, the MAC OS® sold by Apple®, and the Android® OS sold by Google®, and they can be used in devices that fall within a wide range of form factors, such as desktop computers, laptop computers, tablet personal computers ("PC" or "PCs"), and handheld devices (e.g., mobile phones, smart phones, etc.).

Web applications are increasingly developed using authoring tools, which are themselves web applications hosted in a web browser. Often, such authoring tools take the form of a device simulator that is displayed in a web browser running on the development system. The device simulator includes one or more images of the screen of a target device (e.g., a mobile phone, desktop computer, etc.). The web application under development is displayed within the image of the screen of the target device. In this way, the simulator allows the developer to preview the web application under development in the context of the target device's screen.

In order for the device simulator to accurately represent how the web application will be presented on the target device, two types of mappings need to be implemented.
First, the device simulator must be capable of pixel-to-pixel mapping, where one pixel of a simulated display in the device simulator (hereinafter the "simulated display") is associated with one pixel of the display of the target device (hereinafter the "target display"). Second, an inch-to-inch (i.e., physical) mapping is required, where one inch of the simulated display is associated with one inch of the target display.

While existing authoring tools are useful, they do not achieve accurate inch-to-inch mapping. This is because accurate inch-to-inch mapping requires information about the number of pixels per inch (PPI) of the simulated display, or in other words, PPI information about the display of the development system in which the device simulator is running. In many instances, the PPI of the simulated display is unknown. And even when the PPI of the development display is known, mapping it to the PPI of the target display can be difficult.

Moreover, existing web application development tools do not provide a straightforward, simple mechanism for converting electronic documents (such as e-books) into interactive document applications for multiple OSes and/or form factors simultaneously. In contrast, existing web application development tools typically require application developers to use different tools to generate applications for each OS. Such a process can be cumbersome and inconvenient, and can produce interactive document applications with inconsistent user interfaces across OSes and/or form factors. Moreover, many existing utilities for converting documents into interactive document applications do not adjust the page layout to account for changes in resolution and screen orientation between different platforms.
Therefore, users of interactive document applications developed using existing tools may have to scroll to read a single page of a document, which is not what the user desires.

Brief Description of the Drawings

FIG. 1 provides a block diagram of software components of a device simulator in accordance with a non-limiting embodiment of the present disclosure.

FIG. 2 is a flow diagram of a non-limiting method of scaling, in real time, a simulated display presented by a device simulator, in accordance with a non-limiting embodiment of the present disclosure.

FIG. 3 provides a non-limiting example of JavaScript pseudo code that can accurately zoom a hosting framework (iframe) in or out in real time in a variety of web browsers (e.g., Internet Explorer, Firefox, and Chrome).

FIG. 4 is an exemplary block diagram of a model, view, controller (MVC) architecture pattern upon which one or more aspects of the web application development tools of the present disclosure may be based.

FIG. 5 is a top-level architecture and workflow diagram of a web application development tool in accordance with a non-limiting embodiment of the present disclosure.

FIG. 6 is an architectural diagram of an interactive document web application in accordance with a non-limiting embodiment of the present disclosure.

FIG. 7 is an architectural diagram of a native application generated by a compiler service in accordance with a non-limiting embodiment of the present disclosure.

FIG. 8 provides a non-limiting example of a user interface in accordance with the present disclosure.

FIG. 9 is an architectural diagram of a web-based user interface in accordance with a non-limiting embodiment of the present disclosure.

FIG. 10 is an architectural diagram of a conversion service in accordance with a non-limiting embodiment of the present disclosure.

FIG. 11 is a class diagram of a non-limiting example of a web application development tool in accordance with the present disclosure.

FIG. 12 is a method flow diagram in accordance with a non-limiting embodiment of the present disclosure.

Detailed Description

One aspect of the disclosure relates to systems and methods for accurately simulating a display of a target device. Accordingly, described herein is a system that includes a processor and a memory on which device simulator instructions are stored. When executed, the device simulator instructions can cause the processor to perform a variety of functions. For example, the device simulator instructions may cause the processor to generate a user interface within a web browser, wherein the user interface includes at least one hosting framework and at least one scalar. In some embodiments, a simulation of the target device can run in the hosting framework, wherein the target device includes at least one display.

The device simulator instructions, when executed, can also cause the processor to convert the position of the scalar into a scaling ratio and apply the scaling ratio to the hosting framework. In some non-limiting embodiments, at least one first location of the scalar can be associated with a scaling ratio that enables an inch-to-inch mapping between the at least one hosting framework and the display of the target device.
In still other non-limiting embodiments, at least one second location of the scalar may be associated with a scaling ratio that enables pixel-to-pixel mapping between the at least one hosting framework and the display of the target device.

Also described herein are methods for simulating a target device. These methods may include, for example, generating a simulation of the target device in a web browser executed by a processor and changing the size of the simulation based on a scaling ratio determined from the position of a scalar. In some non-limiting embodiments of the methods described herein, at least one first location of the scalar is associated with a scaling ratio that enables an inch-to-inch mapping between the simulation and a display of the target device. In still other non-limiting embodiments of the methods described herein, at least one second location of the scalar is associated with a scaling ratio that enables pixel-to-pixel mapping between the at least one hosting framework and a display of the target device.

In some embodiments, the methods described herein include displaying a user interface within a web browser executed by the processor. The user interface can include a presentation layer and a logic layer. The presentation layer can include HTML code, and the logic layer can include JavaScript code. In some embodiments, the method includes executing a simulation of the target device within the presentation layer, wherein the target device includes at least one display. The method can also include detecting the location of the scalar using the logic layer, converting the scalar location to a scaling ratio using the logic layer, and applying the scaling ratio to the simulation. In some non-limiting embodiments, at least one first location of the scalar is associated with a scaling ratio that enables an inch-to-inch mapping between the simulation and the at least one display of the target device.
In still other non-limiting embodiments, at least one second location of the scalar is associated with a zoom ratio that enables pixel-to-pixel mapping between the at least one hosting framework and the at least one display of the target device.

One aspect of the disclosure relates to systems and methods for achieving accurate inch-to-inch mapping in a browser-hosted simulation of a target device (e.g., a mobile phone). In some embodiments, the systems and methods of the present disclosure enable accurate inch-to-inch mapping between a target device and a browser-hosted simulation of that device, while also providing a simple mechanism for switching to a precise pixel-to-pixel mapping between the simulation and the target device.

Accordingly, described herein is a device simulator that runs as a web application on a processor of a computing device. In general, the device simulator described herein causes a user interface (UI) to be displayed in a web browser running on the computing device. The UI includes at least one device simulator preview area (also referred to herein as a "hosting framework") configured to simulate a display of a target device, such as a mobile phone, tablet PC, laptop computer, or the like. An interactive document web application, such as an e-book, can run and be displayed within the at least one hosting framework. In this manner, the device simulator of the present disclosure enables a user (e.g., a software developer) to visualize the operation and appearance of an interactive document web application in an environment that simulates the display of the target device.

In addition to the basic elements and functions described above, the device simulator described herein can also include at least one device framework proximate or otherwise adjacent to the at least one hosting framework. Such a device framework can display an image, such as an image of a screen of the target device.
In such an instance, the device frameworks can be arranged around the hosting framework to enhance the simulated operation and appearance of the web application executing within the hosting framework.

The device simulator described herein may also include mechanisms for adjusting the attributes of the at least one hosting framework. For example, the UI of the device simulator described herein may include elements and underlying code that allow selection of the target device type, resolution, and orientation. The device simulator described herein can include source files containing data relating to multiple target devices and form factors, such as mobile phones, tablet PCs, smart phones, and laptops. Once a particular device or attribute (e.g., resolution and/or orientation) is selected, the device simulator of the present disclosure can use the newly selected properties to adjust the relevant characteristics of the hosting framework and re-render the interactive document web application running in the hosting framework.

For example, if a user selects a different resolution through the UI, the device simulator described herein can adjust the resolution of the hosting framework and re-render the hosting framework (and the web application running in it) and/or one or more device frameworks with the newly selected resolution. Similarly, if the device type is changed through the UI, the device simulator described herein can load data related to the selected device (e.g., screen image, resolution, orientation, etc.) and re-render the hosting framework and/or one or more device frameworks using the characteristics associated with the selected device.

The UI may also include a scalar that, in combination with scalar code (described later), enables the user to change the size of the at least one hosting framework and/or one or more device frameworks.
For example, the location of the scalar can be converted to a scaling ratio by the scalar code, which can then be applied to the scaling properties of the at least one hosting framework and/or one or more device frameworks. As described in detail later, this functionality enables accurate inch-to-inch mapping between a display simulated in the at least one hosting framework and the display of a target device (e.g., a mobile phone, tablet PC, etc.). And in some embodiments, this functionality can provide an efficient mechanism for switching between accurate inch-to-inch mapping and pixel-to-pixel mapping between the hosting framework and the display of the target device.

For the purposes of the present disclosure, the term "position," when used in the context of a scalar, refers to the actual position of the scalar's indicator (e.g., relative to another portion of the scalar), to the value attributable to that position, or to both. For example, if the scalar is a slider having an arm that can travel left to right over a range of values (e.g., 0 to 100), the "position" of the slider can refer to the relative position of the slider arm and/or to the value at which the slider is located (e.g., 0, 25, 50, 100, etc.).

FIG. 1 provides a block diagram of a non-limiting example of software components of a device simulator in accordance with the present disclosure. As previously explained, the device simulator can be executed within a web browser running on a development system, such as a desktop PC.

As shown in FIG. 1, the device simulator 100 includes a presentation layer 101 and a logic layer 102. The presentation layer 101 (also referred to herein as the "view" in the context of a model, view, controller architecture) includes underlying code that generally functions to render a user interface in a web browser running on a processor of the development system. In the non-limiting example shown in FIG. 1, the presentation layer 101 renders a preview area 103, a hosting framework 104, one or more device frameworks 105, and a scalar 106. The specific operation of the presentation layer 101 is described later in conjunction with the generation of a UI for a web application development tool, which utilizes the device simulator described herein to help generate an interactive document web application. For the present discussion, it should be noted that the presentation layer 101 can be encoded using HTML, and that the presentation layer 101 can include a variety of plug-ins known to those skilled in the art. For example, and as will be described in detail below, the presentation layer 101 can include an index.html file (or other similar file) containing references to third-party plug-in applications and/or libraries, such as the jQuery JavaScript slider plugin and its associated library.

Although FIG. 1 illustrates the preview area 103 as being much larger than the hosting framework 104, it should be understood that the preview area 103 can be any size. In some embodiments, the preview area 103 is sized such that it approximates at least one dimension of the hosting framework 104.

The preview area 103 can be subdivided (e.g., using HTML) into a plurality of columns and rows to define separate regions or frames. This concept is illustrated in FIG. 1 by dashed lines 107, which show the preview area 103 divided into three rows and three columns, thereby dividing the preview area 103 into eight regions surrounding the hosting framework 104 (e.g., eight device frameworks 105). Consistent with the previous discussion, each region (frame) of the preview area 103 (including the one or more device frameworks 105) can be independently encoded to display an image, such as an image of a screen of the target device.
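The row-and-column subdivision described above can be sketched in HTML markup. The following is an illustrative sketch only: the element names, class names, dimensions, and image paths are assumptions rather than actual markup from the disclosure.

```html
<!-- Hypothetical sketch of a preview area divided into three rows and
     three columns, with eight device frameworks surrounding a central
     hosting framework. All ids, classes, sizes, and image paths are
     illustrative assumptions. -->
<div id="previewArea">
  <div class="row">
    <div class="deviceFrame"><img src="bezel-top-left.png" alt=""></div>
    <div class="deviceFrame"><img src="bezel-top.png" alt=""></div>
    <div class="deviceFrame"><img src="bezel-top-right.png" alt=""></div>
  </div>
  <div class="row">
    <div class="deviceFrame"><img src="bezel-left.png" alt=""></div>
    <!-- Hosting framework: the web application under development runs here -->
    <iframe id="hostingFramework" src="app/index.html"
            width="320" height="480"></iframe>
    <div class="deviceFrame"><img src="bezel-right.png" alt=""></div>
  </div>
  <div class="row">
    <div class="deviceFrame"><img src="bezel-bottom-left.png" alt=""></div>
    <div class="deviceFrame"><img src="bezel-bottom.png" alt=""></div>
    <div class="deviceFrame"><img src="bezel-bottom-right.png" alt=""></div>
  </div>
</div>
```

Because each surrounding region is its own element, it can be styled and populated independently, matching the description of regions that each display an image of part of the target device's screen.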
This concept is illustrated later in FIG. 8, which provides a non-limiting example of a user interface in which one or more device frameworks display an image of the screen of the target device.

Although FIG. 1 illustrates dividing the preview area into eight regions around a single hosting framework 104, it should be understood that the preview area 103 can be divided into any number of regions and can contain more than one hosting framework. In fact, device simulators having a preview area containing 1, 2, 3, 4, 5, or more hosting frameworks are contemplated by the present disclosure. By way of example, the preview area 103 can be divided into six columns and three rows to enable the display of two hosting frameworks, each surrounded by eight regions (including eight device frameworks 105). In this way, multiple simulations of target devices can be presented simultaneously.

The preview area 103 (including the hosting framework 104) can be encoded using HTML, its variants, and/or other suitable code. In an example in which the preview area 103 is encoded using HTML, the hosting framework 104 can be defined using, for example, an <iframe> HTML tag.

The scalar 106 is a user interface object having a range of locations that the user can change. In some embodiments, the scalar 106 is presented in the form of a slider, a pulley, a pair of zoom in/out buttons, a drop-down list, and/or a series of radio buttons. As a non-limiting example of a suitable scalar that can be used in accordance with the present disclosure, mention is made of the jQuery UI slider library and/or plugin, which is described later.

The logic layer 102 operates to apply device model data and other characteristics to the hosting framework 104, to render a custom panel (described later) within the web browser, and to render the web application running in the hosting framework 104 with the appropriate attributes of the target device and/or user-selected characteristics.
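The application of device model data to the hosting framework can be sketched as follows. This is a minimal illustration under stated assumptions: the device entries, property names, and function name are hypothetical and do not come from the disclosure's actual source files.

```javascript
// Hypothetical device model data; in the disclosure this information
// would come from source files describing each target device.
const deviceModels = {
  phoneA:  { width: 320, height: 480,  ppi: 163 },
  tabletB: { width: 768, height: 1024, ppi: 132 },
};

// Apply a target display's attributes to a hosting-framework object.
// `frame` stands in for the simulator's <iframe> element.
function applyDeviceAttributes(frame, modelName, orientation) {
  const model = deviceModels[modelName];
  if (!model) throw new Error('unknown device model: ' + modelName);
  const landscape = orientation === 'landscape';
  frame.width  = landscape ? model.height : model.width;  // axes swap
  frame.height = landscape ? model.width  : model.height; // in landscape
  frame.ppi = model.ppi; // retained for inch-to-inch scaling
  return frame;
}

console.log(applyDeviceAttributes({}, 'phoneA', 'landscape'));
// prints: { width: 480, height: 320, ppi: 163 }
```

A simulator built this way could re-render the web application in the iframe whenever these attributes change, as the disclosure describes for device-type, resolution, and orientation selections.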
The logic layer 102 can provide this functionality by, for example, calling at least one source file that contains attributes associated with the display of the target device. Such source files may include information about one or more target devices (e.g., device type, orientation, resolution, etc.). The logic layer 102 can apply the attributes of the target display to the relevant attributes of the hosting framework 104. Thus, the web application running in the hosting framework 104 of the device simulator 100 can be displayed within the web browser of the development system with the appropriate attributes of the target device.

The logic layer 102 may also include scalar code 107 that monitors the location of the scalar 106. Based on the location of the scalar 106, the scalar code 107 can determine a scaling ratio or scale appropriate to the browser in which the device simulator 100 is running. For example, the scalar code can convert the position of the scalar 106 to the scale used in the Firefox web browser or to the zoom ratio used in Microsoft Internet Explorer. For convenience and brevity, the result of this conversion is referred to as the "scaling ratio."

The scalar code 107 can apply the determined scaling ratio to the preview area 103, the hosting framework 104, one or more device frameworks 105, and combinations thereof, and can re-render the preview area 103 (including the hosting framework 104 and one or more device frameworks 105) using the determined scaling ratio.

The scalar code 107 can also include an event handler that applies the determined scaling ratio to the preview area 103 in response to a scalar event.
For example, the scalar code 107 may apply the determined zoom ratio to the preview area 103 (or a component thereof) when the position of the scalar 106 begins to change ("start"), while the position of the scalar 106 is changing ("move" or "slide"), when the position of the scalar 106 stops changing ("stop"), or in some combination thereof. In some embodiments, as the position of the scalar 106 changes, the scalar code 107 applies the determined scaling ratio to the preview area 103 (including the hosting framework 104 and one or more device frameworks 105, if any). In such an instance, the scalar code 107 can cause the presentation layer 101 to update the zoom ratio of the preview area 103, the hosting framework 104, one or more device frameworks 105, or a combination thereof, and to re-render one or more of those regions in real time using the determined zoom ratio.

By appropriately adjusting the scalar 106, the size of the preview area 103, the hosting framework 104, one or more device frameworks 105, and combinations thereof can be adjusted to provide an accurate inch-to-inch mapping between the simulated display and the actual target device. This may be achieved by, for example, adjusting the scalar 106 so that the device simulator 100 presents the preview area 103 (and in particular the hosting framework 104) on the display of the development system at a size approximating the physical size of the target device. In this way, the device simulator can provide a WYSIWYG environment: the web application can be presented in the device simulator in the same manner as it would appear on the target device.

FIG. 2 is a flow diagram of a non-limiting method of adjusting, in real time, the scaling of a simulated display of a target device presented by a device simulator in accordance with the present disclosure.
In a start step 201, the device simulator 100 can render the web application in the hosting framework 104 of the preview area 103 using characteristics consistent with a default target display. Alternatively, the device simulator 100 may wait for the selection of a target device before presenting the web application in all or a portion of the preview area 103, including the hosting framework 104 and one or more device frameworks 105. Regardless of the operation performed in start step 201, in a present preview area step 202, the device simulator 100 presents the preview area 103.

After presenting the preview area 103, in a detect scalar position step 203, the scalar code 107 monitors the position of the scalar 106. Then, in a convert scalar position to zoom ratio step 204, the position of the scalar 106 can be converted to a zoom ratio. In an apply scaling ratio to device simulation step 205, the device simulator 100 can then apply the determined scaling ratio to the preview area. Steps 202-205 can be repeated as the scalar 106 is adjusted. Using this approach, the preview area 103 (including the hosting framework 104 and/or one or more device frameworks 105) can be updated to allow the user to visualize changes to the zoom ratio of the preview area 103 in real time.

In some embodiments, the device simulator described herein is capable of generating a simulated display of a target device that differs in size from the physical size of the target device's display by about 5% or less, about 2% or less, or even about 1% or less. In some embodiments, the device simulator described herein is capable of generating a simulated display of a target device having a size equal to the physical size of the display of the target device.

To achieve an inch-to-inch mapping, the scalar 106 can be manually adjusted until the simulated display presented by the device simulator 100 approximates the physical size of the display of the target device.
This can be accomplished by, for example, presenting the simulated display on a monitor of the development system, holding the target device near the monitor, and visually comparing the size of the simulated display to the physical size of the target device while adjusting the scalar. Alternatively, this comparison can be performed automatically, for example by code that compares the properties of the simulated display to the number of pixels per inch (PPI) of the development display. For example, such code can invoke a database containing PPI information for multiple development displays and compare that PPI information to the relevant attributes of the simulated display, and specifically of the hosting framework.

The scalar 106 can also be configured such that a selected location is associated with a zoom ratio at which the preview area 103, the hosting framework 104, one or more device frameworks 105, or a combination thereof is mapped pixel-to-pixel to the display of the target device. For example, a selected location of the scalar 106 can be associated with a 100% zoom ratio of the preview area 103, the hosting framework 104, and/or one or more device frameworks 105. In some non-limiting embodiments, the number of pixels in the preview area is greater than or equal to the number of pixels of the display of the target device. That is, the resolution of the preview area (and specifically of the hosting framework) is preferably equal to or greater than the resolution of the display of the target device. In such an example, the scaling determined by the scalar can provide a simulated display that implements a pixel-to-pixel mapping with the display of the target device.

In some examples, the resolution of the preview area can be set lower than the resolution of the display of the target device.
In such an instance, the simulated display running in the preview area (and specifically in a particular hosting framework 104) can be associated with a portion of the display of the target device. Nonetheless, using the scalars described herein, a scaling ratio that enables pixel-to-pixel mapping between the simulated display and the corresponding portion of the target display can be determined.

As a non-limiting example of a scalar that can be used in accordance with the present disclosure, mention is made of a slider enabled by the jQuery UI slider plugin and JavaScript library (hereinafter, the "jQuery plugin"). The jQuery plugin provides a variety of slider options, including a variety of handles and ranges from which developers can choose. These handles and ranges can be manipulated by the user, for example using a mouse or keyboard.

To illustrate a non-limiting use of the jQuery UI slider plugin, reference is made to FIG. 3. In general, FIG. 3 provides exemplary JavaScript pseudo code that enables accurate real-time zooming of a hosting framework (iframe) in a variety of web browsers, such as Internet Explorer, Firefox, and Chrome.

In the non-limiting example shown in FIG. 3, the jQuery library can be included by adding the appropriate script references to the HTML document defining the presentation layer 101 (e.g., in index.html). Then, in the body of the HTML document, a "<div>" element can be used to provide a locator for the slider. A statement can then instruct the jQuery library to display the slider at that locator, with a range having maximum, minimum, and default values. Of course, different types of sliders and sliders with different ranges can also be used. The event handler for the slider can be specified at the end of the preceding statement.
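The setup just described (the library references, the slider locator, the slider initialization, and the event handler) can be sketched as follows, assuming the jQuery UI slider API. The selectors, value range, browser checks, and the inch-to-inch helper are illustrative assumptions; the actual FIG. 3 listing may differ.

```javascript
// Hypothetical sketch of the FIG. 3 approach; not the actual listing.
// In index.html the libraries would be referenced with <script> tags
// (jQuery and jQuery UI), and the slider locator provided with:
//   <div id="slider"></div>

// Convert a slider position (0-100) to a zoom ratio, with the midpoint
// 50 mapping to a 100% (pixel-to-pixel) zoom.
function positionToZoom(position) {
  return position / 50;
}

// Zoom ratio at which the simulated display matches the target display's
// physical size (inch-to-inch): each target pixel must span
// devPpi / targetPpi development-display pixels.
function inchToInchZoom(devPpi, targetPpi) {
  return devPpi / targetPpi;
}

// In a browser with jQuery UI loaded, initialize the slider and redraw
// the preview area on every movement ("slide" event).
if (typeof $ !== 'undefined') {
  $(function () {
    $('#slider').slider({
      min: 0,
      max: 100,
      value: 50, // default: pixel-to-pixel
      slide: function (event, ui) {
        var zoom = positionToZoom(ui.value);
        var preview = $('#previewArea');
        if (document.body.style.zoom !== undefined) {
          preview.css('zoom', zoom); // Internet Explorer, Chrome
        } else {
          preview.css('-moz-transform', 'scale(' + zoom + ')'); // Firefox
        }
      }
    });
  });
}

console.log(positionToZoom(75)); // prints: 1.5
```

Handlers for the "start" and "stop" events could be attached in the same way when redrawing on every movement is too costly.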
In the non-limiting example in FIG. 3, the event handler is defined so that it picks up each movement of the slider and updates the preview area, redrawing the preview area (including the hosting framework) in real time in response to each movement of the slider. However, as mentioned above, an event handler that redraws the preview area in response to other events (e.g., start and/or stop events) can also be specified.

The remainder of the exemplary pseudo code in FIG. 3 converts the slider position to a zoom ratio and applies the scaling ratio in a variety of web browsers, such as Internet Explorer and Chrome.

As mentioned in the Background, current web application development tools do not provide a convenient way for application developers to convert electronic documents, such as e-books, into interactive document applications for a variety of OSs and/or form factors simultaneously. Instead, existing tools typically require application developers to use a different tool for each OS to convert electronic documents into such interactive document applications. This process is time consuming and can result in an inconsistent experience for users of the generated interactive document applications. Moreover, existing tools often do not account for differences in resolution, screen orientation, and other factors that may affect the reading experience across platforms and operating systems.

The present disclosure may address one or more of these problems by providing an integrated development method and web application development tool that utilizes the device simulator described herein. For example, the web application development tools and methods of the present disclosure may provide a mechanism for quickly converting an electronic document, such as an e-book, into an interactive document web application for one or more OSs and form factors.
Moreover, the tools and methods described herein are capable of adjusting various elements of the generated application to account for differences in screen resolution, orientation, form factor, and the like between target devices.

Accordingly, another aspect of the present disclosure is directed to a web application development tool. In some embodiments, the web application development tools described herein run in a web browser executing on a processor of a development system, such as a desktop or laptop computer. The web application development tool generally includes a web-based user interface and a conversion service. In some embodiments, these web application development tools also include a compiler service. The functions and exemplary configurations of each of these components are discussed in detail below.

In this regard, reference is made to FIG. 4, which provides an exemplary block diagram of a model, view, controller (MVC) architecture pattern upon which one or more aspects of the web application development tools of the present disclosure may be based. In the MVC pattern, the model 401 can manage the behavior and data of the application domain in response to requests for information about its state (e.g., from a view) and/or in response to instructions to change state (e.g., from a controller). In an event-driven system, the model 401 can notify observers (e.g., a view) when information about its state changes, enabling the observers to react.

The view 402 is generally configured to render the model in a form suitable for interaction, such as a user interface element. It should be understood that there may be multiple views for a single model. In such cases, each view can serve the same purpose as, or a different purpose than, another view.
The view 402 can have a 1:1 correspondence with a display surface (e.g., a monitor) and can be configured such that it knows how to render to that display surface.

The controller 403 is generally configured to receive input (e.g., from a user) and to initiate a response to such input through calls to model objects or other resources. For example, the controller 403 can receive input from a user and instruct the model and view to perform an action based on that input.

As mentioned above, the tools and methods of the present disclosure can utilize the UI and the conversion service to convert an electronic document, such as an e-book, into an interactive document web application. As will be described in detail below, the UI can be configured to provide a convenient mechanism for supplying source files, for example by providing an upload or import utility that enables source files to be loaded into the web application development tool of the present disclosure. Once loaded, the conversion service can convert the source file into an interactive document web application (e.g., in HTML/JavaScript), which can be run in the device simulator and edited. Additionally, the web-based authoring tool can package the interactive document web application into packages and application installers suitable for a wide variety of OSs and form factors, as described below in connection with the compiler service.

For purposes of clarity and conciseness, the present disclosure focuses on converting an electronic document stored in the EPUB file format into an interactive document web application. However, it should be understood that electronic documents stored in any suitable file format can be used. For example, documents stored in PDF format, DOC format, DOCX format, HTML format, TXT format, MOBI format, and the like can be used.

FIG. 5 provides a top-level architecture and workflow diagram of an exemplary web application development tool in accordance with a non-limiting embodiment of the present disclosure. As shown, the web application development tool 500 includes a user interface 501, a conversion service 502, and a compiler service 503. In operation, a source file 504, such as an e-book, is loaded into the web application development tool 500, for example through the user interface 501. Once loaded, the conversion service 502 converts the source file 504 into an interactive document web application 505. A preview of the interactive document web application 505 is generated within a device simulator (not shown) running within the user interface 501.

As will be described later, the interactive document web application 505 can include style code (e.g., a cascading style sheet) and dynamic code (e.g., JavaScript). In general, the style code allows customization of the user interface of the interactive document web application 505. The dynamic code generally operates to dynamically adjust various elements of the interactive document web application 505 (e.g., its page layout, orientation, etc.) in response to input made through the user interface 501 of the web application development tool 500.

Note that the user interface 501 can include customization code, simulator control code, and project bar code, any of which can be used to customize the interactive document web application 505. For example, input through the UI can be used to change the user interface of the interactive document web application 505 and/or its resolution, layout, orientation, fonts, and the like. Moreover, the user interface 501 can include elements that, when executed, insert a plug-in (e.g., a social plug-in) into the interactive document web application 505.
In response to such input, the interactive document web application 505 and/or its preview can be updated.

Once customization of the interactive document web application 505 is completed, the compiler service 503 can perform operations to package the final customized interactive document web application into native applications suitable for installation on a variety of OSs. This is illustrated in the exemplary top-level architecture and workflow diagram of FIG. 5, where the compiler service 503 packages the interactive document web application 505 into application installers 507-1, 507-2, . . . , 507-n for OS 1, OS 2, . . . , OS n, respectively. The compiler service 503 can package the interactive document web application into a native application using a web toolkit or another suitable mechanism.

The interactive document web application generated by the conversion service includes source files and/or underlying code that enable the application to be dynamically edited in the device simulator. For example, the interactive document web application can include dynamic code, such as a JavaScript engine configured to dynamically generate HTML pages from other source files.

Further, the interactive document web application can include layout code, such as one or more HTML pages that define the layout of the user interface of the interactive document web application. Moreover, the interactive document web application can include style code (e.g., a cascading style sheet) that defines the look and feel (e.g., color scheme, button shapes, icons, etc.) of each component of the user interface of the interactive document web application. The interactive document web application may also include TOC code (e.g., one or more HTML files) that provides a table of contents (TOC) and, optionally, an entry point for the interactive document web application.
For example, the interactive document web application may include chapter code (e.g., one or more HTML files) containing the basic content of each chapter of the electronic document contained in the source file described above.

FIG. 6 provides an exemplary architectural diagram of an interactive document web application in accordance with the present disclosure. As shown, the interactive document web application 601 is encoded using the MVC architecture described above. The model 602 includes dynamic code 603 (e.g., the JavaScript engine treesaver.js), which dynamically generates HTML pages based on source files such as layout code 604 (e.g., resources.html), style code 605 (e.g., style.css), TOC code 606 (e.g., index.html), and chapter code 607 (e.g., HTML files).

One or more of the dynamic code 603 (e.g., treesaver.js), layout code 604, and/or style code 605 can be encoded such that they are loaded in a hosted browser before the TOC code 606 (e.g., index.html) and/or chapter code 607 are loaded. This may enable the dynamic code 603 to generate HTML pages in a hosted browser using styles (e.g., style sheets) specified in the style code 605 (e.g., style.css) and navigation layouts specified in the layout code 604 (e.g., resources.html). The HTML pages generated by the dynamic code 603 can be rendered in view 608 (e.g., in a hosted browser), where the pages can be viewed by the user.

The dynamic code 603 can also be responsive to input through control 609, which can be included in the user interface of the web application development tool described herein. Thus, for example, control 609 can be run to change aspects of the model 602, such as moving to a previous/next chapter or a previous/next page, or rotating the screen. The dynamic code 603 can generate an HTML page in response to such input, thereby updating the model 602 to include the changes entered via control 609.
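The model–control–view loop just described can be sketched in plain JavaScript. The class names and page format below are illustrative stand-ins for discussion only, not the treesaver.js implementation:

```javascript
// Minimal MVC sketch: the model regenerates a page from its source
// content whenever a control changes it, and the view re-renders the
// result. All names here are hypothetical stand-ins.

class Model {
  constructor(chapters) {
    this.chapters = chapters; // stands in for chapter code (HTML files)
    this.current = 0;         // index of the chapter being shown
  }
  generatePage() {
    // Stand-in for the dynamic code generating an HTML page from the
    // style, layout, and chapter source files.
    return `<article>${this.chapters[this.current]}</article>`;
  }
}

class View {
  render(model) {
    this.html = model.generatePage();
    return this.html;
  }
}

class Control {
  constructor(model, view) { this.model = model; this.view = view; }
  nextChapter() {
    // A "next chapter" control updates the model, after which the
    // view is refreshed with a newly generated page.
    this.model.current = Math.min(this.model.current + 1,
                                  this.model.chapters.length - 1);
    return this.view.render(this.model);
  }
}
```

In this sketch, as in the architecture above, input never touches the view directly: the control mutates the model, and the view is regenerated from it.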
The view 608 can then be updated with an HTML page generated in response to the input, thereby enabling the user to view changes to the interactive document web application.

It is worth noting that in the non-limiting example shown in Figure 6, HTML and JavaScript are used to generate an interactive document web application. Interactive document web applications can therefore be platform-independent, as a wide variety of OSs and hardware can execute HTML and JavaScript. In fact, it is expected that a target device should be able to execute the interactive document web applications of the present disclosure without difficulty, as long as the OS and hardware of the target device support web browsing.

While the interactive document web applications described herein may be platform and/or OS independent, such applications may need to be packaged into a suitable installer before they can be installed on a target OS. Note that different OSs may require different types of installers. For example, Microsoft Windows® may require an installer in the MSI format, while another OS may require an installer in another format. In some embodiments, the web application development tools of the present disclosure may address this issue through the use of an optional compiler service.

In general, the compiler service of the present disclosure performs operations to compile an interactive document web application into a final native application for a target OS and to generate a suitable installer for the target OS. In instances where an interactive document web application is to be used on different OSs, the compiler service can be configured to generate native applications and installers for each target OS simultaneously or separately.

The compiler services disclosed herein may include one or more of pre-processing code (e.g., ebulwagsvc.php in Figure 9) and packaging code. The pre-processing code can perform a variety of functions.
For example, the pre-processing code can add an icon (e.g., a user-uploaded icon) to the interactive document web application. Additionally or alternatively, the pre-processing code can compress the interactive document web application into a single file, such as a .zip file. And in some embodiments, once the various native application versions of the interactive document web application are built, the pre-processing code can upload the compressed file to a distribution server and present a download link in the web-based UI.

The packaging code is a service that runs to compile an interactive document web application into native applications for a variety of OSs and form factors. The packaging code can be hosted at a local or remote location. For example, the packaging code can be hosted on a computer (e.g., a server) that is remote from the hosted web service of the development system running the web application development tools described herein.

FIG. 7 provides a non-limiting example of an architectural diagram of a native application generated by a compiler service in accordance with the present disclosure. As shown, the compiler service 701 encapsulates the interactive document web application into an interactive document web application package 702. In addition, the compiler service 701 can add additional layers, such as a web engine 703, a software development kit (SDK) 704, and a hybrid packetizer 705.

The interactive document web application package 702 can include the various components of the interactive document web applications disclosed herein. For example, the interactive document web application package 702 can include dynamic code, layout code, style code, chapter code, and TOC code, as previously described.
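As a rough illustration, the package contents just listed might be collected into a simple manifest before compression. The file names follow the examples used throughout this disclosure; the function name and structure are assumptions for illustration only, and the zip step itself is omitted:

```javascript
// Sketch of what the interactive document web application package
// might contain, represented as a flat file manifest. Illustrative
// only; the actual package layout is not specified by the disclosure.

function buildPackageManifest(chapterFiles) {
  return [
    'treesaver.js',    // dynamic code
    'resources.html',  // layout code
    'style.css',       // style code
    'index.html',      // TOC code
    ...chapterFiles,   // chapter code, one HTML file per chapter
  ];
}
```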
The interactive document web application package may take the form of a compressed file (e.g., a zip file) containing the multiple files discussed above.

The web engine 703 can be an OS-independent engine configured to extract the interactive document web application from the interactive document web application package 702. Additionally, the web engine 703 can be configured to parse the underlying code of the interactive document web application (e.g., layout code (e.g., HTML), style code (e.g., CSS), and dynamic code (e.g., JavaScript)) and render the output to a specified window (for example, a window specified by the hybrid packetizer 705). Currently, the web engine must be compiled for each OS. Should web engines that can work across multiple OSs become available, the present disclosure contemplates the use of such web engines as well.

The SDK 704 can be an OS-specific SDK and can perform a number of different functions. For example, the SDK 704 can verify whether a particular user is authorized to use the final native application. In addition, the SDK can perform operations to place the final native application in full-screen mode or windowed mode. Of course, these functions are merely exemplary, and the SDK 704 can provide other SDK functions known in the art. In some embodiments of the present disclosure, the compiler service 701 compiles an SDK for each target OS.

The hybrid packetizer 705 can be an application compiled for the target OS and can be used to invoke authentication and other protocols. For example, the hybrid packetizer 705 can invoke the SDK 704 to verify that the user is authorized to use the final native application.
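The verification-then-launch behavior attributed to the hybrid packetizer 705 might be sketched as follows. The sdk and webEngine interfaces here are hypothetical stand-ins, not an actual SDK API:

```javascript
// Sketch of the hybrid packetizer's launch sequence: verify the user
// through the (OS-specific) SDK, and only then hand the location of
// the web application package to the web engine. All interfaces are
// illustrative assumptions.

function launchNativeApp({ sdk, webEngine, packagePath, user }) {
  if (!sdk.verifyUser(user)) {
    // Authorization failed: do not load the application.
    return { status: 'denied' };
  }
  // Load the target OS's web engine and pass it the location of the
  // interactive document web application package.
  webEngine.load(packagePath);
  return { status: 'running', path: packagePath };
}
```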
Once the user is successfully verified, the hybrid packetizer 705 can load the web engine 703 for the target OS and pass the location of the interactive document web application package to the web engine 703.

Regardless of the method, it should be understood that the compiler service 701 performs operations to generate application installers 706₁ through 706ₙ for OS 1 through OS n, respectively. This can be achieved using the exemplary architecture diagram and method shown in Figure 7, or in another way.

As mentioned above, the web application development tool of the present disclosure may include a user interface (UI). As will be described in detail below, the UI of the present disclosure may be web-based and may allow a user to edit various aspects of an interactive document web application running in a device simulator. Edits made to the interactive document web application can be rendered in real time, periodically, or at varying intervals. In this way, the UI can allow the user to observe the effects of changes made to the interactive document web application in the context of the simulated display of the target device.

To this end, reference is made to Figure 8, in which a non-limiting example of the UI of the present disclosure is provided. As shown, the UI 800 includes a device simulator that includes a device simulator preview area 801 and a device framework 802. Typically, the device simulator acts as a place to preview the interactive document web application generated by the transformation service. Device framework 802 is generally associated with one or more of the device frameworks of the device simulators previously described herein. Accordingly, device framework 802 can perform operations to display one or more images, such as an image of the screen of a target device (e.g., a mobile phone, tablet PC, etc.). The device simulator preview area 801 can be associated with the hosting framework of the device simulator previously described herein.
Thus, a preview of the interactive document web application can run within the device simulator preview area 801.

As further illustrated in FIG. 8, UI 800 can include other components, such as an item bar 803, one or more simulator settings panels 804, and one or more customization panels 805. In general, the item bar 803 contains elements related to project-level operations. For example, the item bar 803 can include elements that allow the user to select a project type, create a new project, make a test package, and/or publish a release package for an existing project.

The simulator settings panel 804 generally includes elements related to the operation of the device simulator running within the UI 800. For example, the simulator settings panel 804 can include a scaler 806 (e.g., a slider) that can be operated to adjust the zoom ratio of the device simulator preview area 801 and/or the device frame 802. In some embodiments, the scaler 806 may be capable of implementing a pixel-to-pixel or inch-to-inch mapping between the simulated display generated by the device simulator and the display of the target device, as previously described.

In addition to the scaler 806, the simulator settings panel 804 can include other custom commands that affect the display of the interactive document web application running in the device simulator executing within the UI 800. For example, the simulator settings panel may include commands that allow changing the simulated device type, resolution, orientation (rotation), and the like.

The customization panel 805 can include commands to change the format, style, and layout of the interactive document web application. For example, the customization panel 805 can include commands to change the user interface of the interactive document web application, the organization of information in the application, fonts, and the like. Additionally, the customization panel 805 can include commands to insert plugins at suitable locations within the interactive document web application.
For example, the customization panel 805 can include commands to insert links or plugins to social media sites (e.g., www.facebook.com and www.twitter.com) at suitable locations within the interactive document web application. As will be described later, this can be accomplished using dynamic code that inserts the relevant plugin code into the source files of the interactive document web application.

The web-based UI of the present disclosure can be designed using the MVC pattern. As a non-limiting illustration of this concept, reference is made to FIG. 9, which provides a non-limiting architectural diagram of a UI 900 in accordance with the present disclosure.

When an e-book is selected as the project type (e.g., via project bar 803), the underlying code of UI 900 can generate a suitable controller class object 903 (e.g., eBookController.js), which loads the relevant data and panels for the selected project from other resources, such as model class object 904 (e.g., eBookmodel.js). Additionally, controller class object 903 can render the related customization panels in a web browser.

The underlying code of UI 900 can also generate one or more additional controller class objects to load device simulator data from the appropriate resources and render the device framework of the device simulator running in UI 900. This is illustrated in the non-limiting example of Figure 9, where deviceHelper.js loads device simulator data from sharedmodel.js, a model resource containing simulator information.

When a source document, such as an epub document, is provided (e.g., by uploading, importing, or another means), the source file can be converted, using a conversion service 901 (e.g., epub2html.php in Figure 9), into an interactive document web application 905 that includes layout and style code, as explained above. For example, the epub2html.php service can convert source files into an interactive document web application that includes HTML and CSS files.
Once the conversion is completed, the controller class object 903 (e.g., eBookController.js) can load the interactive document web application and render it in the device simulator preview area.

When a command in the item bar 803, one or more simulator settings panels 804, and/or one or more customization panels 805 is invoked (e.g., by clicking on an element or dragging it into the device simulator preview area), the controller class object 903 (e.g., eBookController.js) can instruct the conversion service 901 (e.g., via a service such as ebupdatesvc.php) to modify the source files of the interactive document web application (e.g., layout code such as resources.html, style code such as style.css, etc.) to include the desired changes. Additionally, controller class object 903 can instruct the conversion service 901 to reload the interactive document web application in the device simulator preview area of view 902. Once the editing of the interactive document web application 905 is complete, the build service 906 (e.g., ebulgwasvc.php) can compile native installers for a variety of OSs.

In light of the above, it should be apparent that the conversion service of the non-limiting embodiments of the present disclosure can perform two functions. First, the conversion service described herein can perform operations to convert an electronic document into an interactive document web application (e.g., from epub to webapp). Second, the conversion service described herein can perform operations to update the source files of the interactive document web application to include any changes made through the UI. As shown in Figure 9 and briefly described above, these two functions can be performed by corresponding services, such as epub2html.php and ebupdatesvc.php.

Regarding the conversion of electronic documents into interactive document web applications, the conversion services described herein (e.g., epub2html.php in Figure 9) can use a conversion algorithm to extract data from document source files.
Typically, the conversion algorithm includes an e-book parser, which extracts the electronic document source file (in epub or another supported format) to retrieve the relevant data, and a webapp generator, which creates the source files of the interactive document web application. For example, the e-book parser can extract metadata corresponding to the creator, title, publisher, chapter list, etc., and parse the body text and/or images of the document source file. The webapp generator can create the TOC code (e.g., TOC.html) of an interactive document web application by populating a template file (e.g., a template TOC.html file) with data extracted by the e-book parser (e.g., metadata and chapter lists). In addition, the webapp generator can create chapter code (e.g., separate HTML files) by populating chapter templates (e.g., template chapter HTML files) with the extracted body text data and images.

Regarding updating the source files of the interactive document web application to include any changes made through the UI, the conversion service (e.g., ebupdatesvc.php of Figure 9) can perform one or more functions. For example, the conversion service can perform operations to insert social plugin code into a suitable location in the layout code (e.g., resources.html) of the interactive document web application. Alternatively or additionally, the conversion service can perform operations to change the theme of the book by replacing or updating the style code (e.g., style.css) and/or layout code (e.g., resources.html) with the style and layout code specified in a selected theme. In such instances, the social plugin code may or may not be retained. In some embodiments, the social plugin code is retained even if the theme changes.
Thus, it should be understood that the conversion service can be used to update any and all elements related to the layout and appearance of the interactive document web application, such as font, font size, orientation, resolution, and the like.

The conversion service can also be configured to provide a reset option. When executed, the reset option resets the style and layout code of the interactive document web application to the style and layout code specified in a default or pre-selected theme. In such instances, the social plugin code (e.g., HTML code) may or may not be retained. In some embodiments, the social plugin code is removed when the reset option is executed.

FIG. 10 is an exemplary architectural diagram of a conversion service in accordance with the present disclosure. As shown, a document source file 1001 can be loaded into the web-based authoring tool of the present disclosure through the UI 1002 or in another manner. Once loaded, the conversion service 1003 extracts data (e.g., metadata, body text, images, etc.) from the document source file 1001 using the e-book parser 1004. The webapp generator 1005 uses the extracted information to generate the source files of the interactive document web application 1006.

A preview of the interactive document web application 1006 can be generated and displayed in a device simulator running in the UI 1002 (e.g., using a default theme, layout, etc.). The update service 1007 can detect changes made through the UI 1002 and instruct dynamic code (e.g., JavaScript code such as treesaver.js) to update the source files of the interactive document web application 1006 accordingly. The preview of the interactive document web application 1006 running within the UI 1002 can then be refreshed.

For clarity, the present disclosure will now discuss a non-limiting example of the workflow performed by an electronic-document-to-web-application conversion algorithm in accordance with the present disclosure.
For purposes of illustration, this example focuses on converting a document source file stored in the epub format into an interactive document web application. However, it should be understood that the same or similar steps can be performed using source documents stored in other formats.

In this non-limiting example, the conversion service includes an e-book parser that extracts the source document file in epub format to a temporary directory. As an initial matter, the e-book parser can verify the contents of the extracted files and then proceed to extract the metadata by examining an XML file (e.g., container.xml) extracted from the source document file and stored in the temporary directory. With this check, the e-book parser can obtain the full-path attribute of the rootfile node, which is the path to the epub's open packaging format (OPF) file containing the metadata, the file list, and the linear reading order.

Once the path to the OPF file is determined, the e-book parser can open the OPF file as an Extensible Markup Language (XML) file, locate the metadata node, and make a clone copy of the metadata node (hereinafter referred to as the "cloned metadata node"). The e-book parser can then search for specific nodes within the metadata node to identify information that can be used in the interactive document web application. For example, the e-book parser can identify nodes associated with a title (e.g., dc:title) and a table of contents (e.g., a spine node). Regarding the table of contents, the e-book parser can check the spine node for the toc attribute and search all child nodes under the manifest node to obtain an item having an id equal to the toc attribute.
In addition, the e-book parser may identify an href attribute value that may correspond to the path name of a navigation control file (NCX file).

The conversion algorithm may generate a table of contents by iterating over all itemref sub-nodes under the spine node; obtaining their identification values (e.g., idref); searching all sub-nodes under the manifest node to obtain an item having an identifier equal to the identification value; obtaining the href attribute value of such an item; and creating a new chapter item in the TOC list using the href attribute as the full path name.

The e-book parser can open the NCX file as an XML file and examine it to perform several functions. For example, the e-book parser can locate the docTitle node in the NCX file and obtain its text node value. In instances where the dc:title node is missing from the OPF file, this value can be used as the title. In addition, the e-book parser can examine the NCX file for the docAuthor node and obtain its text node value. This value can be used as the book creator name. Moreover, the e-book parser can update the TOC list by locating the navMap node in the NCX file, iterating over all navPoint sub-nodes, and obtaining their navLabel/text node values. These values can be used as the titles of the individual chapters. The e-book parser can also search the TOC list described above and add the navLabel/text node value as a title to any chapter item that has the same full path name as the content node's src attribute. Otherwise, a default descriptor can be used as the title of the chapter, for example, "Part xx", where xx is the index number in the TOC list.

As explained briefly above, the e-book parser can extract body text and images from electronic document source files.
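The spine-and-NCX walk described above can be sketched as a small function. The input objects below stand in for already-parsed OPF and NCX XML; all names and shapes are illustrative assumptions, not the patent's actual parser:

```javascript
// Sketch of TOC-list construction: resolve spine idrefs against the
// manifest, attach navLabel titles from the NCX navMap, and fall back
// to a default "Part xx" descriptor where no navLabel matched.

function buildTocList(spineIdrefs, manifest, navPoints) {
  // 1. Resolve each spine idref to a full path via the manifest.
  const toc = spineIdrefs.map(idref => {
    const item = manifest.find(m => m.id === idref);
    return { path: item.href, title: null };
  });
  // 2. Attach navLabel titles by matching each navPoint's src path.
  for (const np of navPoints) {
    const entry = toc.find(e => e.path === np.src);
    if (entry) entry.title = np.label;
  }
  // 3. Default descriptor for chapters with no matching navLabel.
  toc.forEach((e, i) => {
    if (!e.title) e.title = `Part ${String(i + 1).padStart(2, '0')}`;
  });
  return toc;
}
```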
The following description provides a non-limiting example of how the e-book parser and webapp generator can use this information to generate the source files of an interactive document web application.

In this non-limiting example, the e-book parser can open the HTML file specified by each chapter item in the TOC list described above. The webapp generator can open an output chapter file (in the output directory) for writing and can pre-populate the output chapter file with the contents of a template file. In some instances, the output chapter file has the same name as the source electronic document file, but with an HTML extension. The e-book parser can also locate the relevant head->title node, and the webapp generator can use the value of that node as the title of the chapter of interest. If no head->title node is available, the webapp generator can use another value as the chapter title, for example, the book title.

The webapp generator can then output the cloned metadata node to the output chapter file. Using a recursive function (for example, convertnodelist()), the e-book parser can iterate through all child nodes of the <body> section of the chapter file and extract the body text and/or images. The webapp generator can then add the body text and images to the output chapter file.

The recursive function (for example, convertnodelist()) can take two arguments: nodeList and parentNode. The nodeList argument is a list of input nodes from which the e-book parser can read. The parentNode argument is the target output node, which is where the webapp generator can insert output.

The e-book parser can iterate through all the nodes in the nodeList argument and perform a variety of actions based on the node type.
For example, if the node type is XML_TEXT_NODE, the e-book parser can extract the body text from the nodeValue and append the body text to the parentNode.

If the node type is XML_ELEMENT_NODE, the e-book parser can check the nodeName and perform different actions based on the nodeName. For example, if the nodeName is img, the e-book parser can extract the location of the image from the node's src attribute and copy the image file from the temporary directory to the output directory. The webapp generator can then create an img node, set the src attribute of the img node to the location in the output directory, and set the width and height properties of the image to the desired values.

If the nodeName is div, p, span, and/or pre, the webapp generator can perform functions based on the name of the associated parent node. For example, if the parent node name is p or span, or if there is no text child, the webapp generator can call the recursive function (for example, convertnodelist()), using all child nodes of the current node as the nodeList and the current parent node as the parentNode.
Otherwise, the webapp generator can: create a new p node; append it to the parentNode; and call the recursive function (for example, convertnodelist()), using all child nodes of the current node as the nodeList and the new p node as the parentNode.

If the nodeName is table, tr, td, or svg, the webapp generator can call the recursive function (for example, convertnodelist()), using all child nodes of the current node as the nodeList and the current parentNode as the parentNode.

If the nodeName is h1, h2, h3, h4, h5, h6, b, i, big, ol, ul, li, dl, dt, dd, em, code, strong, and/or blockquote, the webapp generator can: create a new node with the same name; append it to the parentNode; and call the recursive function (for example, convertnodelist()), using all child nodes of the current node as the nodeList and the new node as the parentNode.

The webapp generator can also create the TOC code (e.g., index.html) of the interactive document web application using the TOC template (e.g., TOC.html) and TOC list described above. For example, the webapp generator can open the output TOC file (e.g., index.html) and pre-populate it with the contents of the TOC template file. The TOC template may include a node item for the title (e.g., head->title) and a node item for the creator (e.g., body->article), which may be populated by the webapp generator with the relevant data extracted by the e-book parser. In some embodiments, the webapp generator populates the body->article node with an h1 node for the book title and an h4 node for the creator. The webapp generator can also iterate through all the chapter items in the TOC list (described above) and create a node (e.g., an h4 node) for each chapter item. The value of each node can then be set to include text corresponding to the chapter title (which can be determined by the e-book parser as described above).
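A condensed sketch of the recursive conversion walk described over the last several paragraphs is given below. Plain objects stand in for DOM nodes, and the img handling (file copying, src rewriting) is omitted; the names are illustrative, not the patent's actual code:

```javascript
// Sketch of convertnodelist(): walk the input nodeList and build the
// output tree under parentNode, per the rules described above.

const PASS_THROUGH = new Set(['table', 'tr', 'td', 'svg']);
const CLONED = new Set(['h1','h2','h3','h4','h5','h6','b','i','big',
                        'ol','ul','li','dl','dt','dd','em','code',
                        'strong','blockquote']);

function makeNode(name) {
  return { type: 'element', name, value: null, children: [] };
}

function convertNodeList(nodeList, parentNode) {
  for (const node of nodeList) {
    if (node.type === 'text') {
      // XML_TEXT_NODE: append the body text to the output parent.
      parentNode.children.push({ type: 'text', value: node.value });
    } else if (node.type === 'element') {
      if (['div', 'p', 'span', 'pre'].includes(node.name)) {
        const hasTextChild = node.children.some(c => c.type === 'text');
        if (parentNode.name === 'p' || parentNode.name === 'span'
            || !hasTextChild) {
          // Already paragraph-like (or no text child): recurse in place.
          convertNodeList(node.children, parentNode);
        } else {
          // Otherwise wrap the content in a fresh p output node.
          const p = makeNode('p');
          parentNode.children.push(p);
          convertNodeList(node.children, p);
        }
      } else if (PASS_THROUGH.has(node.name)) {
        // table/tr/td/svg: recurse with the current parentNode.
        convertNodeList(node.children, parentNode);
      } else if (CLONED.has(node.name)) {
        // h1..blockquote: clone the node name and recurse into it.
        const clone = makeNode(node.name);
        parentNode.children.push(clone);
        convertNodeList(node.children, clone);
      }
      // img handling (copying the file, rewriting src) omitted here.
    }
  }
  return parentNode;
}
```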
A hyperlink associated with the location of the corresponding chapter code can also be added under each chapter item node.

When the conversion of the electronic document to the web application is completed, the conversion service can return one or more output files (e.g., index.html) in the temporary directory to the web-based UI for preview rendering.

FIG. 11 provides a non-limiting example of a class diagram of a user interface of a web application development tool in accordance with the present disclosure. The objects in this non-limiting class diagram can be mapped to the architecture view of Figure 6, as shown in Table 1:

Table 1

Figure 11 also identifies a number of functions that can be defined for each object in the webappauthoringtool, such as +load() and +unload(). Such functions are provided for illustrative purposes only and are not to be considered as limiting the scope of the disclosure. As will be appreciated in the art, a web application development tool in accordance with the present disclosure can use a user interface that includes more or fewer functions than the particular non-limiting examples shown in FIG. 11.

Another aspect of the disclosure relates to a computer-implemented method for authoring an interactive document web application and compiling such an application into native files for a variety of OSs and/or form factors. In this regard, reference is made to Figure 12, in which a non-limiting example of the method of the present disclosure is provided.

In a source file providing step 1200, an electronic document source file (e.g., epub, pdf, etc.) is provided to a web application development tool in accordance with the present disclosure. The electronic document source file can be provided by the user, for example, via upload through the previously described UI, or by other means.

In a conversion step 1210, the conversion service is invoked to extract data from the electronic document source file.
For example, and as previously described, the conversion service can extract metadata, body text, images, and other information from the electronic document source file. The conversion service can use this data to generate an interactive document web application that contains a table of contents, chapters, and other desired portions, using a default or pre-selected style.

In a rendering step 1220, a preview of the interactive document web application is rendered in a device simulator in accordance with the present disclosure. As described above, this preview is initially rendered in a default or pre-selected style. Thereafter, the preview of the interactive document web application can be rendered to include the edits made during the editing step 1230 discussed below.

In an editing step 1230, the preview of the interactive document web application can be edited using the user interface as described above. For example, the resolution, orientation, style, and format (e.g., font, font size, and so on) of the interactive document web application can be changed. In addition, plugins (e.g., to social media web sites) can be inserted into the interactive document web application.

In an update step 1240, the interactive document web application is updated (e.g., by the conversion service) to include the changes entered during the editing step 1230. The preview of the interactive document web application can then be refreshed.

Once no further editing is required, the interactive document web application can be compiled, in a compile step 1250, into an interactive document web application package and installer for the selected OS and form factor. The end result is to provide an installer for the selected platform, as shown in step 1260 of FIG. 12.

Another aspect of the disclosure relates to a machine-readable medium containing instructions for performing the operations of the present disclosure, or containing code or design data defining the structures, circuits, apparatus, processors, and/or system features described herein.
For example, the present disclosure contemplates an article of manufacture comprising a machine-readable medium having stored thereon instructions that, when executed by a processor, cause the processor to perform operations consistent with the device simulators, web application development tools, and/or methods of the present disclosure.

The foregoing description refers to specific names (e.g., the names of the files, services, functions, and objects described herein) and code. Such names and code are provided herein for the purpose of discussion only and should be considered exemplary. Those skilled in the art will appreciate that any suitable names may be used for the files, services, and functions described herein, and that variations in the encoding of the various features described herein are possible. Such variations are contemplated by the present disclosure and are encompassed herein.

Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification. The description is to be considered as illustrative only, with the true scope and spirit of the invention being indicated by the claims.
PROBLEM TO BE SOLVED: To provide a security mechanism that improves the security of a computer system and/or the security of data accessed by the computer system.

SOLUTION: A method, an integrated circuit, and a system for implementing a secure chain of trust are disclosed. While executing secure boot code in a secure boot mode, less-secure boot code may be authenticated using a secret key. A secure key may also be calculated or generated during the secure boot mode. After control is handed over to the authenticated less-secure boot code, at least one application may be authenticated using the secure key. Once authenticated in the less-secure boot mode, the application may be executed by the programmable integrated circuit. In this manner, a secure chain of trust may be implemented for the programmable integrated circuit.

COPYRIGHT: (C)2010, JPO&INPIT
In a method for implementing a secure chain of trust for a programmable integrated circuit: authenticating a second boot code using a secret key while executing a first boot code in a secure boot mode; generating, during execution of the first boot code in the secure boot mode, a secure key based on the secret key and a unique identifier associated with the programmable integrated circuit; restricting access to the secret key before exiting the secure boot mode; authenticating an application for execution on the programmable integrated circuit during execution of the second boot code in a boot mode, the authenticating further comprising authenticating the application using the secure key; and exiting the boot mode and executing the application.

The method of claim 1, wherein the application includes an operating system for execution by a processor of the programmable integrated circuit.

The method of claim 1, further comprising performing an operation on data accessible to the programmable integrated circuit using the secure key, the operation selected from the group consisting of encryption of the data and decryption of the data.

The method of claim 1, further comprising: while executing the first boot code in the secure boot mode, decrypting the second boot code using the secret key; and while executing the second boot code in the boot mode, decrypting the application using the secure key.

The method of claim 1, wherein the unique identifier is provided by a first party and the secret key is provided by a second party.

The method of claim 1, wherein the execution of the first boot code in the secure boot mode comprises performing a boot process selected from the group consisting of a warm boot process and a cold boot process.

The method of claim 1, further comprising: determining a mode of the programmable integrated circuit; and performing the execution of the first boot code when the programmable integrated circuit is placed into an operational mode.

The method of claim 1, further comprising: accessing information related to a peripheral device operable to communicate with the programmable integrated circuit, the peripheral device being located outside the programmable integrated circuit; and configuring at least one component of the programmable integrated circuit during execution of the first boot code to improve the performance of the peripheral device, the configuring further comprising configuring the at least one component based on the information.

In an integrated circuit for use in a portable electronic device: a memory for storing a first boot code; a processor coupled to the memory, the processor operable to execute the first boot code in a secure boot mode of the integrated circuit, operable to execute a second boot code in a boot mode of the integrated circuit, and further operable to execute an application; and a secure encryption engine coupled to the processor, the secure encryption engine authenticating the second boot code using a secret key in the secure boot mode, generating a secure key based on the secret key and a unique identifier associated with the integrated circuit, and implementing a secure chain of trust by authenticating the application using the secure key in the boot mode before execution of the application by the processor.

The integrated circuit of claim 9, wherein the processor is further operable to restrict access to the secret key before exiting the secure boot mode.

The integrated circuit of claim 9, wherein the processor is further operable to restrict access to the secure key prior to exiting the boot mode.

The integrated circuit of claim 9, wherein the application includes an operating system for execution by a processor of the integrated circuit.

The integrated circuit of claim 9, wherein execution of the first boot code by the processor implements a boot process selected from the group consisting of a warm boot process and a cold boot process.

The integrated circuit of claim 9, further comprising at least one component operable to adjust the performance of a peripheral device coupled to the processor, wherein the peripheral device is located external to the integrated circuit, and wherein the processor is further operable, while in the secure boot mode, to configure the at least one component to improve the performance of the peripheral device.

In a system comprising an integrated circuit: the integrated circuit comprising a memory for storing a first boot code, a processor coupled to the memory for executing the first boot code in a secure boot mode of the integrated circuit, the processor operable to execute other boot code in another boot mode of the integrated circuit and further operable to execute an application, and a secure encryption engine coupled to the processor, the secure encryption engine authenticating the other boot code using a secret key in the secure boot mode, generating a secure key based on the secret key and a unique identifier associated with the integrated circuit, and implementing a secure chain of trust by authenticating the application using the secure key in the other boot mode prior to execution of the application by the processor; the system further comprising a peripheral device coupled to the integrated circuit, the peripheral device storing information accessible to a component selected from the group consisting of the processor and the secure encryption engine.

The system of claim 15, wherein the processor is further operable to restrict access to the secret key prior to exiting the secure boot mode.

The system of claim 15, wherein the processor is further operable to restrict access to the secure key before exiting the other boot mode.

The system of claim 15, wherein the application includes an operating system for execution by a processor of the integrated circuit.

The system of claim 15, wherein execution of the first boot code by the processor implements a boot process selected from the group consisting of a warm boot process and a cold boot process.

The system of claim 15, the integrated circuit further comprising at least one component operable to adjust the performance of the peripheral device, wherein the processor is further operable, while in the secure boot mode, to configure the at least one component to improve the performance of the peripheral device.
Method and system for implementing a secure chain of trust

Related Applications

This application is related to US patent application Ser. No. 12/029,432, entitled "METHOD AND SYSTEM FOR GENERATING A SECURE KEY," filed on Feb. 11, 2008, naming Michel Cox, Philip Smith and Stefan Liu as inventors, assigned to the assignee of the present invention, and having attorney docket number NVID-P-SC-08-0071-US1. That application is incorporated herein by reference in its entirety for all purposes.

This application is related to US patent application Ser. No. 12/029,467, entitled "SECURE UPDATE OF BOOT IMAGE WITHOUT KNOWLEDGE OF SECURE KEY," filed on Feb. 11, 2008, naming Gordon Grigger and Philip Smith as inventors, assigned to the assignee of the present invention, and having attorney docket number NVID-P-SC-08-0072-US1. That application is incorporated herein by reference in its entirety for all purposes.

This application is related to US patent application Ser. No. 12/029,464, entitled "MECHANISM FOR SECURE DOWNLOAD OF CODE TO A LOCKED SYSTEM," filed on Feb. 11, 2008, naming Philip Smith, Joan Sashinowsky and Gordon Grigger as inventors, assigned to the assignee of the present invention, and having attorney docket number NVID-P-SC-08-0073-US1. That application is incorporated herein by reference in its entirety for all purposes.

This application is related to US patent application Ser. No. 12/029,463, entitled "HANDLING OF SECURE STORAGE KEY IN ALWAYS ON DOMAIN," filed on Feb. 11, 2008, naming Michel Cox, Gordon Grigger, Philip Smith and Parsa Saracy Sriram as inventors, assigned to the assignee of the present invention, and having attorney docket number NVID-P-SC-08-0074-US1. That application is incorporated herein by reference in its entirety for all purposes.

Security mechanisms are commonly used by computer systems to secure data stored in the computer system and/or to secure the operation of the system itself.
For example, the data can be encrypted to prevent or limit unauthorized access to the data. In addition, the computer system can authenticate the boot image before it is executed by a central processing unit (CPU), improving the security of the system itself and of the data stored in the system.

Conventional computer systems perform authentication operations using a trusted platform module (TPM). For example, the CPU can execute microcode that accesses the boot image and sends the boot image to the TPM for authentication. The TPM is often implemented in software, or in a hardware device that is separate from the CPU. Once the boot code is authenticated, conventional computer systems execute it to boot the system.

TPMs are typically used in desktop (e.g., non-portable) computer systems and are susceptible to various attack means. For example, unauthorized users may compromise system and/or data security by performing code-based attacks, hardware-based attacks, etc. on the TPM or other system components. Thus, the TPM does not provide sufficient security measures for certain systems and/or data.

Accordingly, there is a need for security mechanisms that improve the security of computer systems and/or the security of data accessed by computer systems. There is also a need for a security mechanism that implements a more secure chain of trust for boot code, applications, data, etc. accessed by a computer system. There is a further need for improved security mechanisms for use with or in portable electronic devices. Embodiments of the present invention provide a novel solution to these and other needs, as described below.

Embodiments of the present invention relate to methods, integrated circuits, and systems for implementing a secure chain of trust. More particularly, while executing secure boot code in a secure boot mode, a secret key (e.g., a secure boot key confined within, and used exclusively by, the integrated circuit or programmable integrated circuit that executes the secure boot code) can be used to authenticate the less-secure boot code. During the secure boot mode, a secure key (e.g., a secure storage key for performing security operations associated with data and/or applications accessed by the integrated circuit) can also be calculated or generated. After control is passed to the authenticated less-secure boot code, at least one application (e.g., an operating system, an application using digital rights management (DRM) or another security mechanism, etc.) can be authenticated using the secure key (e.g., also confined within, and thereby used exclusively by, the integrated circuit). Once authenticated in the less-secure boot mode, the application can be executed by the integrated circuit (e.g., in a subsequent lower-security or non-secure mode). In this way, a secure chain of trust (e.g., from secure boot code, to less-secure boot code, and on to lower-security or non-secure applications) can be implemented on an integrated circuit.

In one embodiment, a method for implementing a secure chain of trust for an integrated circuit includes authenticating a second boot code using a secret key while executing a first boot code in a secure boot mode. Also during execution of the first boot code in the secure boot mode, a secure key is generated based on the secret key and a unique identifier associated with the programmable integrated circuit. Before exiting the secure boot mode, access to the secret key is restricted. While executing the second boot code in a boot mode, an application is authenticated for execution on the programmable integrated circuit, the authenticating further including authenticating the application using the secure key. The boot mode is exited and the application is executed. The secure key can also be used to perform operations (e.g., encryption operations, decryption operations, etc.) on data accessible to the programmable integrated circuit.
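The chain described above — each boot stage authenticating the next image before handing over control — can be sketched as follows. HMAC-SHA256 stands in here for whatever authentication primitive the secure encryption engine actually uses; the keys, image contents, and function names are all invented for illustration.

```python
# Sketch of a two-stage chain of trust: secure boot code authenticates the
# less-secure boot code with the secret key, and the less-secure boot code
# authenticates the application with the secure key. HMAC-SHA256 is a
# stand-in primitive; all values here are illustrative.
import hmac
import hashlib

SECRET_KEY = b"secure-boot-key"      # e.g., the SBK, confined to the chip
SECURE_KEY = b"secure-storage-key"   # e.g., the SSK, generated at boot

def sign(key, code):
    # Produce the authentication tag that would accompany an image in storage.
    return hmac.new(key, code, hashlib.sha256).digest()

def authenticate(key, code, tag):
    # Verify an image against its tag before control is handed to it.
    return hmac.compare_digest(sign(key, code), tag)

less_secure_boot = b"less-secure boot code image"
application = b"operating system image"
boot_tag = sign(SECRET_KEY, less_secure_boot)
app_tag = sign(SECURE_KEY, application)

# Secure boot mode: authenticate the less-secure boot code with the secret key.
stage1_ok = authenticate(SECRET_KEY, less_secure_boot, boot_tag)
# Less-secure boot mode: authenticate the application with the secure key.
stage2_ok = stage1_ok and authenticate(SECURE_KEY, application, app_tag)
# A tampered image breaks the chain at its stage.
tampered_ok = authenticate(SECRET_KEY, less_secure_boot + b"!", boot_tag)
```

The design point is that control only ever transfers to code that was verified by the stage currently in control, so trust flows forward from the immutable first stage.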
Further, in one embodiment, the execution of the first boot code may implement a warm boot process or a cold boot process. In addition, information regarding a peripheral device operable to communicate with the programmable integrated circuit (e.g., located outside the programmable integrated circuit) can be accessed. During execution of the first boot code, at least one component of the programmable integrated circuit can be configured to improve the performance of the peripheral device, the configuring further comprising configuring the at least one component based on the information about the peripheral device.

In another embodiment, an integrated circuit for use in a portable electronic device includes a memory for storing a first boot code. A processor is coupled to the memory; the processor is operable to execute the first boot code in a secure boot mode of the integrated circuit, is further operable to execute a second boot code in a boot mode of the integrated circuit, and is further operable to execute an application. Coupled to the processor is a secure encryption engine that can authenticate the second boot code using a secret key in the secure boot mode, generate a secure key based on the secret key and a unique identifier associated with the integrated circuit, and implement a secure chain of trust by authenticating the application using the secure key in the boot mode prior to execution of the application by the processor.

In yet another embodiment, a system comprises an integrated circuit. The integrated circuit comprises a memory for storing a first boot code, and a processor coupled to the memory for executing the first boot code in a secure boot mode of the integrated circuit; the processor may in addition be operable to execute other boot code in another boot mode of the integrated circuit, and may be further operable to execute an application.
The integrated circuit also includes a secure encryption engine coupled to the processor that authenticates the other boot code using the secret key in the secure boot mode, generates a secure key based on the secret key and a unique identifier associated with the integrated circuit, and implements a secure chain of trust by authenticating the application using the secure key in the other boot mode prior to execution of the application by the processor. The system also includes a peripheral device coupled to the integrated circuit for storing information accessible to a component selected from the group consisting of the processor and the secure encryption engine.

Hereinafter, the present invention will be described by way of example with reference to the accompanying drawings, in which like elements are denoted by the same reference numerals; the present invention is not limited thereto.

FIG. 1 is a block diagram illustrating a system for implementing a secure chain of trust in accordance with one embodiment of the present invention.
FIG. 2 is a block diagram illustrating a secure encryption engine according to one embodiment of the invention.
FIG. 3 is a block diagram illustrating a fuse according to one embodiment of the invention.
FIG. 4A is a flowchart illustrating a first portion of a computer-implemented process for implementing a secure chain of trust for a programmable integrated circuit according to one embodiment of the present invention.
FIG. 4B is a flowchart illustrating a second portion of the computer-implemented process for implementing a secure chain of trust for a programmable integrated circuit according to one embodiment of the present invention.
FIG. 4C is a flowchart illustrating a third portion of the computer-implemented process for implementing a secure chain of trust for a programmable integrated circuit according to one embodiment of the present invention.
FIG. 4D is a flowchart illustrating a fourth portion of the computer-implemented process for implementing a secure chain of trust for a programmable integrated circuit according to one embodiment of the present invention.
FIG. 5 is a flowchart illustrating a computer-implemented process for performing a pre-production operation in accordance with one embodiment of the present invention.
FIG. 6 is a flowchart illustrating a computer-implemented process for performing a failure analysis operation in accordance with one embodiment of the present invention.
FIG. 7 is a flowchart illustrating a computer-implemented process for performing a recovery operation in accordance with one embodiment of the present invention.
FIG. 8 is a flowchart illustrating a computer-implemented process for performing a warm boot in accordance with one embodiment of the present invention.
FIG. 9 is a flowchart illustrating a computer-implemented process for performing a cold boot in accordance with one embodiment of the present invention.

Embodiments of the present invention shown in the accompanying drawings will be described in detail below. While the invention will be described with reference to these embodiments, it should be understood that the invention is not limited to these embodiments.
On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which are encompassed within the spirit and scope of the invention as defined by the claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present invention.

[Notation and Terminology]

Certain portions of the detailed descriptions that follow are presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means by which those skilled in the data processing arts most effectively convey the substance of their work to others skilled in the art. In this application, a procedure, logic block, process, etc. is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps require physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise, as will be apparent from the following discussion, throughout the present invention, discussions utilizing terms such as "accepting," "accessing," "adding," "adjusting," "analyzing," "applying," "assembling," "assigning," "authenticating," "calculating," "capturing," "combining," "comparing," "collecting," "configuring," "creating," "decreasing," "decrypting," "defining," "depicting," "detecting," "determining," "displaying," "encrypting," "establishing," "executing," "exiting," "generating," "grouping," "identifying," "increasing," "initiating," "interacting," "limiting," "modifying," "monitoring," "moving," "outputting," "padding," "performing," "placing," "presenting," "processing," "programming," "querying," "removing," "repeating," "sampling," "sorting," "storing," "subtracting," "tracking," "transforming," "using," "verifying," or the like refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

[Embodiments of the Invention]

FIG. 1 is a block diagram illustrating a system 100 for implementing a secure chain of trust in accordance with one embodiment of the present invention. As shown in FIG. 1, the system 100 includes an on-chip device or system-on-a-chip (SoC) 110 and at least one peripheral device 120-150. In one embodiment, the system 100 can be implemented as a general-purpose computer system, an embedded computer system, a laptop computer system, a handheld computer system, a portable computer system, a portable electronic device, a stand-alone computer system, a game console, some combination thereof, etc.
In addition, peripheral devices 120-150 may include internal and/or external peripheral devices such as keypads, cursor controllers, communication ports, storage devices (e.g., hard disk drives, flash memory, random access memory (RAM), read-only memory (ROM), etc.), etc. In another embodiment, a peripheral device may be a communication interface for communicating with a device or system external to the system 100 (e.g., based on standards such as USB, USB 2.0, FireWire, PCI-Express, SATA, eSATA, etc.), a device or system coupled to the system 100 via one or more interfaces, and the like. One or more of the peripheral devices 120-150 may include the fuse 300 of FIG. 3 in one embodiment.

In one embodiment, at least one of the peripheral devices 120-150 and/or at least one component of the device 110 (e.g., memory 113) may store computer-readable code (e.g., executed by general-purpose processing unit 115, special-purpose processing unit 116, etc.) to implement a secure chain of trust for the system 100 (e.g., based on one or more of the processes shown in FIGS. 4A-9). For example, during the secure boot mode of the system 100, the processor 115 can execute the secure boot code 114 (e.g., stored in memory 113) and authenticate the less-secure boot code 155 (e.g., stored in peripheral device 150). Once authenticated, the less-secure boot code 155 can be executed (e.g., by the processor 115) during a less-secure boot mode. The application 145 (e.g., an operating system for the device 110 and/or the system 100) may be authenticated during the less-secure boot mode. Once the application 145 is authenticated, it can be executed (e.g., in a subsequent lower-security or non-secure mode by the processor 115). In this manner, a secure chain of trust for the system 100 can be implemented such that transitions of control during boot and/or operation of the system 100 (e.g., from secure boot code 114 to less-secure boot code 155, from less-secure boot code 155 to application 145, etc.) are made to authenticated or trusted code (e.g., code authenticated during execution of other authenticated or trusted code).

The processor 116 is a graphics processing unit or GPU in one embodiment, while the processor 115 is a central processing unit or CPU in one embodiment. Alternatively, the processor 115 may be a combined GPU/CPU capable of performing graphics processing operations (e.g., related to graphics data processing and display) and other processing operations (e.g., more general central processing operations).

The secure encryption engine 118 may perform authentication operations to implement a secure chain of trust in one embodiment. For example, while in the secure boot mode (e.g., during execution of the secure boot code 114), the engine 118 can authenticate the less-secure boot code 155 (e.g., accessed from peripheral device 150) using a secret key (e.g., a secure boot key (SBK)). The secret key (e.g., SBK 330) may be accessed from a secure portion (e.g., 310) of the fuse 300 (e.g., of FIG. 3); in one embodiment, the fuse 300 may be located in the device 110 (e.g., on-chip), in at least one component of the device 110, in at least one of the peripherals 120-150, any combination thereof, etc. Alternatively, the secret key can be accessed from the key slot 210 of the engine 118, where it may be stored from the SBK 330 accessed from the fuse 300, from another part of the system 100 (e.g., an always-on (A/O) key stored in the A/O register 112 of the A/O domain 111 and accessed in response to a reset of the system 100), etc.

The engine 118 can authenticate the application 145 (e.g., accessed from peripheral device 140) using a secure key during the less-secure boot mode (e.g., during execution of the less-secure boot code 155). The secure key can be calculated or generated (e.g., by the engine 118) during the secure boot mode (e.g., during execution of the secure boot code 114), during the less-secure boot mode (e.g., during execution of the less-secure boot code 155), any combination thereof, etc.
Alternatively, the secure key may be supplied by a user (e.g., a system manufacturer before shipping the system 100, etc.). In one embodiment, the secure key can be accessed from the key slot 220 of the engine 118, where it may be stored after being accessed from another location, after being calculated or generated (e.g., during the secure boot mode), etc.

In one embodiment, a secure key can be calculated or generated using repurposed data (e.g., data otherwise used for purposes other than secure key generation) and/or additional data. The repurposed data may include a secret key (e.g., SBK 330), a unique device identifier (e.g., UID 350 accessed from the non-secure portion 320 of the fuse 300), or other data. The UID 350 may include a serial number (e.g., of the device 110, the system 100, etc.), a MAC identifier (e.g., of the device 110, the system 100, etc.), etc. The additional data may be secure device data (e.g., 340 accessed from the secure portion 310 of the fuse 300), e.g., a device key (e.g., unique to the device 110, unique to the system 100, shared by the device 110 and at least one other similar device, shared by the system 100 and at least one other similar system, etc.), etc. The secure device data may be inaccessible from outside the device 110 and/or outside the system 100 once the device is placed in an operational mode (e.g., after being incorporated into the system 100, after the system 100 is brought into a store for sale or shipped to an end user for use, etc.).

Further, in one embodiment, a secure key can be calculated or generated (e.g., by the engine 118) based on the following formula:

SSK = AES[SBK; UID ^ AES(SBK; DK)]

where "DK" is the secure device data (e.g., 340), "SBK" is the secure boot key (e.g., 330), and "UID" is the unique device identifier (e.g., 350). Thus, the SBK can be encrypted using the DK as an encryption key, which in one embodiment can use symmetric-key cryptography based on the Advanced Encryption Standard (AES). A logical operation (e.g., a bitwise XOR operation, another logical operation, etc.) can then be performed on the first encryption result and the UID.
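The structure of this derivation can be sketched as below. Note the hedges: the real design uses AES as the block cipher, but to keep the sketch self-contained a keyed SHA-256 stands in for AES, and the key sizes, padding scheme, and sample values are invented assumptions, not the actual implementation.

```python
# Structural sketch of SSK = AES[SBK; UID ^ AES(SBK; DK)].
# A keyed SHA-256 is a PLACEHOLDER for the AES block cipher used in the
# real design; padding pattern and all input values are illustrative only.
import hashlib

def encrypt(key, data):
    # Placeholder for AES encryption of `data` under `key`.
    return hashlib.sha256(key + b"|" + data).digest()

def pad_to(value, size):
    # Pad a shorter value (e.g., the device key) up to `size` bytes so the
    # bitwise XOR below is well defined; the real padding pattern may differ.
    return (value * (size // len(value) + 1))[:size]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def derive_ssk(sbk, dk, uid):
    inner = encrypt(dk, sbk)                      # AES(SBK; DK): SBK under key DK
    mixed = xor(pad_to(uid, len(inner)), inner)   # UID ^ AES(SBK; DK)
    return encrypt(mixed, sbk)                    # SBK encrypted under the mixed key

ssk = derive_ssk(sbk=b"secure-boot-key!", dk=b"devkey", uid=b"serial-0001")
# Deterministic per device: same inputs give the same SSK,
# and changing any single input changes the result.
same = derive_ssk(b"secure-boot-key!", b"devkey", b"serial-0001")
other_device = derive_ssk(b"secure-boot-key!", b"devkey", b"serial-0002")
```

The property the text relies on is visible in the structure: the SSK depends jointly on the SBK, the DK, and the UID, so a party that knows only one of the inputs cannot reproduce it.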
Thereafter, the result of the logical operation can be used as an encryption key to encrypt the SBK, which in one embodiment can use symmetric-key cryptography based on the Advanced Encryption Standard (AES). It will be apparent that the result of one or more intermediate operations can be padded to a larger size (e.g., with a pattern of zeros and/or ones, with at least a portion of the result repeated, etc.). For example, the device key (DK) can be padded to the size of the UID (e.g., before encryption, after encryption, etc.) to enable the bitwise XOR operation, other data can be padded, and so on.

Thus, using such a secure key can increase computer system security (e.g., the security of data stored in or accessible to the device 110, the system 100, components of the system 100, etc.). This is because a secure component (e.g., the secure encryption engine 118) generates the secure key during a secure mode (e.g., the secure boot mode, during execution of the secure boot code 114). In this way, the secure key is accessible to and known only by components of the device 110 (e.g., external components, systems, entities, human users, etc. do not know it). As yet another example, if a device manufacturer (e.g., of the device 110) securely programs or provides the unique device identifier (e.g., 350) and a system manufacturer (e.g., of the system 100, into which the device 110 is incorporated) secretly programs or provides the secret key (e.g., SBK 330), then neither party knows both the unique identifier and the secret key. Thus, in this embodiment, no party (including each of the two parties) can compute or generate the secure key.

Using such a secure key can further improve the security of the computer system, because discovering the secure key does not automatically expose the secret key (e.g., SBK 330) or any other data used to generate the secure key. Further, if at least one encryption operation is used to generate the secure key, it is difficult to reverse engineer the secure key to find the data used to generate it.

As shown in FIG. 1, the device 110 includes an always-on (A/O) domain 111 and a controllable-supply-potential domain 160 (e.g., multiple domains including memory 113, processor 115, processor 116, engine 118, and system controllers 119a-119c). The power to components in the controllable-supply-potential domain (e.g., 160) may be adjusted, reduced, turned off, etc. in one embodiment, while power to components in the A/O domain (e.g., 111) is generally maintained. Thus, information can be temporarily or permanently moved to the A/O domain 111 (e.g., stored in the A/O register 112) to ensure that it is not lost during a power reduction or power-off of at least one component of the domain 160 (e.g., during a reset or power-down of the device 110, the system 100, etc.). The A/O register 112 may store secure information (e.g., a secure key or SSK generated by the engine 118, a secret key or SBK 330, etc.) in one embodiment. In addition, read and/or write access to the A/O register 112 can be limited (e.g., individually or in groups) by setting "sticky" or persistence bits, which may also reside in the A/O domain (e.g., 111).
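The "sticky" bit behavior described above can be modeled in a few lines: once a lock bit is set, the guarded access stays restricted until the always-on domain itself loses power. The class and register names below are an illustrative stand-in, not the actual register map of device 110.

```python
# Toy model of an A/O register guarded by sticky (persistence) bits.
# Names and semantics are illustrative assumptions only.
class AORegister:
    def __init__(self, value=0):
        self._value = value
        self._write_locked = False   # sticky write-disable bit
        self._read_locked = False    # sticky read-disable bit

    def lock_writes(self):
        # Sticky: there is intentionally no unlock method; only a power
        # cycle of the A/O domain (re-creating the object) clears it.
        self._write_locked = True

    def lock_reads(self):
        self._read_locked = True

    def write(self, value):
        if self._write_locked:
            raise PermissionError("register is write-locked")
        self._value = value

    def read(self):
        if self._read_locked:
            raise PermissionError("register is read-locked")
        return self._value

reg = AORegister()
reg.write(0xC0DE)      # e.g., boot code stores a key before leaving secure mode
reg.lock_writes()      # later, less-trusted code can no longer overwrite it
try:
    reg.write(0xBAD)
    tamper_blocked = False
except PermissionError:
    tamper_blocked = True
```

This mirrors the usage pattern in the text: secure boot code deposits a value, sets the sticky bit, and hands control onward knowing the value can no longer be altered (or, with the read lock, no longer observed).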
Thus, the peripheral device is initially configured to operate at an initial or low performance level (eg, during execution of secure boot code 114) and then at some future time (eg, secure boot code 114 During subsequent executions (eg by increasing the frequency of the clock signal) is configured for high performance. Accordingly, in these embodiments, the device 110 can be used in connection with various peripheral devices (e.g., high performance peripheral devices, low performance peripheral devices, different types of peripheral devices, etc.). Design and / or price flexibility can be provided to system manufacturers (eg, of system 100). In other embodiments, unit 117 may comprise different components of system 100 (eg, system controllers 119a-119c, etc.) and / or separately (eg, change parameters in addition to or other than frequency). The components of the system 100 can be configured.Although FIGS. 1, 2 and 3 show components of the system 100 with certain features, the features of the system 100 shown in FIGS. 1-3 are merely exemplary and thus in other embodiments It will be clear that it can be configured separately. Further, it will be apparent that the system 100 and / or the apparatus 110 may include different numbers and / or configurations of components in other embodiments. Further, it will be apparent that the components of system 100 and / or device 110 can implement a secure chain of trust with any number of secure and / or non-secure applications and / or data. For example, in some embodiments, control can be passed to more or fewer secure boot codes before passing control to more or fewer less secure or non-secure applications.4A-4D are flowcharts illustrating a computer-implemented process 400 for implementing a secure chain of trust for a programmable integrated circuit in accordance with one embodiment of the present invention. As shown in FIG. 
4A, step 405 includes accessing at least one PLL operating frequency for communicating with at least one peripheral device. The accessed operating frequency can be used by at least one component (e.g., system controllers 119a-119c, peripheral devices 120-150, etc.) that receives a clock signal from the PLL (e.g., of unit 117) to allow at least one peripheral device (e.g., 120-150) to communicate with device 110. In one embodiment, the operating frequency accessed in step 405 is an initial or basic operating frequency capable of operating one of various devices, various types of devices, and the like.

Step 410 includes determining whether the device (e.g., 110) and/or system (e.g., 100) is in a pre-production mode. The pre-production mode state is indicated by at least one component (e.g., bit, strap pin, fuse, etc.) placed in a state indicating the pre-production mode. The component indicating the pre-production mode state may be located in the A/O register 112, one or more peripherals 120-150, the fuse 300, another component of or with access to the system 100, some combination thereof, etc. If it is determined that the device 110 and/or the system 100 is in the pre-production mode, step 412 may be performed.

When in pre-production mode, a manufacturer (e.g., of device 110, system 100, etc.) may, in step 412, program at least one component (e.g., specifying SBK 330, secure device data 340, UID 350, etc.), i.e., perform initial configuration or debugging before shipping the product (e.g., device 110, system 100, etc.) to another entity. Alternatively, when in pre-production mode, step 412 may include entering a recovery mode (e.g., a UART recovery mode including execution of recovery code stored in a UART peripheral of peripherals 120-150) and performing recovery operations (e.g., downloading, decrypting, authenticating, etc. new secure boot code 114, new low-secure boot code 155, new application 145, etc.). Step 412 may be performed based on process 500 of FIG. 5 in one embodiment.
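By way of illustration only, the mode dispatch described above can be sketched as follows. The bit positions and function names here are hypothetical conveniences, not part of the disclosed embodiments; the sketch only shows the branching order of steps 410-420 (pre-production, failure analysis, then secure boot).

```python
# Hypothetical sketch of the boot-mode dispatch of steps 410-420: the boot
# code samples mode-indicating bits (e.g., from fuses, strap pins, or an A/O
# register) and branches accordingly. Bit positions are assumed.
PRE_PRODUCTION_BIT = 0x1
FAILURE_ANALYSIS_BIT = 0x2

def select_boot_path(mode_bits: int) -> str:
    """Return which flow the device enters, mirroring steps 410-420."""
    if mode_bits & PRE_PRODUCTION_BIT:
        return "pre_production"      # step 412 (process 500 of FIG. 5)
    if mode_bits & FAILURE_ANALYSIS_BIT:
        return "failure_analysis"    # failure-analysis operations (process 600)
    return "secure_boot"             # step 420: begin executing secure boot code

print(select_boot_path(0x0))  # secure_boot
```

The ordering matters: only when neither special mode is indicated does execution of the secure boot code (and thus the secure boot mode) begin.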
After performing step 412, in one embodiment, the apparatus 110 and/or system 100 may be reset before steps 405 and 410 are repeated.

Alternatively, if it is determined in step 410 that the device 110 and/or system 100 is not in the pre-production mode, it is determined in step 415 whether the device 110 and/or system 100 is in a failure analysis mode. The failure analysis mode state is indicated by at least one component (e.g., bit, strap pin, fuse, etc.) placed in a state indicating failure analysis mode. The component indicating the failure analysis mode state may be located in the A/O register 112, one or more peripherals 120-150, the fuse 300, another component of or with access to the system 100, some combination thereof, etc. If it is determined that the device 110 and/or the system 100 is in a failure analysis mode, step 417 may be performed.

When in failure analysis mode, the manufacturer (e.g., of device 110, system 100, etc.) and/or a service representative may, in step 417, program at least one component, perform debugging operations on at least one component, etc. Access to secure information (e.g., in device 110, system 100, etc.) is restricted or disabled when in failure analysis mode. In one embodiment, when in failure analysis mode, step 417 may include entering a recovery mode (e.g., a UART recovery mode including execution of recovery code stored in a UART peripheral of peripherals 120-150) and performing a recovery operation (e.g., downloading, decrypting, authenticating, etc. a new secure boot code 114, a new low-secure boot code 155, a new application 145, etc.). At step 417, one or more operations (e.g., recovery operations) may be performed only if a given piece of data is provided (e.g., specifying SBK 330, secure device data 340, UID 350, etc.) and matches other data (e.g., UID 350, etc.), thereby reducing unauthorized access and/or making it difficult to place the device 110 and/or system 100 in a failure analysis mode and download data to the device (e.g., 110). Step 417 is performed based on process 600 of FIG. 6 in one embodiment. After performing step 417, in one embodiment, the device 110 and/or the system 100 may be reset and then steps 405-415 may be repeated.

Alternatively, if it is determined in step 415 that the device 110 and/or system 100 is not in failure analysis mode, secure boot code (e.g., 114) can be executed (e.g., by the processor 115) in step 420. Thus, in one embodiment, starting execution of the secure boot code (e.g., 114) in step 420 means entering a secure boot mode.

As shown in FIG. 4A, step 425 includes programming at least one PLL (e.g., of unit 117) to generate a clock signal at the operating frequency accessed in step 405. This clock signal can be sent from the PLL (e.g., of unit 117) to at least one component (e.g., system controllers 119a-119c, peripheral devices 120-150, etc.) to allow at least one peripheral device (e.g., 120-150) to communicate with device 110. In one embodiment, the PLL programming of step 425 may generate a clock signal having an initial or basic operating frequency capable of operating one of various devices, various types of devices, etc.

As shown in FIG. 4B, step 430 includes determining whether a warm boot state has been set. The warm boot state is indicated by at least one component (e.g., bit, strap pin, fuse, etc.) of device 110 and/or system 100. The component indicating the warm boot state may be located in the A/O register 112, one or more peripherals 120-150, the fuse 300, another component of or with access to the system 100, some combination thereof, etc.
The device 110 and/or system 100 may set the warm boot state in response to a reboot of the device 110 and/or system 100 (e.g., in response to performing a recovery operation, loading new secure boot code or updating the secure boot code, loading new low-secure boot code or updating the low-secure boot code, changing the supply potential of one or more components in controllable supply potential domain 160, etc.). If it is determined that the warm boot state has been set, a warm boot operation can be performed in step 431.

Step 431 may be performed based on process 800 of FIG. 8 in one embodiment. If step 432 determines that the warm boot operation was successful, step 450 can be performed as described herein. If the warm boot operation is not successful, the warm boot state is cleared in step 433 (e.g., by setting a new bit, strap pin, fuse, etc., or by resetting the warm boot bit, strap pin, fuse, etc.), and a reset of device 110 and/or system 100 is initiated (e.g., before repeating steps 405-430). In one embodiment, clearing the warm boot state in step 433 induces execution of at least one recovery operation (e.g., in step 437) and/or at least one cold boot operation (e.g., in step 440) upon reset of device 110 and/or system 100.

Alternatively, if it is determined in step 430 that the warm boot state is not set, it can be determined in step 435 whether a forced recovery mode state is set. The forced recovery state is indicated by at least one component (e.g., bit, strap pin, fuse, etc.) of device 110 and/or system 100. The component indicating the forced recovery state may be located in the A/O register 112, one or more peripherals 120-150, the fuse 300, another component of or with access to the system 100, some combination thereof, etc.
If it is determined at step 435 that the forced recovery mode condition has been set, then at step 437 a recovery mode operation can be performed.

In one embodiment, the forced recovery mode state may be set, and the recovery mode operation of step 437 performed, in response to a failure to read, decrypt, or authenticate boot code (e.g., low-secure boot code 155) (e.g., using engine 118). Thus, the recovery operation performed in step 437 can be used to restore ("de-brick") the device 110 and/or system 100 from a locked or "bricked" (e.g., inoperable) state. Alternatively, the recovery operation performed in step 437 can be used during manufacturing to load data (e.g., SBK 330, secure device data 340, UID 350, etc.) into device 110 and/or system 100 (e.g., for the first time). In one embodiment, step 437 is performed based on process 700 of FIG. 7.

Referring to FIG. 7, step 710 includes coupling device 110 to a host system or device directly or through a network. At least one of the peripheral devices 120-150 can implement a communication channel with the host, where the communication channel, in one embodiment, complies with the USB standard, the USB 2.0 standard, the FireWire standard, the PCI-Express standard, the SATA standard, the eSATA standard, etc. The device 110 may broadcast the UID (e.g., 350) of the device 110 to the host via the communication channel in step 720. The UID (e.g., 350) is mapped to a given SBK (e.g., 330) by the host at step 730. The host can then generate and send a message to device 110 at step 740 for validation.

The message sent from the host to the device (e.g., 110) may include a non-secure length, a hash, a random AES block, a secure length, commands (e.g., header data) and data (e.g., other data), a payload, and padding (e.g., 0x80 followed by additional 0x00 bytes as needed). The random AES block, secure length, commands and data, payload, padding, or some combination thereof can be encoded or encrypted using the SBK that maps to the UID (e.g., 350). As shown in FIG.
7, when a message is received by the device 110 and/or the system 100, it can be validated at step 750 (e.g., using the SBK 330). In one embodiment, the message can be determined to be valid if the non-secure length matches the secure length, the hash is correct, at least one of the commands is valid (e.g., has a valid command format for the given message), the message size is correct (e.g., as specified by the commands and data), the payload size is correct, the padding pattern is correct, the secure boot code version number in the commands and data matches the version number of the secure boot code (e.g., 114) of the device 110, the version number in the commands and data matches the version number of the low-secure boot code (e.g., 155) of the device 110, some combination thereof, etc.

If the message is validated (e.g., determined to be valid), the device (e.g., 110) can load the message into a peripheral device (e.g., 120-150) and perform additional recovery mode operations at step 760. The additional recovery mode operations include executing one or more commands in the message, executing code contained in the message, storing low-secure boot code (e.g., 155) from the message in a given peripheral device (e.g., 120-150), or some combination thereof. For example, if low-secure boot code (e.g., 155) is received in the message, the low-secure boot code is encrypted using the SBK (e.g., 330) and stored in the peripheral device (e.g., 120-150). Alternatively, the device (e.g., 110) can download and authenticate additional data from the host. The additional data is encrypted and signed using the SBK (e.g., 330) before it is written to the peripheral device (e.g., 120-150). In this manner, the recovery mode of process 700 can support multiple message transmission and response sequences. Alternatively, if the message is not validated (e.g., determined not to be valid at step 750), the device (e.g., 110) is reset at step 770 (e.g., it enters an infinite loop requesting a system reset).
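A subset of the step 750 checks can be illustrated as follows. This is a hedged sketch: the exact field layout and hash algorithm are not specified by this embodiment, so SHA-256 and the helper names below are assumptions for illustration only; the padding rule (0x80 followed by 0x00 bytes) is taken from the message format described above.

```python
# Illustrative sketch of some of the step 750 message checks: non-secure
# length vs. secure length, hash correctness, and the 0x80/0x00 padding
# pattern. SHA-256 stands in for the unspecified hash.
import hashlib

def padding_ok(padding: bytes) -> bool:
    """Padding must be a single 0x80 byte followed only by 0x00 bytes."""
    return len(padding) >= 1 and padding[0] == 0x80 and all(b == 0 for b in padding[1:])

def validate_message(nonsecure_len: int, secure_len: int,
                     digest: bytes, body: bytes, padding: bytes) -> bool:
    if nonsecure_len != secure_len:              # lengths must agree
        return False
    if hashlib.sha256(body).digest() != digest:  # hash must be correct
        return False
    return padding_ok(padding)                   # padding pattern must be correct

body = b"commands+data+payload"
msg_hash = hashlib.sha256(body).digest()
print(validate_message(len(body), len(body), msg_hash, body, b"\x80\x00\x00"))  # True
```

A failure of any check corresponds to the "not validated" branch, leading to the reset of step 770.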
The device then proceeds to automatically initiate a system or device reset.

Alternatively, if it is determined in step 435 that the forced recovery mode state is not set, a cold boot operation can be performed in step 440. For example, in step 440, the low-secure boot code (e.g., 155) can be read, decrypted, authenticated, some combination thereof, etc., transitioning the device 110 and/or system 100 from execution of the secure boot code 114 in the secure boot mode to execution of the low-secure boot code 155 in the low-secure boot mode. Further, a secure key (e.g., SSK) can be calculated and/or generated as a cold boot operation in step 440. The cold boot operation may be performed in response to powering on the device 110 and/or the system 100 in one embodiment. Further, step 440 may be performed based on process 900 of FIG. 9 in one embodiment. If step 442 determines that the cold boot operation was not successful, step 437 may be performed as described herein. Alternatively, if the cold boot operation is successful, step 450 can be performed as described herein.

As shown in FIG. 4C, step 450 includes restricting access to secure information (e.g., a secret key (e.g., SBK 330), a secure key (e.g., SSK), information used to generate the secure key, etc.). Such access may be limited by setting a "sticky" or persistence bit corresponding to a register (e.g., A/O register 112), key slot (e.g., 210 and/or 220), or other storage medium that stores the secure information. In this way, read and/or write access to the secure information can be restricted. Alternatively, the secure information may be flushed from a register (e.g., A/O register 112), a key slot (e.g., 210 and/or 220), or other storage medium that stores the secure information (e.g., overwritten with zeros, overwritten with other information, cleared, etc.).

Step 460 includes exiting the secure boot mode.
In step 460, execution of the secure boot code (e.g., 114) can be terminated and/or control can be transferred to the less-secure boot code (e.g., 155).

As shown in FIG. 4C, step 470 includes entering a low-secure boot mode and initiating execution of low-secure boot code (e.g., 155). The low-secure boot code (e.g., 155) can be executed by the processor (e.g., 115) of the device (e.g., 110) and/or the system (e.g., 100). Further, the low-secure boot code (e.g., 155) may be stored locally in the memory (e.g., 160) of the device (e.g., 110) and/or system (e.g., 100).

Step 472 includes performing an operation with the SBK (e.g., 330) and/or a secure key using a secure encryption engine (e.g., 118). For example, the secure encryption engine can be used to perform encryption and/or decryption operations (e.g., with SBK 330 or a secure key used as the encryption key): the data to be encrypted and/or decrypted is passed to the secure encryption engine (e.g., 118), which then outputs the processed (e.g., encrypted, decrypted, etc.) data. In this way, the SBK 330 and/or the secure key are kept in a secure state (e.g., in the respective key slots 210 and 220 of the secure encryption engine 118, which is allowed to perform encryption and/or decryption operations in the low-secure boot mode). Similarly, the secure encryption engine (e.g., 118) can be used to perform an authentication operation (e.g., where a digital signature is associated with the SBK 330 and/or a secure key, or where authentication of the data otherwise requires knowledge of the SBK 330 and/or secure key, etc.) and/or DRM operations. Again, the secure encryption engine (e.g., 118) can be used to restrict or otherwise control access to the SBK 330 and/or secure key during the authentication and/or DRM operations.

As shown in FIG. 4C, step 474 includes overwriting the SSK in a key slot (e.g., 220) of the secure encryption engine (e.g., 118).
In one embodiment, a new SSK to be used for the overwriting can be specified (e.g., by a system manufacturer that also specifies SBK 330 and/or secure device data 340, etc.). Alternatively, the SSK may be regenerated. For example, new secure device data 340 may be specified (e.g., by accessing different secure device data and using the new secure device data for the SSK calculation, by programming the fuses associated with secure device data 340, by modifying the contents of secure device data 340, etc.).

As shown in FIG. 4D, step 480 includes restricting access to the SSK (e.g., a secure key). For example, the secure encryption engine (e.g., 118) can restrict access to the key slot (e.g., 220) that stores the secure key (e.g., designating it as read-only, designating it as write-only, and so on). In another embodiment, a register (e.g., 112), cache, or other memory that stores information related to the secure key may be flushed. Also, in one embodiment, access to a register (e.g., one or more A/O registers 112) can be restricted by setting a "sticky" or persistence bit (e.g., located in A/O domain 111, located elsewhere in device 110, etc.).

Step 485 includes flushing the SBK (e.g., secret key 330) from the key slot (e.g., 210) of the secure encryption engine (e.g., 118). In one embodiment, flushing the SBK can be done by writing all zeros to the key slot 210. Alternatively, other data can be written to the key slot 210. In other embodiments, the secret key (e.g., 330 stored in the key slot 210, etc.) can be changed, hidden, removed, and so on. Thus, in one embodiment, access to the SBK (e.g., 330) is further restricted (in addition to the restrictions of step 450), improving the security of the SBK (e.g., 330) and/or of components that use or otherwise access the SBK (e.g., device 110, system 100, etc.).

As shown in FIG. 4D, step 490 includes exiting the low-secure boot mode. Step 491 includes entering a non-boot mode and starting execution of non-boot code.
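The key-slot handling of steps 480 and 485 can be sketched minimally as follows. The slot representation, key size, and function names are assumptions for illustration; the sketch only captures the two behaviors described: marking the SSK slot access-restricted and overwriting the SBK slot with zeros before leaving the low-secure boot mode.

```python
# Minimal sketch of steps 480/485 (key-slot layout assumed, AES-128-sized
# slots for illustration): restrict access to the SSK slot, then flush the
# SBK slot by overwriting it with zeros.
KEY_SIZE = 16

class KeySlot:
    def __init__(self, key: bytes):
        self.key = key
        self.read_locked = False

def restrict_ssk(slot: KeySlot) -> None:
    slot.read_locked = True          # step 480: "sticky" access restriction

def flush_sbk(slot: KeySlot) -> None:
    slot.key = bytes(KEY_SIZE)       # step 485: overwrite with all zeros

sbk_slot = KeySlot(b"\x11" * KEY_SIZE)
flush_sbk(sbk_slot)
print(sbk_slot.key == bytes(KEY_SIZE))  # True
```

Flushing rather than merely locking the SBK slot reflects the asymmetry described above: the SSK remains usable by the engine, while the SBK is removed entirely once it is no longer needed.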
In one embodiment, the low-secure boot code (e.g., 155) may finish execution and transfer control to other code (e.g., non-boot code such as application 145, etc.). The non-boot code (e.g., 145) may reside in a peripheral device (e.g., 140) of the device (e.g., 110) and may be an operating system or other application executed by the device (e.g., 110) and/or system (e.g., 100).

Step 492 includes performing operations associated with the SSK (e.g., a secure key). For example, an operating system or other application running on the system (e.g., 100) and/or device (e.g., 110) can access the SSK and use the SSK to encrypt, decrypt, authenticate, sign, etc. respective pieces of data (e.g., video content, audio content, audio/video content, other data, etc.). Thus, the SSK can be provided and used to secure data (e.g., video content, audio content, audio/video content, other data, etc.) stored in or otherwise accessed by the system (e.g., 100) and/or device (e.g., 110), while securing access to the information used to generate the SSK (e.g., SBK 330, secure device data 340, etc.). In another embodiment, data (e.g., video content, audio content, audio/video content, other data, etc.) can be processed (e.g., for encryption, decryption, authentication, signing, etc.) by the secure encryption engine (e.g., 118), which can provide implementation flexibility and/or increased security as needed.

As shown in FIG. 4D, step 494 includes performing non-SSK operations. For example, tasks performed by an operating system or other application running on the system (e.g., 100) and/or device (e.g., 110) can directly and/or indirectly access and/or process data without relying on the SSK. In one embodiment, these operations can be considered normal operations of a system (e.g., 100) and/or device (e.g., 110) that do not use a secure key.

FIG. 5 is a flowchart illustrating a computer-implemented process 500 for performing a pre-production operation in accordance with one embodiment of the present invention.
Process 500 may be used to perform step 412 of FIG. 4 in one embodiment.

As shown in FIG. 5, step 510 includes initiating a pre-production operation. In one embodiment, step 510 includes initiating execution of recovery code. The recovery code may be accessed from a peripheral device (e.g., 120-150) in one embodiment. The peripheral device may be a UART peripheral device in one embodiment.

Step 520 includes programming at least one PLL (e.g., of unit 117) to implement communication with at least one component that stores data to be downloaded (e.g., to device 110). This programming includes configuring the PLL to generate a clock signal at an operating frequency that allows a peripheral device (e.g., 120-150) to establish a communication channel. The one or more components may be peripheral devices (e.g., 120-150) of a system (e.g., 100) coupled to the programmable integrated circuit (e.g., device 110). Alternatively, the one or more components may be located external to the system (e.g., 100) and communicatively coupled to the device (e.g., 110).

As shown in FIG. 5, step 530 includes establishing a communication channel with the at least one component. For example, components of the system (e.g., 100) and/or device (e.g., 110) (e.g., system controllers 119a-119c, etc.) can access the clock signal generated by the PLL in one embodiment. One or more messages (e.g., performing a "handshake", etc.) can be exchanged between a component (e.g., system controllers 119a-119c, processor 115, etc.) and the at least one component (e.g., peripheral devices 120-150, components external to system 100, etc.) to establish the communication channel.

Step 540 includes downloading data from the one or more components. The data may include new or updated boot code (e.g., secure boot code 114, low-secure boot code 155, etc.). In another embodiment, the data may include an application (e.g., 145) and/or data for access by a component of the device (e.g., 110) and/or system (e.g., 100).

As shown in FIG. 5, step 550 includes authenticating the downloaded data.
The data can be authenticated using a secret key (e.g., SBK 330), a secure key, etc. Further, the data can be authenticated and/or otherwise processed (e.g., decrypted, encrypted, etc.) in a secure environment (e.g., within secure encryption engine 118).

FIG. 6 is a flowchart illustrating a computer-implemented process 600 for performing a failure analysis operation according to one embodiment of the invention. Process 600 may be used in one embodiment to perform step 417 of FIG. 4A.

As shown in FIG. 6, step 610 includes initiating a failure analysis operation. In one embodiment, step 610 may be performed similarly to step 510 of process 500. Further, steps 620 and 630 of process 600 may be performed similarly to steps 520 and 530 of process 500 in one embodiment.

As shown in FIG. 6, step 640 includes accessing a device identifier from one or more components (e.g., one or more of peripheral devices 120-150). For example, a request for such a device identifier can be made to the at least one component, and/or the at least one component can broadcast such a device identifier without a formal request from the device (e.g., 110).

Step 650 includes determining whether the device identifier accessed from the at least one component (e.g., one or more of peripheral devices 120-150) matches the UID (e.g., 350) of the device (e.g., 110). Steps 640 and 650 can be performed to reduce unauthorized access and/or to make it difficult for an end user to place device 110 and/or system 100 in the failure analysis mode and download data to the device (e.g., 110). If the device identifier does not match the UID (e.g., 350), the process 600 ends. Alternatively, if the device identifier matches the UID (e.g., 350), step 660 can be performed. Steps 660 and 670 of process 600 may be performed similarly to steps 540 and 550 of process 500 in one embodiment.

FIG. 8 is a flowchart illustrating a computer-implemented process 800 for performing a warm boot in accordance with one embodiment of the present invention.
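The gating comparison of steps 640-650 reduces to an identifier match. The sketch below is illustrative only: the use of `hmac.compare_digest` (a constant-time comparison from the Python standard library) is an assumption standing in for whatever comparison a real implementation would use.

```python
# Hedged sketch of step 650: continue the failure-analysis flow only when the
# identifier accessed from the component exactly matches the device's UID.
import hmac

def uid_matches(accessed_id: bytes, device_uid: bytes) -> bool:
    """Constant-time equality check between the accessed identifier and UID."""
    return hmac.compare_digest(accessed_id, device_uid)

device_uid = bytes.fromhex("00112233445566")
print(uid_matches(bytes.fromhex("00112233445566"), device_uid))  # True
print(uid_matches(bytes.fromhex("ffeeddccbbaa99"), device_uid))  # False
```

On a mismatch, process 600 simply ends, which is the property that makes it hard to drive an arbitrary device into the failure-analysis download path.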
Process 800 may implement step 431 of process 400 in one embodiment. Further, process 800 may be performed after a reset of the device (e.g., 110) and/or system (e.g., 100) in one embodiment.

As shown in FIG. 8, step 810 includes reading data from an always-on (A/O) register (e.g., 112). This data may include peripheral device configuration information (e.g., SDRAM configuration information, PLL configuration information such as the operating frequency for components accessing clock signals generated by the PLL, device 110 settings, system 100 settings, etc.). The data can also include the address of the restart code (e.g., executed at step 850). Further, in one embodiment, the data can also include a fingerprint (e.g., a non-secure hash value of the restart code, a secure hash value of the restart code, etc.) or other information about the restart code.

Step 820 includes configuring at least one peripheral device (e.g., 120-150) that stores the restart code. For example, if an SDRAM peripheral (e.g., one of the peripherals 120-150 coupled to device 110) stores the restart code, the SDRAM can be taken out of self-refresh mode. In addition, the peripheral device can be further configured (e.g., adjusting its operating frequency, preparing a system controller coupled to the peripheral device to set up the peripheral device's communication channel, etc.).

As shown in FIG. 8, step 830 includes establishing a communication channel with the at least one peripheral device that stores the restart code. Step 830 may be performed similarly to step 530 of process 500 in one embodiment.

Step 840 includes authenticating the restart code. The restart code can be verified or authenticated by calculating a non-secure hash or digest of the restart code stored in the peripheral device. If the hash or digest matches the fingerprint accessed from the A/O register (e.g., 112) at step 810, the restart code can be executed at step 850 to recover the system state.
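The fingerprint check of step 840 can be sketched as follows, under the assumption (not specified by this embodiment) that the fingerprint stored in the A/O register is a SHA-256 digest of the restart code; the function name and sample bytes are illustrative.

```python
# Sketch of step 840: recompute the digest of the restart code read from the
# peripheral and compare it with the fingerprint saved in the A/O register.
import hashlib

def authenticate_restart_code(restart_code: bytes, fingerprint: bytes) -> bool:
    """Allow the jump to the restart code (step 850) only on a digest match."""
    return hashlib.sha256(restart_code).digest() == fingerprint

code = b"\x90\x90\xc3"                          # stand-in restart-code bytes
ao_fingerprint = hashlib.sha256(code).digest()  # saved before the power event
print(authenticate_restart_code(code, ao_fingerprint))        # True
print(authenticate_restart_code(b"tampered", ao_fingerprint)) # False
```

Because the fingerprint survives in the always-on domain while the peripheral's contents may have been disturbed, the check detects a corrupted or substituted restart image before control is transferred to it.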
In one embodiment, the restart code includes a vector, so step 850 can jump to the restart vector (e.g., in SDRAM or another peripheral) to restore the system state.

FIG. 9 is a flowchart illustrating a computer-implemented process 900 for performing a cold boot in accordance with one embodiment of the present invention. Process 900 may implement step 440 of process 400 in one embodiment. Further, process 900 may be performed after power-on of the device (e.g., 110) and/or system (e.g., 100) in one embodiment.

Step 910 includes accessing information regarding the type of at least one peripheral device. For example, the type of one or more peripheral devices (e.g., 120-150) coupled to the device (e.g., 110) can be identified. Peripheral device types may include NOR, NAND, SPI, MMC, other peripheral device types, and the like. Information regarding the peripheral device type may be stored in a fuse (e.g., 300) of the device (e.g., 110) in one embodiment. Information about peripheral devices may also be stored in other parts of device 110 (e.g., strap pins, etc.) in one embodiment.

As shown in FIG. 9, step 920 includes accessing information that enables communication with the at least one peripheral device. In one embodiment, this information may include characteristics of the peripheral device whose type was identified in step 910. For example, this information can include the type of ECC to be used when communicating with the peripheral device, the address format for communication, the basic or initial operating frequency for implementing the communication channel with the peripheral device, etc. Information about the peripheral device may be stored in a fuse (e.g., 300) of the device (e.g., 110) in one embodiment.

Step 930 includes accessing information to improve the performance of one or more peripheral devices.
For example, an improved operating frequency for the peripheral device can be accessed, this improved operating frequency being higher than the one accessed at step 920. This information can be accessed from the peripheral device's boot configuration table (BCT) after establishing a communication channel (e.g., operating at the basic or initial operating frequency) with the peripheral device. In this way, the information accessed in steps 910 and 920 can be used to establish a communication channel with a peripheral device operating at a basic or low performance level, and information for improving the performance of the peripheral device and/or communication channel (e.g., increasing speed, bandwidth, etc.) can be communicated (e.g., at step 930) over the implemented communication channel operating at the lower performance level.

As shown in FIG. 9, step 940 includes configuring the system (e.g., 100) with the information for improving the performance of the at least one peripheral device (e.g., accessed in step 930). For example, one or more PLLs (e.g., of unit 117) can generate a higher-frequency clock signal based on the information for improving the performance of the at least one peripheral device. Further, in step 940, the system controllers (e.g., 119a-119c) can be reinitialized for higher performance.

Step 950 includes accessing information regarding the low-secure boot code (e.g., 155). For example, the location (e.g., load address, entry address, etc.) of the low-secure boot code (e.g., 155) can be accessed. In addition, redundancy information for the low-secure boot code (e.g., 155) can also be accessed; this redundancy information can include pointers to different versions or generations of the low-secure boot code (e.g., for returning to a previous generation or updating), pointers to additional copies of the same generation of the low-secure boot code (e.g., for reloading the same generation due to errors or other exceptions during reading, updating, etc.), and so on.

As shown in FIG. 9, step 960 includes initializing a peripheral device (e.g., 150) that stores the low-secure boot code (e.g., 155) and reading the low-secure boot code from the peripheral device (e.g., 150). This initialization may involve supplying a clock signal (e.g., at a low operating frequency based on the information accessed in step 920, at a higher frequency based on the information accessed in step 930, etc.) to the peripheral device (e.g., 150), to a system controller (e.g., 119c) coupled to the peripheral device, and so on. In addition, the initialization of the peripheral device may include communicating a "handshake" or other information used to establish a communication channel between the peripheral device (e.g., 150) and the device (e.g., 110).

Step 970 includes decrypting the low-secure boot code and/or authenticating the low-secure boot code using the SBK (e.g., 330). The SBK can be accessed from the secure portion 310 of the fuse 300, the key slot 210 of the secure encryption engine 118, an A/O register (e.g., 112) of the device 110, etc. In one embodiment, once the low-secure boot code (e.g., 155) is decrypted (e.g., by secure encryption engine 118), it can be authenticated or verified by comparing a computed hash or digest (e.g., computed by engine 118 or another component of system 100) with the information about the low-secure boot code accessed in step 950.

The embodiments of the present invention have been described above with reference to numerous specific details that may vary from implementation to implementation. Accordingly, the sole and exclusive expression of what the invention is, and of what the applicant intends, is the set of claims in the specific form in which they issue, including any subsequent correction. The scope of the claims is not limited in any way by any element, characteristic, feature, effect, or attribute that is not expressly recited in the claims.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

DESCRIPTION OF SYMBOLS 100 ... system; 110 ... device (SoC); 111 ... always-on (A/O) domain; 112 ... A/O register; 113 ... memory; 114 ... secure boot code; 115 ... general-purpose processing unit (CPU); 116 ... special processing unit (GPU); 117 ... unit; 118 ... secure encryption engine; 119a-c ... system controller; 120 ... peripheral device (USB); 130 ... peripheral device; 140 ... peripheral device (HDD); 145 ... application; 150 ... peripheral device (RAM); 155 ... low-secure boot code; 160 ... controllable supply potential domain; 170 ... controllable frequency domain; 210 ... secure boot key (SBK) key slot; 220 ... secure key (SSK) key slot; 300 ... fuse; 310 ... secure portion; 320 ... non-secure portion; 330 ... secure boot key (SBK); 340 ... secure device data; 350 ... unique device identifier (UID)
Electronic assemblies and their manufacture are described. One assembly includes a coreless substrate comprising a plurality of dielectric layers and electrically conductive pathways, the coreless substrate including a first side and a second side opposite the first side. The assembly includes a first die embedded in the coreless substrate, the first die comprising an RF die, the first die positioned in a dielectric layer that extends to the first side of the coreless substrate. The assembly includes a second die positioned on the first side, the second die positioned on the first die. In another aspect, a molding material may be positioned on the die side, wherein the first die and the second die are covered by the molding material. In another aspect, an electrical shielding layer may be positioned over the first side. Other embodiments are described and claimed.
1. A component comprising:
a coreless substrate comprising a plurality of dielectric layers and conductive pathways, the coreless substrate comprising a first side and a second side opposite the first side;
a first die embedded in the coreless substrate, the first die comprising an RF die, the first die positioned in a dielectric layer that extends to the first side of the coreless substrate; and
a second die on the first side, the second die positioned on the first die.
2. The assembly of claim 1 further comprising:
a molding material on the die side, wherein the first die and the second die are covered by the molding material; and
an electrical shielding layer over the first side.
3. The assembly of claim 1 further comprising:
a third die embedded in the coreless substrate, the third die positioned in the same dielectric layer as the first die; and
a fourth die on the third die on the first side of the coreless substrate.
4. The assembly of claim 1 further comprising a plurality of interconnect pads on a pad side of said coreless substrate; and a printed circuit board, wherein said coreless substrate is electrically coupled to the printed circuit board through said interconnect pads.
5. The assembly of claim 1 wherein said first die includes an active side and a back side, said active side of said first die being located between said back side of said first die and said second side of said coreless substrate.
6. The assembly of claim 1 further comprising wire bonds electrically coupling said second die to said coreless substrate.
7. The assembly of claim 1 wherein the second die comprises a power amplifier, and wherein the second die is electrically coupled to the first die.
8. The assembly of claim 1 wherein the second die includes an active side and a back side, and wherein the back side of the second die faces the back side of the first die.
9. The assembly of claim 1 wherein at least a portion of said second die is directly over said first die.
10. The assembly of claim 1 wherein said second die includes an active side and a back side, and wherein the active side of said second die faces the back side of said first die.
11. The assembly of claim 1 further comprising a gap between said second die and a back side of said coreless substrate.
12. The assembly of claim 5 wherein said first die comprises a metallization layer on a back side thereof.
13. A component comprising:
a coreless substrate comprising a first side and a second side;
a first die embedded in a dielectric layer in the coreless substrate, the first die comprising an RF die; and
a second die on the first side of the coreless substrate and electrically coupled to the first die;
wherein the first die is separated from the second side by a plurality of dielectric layers; and
wherein the second die is aligned with the first die such that the second die covers at least a portion of the first die when viewed from above.
14. The assembly of claim 13 further comprising:
a molding material on the first side, wherein the first die and the second die are covered by the molding material; and
an electrical shielding structure coupled to the molding material on the first side.
15. The assembly of claim 13 wherein:
the first die includes a metallization layer and a die attach film thereon;
the second die includes a metallization layer and a die attach film thereon; and
the die attach film of the second die is disposed in contact with the die attach film of the first die.
16. The assembly of claim 13 wherein said first die is located in a dielectric layer extending to said first side of said coreless substrate.
17. The assembly of claim 13 further comprising: a third die embedded in said dielectric layer; and a fourth die on the die attach side of said coreless substrate.
18. A method comprising:
embedding a first die comprising an RF die in a dielectric layer of a coreless substrate, the coreless substrate including a first side and a second side opposite the first side, the first die positioned in a dielectric layer extending to the first side;
providing a second die on the first side of the coreless substrate, the second die being over the first die;
forming a molding layer on the first side of the substrate, the molding layer covering the first die and the second die; and
providing an electrical shielding layer coupled to the molding layer on the die side.
19. The method of claim 18, further comprising: embedding a third die in the same dielectric layer as the first die; and providing a fourth die on the first side of the coreless substrate, the fourth die being located on the third die.
20. The method of claim 18, further comprising disposing the first die and the second die such that an active side of the first die faces the second side of the coreless substrate and a back side of the first die faces the second die.
21. The method of claim 18 further comprising disposing said second die such that a back side of said second die faces a back side of said first die.
22. The method of claim 18 further comprising disposing said second die such that an active side of said second die faces a back side of said first die.
23. The method of claim 18, further comprising forming a recessed region on the first side, wherein a plurality of electrical connections from the second die to the coreless substrate are fabricated in the recessed region.
24. The method of claim 18 wherein said second die is spaced apart from said first side of said coreless substrate. |
System-in-package with embedded RF die in a coreless substrate

BACKGROUND
As electronic devices are made smaller and smaller and wireless communication requirements increase, the thickness of conventional assemblies that include radio frequency dies (RF dies) on package substrates makes low-profile, small-form-factor wireless communication devices difficult to achieve.

BRIEF DESCRIPTION OF THE DRAWINGS
The embodiments are described by way of example and with reference to the drawings.
FIG. 1 illustrates an assembly including a multilayer substrate including an embedded RF die, in accordance with some embodiments.
FIG. 2 illustrates an assembly including a multilayer substrate including an embedded RF die and another embedded die, in accordance with some embodiments.
FIG. 3 illustrates an assembly including a multilayer substrate including an embedded RF die and a flip chip die on the surface of the substrate, in accordance with some embodiments.
FIG. 4 illustrates an assembly including an embedded RF die and a flip chip die in which there is a gap between the surface of the flip chip die and the substrate, in accordance with some embodiments.
FIG. 5 is an operational flow diagram for forming an assembly including a multilayer substrate including an embedded RF die, in accordance with an embodiment.
FIG. 6 shows an electronic system assembly to which an embodiment can be applied.

DETAILED DESCRIPTION
Reference will now be made to the drawings, in which the same reference numerals refer to the same or similar elements. To best illustrate the structures in the various embodiments, the figures herein include illustrations of electronic devices and various components. The actual appearance of the fabricated structures may therefore differ while still incorporating the structures claimed by the illustrated embodiments. Moreover, the drawings may show only the structures necessary for understanding the illustrated embodiments.
Other structures known in the prior art are not included in order to keep the drawings clear. An RF (radio frequency) package assembly is formed to include one or more RF die structures on a substrate along with other components including, but not limited to, power amplifiers, switches, and other devices. Some embodiments relate to an assembly structure that includes an RF die embedded in a substrate and a component on the RF die. Certain embodiments are also directed to the use of multiple embedded RF die structures and multiple components. Other embodiments are directed to methods for fabricating assembly structures including embedded RF die structures. FIG. 1 is a cross-sectional view of an embodiment including an assembly 2 that includes a substrate 10. The substrate 10 as shown is coreless and includes a first side 12 and a second side 14. As shown in the embodiment of FIG. 1, the first side 12 can be referred to as the device mounting side, because electrical components (including, but not limited to, amplifiers, switches, and processors) can be located on the device mounting side. The second side 14 can be referred to as a pad side and includes a plurality of interconnect pads 16 on which electrical connections to other devices such as a board (not shown in FIG. 1) can be fabricated. Substrate 10 includes a plurality of layers including dielectric layers 18, 20, 22, 24, 26. Layer 26 can be a solder resist layer. Substrate 10 also includes conductive pathways formed in substrate 10 for transmitting electrical signals. FIG. 1 shows an example of a conductive pathway in dielectric layer 18 extending into dielectric layer 20, including a patterned metal layer 28 and conductive vias 30, 32, 34, 36 extending to pad metal regions 38, 40 used as pads for wire bonding. The metal pathway layout as shown in FIG. 1 is an exemplary arrangement, and various modifications can be made thereto.
For the sake of simplicity, the metal pathways through most of the dielectric layers are not shown. In the embodiment of FIG. 1, substrate 10 can be formed using a bumpless build-up layer (BBUL) technique, in which dielectric layers and metal layers are deposited and laminated to form a coreless bumpless build-up layer (BBUL-C) package. As shown in the embodiment of FIG. 1, RF die 44 is embedded in upper dielectric layer 18 of substrate 10. The RF die 44 can include a metallization layer 52 on its back side. The metallization layer can be a single metal layer or can be a stack of metal layers. Electrical connections to and from the RF die 44 are made on the active side of the RF die 44 by connections 46, 48. For the sake of simplicity, only two connections 46, 48 are shown. A die attach film 54, formed, for example, of a polymer, is located on the metallization layer 52, with the metallization layer 52 between the RF die 44 and the die attach film 54. Another component, such as die 56, may be located on die attach film 54 over RF die 44 of substrate 10. In some embodiments, the die 56 can include a second RF die that is wire bonded to the substrate 10 at the pad regions 38, 40 by wire bonds 58, 60. The die 56 may also include a metallization layer 62 and a die attach film 64, with the metallization layer 62 between the die attach film 64 and the die 56, and the die attach film 64 coupled to the die attach film 54 on the RF die 44. It will be understood that one or more of the die attach films 54, 64 and the metallization layers 52, 62 may be modified or omitted in certain embodiments, depending on the particular die structure and/or component used. It should also be understood that the various layers shown in FIG. 1 are not necessarily to scale, are not necessarily of uniform thickness, and may be different from the illustrated embodiments. As shown in FIG.
1, RF die 44 is embedded in substrate 10, and die 56 is located on RF die 44, separated from RF die 44 by metallization layers 52, 62 and die attach film layers 54, 64. The enlarged portion of FIG. 1 shows the relationship between the layers, in which the die attach film layers 54 and 64 are in contact with each other. A molding layer 66, such as a polymer, may be formed to cover the surface of the substrate, the substrate surface including the die 56 and wire bonds 58 and 60 coupled to the pad regions 38, 40. A suitable conformal shield 68 can also be formed on the sides and top of the molding layer 66 to shield electromagnetic (EM) noise. To minimize the height of the assembly, a connection to the board can be made through the interconnect pads 16 using a land grid array (LGA). Other interconnect configurations, including, but not limited to, ball grid arrays (BGA), may also be used. In some embodiments, RF die 44 may include baseband and medium access control circuitry (BB-MAC). Moreover, in some embodiments, component 56 can be selected from structures including, but not limited to, another RF die or an analog die element. One or more of the following advantages may be provided in certain embodiments by forming an assembly comprising a package structure such as that shown in FIG. 1. First, by embedding RF die 44 into substrate 10, the height of the package can be reduced as compared to packages having RF dies that are not embedded in the substrate. Second, by embedding RF die 44, the signal length can be reduced. Third, the design shown in FIG. 1 also provides in-situ shielding of the RF die 44. Fourth, by placing the die 56 on the RF die 44 as shown in FIG. 1, the width of the substrate 10 and the interconnect length can be reduced as compared to packages having die structures in a different configuration. FIG.
2 illustrates a cross-sectional view of an assembly 102 including a substrate 110 in accordance with some embodiments. Substrate 110 is coreless and includes a first side 112 and a second side 114. The first side 112 includes electrical components (including, but not limited to, amplifiers, switches, and processors) disposed thereon. The second side 114 includes a plurality of interconnect pads 116 on which electrical connections to other devices such as a board (not shown in FIG. 2) can be fabricated. Substrate 110 can include a plurality of layers including dielectric layers 118, 120, 122, 124, 126. Layer 126 can be a solder resist layer. The thickness of the dielectric layers need not be uniform. Substrate 110 includes conductive pathways formed to transmit electrical signals. FIG. 2 illustrates an example of a conductive pathway in dielectric layer 118 extending to dielectric layer 120, including patterned metal layer 128 within dielectric layer 126, conductive vias 131, 132, 133, 134, 135, and 136 contacting metal layer 128, and pad regions 138, 139, 140, and 141 used as wire bonding regions. The conductive pathways as shown in FIG. 2 are an exemplary arrangement, and various modifications can be made thereto. For simplicity, conductive pathways (including, for example, patterned metal layers, vias, and other metal regions such as those described above) that may extend through other dielectric layers are not shown. The substrate 110 may be formed using a bumpless build-up layer (BBUL) technique to form a coreless bumpless build-up layer (BBUL-C) package. Substrate 110 can include a molding layer 166 and a conformal shield 168 thereon. In some embodiments, multiple die structures can be embedded in the substrate. As shown in the embodiment of FIG. 2, RF die 144 and die 145 are embedded in upper dielectric layer 118 of substrate 110.
In one embodiment, RF die 144 includes a radio frequency integrated circuit (RFIC) that includes baseband and medium access control circuitry (BB-MAC). In one embodiment, die 145 may be an integrated passive device (IPD), for example, including circuitry that provides RF matching and frequency adjustment functions for a power amplifier. A metallization layer 152 and a die attach film 154 may be disposed on the RF die 144, and a die attach film 155 may be disposed on the die 145. Electrical connections to and from the RF die 144 are made by the connections 146, 148 on the active side in the embodiment shown in FIG. 2. For the sake of simplicity, two connections 146, 148 are shown, although embodiments may include a greater number of connections. The die attach film 154 can be disposed over the metallization layer 152 such that the metallization layer 152 is disposed between the RF die 144 and the die attach film 154. A component such as die 156 may be, for example, an RF power amplifier die, and may be located on the die attach film 154 on RF die 144 embedded in the substrate 110. In some embodiments, die 156 can be wire bonded to substrate 110 at pad regions 138, 140 by wire bonds 158, 160. As shown in the enlarged portion on the left in FIG. 2, the die 156 may also include a metallization layer 162 and a die attach film 164, wherein the die attach film 164 is coupled to the die attach film 154 on the RF die 144. As shown in the enlarged portion on the right in FIG. 2, a component such as die 157 may be, for example, an RF switch die, and may be located on the die attach film 155 on die 145 embedded in substrate 110. In some embodiments, the die 157 can be wire bonded to the substrate 110 at the pad regions 139, 141 by wire bonds 159, 161.
The die 157, such as an RF switch die, can also include a metallization layer 163 and a die attach film 165, with the metallization layer 163 between the die attach film 165 and the die 157, and the die attach film 165 coupled to the die attach film 155 on the die 145. The assembly according to the embodiment as shown in FIG. 2 may include various RF components embedded in, or located on, the device attachment side of the multilayer substrate. Such an assembly can form a complete RF transceiver package in some embodiments. FIG. 3 illustrates a cross-sectional view of an assembly 202 including a substrate 210 including a flip chip die 256 on an embedded RF die 244, in accordance with some embodiments. Substrate 210 is coreless and includes a first side 212 and a second side 214. The first side 212 can include electrical components (including, but not limited to, amplifiers, switches, and processors) located thereon. The second side 214 includes a plurality of interconnect pads 216 on which electrical connections to another device, such as a board, can be fabricated. Substrate 210 includes a plurality of layers including dielectric layers 218, 220, 222, 224, 226. Layer 226 can be a solder resist layer. Substrate 210 also includes conductive pathways formed to transmit electrical signals in substrate 210. FIG. 3 shows an example of a conductive pathway in dielectric layer 218 extending into dielectric layer 220, including patterned metal layer 228 and conductive vias 230, 232, 234, 236 that extend to pad metal regions 238, 240. The metal pathway layout as shown in FIG. 3 is an exemplary layout, and various modifications can be made thereto. For the sake of simplicity, the metal pathways in other dielectric layers are not shown. The substrate 210 may be formed using a bumpless build-up layer (BBUL) technique in which metal and dielectric layers are deposited and laminated to form a coreless bumpless build-up layer (BBUL-C) package.
Substrate 210 includes a molding layer 266 and a conformal shield 268 thereon. In the embodiment shown in FIG. 3, flip chip die 256 is located on die attach film 254 on RF die 244 embedded in upper dielectric layer 218. The RF die 244 can include a metallization layer 252 on its back side surface. Electrical connections to the RF die 244 can be made on the active side of the RF die through electrical connections 246, 248. Flip chip die 256 can be electrically coupled to RF die 244 by electrical connections 241, 243 to, for example, pad regions 238, 240. Pad regions 238, 240 can be recessed to minimize the vertical height of the assembly. As shown in FIG. 3, recessed regions 251, 253 are formed in dielectric layer 226 on first side 212, and electrical connections 241, 243 between flip chip die 256 and pad regions 238, 240 extend through the recessed regions 251, 253. Depending on the size and precise configuration of the recessed regions 251, 253, in some embodiments the die structure can be at least partially located in the recessed regions and thus at least partially embedded in the substrate 210. FIG. 4 illustrates a cross-sectional view of an assembly 302, similar in some aspects to FIG. 3, including substrate 310 and flip chip die 356 on embedded RF die 344, in accordance with some embodiments. The substrate 310 is coreless and includes a first side 312 and a second side 314. The first side 312 can include electrical components thereon (including, but not limited to, amplifiers, switches, and processors), and the second side 314 includes a plurality of interconnect pads 316 on which electrical connections to another device, such as a board, can be fabricated. Substrate 310 includes a plurality of layers including dielectric layers 318, 320, 322, 324, 326. Layer 326 can be a solder resist layer. Substrate 310 also includes conductive pathways formed to transmit electrical signals within substrate 310.
FIG. 4 shows an example of a conductive pathway in dielectric layer 318 extending into dielectric layer 320, including patterned metal layer 328 and conductive vias 330, 332, 334, 336 that extend to pad metal regions 338, 340. The metal pathway layout as shown in FIG. 4 is an exemplary layout, and various modifications can be made thereto. For the sake of simplicity, the metal pathways in most of the dielectric layers are not shown. The substrate 310 can be formed using a bumpless build-up layer (BBUL) technique in which metal and dielectric layers are deposited and laminated to form a coreless bumpless build-up layer (BBUL-C) package. Substrate 310 can include a molding layer 366 and a conformal shield 368 located thereon. In the embodiment shown in FIG. 4, flip chip die 356 is electrically coupled to RF die 344 embedded in upper dielectric layer 318. The RF die 344 can include a metallization layer 352 and a die attach film 354 on its back side surface. Electrical connections to the RF die 344 can be made on the active side of the die by electrical connections 346, 348 coupled to the patterned metal layer 328. Flip chip die 356 can be electrically coupled to RF die 344 by electrical connections 341, 343 to, for example, pad regions 338, 340. Pad regions 338, 340 extend to the surface of side 312 of substrate 310. Other layers may also be present on the flip chip die 356, but are not shown for simplicity. Flip chip die 356 is disposed with a gap 359 between die 356 and the surface of side 312 of substrate 310. This gap 359 serves to minimize electrical interference between the flip chip die 356 and the RF die 344. The size of the gap 359 between the flip chip die 356 and the surface of side 312 of the substrate 310 can be controlled by the height of the electrical connections 341, 343. FIG. 5 illustrates an operational flow diagram for forming an assembly including an embedded RF die in accordance with some embodiments.
Block 401 is embedding at least one RF die in a dielectric layer on the die side of the substrate. Any suitable processing operation can be used, including, but not limited to, BBUL-C processing. In BBUL-C processing, the RF die can be placed on a surface and a dielectric layer can then be built up around the RF die. In some embodiments, contact openings through the dielectric layer are then formed and the openings are filled with metal to form electrical pathways for connection to the RF die. Block 403 is forming other dielectric and metal layers over the dielectric layer comprising the RF die. In the BBUL process, such layers are laminated to the structure (with appropriate electrical pathways formed) to create a multilayer substrate. Block 405 is forming connection pads on the multilayer substrate for attaching the substrate to a printed circuit board (PCB). Block 407 is placing one or more other dies on the device attachment side (on the side opposite the formed connection pads), wherein the other dies are disposed such that at least a portion of the other dies is directly over the embedded die. This arrangement serves to minimize the electrical connection distance between the embedded die and the other dies. Block 409 is providing a molding layer and shield over the other dies and the embedded die on the device attachment side to provide protection and electrical shielding. It should be understood that various additions, reductions, and/or modifications may be made to the operations described above in connection with FIG. 5 within the scope of the various embodiments. For example, in block 407, the other die may be part of a package substrate assembly that may be sized to fit over the embedded RF die on the attachment side. Moreover, some embodiments may involve a subset of the operations specified in FIG. 5, independent of the other operations specified in FIG. 5. Embodiments described herein may provide one or more of the following advantages.
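The flow of blocks 401-409 can be summarized as an ordered sequence. The sketch below is illustrative only; the step descriptions paraphrase FIG. 5, and the names are assumptions.

```python
# Paraphrased blocks 401-409 of FIG. 5 (illustrative, not normative).
ASSEMBLY_FLOW = [
    (401, "embed RF die in a dielectric layer on the die side of the substrate"),
    (403, "form additional dielectric and metal layers over the embedded-die layer"),
    (405, "form connection pads for attachment to a printed circuit board"),
    (407, "place other dies on the device attachment side, over the embedded die"),
    (409, "form molding layer and shield over the dies"),
]

def block_order(flow):
    # Return block numbers in execution order; per the text, embodiments
    # may use only a subset of these operations.
    return [block for block, _ in flow]

print(block_order(ASSEMBLY_FLOW))  # [401, 403, 405, 407, 409]
```

The list form mirrors the text's note that operations may be added, removed, or performed independently of one another.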
First, the embedded structure of the RF die and one or more other die structures provides the package structure with a smaller height (z-direction); some embodiments include a substrate with a molding layer having a total height of less than 1 mm. Second, the package substrate can have smaller lateral dimensions (x-y direction) by stacking the components on the embedded die. In some embodiments, this configuration allows the lateral dimension to be reduced by 50%. Third, by providing the RF dies on top of each other, shorter and more reliable connections can be made, minimizing RF losses and improving RF performance. Fourth, multiple technologies can be integrated in a single package substrate assembly, depending on the types of components located in or on the substrate. Fifth, the RF transceiver can be customized on a single package substrate. In addition, metallization layers such as those formed in one or more of the die structures of FIGS. 1-4 can be used to minimize electrical interference. Assemblies including the structures formed in the above embodiments can find application in various electronic components. FIG. 6 schematically illustrates an example of an electronic system assembly in which various aspects of the described embodiments may be embodied. Other embodiments need not include all of the features specified in FIG. 6, and may include alternative features not specified in FIG. 6. Assembly 502 of FIG. 6 can include at least one RF die 544 embedded in substrate 510. The RF die 544 can be electrically coupled to another die 556 disposed on the RF die. As shown in FIG. 6, a portion of the other die 556 is cut away to show the RF die 544 (shown in phantom to indicate that it is embedded in the substrate 510). The RF die 544 and the other die 556 located thereon may be configured as shown in certain embodiments above, including, for example, those illustrated in FIGS. 1, 3, and 4.
Although only one embedded RF die and one other die are shown in FIG. 6, embodiments may include multiple embedded dies and multiple other dies (RF dies or other types of die structures) on the substrate, for example, as described in connection with FIG. 2. The size of the system can be reduced by providing various components (e.g., CPU, amplifier, etc.) in or on the package substrate. Substrate 510 can be coupled to printed circuit board 588. Assembly 502 can further include other components including, but not limited to, memory 590 and one or more controllers 592a, 592b ... 592n, which are mounted on board 588. The board 588 can be a single-layer or multi-layer board having a plurality of wires that provide communication between the circuitry in the package substrate 510 and other components mounted to the board 588. In some embodiments, board 588 can include a card such as a daughter card or an expansion card. Some components can also be seated in sockets or connected directly to the board. Various components can also be integrated into the same package. A display 594 can also be included. Any suitable operating system and various applications can be executed and retained in memory 590. The content retained in memory 590 can be cached in accordance with known caching techniques. Programs and data in memory 590 can be swapped to storage device 596 as part of memory management operations.
System assembly 502 can include any suitable computing device including, but not limited to, a host, a server, a personal computer, a workstation, a laptop, a palmtop, a netbook, an ultrabook, a tablet, an e-book reader, a handheld gaming device, a handheld entertainment device (e.g., an MP3 (Moving Picture Experts Group Layer-3 Audio) player), a PDA (personal digital assistant), a smartphone or other telephony device (wireless or wired), a network appliance, a virtualization device, a storage controller, a network controller, a router, etc. Controllers 592a, 592b ... 592n may include one or more system controllers, peripheral controllers, memory controllers, hub controllers, I/O (input/output) bus controllers, video controllers, network controllers, storage device controllers, communication controllers, and the like. For example, a storage controller can control reading data from and writing data to storage device 596 in accordance with a storage protocol layer. The storage protocol layer can be any of a number of known storage protocols. Data written to or read from storage device 596 can be cached in accordance with known caching techniques. The network controller can include one or more protocol layers to send network packets to, and receive network packets from, a remote device over a network 598. Network 598 may include a local area network (LAN), the Internet, a wide area network (WAN), a storage area network (SAN), and the like. Embodiments can be configured to transmit and receive data over a wireless network or connection. In some embodiments, the network controller and various protocol layers may employ the Ethernet protocol over unshielded twisted pair cable, the Token Ring protocol, the Fibre Channel protocol, etc., or any other suitable network communication protocol. It will be appreciated that various modifications may be made within the scope of the embodiments described herein.
The term die as used herein refers to a workpiece that is transformed by various process operations into a desired electronic device. A die is typically a single piece singulated from a wafer and can be fabricated from semiconductor, non-semiconductor, or combinations of semiconductor and non-semiconductor materials. Terms such as "first," "second," and the like, as used herein, do not necessarily denote a particular order, quantity, or importance, and are merely used to distinguish different elements. Terms such as "top," "bottom," "upper," "lower," "above," "below," and the like are used for purposes of description, to indicate relative position, and should not be understood as limiting. Embodiments can be made, used, and included in a variety of positions and orientations. In the above detailed description, various features are grouped together for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may lie in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that the specific constructions and arrangements described may be modified by those skilled in the art. |
Aspects of the embodiments are directed to an IC chip that includes a substrate comprising a first metal layer, a second metal layer, and a ground plane residing on the first metal layer. The second metal layer can include a first signal trace, the first signal trace electrically coupled to a first signal pad residing in the first metal layer by a first signal via. The second metal layer can include a second signal trace, the second signal trace electrically coupled to a second signal pad residing in the first metal layer by a second signal via. The substrate can also include a ground trace residing in the second metal layer between the first signal trace and the second signal trace, the ground trace electrically coupled to the ground plane by a ground via. The vias coupled to the traces can include self-aligned or zero-misaligned vias. |
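As a rough illustration of the "width substantially similar" relationship between a self-aligned via and its trace, consider the following sketch. The 10% tolerance and the function name are assumptions for illustration; the source does not quantify "substantially similar".

```python
def substantially_similar(via_width_um: float, trace_width_um: float,
                          tolerance: float = 0.10) -> bool:
    # Treat the via width as 'substantially similar' to the trace width
    # when it is within an assumed +/-10% of the trace width.
    return abs(via_width_um - trace_width_um) <= tolerance * trace_width_um

print(substantially_similar(9.5, 10.0))  # True
print(substantially_similar(6.0, 10.0))  # False
```

A self-aligned or zero-misaligned via that is patterned with the trace itself naturally satisfies such a check, since the via inherits the trace width.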
1. A package substrate (100) comprising:
a substrate (102) comprising a first metal layer (M1 104) and a second metal layer (M2 106);
a ground plane (130) residing on the first metal layer (M1 104);
a first signal trace (124, 208a) residing in the second metal layer (M2 106), the first signal trace (124) electrically coupled to a first signal pad (132, 204a) residing in the first metal layer (M1 104) by a first signal via (126), the first signal via (126, 210a) comprising a width substantially similar to a width of the first signal trace (124);
a second signal trace (208b) residing in the second metal layer (M2 106), the second signal trace (208b) electrically coupled to a second signal pad (204b) residing in the first metal layer (M1 104) by a second signal via, the second signal via (210b) comprising a width substantially similar to a width of the second signal trace (208b); and
a ground trace (120, 212b) residing in the second metal layer (M2 106) between the first signal trace (208a) and the second signal trace (208b), the ground trace (120, 212b) electrically coupled to the ground plane (130, 206) by a ground via (122, 214b), the ground via (122, 214b) comprising a width substantially similar to a width of the ground trace (120, 212b).

2. The package substrate of claim 1, wherein the ground trace (120, 212b) is a first ground trace electrically coupled to the ground plane (130, 206) by a first ground via (122, 214b), the package substrate further comprising:
a second ground trace (212a) residing in the second metal layer (M2 106), the second ground trace (212a) electrically coupled to the ground plane (130, 206) by a second ground via (214a), the second ground via (214a) comprising a width substantially similar to a width of the second ground trace (212a);
wherein the first signal trace (208a) resides between the first ground trace (212b) and the second ground trace (212a).

3. The package substrate of claim 2, wherein the first ground trace (212b) is electrically connected to the second ground trace (212a) by the ground plane (130, 206).

4. The package substrate of claim 3, wherein the ground plane (130, 206) comprises a patterned metal line electrically coupled to the first ground via (214b) and the second ground via (214a).

5. The package substrate of claim 3, wherein the ground plane (130, 206) comprises a ground plane on the first metal layer (M1 104) spanning an area of the first metal layer that covers the first signal trace.

6. The package substrate of claim 5, wherein the package substrate comprises two adjacent signal traces in the second metal layer, the two signal traces defining a differential pair of signal traces (412); and wherein the ground plane (130, 206) comprises a gap in a region of the first metal layer above the two adjacent signal traces defining the differential pair of signal traces.

7. The package substrate of any of claims 1-6, wherein the ground plane (130, 206) is a first ground plane, the package substrate further comprising a third metal layer (M3 108), the third metal layer comprising a second ground plane (110), the second metal layer (M2 106) between the first metal layer (M1 104) and the third metal layer (M3 108), the second ground plane (110) electrically connected to the ground trace (120, 212b) by the first ground plane (130, 206) in the first metal layer (M1 104).

8. The package substrate of any of claims 1-6, wherein the ground via (122) comprises one of a self-aligned via or a zero-misaligned via; and wherein the first signal via (126, 210a) and the second signal via (210b) comprise one of a self-aligned via or a zero-misaligned via.

9. The package substrate of any of claims 1-6, wherein the package substrate comprises a plurality of signal traces (208a-c) in the second metal layer and a plurality of ground traces (212a-c) in the second metal layer (M2 106), and wherein a number of signal traces is equal to a number of ground traces.

10. A method (700) of forming a package substrate comprising:
forming (704) a substrate ground plane in a third metal layer of a substrate;
forming (706) a plurality of traces comprising a predetermined trace width in a second metal layer of the substrate;
forming a signal via (708) on a first subset of traces of the plurality of traces, wherein forming the signal via comprises forming the signal via to a width substantially similar to the predetermined trace width, and wherein the first subset of traces comprises alternating traces;
forming a ground via (708) on a second subset of traces of the plurality of traces, the second subset different from the first subset of traces, wherein forming the ground via comprises forming the ground via to a width substantially similar to the predetermined trace width, and wherein the second subset of traces comprises alternating traces; and
forming a surface ground plane (710) on a first metal layer, the surface ground plane on the first metal layer electrically connected to at least one ground trace by the ground via.

11. The method of claim 10, further comprising forming a signal pad on the first metal layer, the signal pad electrically connected to at least one signal trace by the signal via.

12. The method of any of claims 10-11, further comprising forming a substrate ground via in the second metal layer, the substrate ground via electrically connected to the substrate ground plane and to the surface ground plane.

13. The method of any of claims 10-11, wherein forming the surface ground plane comprises additive processing to form a patterned metal layer on the first metal layer of the package substrate.

14. A computing device (800) comprising:
a processor (804) mounted on a substrate according to claim 1;
a communications logic unit (808) within the processor; and
a memory (806) within the processor.

15. The computing device of claim 14, wherein the ground trace is a first ground trace and the ground via is a first ground via, the substrate further comprising:
a second ground trace residing in the second metal layer, the first signal trace between the first ground trace and the second ground trace, the second ground trace electrically coupled to the ground plane by a second ground via, the second ground via comprising a width substantially similar to a width of the second ground trace; and
a third ground trace residing in the second metal layer, the second signal trace between the first ground trace and the third ground trace, the third ground trace electrically coupled to the ground plane by a third ground via, the third ground via comprising a width substantially similar to a width of the third ground trace. |
BACKGROUND

Packaging size for semiconductor products can contribute to overall device scale. For mobile devices, packaging size can facilitate an overall form-factor reduction. Packaging size can also limit product performance due to restrictions in board layout and density.

Prior art document US 2014/0071646 A1 discloses a routing structure that includes alternating signal traces and ground traces separated by a dielectric. The routing structure also includes pads to which the traces are coupled.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a schematic diagram of a cross-sectional view of a package substrate that includes a via architecture in accordance with some embodiments of the present disclosure.
FIG. 1B is a schematic diagram of a cross-sectional view of another example package substrate that includes a via architecture in accordance with some embodiments of the present disclosure.
FIG. 2 is a schematic diagram of a top view of a package substrate that includes a via architecture and a ground plane in accordance with some embodiments of the present disclosure.
FIG. 3 is a schematic diagram of a perspective cutaway view of an example package substrate in accordance with some embodiments of the present disclosure.
FIG. 4 is a schematic diagram of a perspective cutaway view of another example package substrate in accordance with some embodiments of the present disclosure.
FIG. 5 is a schematic diagram of a perspective cutaway view of another example package substrate in accordance with some embodiments of the present disclosure.
FIG. 6 is a schematic diagram of a perspective cutaway view of another example package substrate in accordance with some embodiments of the present disclosure.
FIG. 7 is a process flow diagram for forming a package substrate that includes self-aligned or zero-misaligned vias and a top metal layer in accordance with some embodiments of the present disclosure.
FIG.
8 is a schematic diagram of a computing device in accordance with some embodiments of the present disclosure.
Figures may not be drawn to scale.

DETAILED DESCRIPTION

A package substrate is disclosed as recited in claim 1. A method of forming a package substrate is disclosed as recited in claim 10. A computing device comprising a processor mounted on a substrate according to claim 1 is disclosed as recited in claim 14. Further embodiments are recited in the dependent claims.

Described herein are via configurations with surface ground plane designs for increasing routing trace densities. In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the present disclosure may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that the present disclosure may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations.

Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the present disclosure; however, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.

The terms "over," "under," "between," and "on" as used herein refer to a relative position of one material layer or component with respect to other layers or components.
For example, one layer disposed over or under another layer may be directly in contact with the other layer or may have one or more intervening layers. Moreover, one layer disposed between two layers may be directly in contact with the two layers or may have one or more intervening layers. By contrast, a first layer "on" a second layer is in direct contact with that second layer. Similarly, unless explicitly stated otherwise, one feature disposed between two features may be in direct contact with the adjacent features or may have one or more intervening layers.

Package z-height is a differentiator for today's semiconductor products, especially in the mobile arena. Two of the limiting factors for z-height reduction are power delivery and input/output (I/O) routing.

This disclosure describes an architecture for the top package substrate layers that includes a via configuration that can increase I/O routing density, which can lead to z-height reduction through layer count reduction and/or layer thickness reduction by using self-aligned vias (SAVs) or zero-misaligned vias (ZMVs) and a patterned top metal layer. The use of lithographically defined SAVs or ZMVs and the use of a closely spaced ground plane allows single-ended or differential-pair high speed I/Os to be routed in a single metal layer instead of across multiple metal layers. The via configuration facilitates a 1:1 ground-to-signal trace ratio in the routing layer. This via configuration allows a wide range of impedances to be closely matched. For example, increasing the distance to the top ground (GND) plane changes the impedance, while changing the distance to the bottom GND plane changes the impedance less so. The via configuration described herein also decreases crosstalk between neighboring signal traces.

FIG. 1A is a schematic diagram of a cross-sectional view of a package substrate 100 that includes a via architecture in accordance with embodiments of the present disclosure.
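The impedance sensitivity noted above (moving the reference GND plane farther from a trace raises its characteristic impedance) can be illustrated with a first-order sketch. The formula below is the well-known IPC-2141 surface-microstrip approximation, and all dimensions and the dielectric constant are illustrative assumptions, not values from this disclosure; an actual package stripline geometry would require a field solver.

```python
import math

def microstrip_z0(h_um: float, w_um: float, t_um: float, er: float) -> float:
    """IPC-2141 surface-microstrip approximation of characteristic impedance
    in ohms. h_um: dielectric height to the reference GND plane, w_um: trace
    width, t_um: trace thickness, er: relative permittivity of the dielectric."""
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_um / (0.8 * w_um + t_um))

# Illustrative package-scale dimensions (assumed, not from the disclosure):
# a 9 um wide, 15 um thick copper trace in a build-up film with er ~ 3.2.
for h in (20.0, 30.0, 40.0):
    print(f"h = {h:.0f} um -> Z0 ~ {microstrip_z0(h, 9.0, 15.0, 3.2):.1f} ohm")
```

As the sketch shows, Z0 grows with the spacing to the reference plane, which is the knob that the surface ground plane spacing discussed above effectively tunes.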
The package substrate 100 includes a substrate 102. The package substrate can include a plurality of metallization interconnect layers for integrated circuits. A package substrate may include alternating metal and dielectric layers. Among the metal layers, some may form ground or power planes and others may be used for signal traces.The substrate 102 includes metallization interconnect layers for integrated circuits. Based on aspects of the present disclosure, the number of metal layers can be reduced (e.g., by a metal layer pair, such as a top and bottom metal layer). In FIG. 1A , the substrate 102 includes three metal layers: M1 104, M2 106, and M3 108, each separated by a dielectric layer. In at least some embodiments, the substrate 102 includes interconnects, for example, vias, configured to connect the metallization layers M1 104, M2 106, and M3 108.The M3 metal layer 108 is typically formed first. Here, the M3 metal layer generally includes a M3 ground plane 110. The M3 ground plane 110 can be interconnected to upper layers by a via 112. The M3 metal layer 108 also includes power routing lines 118 and corresponding vias. The M3 ground plane 110 can also be coupled to the M2 metal layer 106 by a ground via 114. In the M2 metal layer, a ground pad 116 can electrically couple the M2 ground traces (e.g., ground trace 120) to the M3 ground plane 110.The M2 metal layer 106 generally includes the high speed input/output signal traces (e.g., signal trace 124) and the ground traces (e.g., ground trace 120). The signal trace 124 is electrically coupled to the M1 signal pad 132 by a SAV or ZMV 126. Likewise, the ground trace 120 can be electrically coupled to the M1 metal layer ground plane 130 by a SAV or ZMV 122. 
The M2 metal layer also includes other vias and interconnects, such as the M2 ground landing pad 128 and the M2 power landing pad 129.

The top metal layer, or M1 metal layer 104, can include first level interconnect (FLI) pads, such as the signal pad 132 and the power interconnect pad 134. The M1 metal layer 104 can also include a surface metal that can serve as an M1 ground plane 130. Solder bumps 144a-144c can be used to interconnect the various circuit elements to other chips. The M1 metal layer 104 can also include a solder resist 142.

The M1 metal layer 104 is coupled to the M2 metal layer 106 by SAVs or ZMVs coupled to traces in the M2 metal layer 106. The SAV or ZMV 126 connects the signal trace 124 to signal bump 144b. The ground trace 120 is coupled to the M1 ground plane 130 by an SAV or ZMV 122. Certain ground traces in the M2 metal layer 106 can be coupled to the ground pad 116 in the M2 metal layer 106. These ground traces would be coupled to the M1 metal layer ground plane 130 through the M3 ground plane 110. The M1 metal layer ground plane 130 is connected to ground bump 144a on the die as well as to the M3 ground plane 110 in the substrate. This helps adjust impedance, ties all ground lines to the same potential, reduces crosstalk, and enables the high speed input/output (HSIO) SAV/ZMV I/O to reach optimum performance.

FIG. 1B is a schematic diagram of a cross-sectional view of another example package substrate 150 that includes a via architecture in accordance with embodiments of the present disclosure. The via architecture of package substrate 150 is similar to that shown in FIG. 1A. The surface metal of the M1 ground plane 130 in the embodiments illustrated by FIG. 1A can be a standard-thickness metal layer (usually 10-15 µm thick). However, if such metal thickness is not required (for instance, if all I/Os, whether high speed or low speed, can be routed on M2), the top metal can serve only as first level interconnect (FLI) pads.
For example, pad 162 can serve as a signal pad, while pad 164 can serve as a power pad. The M1 ground plane 160 can replace the thicker M1 metal layer ground plane 130 shown in FIG. 1A. The FLI pads can be made as thin as possible considering FLI requirements. To facilitate signaling, a copper thickness of only 1.5 µm is sufficient to act as an effective ground plane. To have a stable FLI, this copper (Cu) layer can be followed by a barrier layer of nickel (Ni) and then thin layers of palladium (Pd) and gold (Au). The total thickness can be at or below 5 µm, which can further reduce the package thickness. The thin metal layer for the M1 ground plane 160 on top has a thickness between 2-6 µm and can be formed from copper. Other metals typical for a surface finish can also be used, depending on the application.

The SAV and ZMV do not need a large pad to land on, so the density of the traces can be increased and the traces can be formed on a single metal layer (e.g., M2 106). Because the traces are on a single metal layer, ground traces can be formed between each signal trace (except for differential pairs). For example, ground trace 120 resides between signal trace 156 and signal trace 152. Signal trace 152 resides between ground traces 120 and 154. To provide grounding, the ground traces can be connected to the top/surface ground layer (e.g., M1 metal layer ground plane 130) and to a ground plane below (e.g., M3 ground plane 110).

In general, the via architecture described herein lowers the z-height of the package substrate and reduces near-end and far-end crosstalk. Due to the use of SAVs or ZMVs, a higher I/O density can be achieved, with the goal of routing all critical HSIO lines in a single layer. This is achieved without changing design rules, e.g., without requiring new or advanced patterning equipment.

One of the constraints of increasing the number of I/O lines in a single metal layer is that crosstalk can increase as signaling lines get closer together.
The increased line density of the via architecture described herein allows for the placement of ground lines on both sides of every signal line and ground lines on both sides of differential pair lines, satisfying impedance targets and improving far- and near-end crosstalk.To meet the impedance requirements and improve signaling, the ground lines should have the same potential. Since there is no alignment margin for downward vias (that are not SAV or ZMV), the thin metal layer/surface finish of the M1 ground plane 160 used for FLI attach is used to connect all ground lines. The ground connectivity is completed by vias going down to the package substrate GND layers (e.g., through M3 ground plane 110) wherever alignment margin allows it. This is what is illustrated by the ground pad 116 and corresponding ground via 114 in FIG. 1A-1B .Electrical signals, such as power and/or input/output (I/O) signals, may be routed to and/or from devices through the one or more metal (interconnect) layers. The one or more interconnect layers M1 104, M2 106, and M3 108 may form a metallization stack (also referred to as an "interlayer dielectric stack") of the package substrate.The routing traces (e.g., signal trace 124 and ground trace 120) may be arranged within the M2 metal layer 106 to route electrical signals according to a wide variety of designs. In some embodiments, the routing traces may include traces filled with an electrically conductive material such as a metal.The interconnect layers may include a dielectric material disposed between the interconnect structures. For example, the M2 metal layer 106 can include a dielectric material 158 between the traces and other M2 metal layer structures. 
In some embodiments, the dielectric material 158 disposed between the interconnect structures in different ones of the interconnect layers M1 104, M2 106, and M3 108 may have different compositions; in other embodiments, the composition of the dielectric material 158 between different interconnect layers M1 104, M2 106, and M3 108 may be the same.

The package substrate 100 may include a solder resist material 142 (e.g., polyimide or similar material) and one or more conductive contacts 131, 132, and 134 formed on the M1 metal layer 104. In FIG. 1A, the conductive contacts 131, 132, and 134 are illustrated as taking the form of bond pads. The conductive contact 132 may be electrically coupled with the SAV/ZMV 126 and configured to route the electrical signals using signal trace 124 in the M2 metal layer 106. Likewise, conductive contact 131 may be electrically coupled with the SAV/ZMV 122 and configured to be a ground line routed by ground trace 120.

Solder bump 144a may be formed on the ground conductive contact 131 to mechanically and/or electrically couple a package including the package substrate 100 with another component (e.g., a circuit board). The package substrate 100 may include additional or alternate structures to route the electrical signals from the metal layers 104-108; for example, the conductive contacts may include other analogous features (e.g., posts) that route the electrical signals to external components.

FIG. 2 is a schematic diagram of a top view 200 of a package substrate 100 that includes a via architecture and a ground plane in accordance with embodiments of the present disclosure. FIG. 2 illustrates a package substrate 202 that includes a bump field with solder bump landing pads (e.g., signal pads 204a-b and ground pad 216). The M1 ground plane 206 is illustrated. The ground traces and signal traces are also illustrated, though it is understood that the ground traces are in the M2 metal layer and are shown for illustrative purposes.
The SAV and ZMV are also illustrated, and likewise, it is understood that the SAV and ZMV are in the M2 metal layer.For example, FIG. 2 illustrates a signal trace 208a routed to solder bump signal pad 204a and connected by a SAV/ZMV 210a, signal trace 208b routed to solder bump 204b and connected by a SAV/ZMV 210b. The signal traces are in the M2 metal layer and are presented for illustrative purposes.FIG. 2 also illustrates a ground trace between each signal trace. For example, signal trace 208a is adjacent to ground traces coupled to ground SAV/ZMV 214a and SAV/ZMV 214b. Signal trace 208a is shown to wind between the adjacent ground traces to reach the signal pad 204a. The ground traces may extend as far as they can before termination.FIG. 2 also illustrates the ground trace routing. A ground pad 216 can be electrically connected to the M1 ground plane 206. The ground pad 216 can be connected to the M2 ground trace by a ground SAV/ZMV 214a (the ground trace is not shown). The M1 ground plane 206 can be patterned to connect to the ground pad 216. FIG. 2 shows an M1 patterned ground line 212a coupling the grounding pad 216 to the M1 ground plane 206. FIG. 2 illustrates how the M1 ground plane can be patterned to accommodate the increased density of signal lines. Another example is shown as ground SAV/ZMV 214c, which couples to the M1 ground plane 206 by a patterned M1 metal line 212c and couples to the M1 ground plane 206 at a location 218. The signal trace 208b is in a lower layer (e.g., M2 layer) than the patterned M1 metal line 212c, highlighting the ability to couple M2 layer ground traces to a common ground using the SAV/ZMV configuration.FIGS. 3-6 illustrate various embodiments for the M1 metal layer ground plane configuration. Each embodiment facilitates a decrease in near-end and far-end crosstalk. It is understood that FIGS. 3-6 illustrate example configurations, and are not limiting. 
Other ground plane configurations can also be used to achieve similar results. FIGS. 3-6 further illustrate how the signal traces are adjacent to ground traces, with the exception of differential pair traces, which are two signal traces adjacent to a ground trace (shown in FIG. 6 ).FIG. 3 is a schematic diagram of a perspective cutaway view of an example package substrate 300 in accordance with embodiments of the present disclosure. Package substrate 300 includes small patches of metal on the surface 302 that are connected to the ground traces with SAV/ZMV. The surface metal configuration of FIG. 3 uses minimal surface finish to tie the ground traces on the routing layer (M2 metal layer) to the main ground structure and to the ground bump 304.For example, a patterned ground line 320a can electrically couple ground trace 306a with ground trace 306b. Likewise, patterned ground line 320b can electrically couple ground trace 306c with ground traces 306d and 306e. Surface ground plane patches 322a and 322b can be coupled with an M3 ground plane (not shown) by a via.FIG. 4 is a schematic diagram of a perspective cutaway view of another example package substrate 400 in accordance with embodiments of the present disclosure. The surface ground plane 404 resides on the package surface 402 and extends from the location of die-level ground bump field 408 to the edge of the die 410. This surface ground plane 404 also has a slot 414 (i.e. opening) over differential pair signal lines 412. The surface ground plane 404 also includes a pad 406 for connecting the surface ground plane 404 to the M3 metal ground plane (not shown).FIG. 5 is a schematic diagram of a perspective cutaway view of another example package substrate 500 in accordance with embodiments of the present disclosure. The package substrate 500 is similar to the package substrate 400. The package substrate 500 includes a larger surface ground plane 502 that does not include a slot for differential pair traces. 
The surface ground plane 502 extends from the bump field 508, for the entire length of the traces, to the location where the signal traces via down to the second-layer interconnect field. The surface ground plane 502 also includes a pad 504 for connecting the surface ground plane 502 to the M3 metal ground plane (not shown).

FIG. 6 is a schematic diagram of a perspective cutaway view of another example package substrate 600 in accordance with embodiments of the present disclosure. Package substrate 600 can be considered a combination of the surface ground plane configurations illustrated in FIGS. 4 and 5. The surface ground plane 602 extends from the bump field 608 to the end of the routing. The surface ground plane 602 includes a slot 610 over differential pair signal traces 612. The surface ground plane 602 also includes a pad 604 for connecting the surface ground plane 602 to the M3 metal ground plane (not shown).

FIG. 7 is a process flow diagram 700 for forming a package substrate that includes self-aligned vias (SAVs) or zero-misaligned vias (ZMVs) and a top metal layer in accordance with embodiments of the present disclosure. A core metal material can be provided (702). The core metal material can be patterned to form the M3 metal layer structures, such as the M3 ground plane (704). The core metal material can be further processed to form the M2 metal layer structures. For example, the M2 metal layer routing traces can be patterned and formed (706). The M2 metal layer SAVs and/or ZMVs can be patterned and formed (708). The formation of SAVs and ZMVs can be performed by known techniques, as can the patterning and formation of the routing traces. The formation of an SAV or ZMV can result in a via that has a width substantially similar to a width of the connected trace. The length of the SAV or ZMV can be changed to suit the connections and trace routing.
The z-height of the via can be controlled based on a desired overall z-height of the M2 metal layer and/or the overall package z-height.By way of an example, a zero-misaligned via (ZMV) formation process can use a dual-tone photoresist that includes two layers of a photomask. The photomask is rigid and substantially planar, and can be formed using known techniques that are more precise than standard via-pad registration techniques. Therefore, via-pad misalignment can be small, which allows the size of the pad to be reduced to a size the same as, or similar to, the size of the via. In some example cases, the use of a ZMV can facilitate an I/O connection density of greater than 20 I/O/mm/layer, such as between 50-80 I/O/mm/layer and above, including as many as 100-250 I/O /mm/layer.Similarly, a mask can be used to form self-aligned vias (SAVs). Self-aligned vias can be formed using known techniques. For example, an SAV can be created by forming an Mx+1 layer over the Mx layer traces (and insulating layer(s)). The Mx+1 layer can be patterned using a hardmask or via mask to form a trench exposing the Mx metal layer trace. The SAV metal (e.g., copper) can be deposited within the trench on the trace using known metal deposition techniques. The resulting via (i.e., SAV) can have the same or similar width as the underlying trace. The length and height of the SAV can be controlled based on implementation choices.The M1 metal layer (e.g., M1 ground plane) can be patterned and formed (710). The patterning and formation of the top metal layer M1 can be achieved using substrate semi-additive manufacturing (including seed layer deposition, lithography, plating, resist removal, and seed layer etch), or using subtractive or additive processing approaches. 
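The I/O escape-density figures quoted above follow directly from the routing pitch: traces per mm per layer is simply 1000 divided by the pitch in micrometers. A minimal sketch (the line/space combinations below are assumed for illustration, not taken from this disclosure):

```python
def io_density_per_mm(trace_um: float, space_um: float) -> float:
    """Traces per mm per layer for a uniform pitch = trace width + spacing."""
    return 1000.0 / (trace_um + space_um)

# Assumed line/space (L/S) examples, in micrometers:
for trace, space in ((25.0, 25.0), (9.0, 3.0), (2.0, 2.0)):
    print(f"{trace:.0f}/{space:.0f} um L/S -> "
          f"{io_density_per_mm(trace, space):.0f} traces/mm/layer")
```

Note that with the 1:1 ground-to-signal trace ratio described herein, the usable signal density is half the raw trace density, since every other trace is a ground trace.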
One advantage of additive manufacturing may be that the process flow is simplified by combining the deposition and patterning into one step, instead of requiring the multiple steps used in conventional semi-additive manufacturing. Thus, the M1 metal layer ground plane with patches and slots can be created in a single step.Some examples of additive processing include:1. Cold spray, in which powders of the conductive material to be deposited are accelerated through a nozzle at high speeds, forming a mechanical bond upon impact with the substrate surface. Patterning can be achieved by controlling the nozzle dimensions and movement, and/or by spraying the powders through a shadow mask with fine features. This approach is likely to produce high conductivity films due to the absence of organic binders or solvent, and the ability to keep the substrate at room temperature during spraying, thus reducing oxidation.2. Inkjet printing in which conductive inks are printed (e.g., using an aerosol jet printer) directly on the substrate and subsequently cured or sintered to remove the solvent. This approach is likely to produce very thin films and small feature sizes (e.g., ∼12 um line width has been demonstrated using an aerosol jet printer).3. Stencil printing of a conductive paste.4. Laser assisted selective electroless plating, in which the regions to be patterned with the conductive layer are first functionalized using self-assembled monolayers and laser exposure, followed by electroless plating which only occurs in the functionalized areas.The package substrate can then undergo solder resist patterning, surface finishing, and solder bump formation (712).The use of zero-misalignment via-pad structures or self-aligned via-pad structures, as described herein, substantially decreases the via and pad sizes while increasing achievable density such as input/output connections/mm/layer. 
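The two "alternating" via subsets recited in the method (signal vias formed on one subset of equal-width traces, ground vias on the interleaved subset) can be sketched as a trivial assignment. This is a hypothetical helper written for illustration, not code from the disclosure:

```python
def assign_via_types(num_traces: int) -> list:
    """Interleave ground and signal vias over equal-width routing traces:
    even-indexed traces get ground vias, odd-indexed traces get signal vias."""
    return ["ground" if i % 2 == 0 else "signal" for i in range(num_traces)]

kinds = assign_via_types(8)
print(kinds)

# Every interior signal trace is shielded by ground traces on both sides,
# and (for an even trace count) the signal-to-ground ratio is 1:1.
assert kinds.count("signal") == kinds.count("ground")
assert all(kinds[i - 1] == kinds[i + 1] == "ground"
           for i in range(1, len(kinds) - 1) if kinds[i] == "signal")
```

Differential pairs are the one exception to strict alternation: two adjacent signal traces share their outer ground neighbors, as in the slotted-ground-plane variants of FIGS. 4 and 6.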
Aspects of the present embodiments have advantages, such as a decrease in manufacturing costs, a decrease in z-height, and increased electrical performance for off-package I/O connections. Embodiments that provide self-aligned or zero-misaligned via-pad structures as described herein advantageously enable 2.5D packaging (e.g., co-packaging at least two of a central processing unit (CPU), a memory, and a graphics processing unit (GPU); die splitting; quasi-monolithic integration; and other 2.5D packaging techniques). Embodiments can facilitate a reduction in manufacturing cost, decreased package z-height, increased electrical performance, and increased scalability.

FIG. 8 is a schematic diagram of a computing device in accordance with embodiments of the present disclosure. The computing device 800 can include a processor, as well as a memory and communications circuitry. The processor and other circuitry can be supported by a package substrate. The substrate can include routing traces in a single metal layer (e.g., the M2 metal layer) by using self-aligned or zero-misaligned vias as well as a surface ground plane (e.g., the M1 metal layer ground plane). The routing traces can alternate between signal traces and ground traces so that the density of the traces increases while also providing ground shielding against crosstalk between signal traces.

The computing device 800 illustrated in FIG. 8 in accordance with one embodiment of the disclosure may include a number of components. In one embodiment, these components are attached to one or more motherboards. In an alternate embodiment, some or all of these components are fabricated onto a single system-on-a-chip (SoC) die. The components in the computing device 800 include, but are not limited to, an integrated circuit chip 802 and at least one communications logic unit 808.
In some implementations, the communications logic unit 808 is fabricated within the integrated circuit chip 802 while in other implementations the communications logic unit 808 is fabricated in a separate integrated circuit chip that may be bonded to a substrate or motherboard that is shared with or electronically coupled to the integrated circuit chip 802. The integrated circuit chip 802 may include a CPU 804 as well as on-die memory 806, often used as cache memory, that can be provided by technologies such as embedded DRAM (eDRAM) or spin-transfer torque memory (STTM or STT-MRAM). Computing device 800 may include other components that may or may not be physically and electrically coupled to the motherboard or fabricated within an SoC die. These other components include, but are not limited to, volatile memory 810 (e.g., DRAM), nonvolatile memory 812 (e.g., ROM or flash memory), a GPU 814, a digital signal processor (DSP) 816, a crypto processor 842 (a specialized processor that executes cryptographic algorithms within hardware), a chipset 820, an antenna 822, a display (e.g., a touchscreen display) 824, a touchscreen controller 826, a battery 828 or other power source, a power amplifier (not shown), a voltage regulator (not shown), a global positioning system (GPS) device 830, a compass, a motion coprocessor or sensors 832 (that may include an accelerometer, a gyroscope, and a compass), a speaker 834, a camera 836, user input devices 838 (such as a keyboard, mouse, stylus, and touchpad), and a mass storage device 840 (such as hard disk drive, compact disc (CD), digital versatile disk (DVD), and so forth). The communications logic unit 808 enables wireless communications for the transfer of data to and from the computing device 800.
The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communications logic unit 808 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 800 may include a plurality of communications logic units 808. For instance, a first communications logic unit 808 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communications logic unit 808 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. In various embodiments, the computing device 800 may be a laptop computer, a netbook computer, a notebook computer, an ultrabook computer, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 800 may be any other electronic device that processes data. It is understood that the subject matter of the present description is not necessarily limited to specific applications illustrated in FIGS. 1-8.
The subject matter may be applied to other microelectronic device and assembly applications, as well as any appropriate heat removal application, as will be understood by those skilled in the art. The following paragraphs provide examples of various ones of the embodiments disclosed herein. Example 1 is a package substrate that includes a substrate including a first metal layer and a second metal layer; a ground plane residing on the first metal layer; a first signal trace residing in the second metal layer, the first signal trace electrically coupled to a first signal pad residing in the first metal layer by a first signal via, the first signal via including a width substantially similar to a width of the first signal trace; a second signal trace residing in the second metal layer, the second signal trace electrically coupled to a second signal pad residing in the first metal layer by a second signal via, the second signal via including a width substantially similar to a width of the second signal trace; and a ground trace residing in the second metal layer between the first signal trace and the second signal trace, the ground trace electrically coupled to the ground plane by a ground via, the ground via including a width substantially similar to a width of the ground trace. Example 2 may include the subject matter of example 1, wherein the ground trace is a first ground trace electrically coupled to the ground plane by a first ground via, the package substrate further including a second ground trace residing in the second metal layer, the second ground trace electrically coupled to the ground plane by a second ground via, the second ground via including a width substantially similar to a width of the second ground trace; wherein the first signal trace resides between the first ground trace and the second ground trace. Example 3 may include the subject matter of example 2, wherein the first ground trace is electrically connected to the second ground trace by the ground
plane. Example 4 may include the subject matter of example 3, wherein the ground plane includes a patterned metal line electrically coupled to the first ground via and the second ground via. Example 5 may include the subject matter of example 3, wherein the ground plane includes a ground plane on the first metal layer spanning an area of the first metal layer that covers the first signal trace. Example 6 may include the subject matter of example 5, wherein the package substrate includes two signal traces in the second metal layer, the two signal traces defining a differential pair of signal traces; and wherein the ground plane includes a gap in a region of the first metal layer above the differential pair of signal traces. Example 7 may include the subject matter of any of examples 1-6, wherein the ground plane is a first ground plane, the package substrate further including a third metal layer, the third metal layer including a second ground plane, the second metal layer between the first metal layer and the third metal layer, the second ground plane electrically connected to the ground trace by the first ground plane in the first metal layer. Example 8 may include the subject matter of any of examples 1-7, wherein the first ground plane is electrically coupled to the second ground plane by a via traversing the second metal layer. Example 9 may include the subject matter of any of examples 1-8, wherein the ground via includes one of a self-aligned via or a zero-misaligned via. Example 10 may include the subject matter of any of examples 1-9, wherein the first signal via and the second signal via include one of a self-aligned via or a zero-misaligned via. Example 11 may include the subject matter of any of examples 1-10, wherein the package substrate includes a plurality of signal traces in the second metal layer and a plurality of ground traces in the second metal layer, and wherein a number of signal traces is equal to a number of ground traces. Example 12 may include the subject
matter of any of examples 1-11, wherein the ground plane includes a thickness between 10 µm and 15 µm. Example 13 may include the subject matter of any of examples 1-12, wherein the ground plane includes a thickness below 6 µm. Example 14 may include the subject matter of any of examples 1-13, wherein the ground plane includes copper. Example 15 may include the subject matter of any of examples 1-14, and can also include a signal solder bump electrically coupled to the first signal pad; a ground pad on the first metal layer, the ground pad electrically coupled to the ground plane; and a ground solder bump electrically coupled to the ground pad. Example 16 may include the subject matter of example 15, wherein the first signal pad is a first level interconnect (FLI). Example 17 may include the subject matter of example 16, wherein the FLI includes copper of a thickness between 1.4 µm and 1.6 µm. Example 18 may include the subject matter of any of examples 1-17, wherein the first signal trace and the second signal trace are high speed input/output traces. Example 19 may include the subject matter of any of examples 1-18, wherein the package substrate includes a die edge, and wherein the ground plane includes a surface metal on the first metal layer extending to the die edge. Example 20 is a method of forming a package substrate that includes forming a substrate ground plane in a third metal layer of a substrate; forming a plurality of traces including a predetermined trace width in a second metal layer of the substrate; forming a signal via on a first subset of traces of the plurality of traces, wherein forming the signal via includes forming the signal via to a width substantially similar to the predetermined trace width, and wherein the first subset of traces includes alternating traces; forming a ground via on a second subset of traces of the plurality of traces, the second subset different from the first subset of traces, wherein forming the ground via includes
forming the ground via to a width substantially similar to the predetermined trace width, and wherein the second subset of traces includes alternating traces; and forming a surface ground plane on a first metal layer, the surface ground plane on the first metal layer electrically connected to at least one ground trace by the ground via. Example 21 may include the subject matter of example 20, and can also include forming a signal pad on the first metal layer, the signal pad electrically connected to at least one signal trace by the signal via. Example 22 may include the subject matter of any of examples 20-21, further including forming a substrate ground via in the second metal layer, the substrate ground via electrically connected to the substrate ground plane and to the surface ground plane. Example 23 may include the subject matter of any of examples 20-22, wherein forming the surface ground plane includes additive processing to form a patterned metal layer on the first metal layer of the package substrate. Example 24 may include the subject matter of example 23, wherein the additive processing includes one or more of cold spray, inkjet printing, stencil printing of a conductive paste, or laser-assisted selective electroless plating. Example 25 is a computing device that includes a processor mounted on a substrate; a communications logic unit within the processor; and a memory within the processor.
The substrate can include a first metal layer and a second metal layer; a ground plane residing on the first metal layer; a first signal trace residing in the second metal layer, the first signal trace electrically coupled to a first signal pad residing in the first metal layer by a first signal via, the first signal via including a width substantially similar to a width of the first signal trace; a second signal trace residing in the second metal layer, the second signal trace electrically coupled to a second signal pad residing in the first metal layer by a second signal via, the second signal via including a width substantially similar to a width of the second signal trace; and a ground trace residing in the second metal layer between the first signal trace and the second signal trace, the ground trace electrically coupled to the ground plane by a ground via, the ground via including a width substantially similar to a width of the ground trace. Example 26 may include the subject matter of example 25, wherein the ground trace is a first ground trace and the ground via is a first ground via. The substrate can include a second ground trace residing in the second metal layer, the first signal trace between the first ground trace and the second ground trace, the second ground trace electrically coupled to the ground plane by a second ground via, the second ground via comprising a width substantially similar to a width of the second ground trace; and a third ground trace residing in the second metal layer, the second signal trace between the first ground trace and the third ground trace, the third ground trace electrically coupled to the ground plane by a third ground via, the third ground via comprising a width substantially similar to a width of the third ground trace.
Systems and methods for controlling isochronous data streams are disclosed. Particular aspects of the present disclosure are designed to be used with almost any isochronous data stream, but are well-suited for use with the Universal Serial Bus (USB) protocol. Further, aspects of the present disclosure are flexible to accommodate existing configuration possibilities within the USB protocol as well as accommodate proposed future changes in the USB protocol. The flexibility of the systems and methods is provided by calculating: (1) drift between a USB host system time and the application and (2) drift between the USB host system and a USB device clock. Based on these two drift calculations, a time stamp may be synthesized to program a next delivery schedule. Using this time stamp, jitter correction can take place and uniformly-sized packets may be assembled to pass to an application processor. |
What is claimed is: 1. A method for controlling communication in a Universal Serial Bus (USB) system, comprising: receiving variably-sized packets at a first processor having a USB driver; assembling uniformly-sized packets at the first processor; and passing the uniformly-sized packets to a second processor for use by applications at an application layer in a protocol stack. 2. The method of claim 1, wherein the first processor and the second processor are integrated into a single integrated circuit. 3. The method of claim 1, wherein receiving the variably-sized packets at the first processor comprises receiving the variably-sized packets at a microprocessor. 4. The method of claim 1, wherein receiving the variably-sized packets at the first processor comprises receiving the variably-sized packets at an audio digital signal processor (ADSP). 5. The method of claim 1, wherein receiving the variably-sized packets at the first processor comprises receiving the variably-sized packets at an intermediate device between a peripheral and a host. 6. The method of claim 1, wherein receiving the variably-sized packets comprises receiving the variably-sized packets at a processor in a peripheral. 7. The method of claim 1, wherein assembling the uniformly-sized packets comprises using a bus frequency and a samples per packet to calculate a size. 8. The method of claim 1, wherein assembling the uniformly-sized packets comprises using a sampling frequency of content. Qualcomm Ref. No. 163404WO 9. The method of claim 1, wherein assembling the uniformly-sized packets comprises receiving a time stamp from a high resolution timer. 10.
A host comprising: an application processor; Universal Serial Bus (USB) hardware; and an audio digital signal processor (ADSP) configured to: receive variably-sized packets at the ADSP through the USB hardware; assemble uniformly-sized packets at the ADSP; and pass the uniformly-sized packets to the application processor for use by applications at an application layer in a protocol stack. 11. A host comprising: an application processor; Universal Serial Bus (USB) hardware; and a system on a chip (SoC) comprising a plurality of processors configured to: receive variably-sized packets at a first processor; assemble uniformly-sized packets at the first processor; and pass the uniformly-sized packets to a second processor for use by applications at an application layer in a protocol stack. 12. The host of claim 11, wherein the first processor comprises a microprocessor. 13. The host of claim 11, wherein the first processor comprises an audio digital signal processor (ADSP). 14. The host of claim 11, wherein the first processor is configured to assemble the uniformly-sized packets by using a bus frequency and a samples per packet to calculate a size. 15. The host of claim 11, wherein the first processor is configured to assemble the uniformly-sized packets by using a sampling frequency of content. 16. The host of claim 11, wherein the first processor is configured to assemble the uniformly-sized packets by receiving a time stamp from a high resolution timer. 17. A method for detecting drift in a Universal Serial Bus (USB) system, comprising: determining that a fractional sampling rate is used on a USB bus between an audio peripheral and a host; determining a first fractional remainder associated with the fractional sampling rate over a service interval; based on the first fractional remainder, calculating a whole number corresponding to a number of intervals required to have no fractional remainder; and checking drift each whole number of intervals. 18.
The method of claim 17, further comprising applying a drift correction based on checking the drift. 19. A processor comprising: an input; and a control system configured to: determine that a fractional sampling rate is used on a USB bus between an audio peripheral and a host; determine a first fractional remainder associated with the fractional sampling rate over a service interval; based on the first fractional remainder, calculate a whole number corresponding to a number of intervals required to have no fractional remainder; and check drift each whole number of intervals. 20. The processor of claim 19 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a global positioning system (GPS) device; a mobile phone; a cellular phone; a smart phone; a session initiation protocol (SIP) phone; a tablet; a phablet; a server; a computer; a portable computer; a mobile computing device; a wearable computing device; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; an automobile; a vehicle component; avionics systems; a drone; and a multicopter. 21. A method to synthesize a time stamp, comprising: receiving a run command from a data delivery handler; and summing an output from a high resolution timer and a computed absolute time stamp. 22. The method of claim 21, further comprising adding drift correction to the summing to synthesize the time stamp. 23. The method of claim 22, further comprising performing in-band drift detection. 24. The method of claim 22, further comprising performing out-of-band drift detection. 25.
The method of claim 22, further comprising adding a device drift accumulator output to a local clock drift accumulator output. 26. The method of claim 21, wherein summing comprises summing in a processor in a mobile terminal. 27. The method of claim 21, wherein summing comprises summing in a dongle. 28. A processor comprising: an audio data buffer; and a Universal Serial Bus (USB) audio client (UAC) configured to: receive variably-sized packets; assemble uniformly-sized packets; and pass the uniformly-sized packets to a second processor for use by applications at an application layer in a protocol stack. 29. The processor of claim 28, wherein the processor is positioned within a USB peripheral. 30. The processor of claim 28, wherein the processor is positioned in an intermediate device configured to sit between a peripheral and a host. 31. The processor of claim 28, wherein the processor is positioned in a host. 32. The processor of claim 28 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a global positioning system (GPS) device; a mobile phone; a cellular phone; a smart phone; a session initiation protocol (SIP) phone; a tablet; a phablet; a server; a computer; a portable computer; a mobile computing device; a wearable computing device; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; an automobile; a vehicle component; avionics systems; a drone; and a multicopter.
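Claims 7, 8, and 14 above recite calculating a uniform packet size from a bus frequency and a samples-per-packet value derived from the sampling frequency of the content. The following is only an illustrative sketch of such a calculation; the function name, parameter list, and example values are assumptions made here, not taken from the claims.

```python
def uniform_packet_size(sampling_hz: int, service_interval_us: int,
                        channels: int, bytes_per_sample: int) -> int:
    """Illustrative uniform packet size for one bus service interval.

    The samples-per-packet value follows from the content sampling
    frequency and the bus service interval, in the spirit of claims
    7, 8, and 14; channel count and sample width are included here
    only to turn samples into bytes.
    """
    samples_per_packet = sampling_hz * service_interval_us // 1_000_000
    return samples_per_packet * channels * bytes_per_sample


# 48 kHz stereo 16-bit audio over a 1 ms USB service interval:
# 48 samples * 2 channels * 2 bytes = 192 bytes per packet.
print(uniform_packet_size(48_000, 1_000, 2, 2))  # 192
```

For sampling rates that do not divide evenly into the service interval (e.g., 44.1 kHz over 1 ms), the integer division leaves a fractional remainder, which is exactly the situation the drift-detection method of claim 17 addresses.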
SYSTEMS AND METHODS FOR CONTROLLING ISOCHRONOUS DATA STREAMS
PRIORITY CLAIMS
[0001] The present application claims priority to U.S. Provisional Patent Application Serial No. 62/355,166, filed on June 27, 2016 and entitled "PROGRAMMABLE RATE-MATCHED DATA RATE OUTPUT REGULATOR FOR ISOCHRONOUS DATA STREAMS," the contents of which are incorporated herein by reference in their entirety. [0002] The present application also claims priority to U.S. Provisional Patent Application Serial No. 62/517,247, filed on June 9, 2017 and entitled "ISOCHRONOUS DATA STREAM CONTROL SYSTEMS AND METHODS," the contents of which are incorporated herein by reference in their entirety. [0003] The present application also claims priority to U.S. Patent Application Serial No. 15/631,807, filed on June 23, 2017 and entitled "SYSTEMS AND METHODS FOR CONTROLLING ISOCHRONOUS DATA STREAMS," the contents of which are incorporated herein by reference in their entirety.
BACKGROUND
I. Field of the Disclosure
[0004] The technology of the disclosure relates generally to handling arbitrary data streams on a data bus.
II. Background
[0005] Computing devices have become ubiquitous in contemporary living. The popularity of computing devices has exploded in part because of the ever-increasing functionality available on the computing devices. Concurrent with the increase in functionality has been an increase in the numbers and types of supplemental devices that may be associated with the computing devices. In some cases, the supplemental devices may be integrated into the computing devices, such as the integration of a camera into a smart phone. In other cases, the supplemental devices may be peripherals, such as audio headsets that are coupled to a computing device through some form of external interface. In both cases, various protocols have arisen to allow applications running on the computing device to interact with the supplemental devices as needed. [0006] One popular protocol is the Universal Serial Bus (USB) protocol.
USB exists in various flavors including full speed (FS), high speed (HS), and super speed (SS). Additionally, USB allows for various clock synchronization schemes between a host and a peripheral device. In particular, USB contemplates synchronizing to a clock from the peripheral device (referred to as asynchronous), synchronizing to a clock from the host (referred to as synchronous), and sharing clock synchronization responsibilities between the host and the peripheral device (referred to as adaptive). While the various flavors and clock synchronization schemes allow for design flexibility to increase the number of devices using the USB protocol, the myriad options make some design decisions more difficult. [0007] Such design decisions are further complicated when audio and/or video streams are being transmitted through a USB interface. Because of the universal nature of the USB form factor, a USB host is expected to be able to accommodate both audio/video capture from and audio/video playback to a peripheral. In particular, the USB host is expected to be able to accommodate different speeds, different clock synchronization schemes, different sampling rates, and variably-sized data. Conventional systems place the burden of such accommodation on the application layer, which requires substantial buffering and complicated algorithms on the part of applications in the application layer. Additionally, there are current proposals to increase service intervals, which may impose additional burdens on the application processor that handles the application layer. Accordingly, there is a need for a USB-compatible system that provides greater flexibility in handling variable data streams, both those currently implemented and those with differing input parameters.
SUMMARY OF THE DISCLOSURE
[0008] Aspects disclosed in the detailed description include systems and methods for controlling isochronous data streams.
Particular aspects of the present disclosure are designed to be used with almost any isochronous data stream, but are well-suited for use with the Universal Serial Bus (USB) protocol. Further, aspects of the present disclosure are flexible to accommodate existing configuration possibilities within the USB protocol as well as accommodate proposed future changes in the USB protocol. The flexibility of the systems and methods is provided by calculating: (1) drift between a USB host system time and the application and (2) drift between the USB host system and a USB device clock. Based on these two drift calculations, a time stamp may be synthesized to program a next delivery schedule. Using this time stamp, jitter correction can take place and uniformly-sized packets may be assembled to pass to an application processor. The use of such uniformly-sized packets may eliminate the need for buffers in an application layer, which may improve user experience when a data stream is an audio data stream. [0009] In this regard, in one aspect, a method for controlling communication in a USB system is disclosed. The method includes receiving variably-sized packets at a first processor having a USB driver. The method also includes assembling uniformly-sized packets at the first processor. The method also includes passing the uniformly-sized packets to a second processor for use by applications at an application layer in a protocol stack. [0010] In another aspect, a host is disclosed. The host includes an application processor. The host also includes USB hardware. The host also includes an audio digital signal processor (ADSP). The ADSP is configured to receive variably-sized packets at the ADSP through the USB hardware. The ADSP is also configured to assemble uniformly-sized packets at the ADSP.
The ADSP is also configured to pass the uniformly-sized packets to the application processor for use by applications at an application layer in a protocol stack. [0011] In another aspect, a host is disclosed. The host includes an application processor. The host also includes USB hardware. The host also includes a system on a chip (SoC) including a plurality of processors. The plurality of processors is configured to receive variably-sized packets at a first processor. The plurality of processors is also configured to assemble uniformly-sized packets at the first processor. The plurality of processors is also configured to pass the uniformly-sized packets to a second processor for use by applications at an application layer in a protocol stack. [0012] In another aspect, a method for detecting drift in a USB system is disclosed. The method includes determining that a fractional sampling rate is used on a USB bus between an audio peripheral and a host. The method also includes determining a first fractional remainder associated with the fractional sampling rate over a service interval. Based on the first fractional remainder, the method also includes calculating a whole number corresponding to a number of intervals required to have no fractional remainder. The method also includes checking drift each whole number of intervals. [0013] In another aspect, a processor is disclosed. The processor includes an input. The processor also includes a control system. The control system is configured to determine that a fractional sampling rate is used on a USB bus between an audio peripheral and a host. The control system is also configured to determine a first fractional remainder associated with the fractional sampling rate over a service interval. Based on the first fractional remainder, the control system is also configured to calculate a whole number corresponding to a number of intervals required to have no fractional remainder.
The control system is also configured to check drift each whole number of intervals. [0014] In another aspect, a method to synthesize a time stamp is disclosed. The method includes receiving a run command from a data delivery handler. The method also includes summing an output from a high resolution timer and a computed absolute time stamp. [0015] In another aspect, a processor is disclosed. The processor includes an audio data buffer. The processor also includes a USB audio client (UAC). The UAC is configured to receive variably-sized packets. The UAC is also configured to assemble uniformly-sized packets. The UAC is also configured to pass the uniformly-sized packets to a second processor for use by applications at an application layer in a protocol stack.
BRIEF DESCRIPTION OF THE FIGURES
[0016] Figure 1 is a simplified perspective view of a mobile communication device with a remote audio peripheral coupled through a Universal Serial Bus (USB) cable and connector according to an exemplary aspect of the present disclosure; [0017] Figure 2 is a block diagram of a conventional audio flow from a USB peripheral to an application layer within a processor; [0018] Figure 3 is a block diagram of an audio flow within a USB system according to exemplary aspects of the present disclosure; [0019] Figures 4A and 4B show two USB systems with alternate placements of a data regulator of the present disclosure; [0020] Figure 5 is a block diagram of a data regulator; [0021] Figure 6 is a signal flow diagram showing how packet size is calculated and how packets are passed to an application layer; [0022] Figure 7 is a block diagram of an in-band drift reporting process from a microphone to a USB host; [0023] Figure 8 is a block diagram of an out-of-band drift reporting process from a microphone to a USB host; [0024] Figure 9 is a block diagram of an in-band drift reporting process from a microphone to a host and how the host uses same for playback to a speaker; [0025] Figure 10 is a block
diagram of an out-of-band drift reporting process from a microphone to a host and how the host uses same for playback to a speaker; and [0026] Figure 11 is a block diagram of an exemplary processor-based system that can include the USB system of Figure 3.
DETAILED DESCRIPTION
[0027] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. [0028] Aspects disclosed in the detailed description include systems and methods for controlling isochronous data streams. Particular aspects of the present disclosure are designed to be used with almost any isochronous data stream, but are well-suited for use with the Universal Serial Bus (USB) protocol. Further, aspects of the present disclosure are flexible to accommodate existing configuration possibilities within the USB protocol as well as accommodate proposed future changes in the USB protocol. The flexibility of the systems and methods is provided by calculating: (1) drift between a USB host system time and the application and (2) drift between the USB host system and a USB device clock. Based on these two drift calculations, a time stamp may be synthesized to program a next delivery schedule. Using this time stamp, jitter correction can take place and uniformly-sized packets may be assembled to pass to an application processor. The use of such uniformly-sized packets may eliminate the need for buffers in an application layer, which may improve user experience when a data stream is an audio data stream. [0029] Before addressing particular aspects of the present disclosure, a brief overview of an exemplary system which may implement the systems and methods for controlling isochronous data streams is disclosed.
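The fractional-remainder drift check (paragraphs [0012] and [0013]) and the time-stamp synthesis (paragraph [0014]) summarized above can be sketched as follows. This is a minimal illustration under stated assumptions: the function names, the use of Python's Fraction to find the interval count, and the tick-based time units are choices made here, not terms from the disclosure.

```python
from fractions import Fraction


def drift_check_interval(sampling_hz: int, service_interval_us: int) -> int:
    """Smallest whole number of service intervals after which the
    accumulated sample count has no fractional remainder, so drift
    can be checked on a whole-sample boundary."""
    samples_per_interval = Fraction(sampling_hz * service_interval_us,
                                    1_000_000)
    # The denominator of the reduced fraction is the interval count
    # that clears the fractional remainder.
    return samples_per_interval.denominator


def synthesize_time_stamp(high_res_timer_ticks: int,
                          computed_absolute_ts: int,
                          device_drift_accum: int = 0,
                          local_clock_drift_accum: int = 0) -> int:
    """Sum the high resolution timer output and the computed absolute
    time stamp, optionally adding the device-clock and local-clock
    drift accumulator outputs as a correction."""
    return (high_res_timer_ticks + computed_absolute_ts
            + device_drift_accum + local_clock_drift_accum)


# 44.1 kHz over a 1 ms interval leaves a 0.1-sample remainder per
# interval; ten intervals accumulate exactly 441 samples.
print(drift_check_interval(44_100, 1_000))  # 10
```

A whole-number rate such as 48 kHz over a 1 ms interval yields an interval count of 1, i.e., drift can be checked every service interval.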
As noted above, while applicable to various isochronous data streams, exemplary aspects are particularly applicable to USB audio streams. Thus, the exemplary system is a USB digital audio system.[0030] In this regard, Figure 1 is a simplified perspective view of a mobile communication device 100 with a USB Type-C receptacle 102 configured to couple to a USB Type-C connector 104 on a USB cable 106. At a distal end of the USB cable 106 is a digital audio headset 108 having plural speakers 110 in headphones 112 and a microphone 114. Digital audio signals may pass between the mobile communication device 100 and the digital audio headset 108 through the USB cable 106. Audio from the microphone 114 may be unevenly distributed in a time domain as speech patterns are rarely periodic. Likewise, the mobile communication device 100 does not know a priori what data speed the digital audio headset 108 supports nor does the mobile communication device 100 know a priori what synchronization format the digital audio headset 108 uses.[0031] While exemplary aspects of the present disclosure are well suited for audio environments such as the digital audio headset 108 of Figure 1, the present disclosure is not so limited, and may be used with an audio/video signal that passes between a computing device, such as the mobile communication device 100, and a virtual reality headset having a display, speakers, and a microphone (or a display having speakers and a microphone). Likewise, while a USB Type-C cable is disclosed above, the present disclosure is readily usable with other versions of USB. In fact, being able to handle any of the USB speeds (e.g., full speed (FS), super speed (SS), high speed (HS)) is one of the advantages of the present disclosure. [0032] Figure 2 provides a simplified block diagram of how audio (and perhaps video) data is handled in a mobile communication device 200 that does not implement aspects of the present disclosure. 
The mobile communication device 200 may be coupled to a USB peripheral 202, such as a digital audio headset. The USB peripheral 202 may support asynchronous, synchronous, adaptive, or mixed clock synchronization modes and may include one or more phase locked loops (PLLs, two illustrated) or delay locked loops (DLLs, not illustrated). The USB peripheral 202 may receive data (referenced as Data IN), such as through a microphone (sometimes referred to as capture), as well as output data (referenced as Data OUT), such as through a speaker in a headphone (sometimes referred to as playback). The data is passed to and from the mobile communication device 200, such as through a USB cable 206, and through an appropriate receptacle (not illustrated in Figure 2) to a USB hardware controller 208 within the mobile communication device 200. The USB hardware (sometimes referenced as HW in the drawings) controller 208 is communicatively connected to a system on a chip (SoC) 210. The SoC 210 may include an audio digital signal processor (ADSP) 212 and an application processor (referred to in the drawings as AP) 214. The ADSP 212 may include a USB Audio Client (UAC) driver 216. The data from the USB peripheral 202 is received at the USB hardware controller 208 and passed to the SoC 210. Note that the data from the USB peripheral 202 is jittery and includes variable data frame sizes (symbolically illustrated by the variously-sized boxes between the USB hardware controller 208 and the UAC driver 216). Further variability may occur if the one or more PLLs of the USB peripheral 202 run fast or slow. Still further variability may occur, because in the USB protocol, there is no requirement that there be a fixed number of samples within a frame. While such variability is part of what contributes to the flexibility and appeal of the USB protocol, such variability is generally difficult to handle in audio processing. 
When the USB hardware controller 208 has data in its internal buffers (not shown), the USB hardware controller 208 generates an interrupt for the UAC driver 216. The USB hardware controller 208 does not have a time stamping function. The UAC driver 216 receives the interrupt, drains the buffer of the USB hardware controller 208, and attempts to provide a constant amount of data to the application processor 214. When there is fractional audio sampling, such as the common sampling rate of 44.1 kilohertz (kHz), which is fractional relative to one millisecond (corresponding to a common USB bus transfer speed of 1000 Hz), the UAC driver 216 will send data with 44 samples in nine out of ten packets and one packet with 45 samples. Data processing circuitry 218 in the application processor 214 uses its buffers 220 in conjunction with a high resolution system timer 222 to smooth out the variability before the data is provided to application layer algorithms 224. An asynchronous sample rate converter (ASRC) 226 may assist in this process of correcting drift and a jittery cluster of samples over a time duration. This arrangement places a burden on the application processor 214 and requires additional programming for the application layer algorithms 224. Note that while the ADSP 212 and the application processor 214 are described as being separate processors, both devices may be integrated into a single integrated circuit (IC). While not illustrated, a hardware direct memory access (DMA) controller may generate a data interrupt, and a hardware latched time stamp from the high resolution system timer 222 gets stored in a hardware register. This time stamp is not readily associated with the USB packets and thus is not readily available to assist in drift detection.[0033] Exemplary aspects of the present disclosure provide error free drift detection from which jitter correction may be applied and from which a synthesized time stamp may be calculated. 
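The 44/45-sample pattern that the UAC driver 216 produces for 44.1 kHz audio over a 1 ms bus falls out of a simple fractional accumulator. The following Python sketch is illustrative only; the function name and structure are assumptions, not elements of the disclosure:

```python
from fractions import Fraction

def packet_sizes(fs_hz=44100, bus_hz=1000, n_packets=10):
    """Distribute fs_hz samples/s over bus_hz packets/s using an exact
    fractional accumulator, so no sample is ever lost to rounding."""
    per_packet = Fraction(fs_hz, bus_hz)  # 44.1 samples per 1 ms packet
    acc = Fraction(0)
    sizes = []
    for _ in range(n_packets):
        acc += per_packet
        whole = int(acc)       # whole samples that fit in this packet
        acc -= whole           # carry the fractional remainder forward
        sizes.append(whole)
    return sizes

print(packet_sizes())  # [44, 44, 44, 44, 44, 44, 44, 44, 44, 45]
```

Nine packets of ten carry 44 samples and the tenth carries 45, matching the behavior described above.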
Using this synthesized time stamp, a next delivery schedule may be calculated which is used to drain the buffers. Further, by repositioning the calculation outside the application processor, uniform data frame sizes may be provided to the application processor, which in turn may improve audio quality and potentially provide power savings opportunities. One of the benefits of the present disclosure is its flexibility to accommodate any form of clock synchronization approach (asynchronous, synchronous, or adaptive) between the host and the device as well as various data speeds, different sampling rates, variably-sized data, different USB speeds (HS, FS, SS), and differing service intervals. While the present disclosure may be implemented strictly in hardware, the flexibility of the present disclosure is improved through the use of software, where the variables are more readily adjusted to accommodate any configuration. Before exploring the particulars of the system of the present disclosure and the various signaling that may be used to implement aspects of the present disclosure, an overview of the equations used to create the flexibility is presented. [0034] The following section is math intensive and preserved for the interested reader, but may not be critical to understanding exemplary aspects of the present disclosure. For readers who prefer not to let math clutter their understanding of the disclosure, the discussion of exemplary aspects begins again below with reference to Figure 3.[0035] The basic drift compensated rate matched audio buffer delivery model that is used by the host may be expressed as:

ticks_next = ticks_reference + ticks_offset + D_1 + D_2 + ... + D_M (Eq. 1)

[0036] In Eq. 1, ticks_next (also referred to as "Tnext") is the synthesized time stamp that is effectively used to program the next delivery schedule. ticks_reference (also referred to as "Tref") is the timestamp of the first synthesized timestamp.
ticks_offset (also referred to as "Toffset") is the delta from ticks_reference used for the delivery of buffers and also serves as the timing of the picking up of buffers for playback and capture. In Eq. 1, each D_i represents the total drift between a device clock and a USB time reference. In most situations, there are only three clocks to consider: the USB host clock, the audio application clock, and the USB device clock. The USB host clock serves as the system time reference for both of the other two clocks, and thus, Eq. 1 will typically simplify to:

ticks_next = ticks_reference + ticks_offset + D_app-usb - D_device-usb (Eq. 2)

[0037] Eq. 2 works for both audio capture and audio playback paths. D_app-usb is the time difference between the audio application clock and the USB host clock. D_device-usb is how fast the USB device clock is going with reference to the USB host clock. Together these values give the net system drift (i.e., whether the audio sample stream is moving faster or slower). For the audio capture path, when D_device-usb is positive, the device is delivering audio samples faster than the USB host is clearing them. When D_app-usb is positive, the audio application is retrieving audio samples faster than the USB host is delivering them. On the audio playback path, when D_app-usb is positive, the audio application is delivering audio samples faster than the USB host is clearing them. When D_device-usb is positive, the device is retrieving audio samples faster than the USB host is delivering them. This value is passed to an asynchronous sample rate converter (ASRC) to synthesize and/or interpolate audio, allowing the ASRC to know how much to correct. [0038] The drift D_device-usb for the capture and playback paths may be determined explicitly or implicitly. The drift is obtained based on the direction of the data flow (i.e., device-to-host (usually capture) or host-to-device (usually playback)).
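As a concrete illustration of Eq. 2 and the sign conventions just described, consider the following Python sketch. The names and tick values are invented for the example and are not taken from the disclosure:

```python
def ticks_next(ticks_reference, ticks_offset, d_app_usb, d_device_usb):
    """Eq. 2: the synthesized time stamp for the next delivery schedule."""
    return ticks_reference + ticks_offset + d_app_usb - d_device_usb

# Capture path example: the device clock has drifted 5 ticks fast relative
# to the USB host clock, and the application clock 3 ticks fast, so the
# next delivery is pulled 2 ticks earlier than the drift-free schedule.
print(ticks_next(1000, 192, 3, 5))  # 1190
```

A positive D_device-usb shortens the schedule (the device is producing samples faster than the host clears them), while a positive D_app-usb lengthens it, exactly as the capture-path description above states.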
The source of the drift information is dependent on what the USB device advertises and which isochronous synchronization mode is selected for a USB endpoint pair by the high level operating system (HLOS). In fact, there are twenty combinations of isochronous synchronization modes between the capture and playback paths.[0039] The source of drift information is summarized in Table 1 below. D_device-usb is abbreviated D_device in Table 1.

Table 1: Source of Drift Information

[0040] Table 1 assumes that the audio application clock is in phase with the USB host clock (D_app-usb = 0). This assumption causes all synchronous and adaptive playback (Out) paths to have Out: D_device = 0.[0041] Exemplary aspects of the present disclosure provide techniques to detect drift for essentially any variation of sampling frequency, sampling interval, sample size, bus speed, clock synchronization mode, or the like. This flexibility is achieved through generic equations that accommodate these variable inputs and allow for the appropriate drift detection.[0042] It should be appreciated that the quality, environment, and manufacturing precision all affect one asynchronous clock's ability to keep time compared to another asynchronous clock in the system. There are systems where there are multiple clocks along the capture path and multiple clocks along the playback path. The net drift for a path is the sum of the time differentials between each subsystem clock along the path. The present disclosure illustrates that by measuring drift at the appropriate frequency, error-free drift detection is enabled and needless measurements are avoided, which may allow power savings.[0043] Audio streaming in a USB system adds difficulty in that such audio streaming is expected to use the isochronous transfer mode. It is a real-time dedicated bandwidth mode with no error checking or retries. Audio samples are bundled in the form of an audio packet, and an audio packet may be sent once every (micro)frame.
Each such frame is either 125 μs or 1 ms depending on whether a HS or FS USB transfer mode is selected by the physical layer. The USB protocol supports sending such frames in bursts for power savings and for handling large network latencies. The number of frames per service interval is described by 2^(bInterval-1), where bInterval is currently a value between one and 16. Discussions have been made amongst the governing body for the USB protocol about expanding this number. The number of frames per service interval is fixed, but the number of audio samples sent per burst can be variable.[0044] A factor that has been considered as pertinent to evaluating drift includes keeping the accumulated drift in the source unit of measurement. Conversions from one unit to another generally involve a division operation, which may introduce rounding or truncation errors. Accumulation of such truncation errors may lead to a divergence in the interpretation of time between the host and the device. By keeping the accumulation in the source unit of measurement, any truncation error is temporary and should be seen by the system as insignificant jitter.[0045] A further factor is the maximum tolerable system jitter. A reasonable tolerable system jitter is less than one audio sample of accuracy to avoid being interpreted as real drift by the audio system. Thus, the tolerable system jitter may be a function of the audio sampling frequency. If the tolerable jitter is sufficiently small, hardware assistance may be necessary, as a pure software implementation may not be able to react fast enough to service an interrupt to timestamp an event.[0046] Given these considerations, Eq. 6 may be derived when considering a USB audio device's instantaneous frequency feedback as a clock source. In such an instance, Ff is the average number of audio samples per frame that the USB device reports to the USB host.
An instantaneous frequency Ff is reported to the host in the FS USB transfer mode on every:

Period_FS = 2^(10-bRefresh) frames (Eq. 3)

[0047] Or in the HS USB transfer mode on every:

Period_HS = 2^(bInterval-1) microframes (Eq. 4)

[0048] The instantaneous drift is thus:

Δdrift = Ff_k - Ff_(k-1) (Eq. 5)

[0049] and is computed when the host receives a feedback:

Ticks_conv(D) = (D * 1000)/f_s * 19.2 MHz (Eq. 6)

[0050] where f_s is the sampling frequency. Note that 19.2 MHz is the speed of one exemplary high resolution system timer. If the high resolution system timer has a different speed, a different value should be substituted, which turns Eq. 6 into the following generic equation:

Ticks_conv(D) = (D * 1000)/f_s * f_timer (Eq. 6A)

[0051] There are challenges in recovering a clock from a USB 2.0 signal resulting from the definitional equivalence of the virtual packet being one virtual frame. Accordingly, a solution to recover a clock from a non-linear data stream is required. Such a solution follows, with the assumption that each clock crystal has at least 500 ppm of accuracy. The number of samples per virtual frame is defined as:

numSamplesPerVirtualFrame = f_s/f_t * 2^(bInterval-1) (Eq. 7)

[0052] where f_s is the sampling frequency, f_t is the service interval frequency, and bInterval is as defined above. For ease of notation, numSamplesPerVirtualFrame may be abbreviated as NSPVF.[0053] Additionally, an alignment multiplier is needed, defined as follows:

alignmentMultiplier = 1000000 / GCD(MOD(NSPVF * 1000000, 1000000), 1000000) (Eq. 8)

[0054] where 1000000 is arbitrarily chosen as a very large base 10 value to increase fractional precision. From Eqs. 7 and 8, an expected number of samples may be calculated as follows:

expectedNumSamples = NSPVF * alignmentMultiplier (Eq. 9)

[0055] The alignmentMultiplier represents the least number of virtual frames needed by the host before a stable drift determination is possible. The expectedNumSamples is the number of samples expected to be received.
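Eqs. 7 through 9 can be evaluated exactly with rational arithmetic, which sidesteps the truncation errors warned about in paragraph [0044]. The Python sketch below is a hedged illustration; the function name and the example inputs are assumptions for demonstration:

```python
from fractions import Fraction
from math import gcd

def alignment(fs_hz, ft_hz, b_interval):
    """Eqs. 7-9: least number of virtual frames the host needs before a
    stable drift determination is possible, and the matching sample count."""
    nspvf = Fraction(fs_hz, ft_hz) * 2 ** (b_interval - 1)    # Eq. 7
    scaled_frac = (nspvf * 1_000_000) % 1_000_000             # scaled fractional part
    if scaled_frac == 0:
        mult = 1                                              # NSPVF already whole
    else:
        mult = 1_000_000 // gcd(int(scaled_frac), 1_000_000)  # Eq. 8
    expected = nspvf * mult                                   # Eq. 9
    return mult, int(expected)

# 44.1 kHz with a 1 kHz service interval frequency and bInterval = 1:
# NSPVF = 44.1, so 10 virtual frames are needed, carrying 441 samples.
print(alignment(44100, 1000, 1))  # (10, 441)
```

Using Fraction keeps the accumulation exact in the source unit, so the whole-number check at the alignment boundary never sees a rounding artifact.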
The NSPVF is an intermediate variable for visual clarity and not a floating point. For each alignmentMultiplier number of virtual frames received, the Δdrift is computed by:

Δdrift = numSamplesReceived - expectedNumSamples (Eq. 10)

[0056] Thus, the net drift from the beginning of the audio session is computed by:

D = D_net_drift + Δdrift (Eq. 11)

[0057] The conversion of D audio samples to system timer (sometimes referred to as Qtimer) ticks is:

Ticks_conv(D) = D_net_drift/f_s * 19.2 MHz (Eq. 12)

[0058] Again, note that 19.2 MHz is the speed of the high resolution system timer. If the high resolution system timer has a different value, then such different value should be substituted, resulting in:

Ticks_conv(D) = D/f_s * f_timer (Eq. 12A)

[0059] With the drift information and the clock detection information outlined above, rate matching may be done. With rate matching, uniform sample sizes may be created and sent to the application processor as outlined below. However, before addressing the uniform sample sizes, more math is presented to explain the rate matching. In particular, this helps define how to calculate ticks_offset. [0060] Remember, absent drift:

ticks_next = ticks_reference + ticks_offset (Eq. 13)

[0061] where ticks_offset is defined as:

ticks_offset = f_timer/f_d * i (Eq. 14)

where f_d is the delivery frequency and i increments on every ticks_next and wraps around when i = f_s to avoid i overflowing. At the wrap around point, ticks_reference = ticks_next and then i = 0. [0062] Armed with the math set forth above, exemplary aspects of the present disclosure are now set forth. In this regard, Figure 3 is a simplified block diagram of how audio (and perhaps video) is handled in a mobile communication device 300 that implements exemplary aspects of the present disclosure.[0063] The mobile communication device 300 includes an application processor 302 and an ADSP 304. In an exemplary aspect, the application processor 302 and the ADSP 304 may be in a single SoC 306.
Likewise, while described as conceptually distinct processors, these processors may be part of a single host processor. Still further, while ascribed specific functions such as "application processor" or "ADSP," it should be appreciated that other processors that are traditionally not referred to by such appellations may still implement comparable functionality without departing from the scope of the present disclosure. The application processor 302 may communicate with a USB hardware controller 308, which communicates with a USB peripheral 310, such as a headset, through a USB interface 312, which may include USB receptacles, USB connectors, and a USB cable.[0064] As with the USB peripheral 202 of Figure 2, the USB peripheral 310 may support asynchronous, synchronous, adaptive, or mixed clock synchronization modes and may include one or more PLLs (two illustrated) or DLLs (not illustrated). The USB peripheral 310 may receive data (referenced as Data In), such as through a microphone (as noted above, sometimes referred to as capture), as well as output data (referenced as Data Out), such as through a speaker in a headphone (as noted above, sometimes referred to as playback). The data is passed to and from the mobile communication device 300 through the USB interface 312.[0065] The ADSP 304 may include a UAC driver 314. The UAC driver 314 may use a host controller interface (HCI) (not illustrated) to communicate with the USB hardware controller 308. In conventional systems, there is no HCI in the UAC driver 314, because the ADSP 304 does not communicate with the USB hardware controller 308. However, exemplary aspects of the present disclosure allow for communication between the USB hardware controller 308 and the ADSP 304. Accordingly, an HCI may be provided to effectuate such communication. The UAC driver 314 receives unstable and variably-sized data frames from the USB hardware controller 308. 
[0066] Exemplary aspects of the present disclosure add one or more buffers 316 to the UAC driver 314 as well as couple a high resolution system timer 318 to the UAC driver 314, which allows the UAC driver 314 to pass stable, precise, and fixed data frame sizes to data processing circuitry 320 in the application processor 302 (or other processor that handles applications). Still further, the UAC driver 314 may provide net playback and capture delays to the data processing circuitry 320 through a signal 322. By providing uniform data frames to the data processing circuitry 320, application layer algorithms 324 do not have to buffer the data as heavily or perform the corrections associated with the data processing circuitry 218 of Figure 2. Even though the application layer algorithms 324 receive uniform data frames, the application processor 302 may include an ASRC 326 that may assist in processing the signal 322 to act on drift correction information and/or jitter issues. Again, note that the application processor 302 may be merged with the ADSP 304 as a single microprocessor or may be provided different names by different vendors.[0067] While Figure 3 contemplates positioning the UAC driver 314 in the ADSP 304, it should be appreciated that other positions are also possible as illustrated in Figures 4A and 4B.[0068] In this regard, Figure 4A illustrates a headset 400 (or other USB peripheral) with a digital audio converter (DAC) 402 that captures data from a microphone or the like and provides the data to a UAC data regulator (UAC data reg) 404. The UAC data reg 404 makes the packet size uniform and provides packets to a hardware controller 406, which in turn passes the packets over a cable 408 to a USB host 410. The USB host 410 receives the packets with a host hardware controller 412. Applications 414 (labeled APP in the Figures) in the application layer (not specifically illustrated) receive the uniform packets and process them as is well understood.
In such an arrangement, the USB host 410 may operate similarly to the USB host of Figure 2, but benefits from the uniform packets that the headset 400 sends to the USB host 410. The increased circuitry in the headset 400 may increase the cost of the headset 400, but may provide benefits to legacy USB hosts.[0069] In Figure 4B, the USB host 410 remains unchanged, but instead of placing a data regulator in the headset 400, a UAC data regulator 418 is provided in an intermediate device, such as a dongle 420. The dongle 420 can be on a host side 422A or a peripheral side 422B of a cable 422. That is, the cable 422 may extend between the dongle 420 and a headset 424 with the dongle 420 inserted into a USB receptacle of the USB host 410, or the cable 422 may extend between the USB host 410 and the dongle 420 with the dongle 420 inserted into the USB receptacle of the headset 424. As still another possibility (illustrated), the dongle 420 may be in the cable 422, and the cable 422 inserts into the respective receptacles of the USB host 410 and the headset 424.[0070] Figure 5 is a block diagram of a data regulator that may be implemented inside the UAC driver 314 of Figure 3. The buffer 316 (also referred to as a FIFO in Figure 5) receives a variably-sized data packet 500. An in-band drift detector 502 reads the size of the data packet 500 in the buffer 316 when it receives a data available interrupt signal 504. Alternatively, an out-of-band drift detector 506 receives an asynchronous feedback packet signal 508 and the data available interrupt signal 504. One of the detectors 502 or 506 is read by a multiplexer 510. The multiplexer 510 selects between outputs of the detectors 502 and 506 by a set detection type signal 512. The multiplexer 510 outputs a signal to a device drift accumulator 514. Concurrently, the data available interrupt signal 504 is provided to a local clock drift detector 516, which provides a signal to a local clock drift accumulator 518.
A summer 520 subtracts the device drift accumulator 514 output (D_device-usb) from the output of the local clock drift accumulator 518 (D_app-usb) and outputs a signal 522. The signal 522 corresponds to D_app-usb - D_device-usb.[0071] With continued reference to Figure 5, the data available interrupt signal 504 is also provided to an initial reference handler 524. The initial reference handler 524 outputs a read counter to a high resolution clock function 526. The high resolution clock function 526 also receives a read counter from the local clock drift detector 516. The high resolution clock function 526 may also receive a set Hi-res Timer F_t value, which would allow the clock value to be varied. Note that it is unlikely that this value changes in mid-operation, but it can be set at system initialization or the like. The high resolution clock function 526 interoperates with the high resolution system timer 318. The output of the initial reference handler 524 is also added to a jitter delay element 528 and used to set an initial Tref to start a time stamp plus delay signal 530. [0072] The buffer 316 outputs a data signal 532 (labeled "read data") to a data delivery handler 534, which also receives an output 536 of the high resolution system timer 318. The data delivery handler 534 may also receive a set output buffer size command (perhaps from the ASRC 326) indicating what size buffers the ASRC expects to process. The signal 530 is provided to a summer 538, which adds Tref thereto and generates an intermediate signal 540, to which is added Toffset, to generate a signal 542, which is passed to a summer 544 (which essentially performs either Eq. 6 or Eq. 12, as appropriate). The summer 544 adds the signal 542, the signal 522, and the output 536 to generate a synthesized time stamp 546 (essentially Eq. 2). The data delivery handler 534 outputs a run command for the summer 544 and provides a fixed number of samples to the ASRC 326.
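The summing chain of Figure 5 can be loosely mirrored in software. In the sketch below, the drift-to-ticks conversion follows Eq. 12A and the final sum follows Eq. 2; all names, the example values, and the 19.2 MHz timer constant are assumptions for illustration rather than elements of the disclosure:

```python
F_TIMER_HZ = 19_200_000  # exemplary 19.2 MHz high resolution system timer

def drift_to_ticks(drift_samples, fs_hz):
    """Eq. 12A: convert a drift expressed in audio samples into timer ticks."""
    return drift_samples / fs_hz * F_TIMER_HZ

def synthesized_time_stamp(t_ref, t_offset, d_app_usb, d_device_usb, fs_hz):
    """Roughly what summers 538 and 544 produce: Tref plus Toffset plus the
    net drift (D_app-usb - D_device-usb) converted to ticks (Eq. 2)."""
    return t_ref + t_offset + drift_to_ticks(d_app_usb - d_device_usb, fs_hz)

# One sample of net drift at 48 kHz corresponds to 400 timer ticks.
print(drift_to_ticks(1, 48000))  # 400.0
```

In the hardware arrangement of Figure 5 the same sum is formed incrementally by summers 538 and 544, with the high resolution system timer output 536 supplying the time base.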
The ASRC 326 also receives the synthesized time stamp 546 and outputs resampled data 548. While not specifically illustrated, a set sampling frequency command may also be received to assist in calculations as noted above.[0073] In an exemplary aspect, this data regulator is implemented as software. In another exemplary aspect, this data regulator may be implemented in hardware.[0074] Figure 6 is a signal flow diagram representing signals and processes that may occur when an application in a data processor wants to use the UAC driver 314 of Figure 3. Initially, an application provides setup information in an activation setup stage. The setup information may include sampling rate, bus transfer frequency, buffer size, clock recovery mode, and the like. This setup information is provided to a data rate regulator (see Figure 5) of the UAC driver 314. The data rate regulator calculates how to deliver data from the USB hardware controller 308 accurately and stably (without jitter) at the rate that has been requested. The process for this calculation is explained above. The timer/clock element in this diagram is the high resolution system timer 318 of Figure 3, but other timers could also be used.[0075] Figure 6 is a signal flow diagram 600 representing signals and processes that may occur when an application in a data processor such as the application processor 302 wants to use the UAC driver 314. Initially, the application provides the setup information in the activation setup stage (block 602). 
The application processor 302 sets the input and output sampling frequency at the ASRC 326, and sends the input sampling rate frequency, the bus transfer frequency, the service interval (which is greater than or equal to the bus transfer frequency), the output buffer size, the clock recovery mode (asynchronous, adaptive, or synchronous), and any hardware interface specific setup parameters, and registers any physical memory for the buffer(s) 316, to the UAC driver 314 and particularly to a data regulator in the UAC driver 314. Finally, an activate command (signal 604) is sent to the data regulator. The data regulator passes the hardware interface specific setup parameters to the USB hardware controller 308 (signal 606) and programs the next free buffer space to write (signal 608). The USB hardware controller 308 sends a data ready event signal 610 to the data regulator. This signal 610 causes the data regulator to read the high resolution system timer 318 (signal 612), read the data size (signal 614) from the USB hardware controller 308, and perform a series of actions including: store the clock value into Tref; add Tjitter (derived from the buffer size and, if not explicitly feedback driven, the received data size) to Tref; initialize i = 0; compute the next Toffset; and compute Tnext (Eq. 2) (see generally block 616).
The data regulator then programs Tnext for the high resolution system timer 318 (signal 618) and programs the next free buffer space to write (signal 620).[0076] With continued reference to Figure 6, the system enters a steady state and the data regulator receives a next data ready event (signal 622) from the USB hardware controller 308, which triggers a read clock signal 624 and a read data size signal 626, which allows the data regulator to update the net drift (D_device-usb and D_app-usb) (see generally block 628).[0077] At some point, the USB hardware controller 308 may send an asynchronous clock feedback event (signal 630) to the data regulator, which causes the data regulator to update D_device-usb (see generally block 632).[0078] At some other time, the high resolution system timer 318 may send a timer expired event signal 634 to the data regulator. Responsive to this signal 634, the data regulator may increment i by one and, if i equals the sampling frequency, set Tref to Tnext and i = 0; compute the next Toffset; and compute Eq. 2 (see generally block 636). The data regulator may send a data available signal 638 to the application processor 302, program Tnext (signal 640), and program the next free buffer space to write (signal 642). The application processor 302 reads the net drift or time stamp from the data regulator (signal 644) and reads data from the buffer(s) 316 in the UAC driver 314 (signal 646) and/or the USB hardware controller 308 (signal 646A). [0079] The application processor 302 computes the number of samples to correct from the new net drift and the previous net drift (block 648), and writes data into its file system, such as by using a write command with data, data length, samples to correct, and duration to correct variables. Note that the data may be voice packets. If necessary, the drift correction may be stretched out over a configurable period to reduce perceivable glitches.
However, even with the stretched-out period, it is expected that such correction takes place on the order of 25 ms instead of the 10 seconds sometimes used in conventional systems. The process then deactivates (block 660).[0080] Note further that additional aspects of the present disclosure provide techniques to provide error-free drift detection and support future planned power saving initiatives. In this regard, it should be appreciated that fractional sampling rates, such as the relatively common 44.1 kHz, lend themselves to false detections of drift because of the phase mismatch between accumulators at the peripheral device and accumulators at the host. In contrast to signaling protocols that include time stamps to assist in drift detection, the USB protocol does not include time stamps from the peripheral device to the host. Rather, the host only receives packetized USB data. Inside each USB packet, the amount of data is variable. The problem with the fractional sampling rate and unknown packet size has been well documented in the industry. The usual solution is to time average the samples over a long period, such as ten minutes, and then perform correction of the drift. The long delay in assembling the time average of samples results in latency before correction is applied. Until the correction is applied, the user may experience a degraded audio experience. Likewise, the granularity of the correction may not be appropriate for instantaneous or random drift events.[0081] Exemplary aspects of the present disclosure allow for error-free drift detection. This is best explained through the use of an example. Assuming that the sampling frequency (Fs) is 44.1 kHz, that the USB bus transfer speed is 1000 Hz (i.e., one frame per millisecond), and that bInterval is 11 (i.e., 2^(11-1) = 1024 frames per service interval), the host would expect to receive 44.1 * 1024 = 45158.4 samples per service interval. The fractional sample cannot be sent under USB rules.
The peripheral device accumulator begins when the samples are transmitted to the host, but the host accumulator is delayed until after reception, so the accumulators are out of phase. At the second interval the peripheral accumulator is 90316.8. Again, it is the fractional sample which shows up as drift relative to the host accumulator. Over time, without external drift, this drift will toggle between 1 and 0, but may on occasion cause a correction to be made that is not needed.[0082] Instead of time averaging the drift as in previous solutions, exemplary aspects of the present disclosure evaluate the fractional remainder and find the number of intervals required to arrive at a whole number. In the present example, if the fractional remainder is 0.4, then the number of intervals required to arrive at a whole number is 5. (0.4 = 2/5, the denominator is 5, so five intervals). The UAC driver 314 may check the accumulator at a boundary determined by the number of intervals so calculated. Thus, in this example, the UAC driver 314 checks the drift every five intervals. The phantom drift caused by the fractional sampling rate is not present, so if drift is detected, that is real drift for which a correction must be made (i.e., interpolation or decimation or the like). Further, by ignoring drift in the intermediate samples, calculations may be forgone, which may result in power savings.[0083] The USB protocol contemplates two forms of drift reporting. The first is an implicit drift detection where in-bound signals are examined and compared to known values to determine a drift. The second is an explicit out-of-band signaling of drift sent by the peripheral device to the host, where the peripheral device compares samples received to an expected number of samples and reports back any drift between these two values. 
The USB protocol is silent as to how implicit drift detection is performed, and the USB protocol is also silent on how the host may correct for any drift detected (either implicitly or explicitly). The present disclosure has set forth several equations above and a process for handling drift detection and correction thereof. Figures 7-10 illustrate the two possible drift reporting possibilities for both audio sources (Figures 7 and 8) and audio sinks (Figures 9 and 10) and the correction process. In particular, Figure 7 illustrates an in-band drift reporting process for an audio source, namely, a microphone 700. Data is captured by the microphone 700 and passed in variably- sized data packets (block 702) at a constant rate through a USB device driver 704 to a USB host driver 706 in a USB host. The USB host driver 706 derives the drift information implicitly from data from the microphone 700 and the extracted drift information is used to determine Tref+Toffset for timing the delivery to the audio client and program timer (block 708) while the data is stored in a buffer 710. The formula for determining Tref+Toffset is set forth above. At a timer trigger 712 based on the output of block 708, a fixed number of packets at a variable rate (block 714) are sent to an ASRC 716 from the buffer 710. Concurrently, the drift information is used to report net playback delay (block 718) and generate a synthesized timestamp (block 720). The ASRC 716 outputs resampled data (block 722). While the fixed number of packets is, in fact, fixed, varying the rate allows the drift to be corrected. That is, packet delivery may be accelerated to correct one drift, or slowed down to correct drift in the other direction.[0084] Similarly, Figure 8 is substantially similar but reflects an out-of-band drift reporting process for a microphone 800. In particular, the drift detection is performed by a USB device driver 802 based on output of the microphone 800. 
The USB device driver 802 then outputs an out-of-band drift report (block 804) and also sends variably- sized data packets at a constant rate (block 806). Both the drift information and the data are provided to a USB host driver 808 in a USB host. The drift information is used to determine Tref+Toffset for timing the delivery to an audio client and program timer (block 810) using the equations set forth above while the data is stored in a buffer 812. At a timer trigger 814 based on the output of block 810, the buffer 812 sends a fixed number of packets at a variable rate (block 816) to an ASRC 818. Concurrently, the drift information is used to report net playback delay (block 820) and generate a synthesized timestamp (block 822). The ASRC 818 outputs resampled data (block 824). Again, use of the variable rate allows for drift correction.[0085] In contrast, Figures 9 and 10 explore the impact of drift on the playback path. In this regard, Figure 9 illustrates an in-band drift reporting process. A microphone 900 may act as the microphone 700 of Figure 7, but of greater interest is speaker 902. The speaker 902 receives data from a USB device driver 904. The USB device driver 904 receives data from a USB host driver 906. The USB host driver 906 compares the data coming into the USB host driver 906 to the USB reference as described above to determine drift information. This drift information is used to determine Tref+Toffset for timing the delivery to an audio client and program timer (block 908) using the equations described above. This determination is used to help generate a timer trigger (block 910), report net recording delay (block 912), and create a synthesized timestamp (block 914). At the timer trigger (block 910), a fixed number of packets at a variable rate are fetched (block 916) and provided to an audio module 918, which buffers the packets in a buffer 920. 
The buffer 920 releases variably- sized data packets at a constant rate (block 922) and provides them to the USB host driver 906, which passes them to the speaker 902 through the USB device driver 904. The use of the variably-sized data packets allows for drift to be corrected. Correction of drift in speaker direction can be inferred from drift detected at the USB host driver 906 via an in-band drift detector, provided both the microphone 900 and the speaker 902 are clocked via the same source.[0086] Similarly, Figure 10 illustrates an out-of-band drift reporting process. A microphone 1000 may act as the microphone 800 of Figure 8 describe above. Of more interest is speaker 1002. The speaker 1002 passes out-of-band drift information and data (block 1004) to a USB device driver 1006. The USB device driver 1006 receives data from a USB host driver 1008 and likewise passes the out-of-band drift information to the USB host driver 1008. This drift information is used to determine Tref+Toffset for timing the delivery to an audio client and program timer (block 1010). This determination is used to help generate a timer trigger (block 1012), report net recording delay (block 1014), and create a synthesized timestamp (block 1016). At the timer trigger (block 1012), a fixed number of packets at a variable rate are fetched (block 1018) and provided to an audio module 1020, which buffers the packets in a buffer 1022. The buffer 1022 releases variably- sized data packets at a constant rate (block 1024) and provides them to the USB host driver 1008, which passes them to the speaker 1002 through the USB device driver 1006. Again, the use of the variably-sized data packets allows for drift correction.[0087] As noted above, exemplary aspects also allow for future contemplated power savings. This possibility is enabled by the generic (sometimes referred to as agnostic) algorithms used to handle the variable data and sampling rates. 
That is, in the equations above, the equations start with the agnostic fsas the sampling rate and ftas the bus transfer speed (which already contemplates FS, SS, and HS). By using these agnostic values in the application layer algorithms 324, other new sampling rates or other nonstandard sampling rates are accommodated. The agnostic approach allows proper estimation of a DLL. It should be appreciated that an increase in binterval (the number of samples per packet) increases the size of the packet and also increases the time that it takes to fill the buffer(s) 316. Since the application processor 302 is idle while the buffer(s) 316 is being filled, the application processor 302 may be put into a low-power mode or sleep mode. The longer it takes to fill the buffer(s) 316 (i.e., a larger number of samples per packet), the longer the application processor 302 may be in the sleep mode. The longer the application processor 302 is in the sleep mode, the more power is saved. Thus, there is pressure in the industry to increase the number of samples per packet. By having a generic binterval in the application layer algorithms 324, exemplary aspects of the present disclosure may accept larger binterval values in the audio device descriptor and thus accommodate any future changes in the number of samples per packet and thus allow for future power savings.[0088] The systems and methods for controlling isochronous data streams according to aspects disclosed herein may be provided in or integrated into any processor-based device. 
Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a global positioning system (GPS) device, a mobile phone, a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a tablet, a phablet, a server, a computer, a portable computer, a mobile computing device, a wearable computing device (e.g., a smart watch, a health or fitness tracker, eyewear, etc.), a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, an automobile, a vehicle component, avionics systems, a drone, and a multicopter.[0089] In this regard, Figure 11 illustrates an example of a processor-based system 1100 that can employ a USB system that performs the drift detection, rate matching and uniform packet assembly described herein. In this example, the processor-based system 1100 includes one or more central processing units (CPUs) 1102, each including one or more processors 1104. The CPU(s) 1102 may have cache memory 1106 coupled to the processor(s) 1104 for rapid access to temporarily stored data. The CPU(s) 1102 is coupled to a system bus 1108 and can intercouple master and slave devices included in the processor-based system 1100. As is well known, the CPU(s) 1102 communicates with these other devices by exchanging address, control, and data information over the system bus 1108. For example, the CPU(s) 1102 can communicate bus transaction requests to a memory controller 1110 as an example of a slave device. 
Although not illustrated in Figure 11, multiple system buses 1108 could be provided, wherein each system bus 1108 constitutes a different fabric.[0090] Other master and slave devices can be connected to the system bus 1108. As illustrated in Figure 11, these devices can include a memory system 1112, one or more input devices 1114, one or more output devices 1116, one or more network interface devices 1118, and one or more display controllers 1120, as examples. The input device(s) 1114 can include any type of input device, including, but not limited to, input keys, switches, voice processors, etc. The output device(s) 1116 can include any type of output device, including, but not limited to, audio, video, other visual indicators, etc. The network interface device(s) 1118 can be any devices configured to allow exchange of data to and from a network 1122. The network 1122 can be any type of network, including, but not limited to, a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a BLUETOOTH™ network, and the Internet. The network interface device(s) 1118 can be configured to support any type of communications protocol desired. The memory system 1112 can include one or more memory units 1124(0-N).[0091] The CPU(s) 1102 may also be configured to access the display controller(s) 1120 over the system bus 1108 to control information sent to one or more displays 1126. The display controller(s) 1120 sends information to the display(s) 1126 to be displayed via one or more video processors 1128, which process the information to be displayed into a format suitable for the display(s) 1126. 
The display(s) 1126 can include any type of display, including, but not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, a light emitting diode (LED) display, etc.[0092] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer readable medium and executed by a processor or other processing device, or combinations of both. The devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0093] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. 
A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0094] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.[0095] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flowchart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. 
Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0096] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
In some embodiments, a method and apparatus for automatically parallelizing a sequential network application through pipeline transformation are described. In one embodiment, the method includes configuring a network processor as a D-stage processor pipeline. Once configured, a sequential network application program is transformed into D pipeline stages. Once transformed, the D pipeline stages are executed in parallel within the D-stage processor pipeline. In one embodiment, transformation of a sequential application program is performed by modeling the sequential network program as a flow network model and partitioning the flow network model into a plurality of preliminary pipeline stages. Other embodiments are described and claimed.
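The transformation summarized above — model the loop body as a flow network, then cut the network into pipeline stages — can be illustrated with a toy example. The claims that follow recite definition edges weighted by a variable's transmission cost (VCost), infinite-weight use edges, and a balanced minimum-cost cut found with an iterative push-relabel algorithm; the sketch below substitutes a plain Edmonds-Karp max-flow/min-cut for clarity, uses large source/sink edge weights (a simplifying assumption; the claims assign them zero weight in a balanced-cut setting), and the node names (N1, N2, V) are hypothetical.

```python
from collections import deque, defaultdict

INF = 10 ** 6  # stands in for the "infinite" edge weights recited in the claims

def min_cut(edge_list, s, t):
    """Edmonds-Karp max-flow; returns (cut value, source-side node set)."""
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, w in edge_list:
        cap[u][v] += w
        cap[v][u] += 0          # make the residual (reverse) arc iterable
    flow = defaultdict(lambda: defaultdict(int))
    while True:
        # breadth-first search for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:      # no augmenting path left: min cut reached
            side = set(parent)   # nodes still reachable = source side of the cut
            value = sum(cap[u][v] for u in side for v in cap[u] if v not in side)
            return value, side
        # push the bottleneck capacity along the path found
        bott, x = INF, t
        while parent[x] is not None:
            bott = min(bott, cap[parent[x]][x] - flow[parent[x]][x])
            x = parent[x]
        x = t
        while parent[x] is not None:
            flow[parent[x]][x] += bott
            flow[x][parent[x]] -= bott
            x = parent[x]

# Toy flow network in the style of the claims: program node N1 defines
# variable V over a definition edge weighted VCost = 3; program node N2
# uses V over an infinite-weight edge, so no cut ever separates V from
# its use.
edges = [("s", "N1", INF), ("N1", "V", 3), ("V", "N2", INF), ("N2", "t", INF)]
cost, stage1 = min_cut(edges, "s", "t")
print(cost, sorted(stage1))  # cut cost 3 = cost of transmitting V between stages
```

Cutting the saturated definition edge N1 → V places N1 in the first preliminary pipeline stage and N2 in the second, and the cut value (3) is exactly the cost of transmitting V between the two stages.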
1. A method for parallelizing a network application, comprising: configuring one or more processors as a D-stage processor pipeline; transforming a sequential network application into D pipeline stages that collectively execute an infinite packet processing stage (PPS) loop of the sequential network application; and executing the D pipeline stages in parallel within the D-stage processor pipeline to provide parallel execution of the infinite PPS loop of the sequential network application.
2. The method of claim 1, wherein transforming the sequential application comprises: constructing a flow network model of the sequential application; selecting a plurality of preliminary pipeline stages from the flow network model; and modifying the preliminary pipeline stages to perform transmission of control flow and variables between them to form the D pipeline stages.
3. The method of claim 2, wherein constructing the flow network model comprises: transforming the application into static single assignment form; building a control flow graph for a loop body of the application; building a dependence graph from a reduced graph of the control flow graph and the identified strongly connected components (SCCs) of the control flow graph; and constructing the flow network model from a reduced graph of the dependence graph and the identified SCC nodes of the dependence graph.
4. The method of claim 3, wherein constructing the flow network model comprises: assigning a unique source node and a unique sink node to the flow network model; for each SCC node identified in the reduced graph of the dependence graph, adding a program node to the flow network model; for each variable defined and used by multiple program nodes, adding a variable node to the flow network model; for each SCC node identified as a source of control dependence in the reduced graph of the dependence graph, adding a control node C to the flow network model; generating edges with associated weights to connect corresponding program nodes to corresponding variable nodes; generating edges with associated weights to connect corresponding program nodes to corresponding control nodes; and generating an edge between a program node and one of the source node and the sink node.
5. The method of claim 4, wherein generating edges with associated weights to connect corresponding program nodes to corresponding variable nodes further comprises: (i) selecting a program node N that defines a variable node V; (ii) adding a definition edge with weight VCost from node N to node V to the flow network model; (iii) repeating (i)-(ii) for each program node N that defines the variable node V; (iv) selecting a program node M that uses a variable node W; (v) adding an edge with an assigned infinite weight from node W to program node M to the flow network model; and (vi) repeating (iv)-(v) for each program node M that uses the variable node W.
6. The method of claim 4, wherein generating edges with associated weights to connect a corresponding program node to a corresponding control node further comprises: (i) selecting a program node N having an associated control node C; (ii) adding a definition edge from the selected node N to the associated control node C; (iii) associating a weight CCost with the edge; (iv) repeating (i)-(iii) for each program node having an associated control node; (v) selecting a program node N that is control-dependent on another program node M; (vi) associating M with the control node C; (vii) adding an edge from the associated control node C to the selected program node N; (viii) assigning an infinite weight to the edge; and (ix) repeating (v)-(viii) for each node N that is control-dependent on another program node M.
7. The method of claim 4, wherein generating the edge between a program node and one of the source node and the sink node comprises: (i) selecting a program node having no predecessor node in the flow network model; (ii) adding an edge from the source node to the selected program node; (iii) assigning a zero weight to the edge; (iv) repeating (i)-(iii) for each program node having no predecessor; (v) selecting a program node having no successor in the flow network; (vi) adding an edge from the selected program node to the sink node; (vii) assigning a zero weight to the added edge; and (viii) repeating (v)-(vii) for each program node in the flow network model having no successor node.
8. The method of claim 2, wherein selecting the plurality of preliminary pipeline stages comprises: partitioning the flow network model with D-1 consecutive cuts, such that each cut is a balanced minimum-cost cut.
9. The method of claim 8, wherein the partitioning is performed using an iterative balanced push-relabel algorithm.
10. The method of claim 2, wherein modifying the preliminary pipeline stages comprises: (a) selecting a preliminary pipeline stage; (b) modifying the selected preliminary pipeline stage to enable proper transmission of live variables and control flow to and from the selected preliminary pipeline stage; and (c) performing (a)-(b) for each preliminary pipeline stage to form the D pipeline stages of the parallel network application.
11. An apparatus for parallelizing a network application, comprising: means for configuring one or more processors as a D-stage processor pipeline; means for transforming a sequential network application into D pipeline stages that collectively execute an infinite packet processing stage (PPS) loop of the sequential network application; and means for executing the D pipeline stages in parallel within the D-stage processor pipeline to provide parallel execution of the infinite PPS loop of the sequential network application.
12. The apparatus of claim 11, wherein the means for transforming the sequential application comprises: means for constructing a flow network model of the sequential network application; means for selecting a plurality of preliminary pipeline stages from the flow network model; and means for modifying the preliminary pipeline stages to perform transmission of control flow and variables between them to form the D pipeline stages.
13. The apparatus of claim 12, wherein the means for constructing the flow network model comprises: means for transforming the application into static single assignment form; means for building a control flow graph for a loop body of the application; means for building a dependence graph from a reduced graph of the control flow graph and the identified strongly connected components (SCCs) of the control flow graph; and means for constructing the flow network model from a reduced graph of the dependence graph and the identified SCC nodes of the dependence graph.
14. The apparatus of claim 13, wherein the means for constructing the flow network model comprises: means for assigning a unique source node and a unique sink node to the flow network model; means for adding, for each SCC node identified in the reduced graph of the dependence graph, a program node to the flow network model; means for adding, for each variable defined and used by multiple program nodes, a variable node to the flow network model; means for adding, for each SCC node identified as a source of control dependence in the reduced graph of the dependence graph, a control node C to the flow network model; means for generating edges with associated weights to connect corresponding program nodes to corresponding variable nodes; means for generating edges with associated weights to connect corresponding program nodes to corresponding control nodes; and means for generating an edge between a program node and one of the source node and the sink node.
15. The apparatus of claim 14, wherein the means for generating edges with associated weights to connect corresponding program nodes to corresponding variable nodes further comprises: (i) means for selecting
a program node N that defines a variable node V;(ii) a device for adding a defined edge with a weight VCost from node N to node V to the flow network model;(iii) a device for repeating (i)-(ii) for each program node N that defines a variable node V;(iv) A device for selecting a program node M using a variable node W;(v) means for adding edges with assigned unlimited weights from the node W to the program node M to the flow network model; and(vi) Means for repeating (iv)-(v) for each program node M using variable node W.16.The apparatus of claim 14, wherein the means for generating edges with associated weights to connect the corresponding program node to the corresponding control node further comprises:(i) a device for selecting a program node N with an associated control node C;(ii) means for adding a defined edge from the selected node N to the associated control node C;(iii) a device for associating the weight CCost to the edge;(iv) means for repeating (i)-(iii) for each program node with an associated control node;(v) a device for selecting a program node N, which has a controlled correlation with another program node M;(vi) a device for associating M with the control node C;(vii) means for adding an edge from the associated control node C to the selected program node N;(viii) means for assigning infinite weights to the edges; and(ix) Means for repeating (v)-(viii) for each node N having a controlled correlation with another program node M.17.The apparatus of claim 14, wherein the means for generating the edge between a program node and one of the source node and the sink node includes:(i) a device for selecting a program node without a precursor node in the flow network model;(ii) a device for adding an edge from the source node to the selected program node;(iii) a device for assigning 0 weight to the edge;(iv) a device for repeating (i)-(iii) for each program node without a precursor;(v) a device for selecting a program node without a successor in the 
streaming network;(vi) means for adding an edge from the selected program node to the sink node;(vii) means for assigning 0 weight to the added edge; and(viii) Means for repeating (v)-(vii) for each program node in the flow network model that has no successor nodes18.The apparatus of claim 12, wherein the means for selecting the plurality of preliminary pipeline stages includes:A device for dividing the flow network model into D-1 consecutive cuts, so that each cut is a balanced minimum cost cut.19.The apparatus of claim 18, wherein the means for segmentation is performed using an iterative balanced push-relabel algorithm.20.The apparatus of claim 12, wherein the means for modifying the preliminary pipeline stage includes:The device used to select the preliminary pipeline stage;Means for modifying the selected preliminary pipeline stage to enable proper transmission of active variables to and from the selected preliminary pipeline stage;Means for modifying the selected preliminary pipeline stage to enable appropriate transmission to and from the selected preliminary pipeline stage; andA device for repeating the selection, modification, and modification operations for each preliminary pipeline stage to form the D pipeline stage of a parallel network application.21.A method for parallel network applications, including:Construct a streaming network model from sequential network applications;Split the streaming network model into multiple preliminary pipeline stages; andTransform the preliminary pipeline stage to perform control flow and variable transmission between them to form a D pipeline stage that collectively executes an infinite packet processing stage (PPS) loop of the sequential network application to enable the sequence The parallel execution of the infinite PPS cycle of the web application.22.The method of claim 21, wherein the operation of transforming the preliminary application level comprises:(i) Select the preliminary application level;(ii) The control 
flow graph selected to be cyclically generated for the packet processing level (PPS) corresponding to the selected preliminary application level;(iii) If the instruction is not included in the selected preliminary pipeline stage, remove the instruction from the control flow diagram;(iv) Transform the selected control flow graph according to the variables and control objects transmitted from the previous level;(v) reconstruct the PPS cycle from the transformed control flow graph to form a pipeline stage; andFor each preliminary pipeline stage, repeat (i)-(v) to form the D pipeline stage of the parallel network application.23.The method of claim 22, wherein the operation of transforming the control flow further comprises:At the entrance of the control flow graph, select values for the control objects transmitted from the previous pipeline stage;For each control object received from the previous waterline stage, use the control object to construct a conditional instruction; andReplacing the corresponding condition node in the CFG with the condition instruction.24.The method of claim 22, wherein the operation of transforming the control flow further comprises:Select values for variables transmitted from the previous waterline level; andFor each variable transmitted to the next pipeline stage, after the definition of the variable in the control flow graph, the value of the variable is set as a unique temporary variable.25.The method of claim 22, wherein the operation of transforming the control flow graph further comprises:For each control object to be transmitted to the next pipeline stage, in the control flow graph, the replaceable of the control object is placed in each replaceable successor node of the condition node associated with the control object value.The active set data is transmitted to the next pipeline stage at the exit of the control flow graph.26.A device for parallel network application, including:Device for constructing flow network model from 
a sequential network application; a device for dividing the flow network model into a plurality of preliminary pipeline stages; and a device for transforming the preliminary pipeline stages to perform control flow and variable transmission between them to form D pipeline stages that collectively execute an infinite packet processing stage (PPS) loop of the sequential network application, to enable parallel execution of the infinite PPS loop of the sequential network application.

27. The apparatus of claim 26, wherein the means for transforming the preliminary pipeline stages includes: (i) a device for selecting a preliminary pipeline stage; (ii) means for selecting a control flow graph generated for a packet processing stage (PPS) loop corresponding to the selected preliminary pipeline stage; (iii) means for removing an instruction from the control flow graph if the instruction is not included in the selected preliminary pipeline stage; (iv) a device for transforming the selected control flow graph based on the variables and control objects transmitted from the previous stage; (v) means for reconstructing the PPS loop from the transformed control flow graph to form a pipeline stage; and a device for repeating (i)-(v) for each preliminary pipeline stage to form the D pipeline stages of a parallel network application.

28. The apparatus of claim 26, wherein the means for transforming the control flow graph further comprises: a device for selecting values, at the entry of the control flow graph, for the control objects transmitted from the previous pipeline stage; a device for constructing a conditional instruction using the control object for each control object received from the previous pipeline stage; and a device for replacing the corresponding condition node in the CFG with the conditional instruction.

29. The apparatus of claim 26, wherein the means for transforming the control flow graph further comprises: a device for selecting values for variables 
transmitted from the previous pipeline stage; and for each variable transmitted to the next pipeline stage, a device for setting the value of the variable to a unique temporary variable after the definition of the variable in the control flow graph.

30. The apparatus of claim 28, wherein the means for transforming the control flow graph further comprises: a device for placing, for each control object to be transmitted to the next pipeline stage, the alternative values of the control object in each alternative successor node of the condition node associated with the control object in the control flow graph; and a device for transmitting the live set data to the next pipeline stage at the exit of the control flow graph.

31. A device for parallelizing a network application, including: a processor; a memory coupled to the processor, the memory including a compiler, and the compiler including: means for transforming a sequential network application into D pipeline stages that collectively execute an infinite packet processing stage (PPS) loop of the sequential network application; and an apparatus for executing the D pipeline stages in parallel in a D-stage processor pipeline to provide parallel execution of the infinite PPS loop of the sequential network application.

32. The apparatus of claim 31, wherein the compiler further comprises: a device for constructing a flow network model for the sequential application; a device for selecting a plurality of preliminary pipeline stages from the flow network model; and a device for modifying the preliminary pipeline stages to perform transmission of control flow and variables between them to form the D pipeline stages.

33. The apparatus of claim 32, wherein the compiler causes D-1 consecutive cuts of the flow network model such that each cut is a balanced minimum cost cut forming the preliminary D pipeline stages.

34. A system for parallel network applications, including: a processor; a memory coupled to the processor; and DDR SRAM memory 
coupled to the processor, the memory including a compiler, and the compiler including: means for transforming a sequential network application into D application stages that collectively execute an infinite packet processing stage (PPS) loop of the sequential network application; and an apparatus for executing the D application stages in parallel in a D-stage processor pipeline to provide parallel execution of the infinite PPS loop of the sequential network application.

35. The system of claim 34, wherein the compiler further comprises: a device for constructing a flow network model for the sequential application; a device for selecting a plurality of preliminary pipeline stages from the flow network model; and a device for modifying the preliminary pipeline stages to perform transmission of control flow and variables between them to form the D pipeline stages.

36. The system of claim 35, wherein the compiler further comprises: means for causing D-1 consecutive cuts of the flow network model, so that each cut is a balanced minimum cost cut forming the preliminary D pipeline stages.
Device and Method for Automatically Parallelizing Network Applications Through Pipeline Transformation

Field of the Invention

One or more embodiments of the invention generally relate to the field of network processor applications. More specifically, one or more embodiments of the present invention relate to a method and apparatus for automatically parallelizing network applications through pipeline transformation.

Background of the Invention

The network processor (NP) is specifically designed to perform packet processing. Conventionally, a network processor can be used as a core element of a high-speed communication router to perform such packet processing. To address the unique demands of high-speed network processing, modern NPs generally have a highly parallel multiprocessor architecture. For example, the Internet Exchange (Internet Exchange Architecture, IXA) NP family includes NPs that use micro-engine clusters to process packets. A micro-engine cluster can be composed of multiple micro-engines (programmable processors with packet processing capabilities) running in parallel.

However, in contrast to the highly parallel multiprocessor architecture of network processors, traditional network applications are easier to code using sequential semantics. In general, such network applications are typically coded around a packet processing unit (a packet processing stage (PPS)) that is always running. When a new packet arrives, the PPS performs a series of tasks (such as packet reception, routing table lookup, and enqueuing of the packet). The PPS is therefore usually expressed as an infinite loop (or PPS loop) that processes a different packet per iteration. Thus, there is a large gap between the parallel architecture of network processors and the sequential semantics of network applications. 
One way to bridge this gap is to adopt a parallel programming paradigm for coding traditional network applications. As those skilled in the art understand, parallel programming involves partitioning an application into subtasks, managing synchronization and communication between the different subtasks, and mapping each subtask onto a multiprocessor system. Unfortunately, this paradigm of parallel programming is unconventional and unfamiliar to most programmers.

Brief Description of the Drawings

In the drawings, various embodiments of the present invention are illustrated by way of example, not by way of limitation, and in which:

FIG. 1 is a block diagram of a computer system that implements a parallel compiler to perform pipeline transformation of sequential applications according to an embodiment of the present invention.

FIGS. 2A-2B illustrate a pipeline transformation of a sequential network application according to one embodiment of the present invention.

FIGS. 3A-3C describe live variable transmission between pipeline stages formed from a sequential packet processing stage according to one embodiment of the present invention.

FIG. 4 shows an initial transformation of the sequential PPS loop of FIG. 3A according to an embodiment of the present invention.

FIG. 5 shows a control flow graph (CFG) formed from the PPS loop body of FIG. 3A according to an embodiment of the present invention.

FIG. 6 illustrates a dependence graph formed from the summary graph of the CFG of FIG. 5 according to an embodiment of the present invention.

FIG. 7 shows a flow network model formed from the summary graph of the dependence graph of FIG. 6 according to one embodiment of the present invention.

FIG. 8 is a block diagram showing a network processor configured to provide a D-stage processor pipeline according to one embodiment of the present invention.

FIG. 9 is a flowchart illustrating a method for pipeline transformation of sequential network applications according to an embodiment of the present invention.

FIG. 
10 is a flowchart for building a flow network model according to an embodiment of the present invention.

FIG. 11 is a flowchart showing a method for constructing a flow network according to an embodiment of the present invention.

FIG. 12 is a flowchart showing a further method for constructing a flow network according to an embodiment of the present invention.

FIG. 13 is a flowchart showing a method for selecting balanced minimum cost cuts from a flow network model according to an embodiment of the present invention.

FIG. 14 is a flowchart illustrating a method for performing balanced minimum cost cuts of a flow network model using an iterative balanced push-relabel algorithm according to an embodiment of the present invention.

FIG. 15 is a flowchart illustrating a method for transforming the minimum cuts of the flow network model into D pipeline stages according to an embodiment of the present invention.

FIG. 16 is a flowchart showing a further method for transforming the minimum cuts of a flow network model into D pipeline stages according to an embodiment of the present invention.

Detailed Description

A method and apparatus for automatically parallelizing sequential network applications through pipeline transformation are described. In one embodiment, the method includes configuring a network processor as a D-stage processor pipeline. Once configured, the sequential network application is transformed into D pipeline stages. Once transformed, the D pipeline stages are executed in parallel in the D-stage processor pipeline. In one embodiment, the transformation of the network application is performed by modeling the network application as a flow network model and splitting the flow network model into D pipeline stages, so that D-1 cuts yield the D pipeline stages.

In the following description, certain terms are used to describe the features of the present invention. 
For example, the term "logic" represents hardware and/or software configured to perform one or more functions. For example, "hardware" embodiments include, but are not limited to, integrated circuits, finite state machines, or even combinational logic. An integrated circuit may take the form of a processor such as a microprocessor, an application specific integrated circuit, a digital signal processor, a microcontroller, and so on.

Examples of "software" include executable code in the form of applications, applets, routines, or even instruction strings. The software may be stored in any computer or machine-readable medium, such as programmable electronic circuits, semiconductor memory devices including volatile memory (such as random access memory, etc.) and/or non-volatile memory (such as any type of read-only memory ("ROM") or flash memory), floppy disks, optical disks (such as compact disks or digital video disks ("DVD")), hard disk drives, and so on.

In one embodiment, the present invention may be provided as an article of manufacture that may include a machine or computer-readable medium having instructions stored thereon, which may be used to program a computer (or other electronic device) to perform a process according to an embodiment of the present invention. The computer-readable medium may include, but is not limited to, floppy disks, optical disks, compact disk read-only memory (CD-ROM) and magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, and the like.

System

FIG. 1 is a block diagram of a computer system 100 including a parallel compiler 200 according to an embodiment of the present invention. As shown, the computer system 100 includes a CPU 110, a memory 140, and a graphics controller 130 coupled to a memory controller hub (MCH) 120. 
As described herein, MCH 120 may be referred to as a Northbridge, and in one embodiment, MCH 120 may be referred to as a memory controller. In addition, the computer system 100 includes an I/O (input/output) controller hub (ICH) 160. As described herein, ICH 160 may be referred to as a Southbridge or I/O controller. The Southbridge (or ICH 160) is coupled to local I/O 150 and a hard disk drive (HDD) 190.

In the illustrated embodiment, the ICH 160 is coupled to the I/O bus 172, and the I/O bus 172 is coupled to multiple I/O devices, such as peripheral component interconnect (PCI) devices 170, including PCI-Express, PCI-X, third-generation I/O (3GIO), or other similar interconnect protocols. Collectively, MCH 120 and ICH 160 are called the chipset 180. As described herein, the term "chipset" is used to describe, as a whole, the various devices coupled to the CPU 110 to perform the desired system functionality in a manner well known to those skilled in the art. In one embodiment, the main memory 140 is volatile memory, including but not limited to random access memory (RAM), synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Rambus DRAM (RDRAM), direct RDRAM (DRDRAM), etc.

In contrast to conventional computer systems, the computer system 100 includes a parallel compiler 200 for transforming sequential network applications into D pipeline stages (parallel network applications). The compiler 200 can thereby bridge the gap between the parallel architecture of network processors and the sequential programming model used to code conventional network applications. One way to bridge this gap would be to use the parallel programming paradigm to code network applications. Unfortunately, in general, this parallel programming paradigm is unconventional and unfamiliar to network programmers. 
According to one embodiment of the present invention, a parallel compiler 200 is provided to automatically transform sequential network applications into parallel network applications, as shown in FIGS. 2A and 2B.

Referring to FIG. 2A, a sequential packet processing stage (PPS) 280 of a sequential network application is shown. As described in FIG. 2B, the PPS 280 can be transformed into a three-stage parallel network application pipeline 300 for execution in a D-stage processor pipeline, such as the network processor 500 of FIG. 8. In one embodiment, the sequential PPS of a network application is transformed into a D-stage parallel network application through pipeline transformation, as shown for example with reference to FIGS. 3A-3C.

Typically, the PPS 290 is divided into D PPS pipeline stages with D = 2 (FIGS. 3B and 3C), each of which includes part of the functionality of the original PPS 290. In one embodiment, the selection of the D pipeline stages is performed by modeling PPS 290 as a flow network model. A graph G = (V, E) is a flow network if it has two distinguished vertices, a source s and a sink t, and if each edge (v, w) ∈ E has a positive real-valued capacity c(v, w). A cut (X, X̄) of a flow network N = (V, E) is a bipartition of V into X and X̄ such that s ∈ X and t ∈ X̄. An edge of N whose start node is in X and whose end node is in X̄ is called a forward edge. The capacity of the cut (X, X̄) is the sum of the capacities of the forward edges from X to X̄ only.

As described herein, the term "cut" also refers to a set of control flow points that divide the PPS loop body into two pieces. In general, one or more cuts performed on the PPS loop body form multiple PPS pipeline stages. In one embodiment, if the PPS loop is to be divided into D stages, then D-1 cuts are selected from the PPS loop 290. In one embodiment, the cuts are non-overlapping. 
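The flow-network definitions above lend themselves to a minimal sketch. Assuming a plain dictionary representation (the edge names and capacities here are illustrative, not taken from the patent), the capacity of a cut counts forward edges only:

```python
def cut_capacity(edges, x):
    """Capacity of the cut (X, X-bar): sum only the forward edges,
    i.e. edges whose start node is in `x` and whose end node is not."""
    return sum(c for (u, v), c in edges.items() if u in x and v not in x)

# A small flow network with source "s" and sink "t".
edges = {
    ("s", "a"): 3, ("s", "b"): 2,
    ("a", "t"): 4, ("b", "t"): 3,
    ("a", "b"): 1,
}

print(cut_capacity(edges, {"s"}))       # 5: edges s->a and s->b
print(cut_capacity(edges, {"s", "a"}))  # 7: s->b, a->t and a->b
```

Reverse edges into X contribute nothing, matching the definition that only forward edges count toward the cut.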
In one embodiment, the transformation of a network application into a D-stage parallel network application begins with an initial transformation of the network application. In one embodiment, the network application is transformed into static single assignment (SSA) form. Typically, the sequential PPS 290 (FIG. 3A) is transformed into an SSA code sequence 400 as shown in FIG. 4. Once transformed, a control flow graph for the loop body of the PPS loop 290 of FIG. 3A is formed from the SSA code sequence of FIG. 4. In one embodiment, the PPS loop body of FIG. 3A is modeled as a control flow graph (CFG), as shown in FIG. 5. As described herein, a CFG is a graphical representation of the control flow of a program, where each vertex represents a basic block, and each edge shows the potential control flow between basic blocks. A CFG has a unique source node (entry).

Typically, once all cuts are applied, each node in the control flow graph is required to be in exactly one pipeline stage. In one embodiment, the strongly connected component (SCC) nodes of the CFG 420 of FIG. 5 are identified. An SCC is a subset S of the nodes of a directed graph such that any node in S is reachable from any other node in S, and S is not a subset of any larger such set. Once identified, a summary graph of the CFG 420 is formed. In one embodiment, the identification of the SCC nodes in the summary graph is used to eliminate control dependences from later stages to earlier stages. Therefore, in one embodiment, as described herein, the pipeline transformation should not split any SCC nodes of the CFG 420, which are potential loops.

As shown in FIG. 6, a dependence graph is formed from the summary graph of the CFG 420 of FIG. 5. In one embodiment, the dependence graph (DG) 460 is used to eliminate data dependences from later stages to earlier stages. In one embodiment, in addition to acyclic data and control dependences, the DG 460 also shows PPS loop-carried flow dependences. 
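Identifying the strongly connected components that must never be split can be sketched with Kosaraju's two-pass algorithm; the CFG below is an illustrative stand-in, not the CFG 420 of FIG. 5:

```python
def sccs(graph):
    """Return the SCCs of `graph` (an adjacency dict) as a list of sets,
    using Kosaraju's algorithm: DFS finish order, then DFS on the
    reversed graph in decreasing finish order."""
    order, seen = [], set()

    def dfs(g, v, out):
        seen.add(v)
        for w in g.get(v, ()):
            if w not in seen:
                dfs(g, w, out)
        out.append(v)

    for v in graph:
        if v not in seen:
            dfs(graph, v, order)

    # Reverse the graph, then peel off components in reverse finish order.
    rev = {}
    for u, ws in graph.items():
        for w in ws:
            rev.setdefault(w, []).append(u)

    seen.clear()
    comps = []
    for v in reversed(order):
        if v not in seen:
            comp = []
            dfs(rev, v, comp)
            comps.append(set(comp))
    return comps

# A CFG with a two-node loop {b, c}: a -> b -> c -> b, and c -> d.
cfg = {"a": ["b"], "b": ["c"], "c": ["b", "d"], "d": []}
print(sccs(cfg))
```

Here the loop {b, c} comes out as a single component, so a cut that respects SCC boundaries can never separate b from c, which is exactly the constraint the summary graph enforces.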
Therefore, in general, the source and sink of a PPS loop-carried flow dependence are in the same SCC node of the DG 460. A summary graph is formed from the DG 460, which also identifies the SCC nodes therein. Restricting attention to the SCC nodes of the dependence graph 460 ensures that the pipeline transformation is limited to cuts that keep each SCC entirely within a single pipeline stage.

As shown with reference to FIG. 7, in one embodiment, the flow network model 480 is formed from the summary graph of the DG 460 of FIG. 6. The flow network model includes a unique source node, a unique sink node, and multiple program nodes containing instructions. In addition to the unique source node, the unique sink node, and the program nodes, a variable node or a control node is introduced into the flow network for each object that can be included in a live set. After the SSA transformation (FIG. 4), each variable has only one definition point, and therefore only one definition edge. The same is true for each control node.

Therefore, the weight (capacity) associated with a definition edge (VCost for variables and CCost for control objects) correctly models the cost of transmitting the associated variable or control object if that edge is cut. In addition, the weight of each edge flowing from the source and each edge flowing into the sink is set to 0, because cutting such an edge does not incur any live-set data transmission. All other edges have infinite weight, so they are never cut. From the flow network model 480 of FIG. 7, a cut that produces a balanced code size can be selected.

In one embodiment, the selected cuts are generally required to satisfy one or more of the following criteria. The selected cut eliminates any data or control dependences from later stages to earlier stages. 
In addition, one embodiment requires that the data that is live at the boundary between adjacent stages be minimized. As described herein, data live at the boundary between adjacent stages is called "live-set data". In a further embodiment, the selected cuts are required to provide a balanced code size between the pipeline stages. In one embodiment, a cut that provides a balanced minimum cost cut is required. In one embodiment, a heuristic iterative balanced push-relabel algorithm is used to select the balanced minimum cost cut in the flow network model of FIG. 7.

FIG. 8 is a block diagram of a network processor 500 configured to provide a D-stage processor pipeline according to one embodiment of the present invention. Typically, two or more processors are organized into a pipeline, where each stage includes a portion of the original PPS loop. Therefore, each processor resource (e.g., cache) can be more fully utilized. By pipelining the packet processing, the constrained performance budget of the packet processing can be distributed across all pipeline stages. As a result, the throughput of the network application is improved. Eliminating dependences from later stages to earlier stages avoids complex synchronization between the iterations of the original PPS loop. By choosing a balanced minimum cost cut, the communication between stages is reduced. The processing methods for implementing the embodiments of the present invention will now be described.

Operation

FIG. 9 is a flowchart illustrating a method 600 for pipeline transformation of sequential applications (e.g., sequential network applications) according to an embodiment of the present invention. At processing block 602, a flow network model is constructed for the sequential network application. Once constructed, at processing block 660, the flow network model is divided into multiple (D) preliminary pipeline stages. 
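As a hedged end-to-end sketch of method 600, the three processing blocks can be strung together; every helper here and the list-of-instructions "model" are stand-ins for the real flow network machinery, not the patent's implementation:

```python
def pipeline_transform(pps_loop, d):
    """Sketch of method 600: build a model (block 602), split it into d
    preliminary stages (block 660), then add the inter-stage transfers
    of control flow and variables (block 700)."""
    model = list(pps_loop)  # block 602: trivial stand-in model
    # Block 660: d near-equal slices approximating balanced cuts.
    n = len(model)
    bounds = [round(i * n / d) for i in range(d + 1)]
    stages = [model[bounds[i]:bounds[i + 1]] for i in range(d)]
    # Block 700: bracket each stage with live-set receive/send markers.
    return [["recv_live_set"] + s + ["send_live_set"] for s in stages]

stages = pipeline_transform(["i1", "i2", "i3", "i4"], 2)
print(stages[0])  # ['recv_live_set', 'i1', 'i2', 'send_live_set']
```

Each resulting stage would then run on one processor of the D-stage pipeline, with the receive/send markers standing in for the live-set transmission at stage boundaries.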
In one embodiment, the flow network model is divided into D pipeline stages for execution within a D-stage processor pipeline such as the NP 500 of FIG. 8. In one embodiment, the flow network model may be formed as shown in the flow network model 480 of FIG. 7. At processing block 700, the D preliminary pipeline stages are modified to perform the transfer of control flow and variables between them to form the D pipeline stages of a parallel network application such as the application 300 of FIG. 2B.

FIG. 10 is a flowchart illustrating a method 604 for constructing the flow network model of processing block 602 of FIG. 9 according to one embodiment of the present invention. At processing block 606, the sequential application is transformed into static single assignment (SSA) form as described in FIG. 4. At processing block 608, a control flow graph (CFG), shown with reference to FIG. 5 for example, is created from the loop body of the application. At processing block 612, a dependence graph (DG), shown with reference to FIG. 6 for example, is established based on the summary graph of the CFG formed at processing block 610 and the identified strongly connected components (SCCs) of the CFG. At processing block 616, a flow network model is constructed based on the summary graph of the DG formed at processing block 614 and the identified SCC nodes of the DG. In one embodiment, the flow network model as shown with reference to FIG. 7 is generated from the sequential application 290 of FIG. 3A.

FIG. 11 is a flowchart illustrating a method 618 for constructing the flow network model of processing block 616 of FIG. 10 according to one embodiment of the present invention. At processing block 620, the flow network model is assigned a unique source node and a unique sink node. Once added, at processing block 622, for each SCC node identified in the summary graph of the DG, a program node is added to the flow network model. 
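The edge and weight bookkeeping of this model construction (method 618 together with the VCost scheme described earlier) can be sketched as follows; the node names, the VCost value, and the helper itself are all hypothetical:

```python
import math

def build_flow_model(program_nodes, var_defs, var_uses, vcost):
    """Sketch the flow network model: source/sink edges carry weight 0,
    each variable's single definition edge carries its VCost, and use
    edges get infinite weight so they can never be cut."""
    INF = math.inf
    edges = {("source", program_nodes[0]): 0,
             (program_nodes[-1], "sink"): 0}
    for var, def_node in var_defs.items():
        edges[(def_node, var)] = vcost[var]  # cutting this edge means
        for use_node in var_uses[var]:       # transmitting `var`
            edges[(var, use_node)] = INF
    return edges

model = build_flow_model(["n1", "n2"], {"x": "n1"}, {"x": ["n2"]}, {"x": 4})
print(model[("n1", "x")])  # 4, the transmission cost VCost of x
```

In this toy model, any cut separating n1 from n2 must sever the definition edge n1 to x at cost 4, so the cut weight reflects the cost of transmitting x between stages exactly once, no matter how many uses x has.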
Once the program nodes are added, at processing block 624, for each application variable defined and used by multiple program nodes, a variable node is added to the flow network. At processing block 626, a control node is added to the flow network model for each SCC node identified as the source of a control dependence in the summary graph of the DG. At processing block 628, edges are generated to connect the corresponding program nodes to the corresponding variable nodes. At processing block 630, edges are generated to connect the corresponding program nodes to the corresponding control nodes. In one embodiment, each generated edge is assigned a weight. At processing block 632, edges are generated between the program nodes and one of the source node and the sink node. In one embodiment, the flow network model is formed according to the flowchart illustrating method 636 as described in FIG.

In one embodiment, once the flow network model is formed, the weights (capacities) associated with the definition edges (VCost for variables and CCost for control objects) correctly model the cost of transmitting the associated variable or control object if the corresponding edge in the flow network model is cut. Similarly, in one embodiment, once the flow network model is formed, the flow network model is divided according to D, the degree of pipelining. Thus, the transform operation applies D-1 consecutive cuts to, for example, the packet processing stage (PPS) loop of a network application, so that each cut is a balanced minimum cost cut.

FIG. 13 is a flowchart describing a method for performing the division of the flow network model of processing block 660 of FIG. 9 according to an embodiment of the present invention. At processing block 662, the weight (W(N)) of each program node is set to the number of instructions included in the corresponding node. At processing block 664, each non-program node in the flow network model is assigned a weight of zero. 
At processing block 665, the sum of the weights (W(N)) of the program nodes in the flow network model is stored in a value (T). At processing block 668, the variable i is set to the value 1 and the variable d is set to the value D (the number of pipeline stages). At processing block 670, it is determined whether the variable i is less than the variable d, the degree of pipelining. If so, at processing block 672, a balanced minimum cost cut algorithm is used to select a cut in the flow network model, such that:

(i - e) · T / d ≤ W(N) ≤ (i + e) · T / d    (1)

In one embodiment, d is the degree of pipelining, and the predefined constant e, ranging from 0 to 1, is the balance variance. The balance variance reflects the trade-off between the balance and the weight of the cut. If the balance variance is close to 0, the algorithm searches for more balanced cuts rather than smaller-weight cuts. Conversely, if the balance variance is close to 1, the algorithm searches for smaller-weight cuts rather than more balanced cuts, and the minimization of weight is considered more important. In one embodiment, a suitable value of the balance variance can be readily determined during operation of the present invention. At processing block 698, the variables i, d, and T are updated, and the operation of processing block 672 is repeated to enable the selection of the balanced minimum cost cuts.

In one embodiment, a heuristic iterative balanced push-relabel algorithm is used to select a balanced minimum cost cut in the flow network model. In one embodiment, the algorithm is adapted from the push-relabel algorithm described by A. V. Goldberg and R. E. Tarjan in "A New Approach to the Maximum-Flow Problem" (Proc. 18th ACM STOC, 1986, pp. 136-146). Accordingly, FIG. 14 is a flowchart illustrating a method 674 of selecting the minimum cost cut of processing block 672, adapted from H. Yang and D. F. Wong, "Efficient Network Flow Based Min-Cut Balanced Partitioning" (Proc. IEEE Int'l Conf. Computer-Aided Design, 1994, pp. 50-55).

FIG. 15 is a flowchart of a method 702 for transforming the preliminary pipeline stages into the D pipeline stages of a parallel application according to an embodiment of the present invention. At processing block 704, a preliminary pipeline stage is selected. Once selected, at processing block 706, the control flow graph of the PPS loop corresponding to the selected stage is selected. At processing block 708, instructions not included in the selected preliminary stage are removed from the selected control flow graph. At processing block 710, the control flow graph is transformed according to the variables and control objects transmitted from the previous stage to the selected preliminary stage. At processing block 712, the PPS loop body is reconstructed from the transformed control flow graph to form a pipeline stage.

Accordingly, by repeating processing blocks 704-712 for each of the D preliminary pipeline stages, the sequential network application is transformed into the D pipeline stages of a parallel network application. In an alternative embodiment, the preliminary pipeline stage transformation is performed according to the method 720 shown in the flowchart of FIG. 16. In one embodiment, the control dependences are established from the summarized CFG. However, a conditional in the summarized CFG may be a loop including multiple basic blocks. At processing block 730, different values are assigned to the corresponding control objects in each successor block of the loop. 
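Returning briefly to the cut selection of processing blocks 672-698 and method 674: as a simplified, hedged stand-in for the cited push-relabel machinery, a BFS-augmenting (Edmonds-Karp) max-flow locates a minimum cut on a small capacitated graph, and a helper checks the balance band of equation (1). The graph and all numbers are illustrative:

```python
from collections import deque

def max_flow_min_cut(cap, s, t):
    """Return (max flow value, source side of a minimum cut)."""
    res, nodes = {}, set()
    for (u, v), c in cap.items():          # residual capacities
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
        nodes |= {u, v}
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])  # BFS for an augmenting path
        while q and t not in parent:
            u = q.popleft()
            for v in nodes:
                if v not in parent and res.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t                    # walk back to the source
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)    # bottleneck capacity
        for u, v in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        flow += aug
    side, q = {s}, deque([s])              # nodes reachable in residual
    while q:
        u = q.popleft()
        for v in nodes:
            if v not in side and res.get((u, v), 0) > 0:
                side.add(v)
                q.append(v)
    return flow, side

def is_balanced(stage_weight, i, total, d, e):
    """Balance criterion (1): (i-e)*T/d <= W(N) <= (i+e)*T/d."""
    return (i - e) * total / d <= stage_weight <= (i + e) * total / d

cap = {("s", "a"): 3, ("s", "b"): 2, ("a", "t"): 4,
       ("b", "t"): 3, ("a", "b"): 1}
f, side = max_flow_min_cut(cap, "s", "t")
print(f, sorted(side))               # 5 ['s']: both source edges saturate
print(is_balanced(2, 1, 8, 4, 0.5))  # True: 1 <= 2 <= 3
```

The iterative balanced heuristic of method 674 goes further than this sketch: it repeatedly collapses nodes into the source or sink until the cut also satisfies the balance band, rather than accepting the first minimum cut found.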
Furthermore, at processing block 726, the reconstruction of the conditional node should replace the loop by branching to all of its successor blocks.

In an alternative embodiment, an efficient implementation of this heuristic does not require the push-relabel algorithm to be executed from scratch in each iteration. Typically, the push-relabel algorithm can be implemented incrementally as follows: (a) use the general push-relabel algorithm to find the initial minimum cut of the flow network, and (b) after a node is collapsed into the source or sink, use the push-relabel algorithm to locate the updated minimum cut starting from the following initial state: (i) set the preflow of all outgoing edges of the source to their capacity and update the excesses accordingly, leaving the preflow of the other edges unchanged; (ii) set the label of the source to the new number of nodes; and (iii) if the node was collapsed into the source, leave the labels of the other nodes unchanged; otherwise, set the label of the collapsed node to 0.

Alternative Embodiments

Several aspects of one implementation of a parallel compiler providing operations to transform sequential network applications into D pipeline stages (parallel network applications) have been described. However, various implementations of the parallel compiler may provide numerous features complementing, supplementing, and/or replacing the features described above. In different embodiments, the features may be implemented as part of multiple processors or as part of a network processor. In addition, for explanatory purposes, the foregoing description uses specific terminology to provide a thorough understanding of the embodiments of the invention. 
However, those skilled in the art will understand that the specific details are not required to practice the embodiments of the present invention.

Furthermore, although the embodiments described herein are directed to using flow network analysis to select the D pipeline stages, those skilled in the art will recognize that other graph-theoretic heuristics can be used to perform the D pipeline stage selection. In fact, as defined by the appended claims, heuristic methods for partitioning network application models, such as data flow analysis or other similar graph-theoretic heuristics, fall within the embodiments for selecting the D pipeline stages. The above embodiments were chosen and described in order to best explain the principles of the embodiments of the present invention and their practical application, thereby enabling others skilled in the art to best utilize the invention and its various embodiments, with various modifications, as suited to the particular use contemplated.

It should be understood that although many features and advantages of various embodiments of the present invention, together with details of the structure and function of various embodiments, have been set forth in the foregoing description, this disclosure is illustrative only. In some cases, certain sub-assemblies are described in detail with reference to only one such embodiment. Nevertheless, it is recognized and intended that such sub-assemblies may be used in other embodiments of the invention. 
Within the principles of the embodiments of the present invention, changes may be made in detail, particularly in matters of structure and arrangement of components, to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed. Exemplary embodiments and best modes have been disclosed, and modifications and variations may be made to the disclosed embodiments while remaining within the scope of the embodiments of the present invention as defined by the appended claims.
An apparatus and method for providing efficient floor planning, power, and performance tradeoffs for memory accesses are described. A dual read port, single write port memory bit cell uses two asymmetrical read access circuits to convey stored data on two read bit lines. The two read bit lines are pre-charged to different voltage reference levels. The layout of the memory bit cell places the two read bit lines on the edge opposite the single write bit line. The layout uses a dummy gate placed over both p-type diffusion and n-type diffusion between the edges. The layout has the same number of p-type transistors as n-type transistors despite using asymmetrical read access circuits. The layout also has a contacted gate pitch count that is one more than the number of p-type transistors.
WHAT IS CLAIMED IS

1. A circuit comprising:
an array of memory bit cells for storing data, wherein a given memory bit cell of the array comprises:
a data storage circuit; and
a first asymmetrical read access circuit comprising only p-type transistors;
wherein in response to receiving an indication of a first read operation, the first asymmetrical read access circuit is configured to:
access data stored by the data storage circuit; and
convey the data to the first read bit line.

2. The circuit as recited in claim 1, wherein the circuit further comprises first pre-charge circuitry configured to pre-charge the first read bit line to a ground reference level.

3. The circuit as recited in claim 1, wherein the given memory bit cell further comprises a second asymmetrical read access circuit comprising only n-type transistors.

4. The circuit as recited in claim 3, wherein the circuit further comprises circuitry configured to pre-charge the second read bit line to a power supply reference level.

5. The circuit as recited in claim 3, wherein in response to receiving, concurrently with the first read operation, a second read operation targeting a same row of the array targeted by the first read operation, the given memory bit cell, via the second asymmetrical read access circuit, is configured to:
access the data stored by the data storage circuit; and
convey the data to the second read bit line.

6. The circuit as recited in claim 1, wherein the first asymmetrical read access circuit comprises:
a first p-type transistor configured to receive, on its gate terminal, a complementary value of the data stored by the data storage circuit; and
a second p-type transistor in series with the first p-type transistor configured to:
receive, on its gate terminal, a read word line as the indication of the first read operation; and
receive, on its drain terminal, the first read bit line.

7.
The circuit as recited in claim 1, wherein the first pre-charge circuitry comprises only n-type transistors.

8. A method comprising:
storing data in an array of memory bit cells; and
wherein in response to receiving an indication of a first read operation, performing, by a first asymmetrical read access circuit of a given memory bit cell of the array, wherein the first asymmetrical read access circuit comprises only p-type transistors:
accessing data stored by a data storage circuit of the given memory bit cell; and
conveying the data to the first read bit line.

9. The method as recited in claim 8, further comprising pre-charging the first read bit line to a ground reference level.

10. The method as recited in claim 8, wherein the given memory bit cell further comprises a second asymmetrical read access circuit comprising only n-type transistors.

11. The method as recited in claim 10, further comprising pre-charging the second read bit line to a power supply reference level.

12. The method as recited in claim 10, wherein in response to receiving, concurrently with the first read operation, a second read operation targeting a same row of the array targeted by the first read operation, the method further comprises performing, by the second asymmetrical read access circuit:
accessing the data stored by the data storage circuit; and
conveying the data to the second read bit line.

13. The method as recited in claim 8, further comprising:
receiving, by a gate terminal of a first p-type transistor of the first asymmetrical read access circuit, a complementary value of the data stored by the data storage circuit;
receiving, on a gate terminal of a second p-type transistor in series with the first p-type transistor, a read word line as the indication of the first read operation; and
receiving, by a drain terminal of the second p-type transistor, the first read bit line.

14.
The method as recited in claim 8, wherein the first pre-charge circuitry comprises only n-type transistors.

15. A standard cell layout comprising:
a plurality of memory bit cells including one or more memory bit cells that comprise:
a first metal gate placed over only p-type diffusion at a first edge of the standard cell layout configured to receive a first read word line;
a second metal gate placed over only n-type diffusion at the first edge of the standard cell configured to receive a second read word line different from the first read word line; and
a dummy gate placed over both p-type diffusion and n-type diffusion between the first edge and a second edge of the standard cell layout.

16. The standard cell layout as recited in claim 15, wherein one or more memory bit cells further comprise:
a number of p-type transistors equal to a number of n-type transistors; and
wherein a contacted gate pitch of the standard cell layout is one more than the number of p-type transistors.

17. The standard cell layout as recited in claim 16, wherein one or more memory bit cells further comprise:
a first read bit line placed, at the first edge, as a drain region over only the p-type diffusion; and
a second read bit line different from the first read bit line placed, at the first edge, as a drain region over only the n-type diffusion.

18. The standard cell layout as recited in claim 17, wherein one or more memory bit cells further comprise a write bit line placed, at the second edge, as drain regions over both the p-type diffusion and the n-type diffusion.

19. The standard cell layout as recited in claim 18, wherein one or more memory bit cells further comprise:
a third metal gate placed over only p-type diffusion at the second edge configured to receive a write word line; and
a fourth metal gate placed over only n-type diffusion at the second edge configured to receive a complementary value of the write word line.

20.
The standard cell layout as recited in claim 17, further comprising a first memory bit cell of the plurality of memory bit cells placed with its first edge abutted to the first edge of a second memory bit cell of the plurality of memory bit cells, the second memory bit cell placed in a mirrored manner relative to the first memory bit cell, allowing sharing of the first read bit line and the second read bit line by the first memory bit cell and the second memory bit cell.
DUAL READ PORT LATCH ARRAY BIT CELL

BACKGROUND

Description of the Relevant Art

[0001] Generally speaking, a variety of semiconductor chips include at least one processing unit coupled to a memory. The processing unit processes instructions by fetching instructions and data, decoding instructions, executing instructions, and storing results. The processing unit sends memory access requests to the memory for fetching instructions, fetching data, and storing results of computations. In some designs, the processing unit and the memory are on a same die such as a system-on-a-chip (SOC), whereas, in other designs, the processing unit and the memory are on different dies within a same package such as a multi-chip-module (MCM) system-in-a-package (SIP). Static random access memory (SRAM) is commonly used for the memory. The SRAM includes an array of many memory bit cells and surrounding circuitry used for accessing values stored in the array.

[0002] The die or the package may include other units or components in addition to the processing unit and the memory. The dimensions of the individual components have limits in order to place all of the components on a same die or a same package. For several types of memory, such as the SRAM, the dimensions may exceed limits for efficient placement. The dimensions of the memory, such as the height and/or the width, may be large enough that they interfere with the placement of other components. In some cases, the other components may not even fit within the same die or the same package. Consequently, the chip may be rendered inoperable without significant redesign.

[0003] In view of the above, efficient methods and apparatuses for providing efficient floor planning, power, and performance tradeoffs of memory accesses are desired.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] FIG. 1 is a generalized diagram of a memory bit cell that includes asymmetrical read access circuits and dual read ports.

[0005] FIG.
2 is a generalized diagram of one implementation of semiconductor layout of a memory bit cell that includes asymmetrical read access circuits and dual read ports.

[0006] FIG. 3 is a generalized diagram of one implementation of adjacent memory bit cells that include asymmetrical read access circuits and dual read ports.

[0007] FIG. 4 is a generalized diagram of one implementation of semiconductor layout of adjacent memory bit cells that include asymmetrical read access circuits and dual read ports.

[0008] FIG. 5 is a generalized diagram of one implementation of pre-charging circuitry of a memory that utilizes memory bit cells with asymmetrical read access circuits and dual read ports.
[0009] FIG. 6 is a block diagram of an implementation of a memory bank that utilizes memory bit cells with asymmetrical read access circuits and dual read ports.

[0010] FIG. 7 is a generalized diagram of one implementation of a method for efficiently accessing data stored in a memory bit cell that includes asymmetrical read access circuits and dual read ports.

[0011] FIG. 8 is a generalized diagram of one implementation of a method for efficiently creating semiconductor layout of a memory bit cell that includes asymmetrical read access circuits and dual read ports.

[0012] While the invention is susceptible to various modifications and alternative forms, specific implementations are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents and alternatives falling within the scope of the present invention as defined by the appended claims.

DETAILED DESCRIPTION

[0013] In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, one having ordinary skill in the art should recognize that the invention might be practiced without these specific details. In some instances, well-known circuits, structures, and techniques have not been shown in detail to avoid obscuring the present invention. Further, it will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements are exaggerated relative to other elements.

[0014] Apparatuses and methods for providing efficient floor planning, power, and performance tradeoffs of memory accesses are contemplated.
A memory array (or array) utilizes multiple memory bit cells arranged as multiple rows and multiple columns. At least a portion of these memory bit cells utilize asymmetrical read access circuits and dual read ports. As used herein, an "asymmetrical circuit" refers to a circuit that includes a number of p-type transistors different from a number of n-type transistors. The memory bit cell utilizes at least a first asymmetrical read access circuit and a second asymmetrical read access circuit to provide requested data on corresponding read bit lines. In some implementations, a first asymmetrical read access circuit of the memory bit cell conveys requested data on a first read bit line. This first read bit line was previously pre-charged to the ground reference level. This first asymmetrical read access circuit includes more p-type transistors than n-type transistors. In some implementations, the first asymmetrical read access circuit includes only p-type transistors.
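As a concrete illustration of the definition above, the following minimal Python sketch classifies a circuit as asymmetrical by comparing its p-type and n-type transistor counts. The function name and the transistor tuples are illustrative only; they are not taken from the patent figures.

```python
# Illustrative sketch: classify a circuit as "asymmetrical" per the
# definition above -- a circuit whose p-type and n-type transistor
# counts differ.

def is_asymmetrical(transistors):
    """Return True when the p-type count differs from the n-type count."""
    p_count = sum(1 for t in transistors if t == "p")
    n_count = sum(1 for t in transistors if t == "n")
    return p_count != n_count

# A read access circuit of two series p-type transistors and no n-type
# transistors is asymmetrical; a conventional tristate inverter with two
# of each type is not.
read_circuit = ("p", "p")
tristate_inverter = ("p", "p", "n", "n")

print(is_asymmetrical(read_circuit))      # True
print(is_asymmetrical(tristate_inverter)) # False
```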
[0015] A second asymmetrical read access circuit of the memory bit cell conveys requested data on a second read bit line that was previously pre-charged to the power supply reference level. This second asymmetrical read access circuit includes more n-type transistors than p-type transistors. By not using symmetrical read access circuits that include a same number of p-type transistors as a number of n-type transistors, the memory bit cells reduce the on-die area used for placement of the memory bit cells in the floorplan. Additionally, each read bit line is connected to a diffusion region of a single transistor drain connection per pair of bit cells by sharing that diffusion region along the bit cell edge (either a p-type transistor or an n-type transistor). Therefore, the capacitive loading on the corresponding read bit line is reduced.

[0016] The semiconductor layout (or layout) of the memory bit cell that includes asymmetrical read access circuits uses drain regions on the outermost edges of the layout for placement of the two read bit lines. The placement of these drain regions allows sharing of nodes between adjacent memory bit cells. Further, the layout uses a dummy gate, which is a structure that includes an insulating layer, rather than an active region, underneath the metal gate. This insulating layer provides electrical isolation between source/drain regions on either side of the metal gate of the dummy gate structure. The placement of metal layers and other structures in the layout provides a number of contacted gate pitches (CPP) of the layout that is one more than the number of p-type transistors in the layout. A further description of both the circuits and the layout of the adjacent memory bit cells is provided in the discussion below.

[0017] Turning to FIG. 1, a generalized block diagram of one implementation of a memory bit cell 100 that includes asymmetrical read access circuits and dual read ports is shown.
In the implementation shown, data storage by a latching element is provided by the memory bit cell 100. For example, the devices 102-112 provide data storage using a back-to-back configuration of an inverter and a tristate inverter. The inverter is implemented with devices 102-104. The tristate inverter is implemented with devices 106-112. The devices 140, 142, 150 and 152 provide two read access circuits for the memory bit cell 100 such that memory bit cell 100 is a dual read port bit cell. In various implementations, the devices of memory bit cell 100 are transistors. In some implementations, the transistors are planar metal oxide semiconductor (MOS) field effect transistors (FETs). In other implementations, the devices (or transistors) in the memory bit cell 100 are non-planar transistors. Non-planar transistors are a recent development in semiconductor processing for reducing short channel effects. Tri-gate transistors, Fin field effect transistors (FETs) and gate all around (GAA) transistors are examples of non-planar transistors.

[0018] The memory bit cell 100 is one implementation of a static RAM (SRAM). In other implementations, another one of various types of RAM cells is used. This memory bit cell may also be referred to as the "SRAM bit cell." In various implementations,
the memory bit cell 100 is copied many times and arranged in an array of rows and columns for a memory. The array includes external circuitry (not shown) such as one or more of row decoders, column decoders, a sense amplifier, pre-charge circuitry, and sequential elements such as latches or flip-flop circuits for storing read access data and write access data.

[0019] As used herein, a Boolean logic high level is also referred to as a logic high level. Similarly, a Boolean logic low level is also referred to as a logic low level. In various implementations, the logic high level is equal to a power supply reference level and the logic low level is equal to a ground reference level. As used herein, a circuit node or line is "asserted" when the node or line stores a voltage level that enables a transistor that receives the voltage level. For example, an n-type transistor is enabled when the n-type transistor receives a positive non-zero voltage level on its gate terminal that is at least a threshold voltage above a voltage level on its source terminal. As used herein, the circuit node or line is "negated" when the node or line stores a voltage level that disables a transistor that receives the voltage level. An n-type transistor is disabled when the n-type transistor receives a voltage level on its gate terminal that is a threshold voltage below a voltage level on its source terminal. Similarly, a p-type transistor is enabled when the p-type transistor receives a voltage level on its gate terminal that is at least a threshold voltage below a voltage level on its source terminal. The p-type transistor is disabled when the p-type transistor receives a voltage level on its gate terminal that is at least a threshold voltage above a voltage level on its source terminal.

[0020] When the data storage node D 130 of the memory bit cell 100 has a logic high level, the n-type transistor 104 is enabled and the p-type transistor 102 is disabled.
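The enable and disable rules for n-type and p-type transistors described in paragraph [0019] above can be modeled at a logic level as follows. This is an illustrative sketch only: the threshold voltage `VT` and the 1.0 V supply are assumed example values, not figures from this document, and second-order effects (body bias, sub-threshold conduction) are ignored.

```python
# Minimal model of the enable rules in the text: an n-type transistor is
# enabled when its gate is at least a threshold voltage above its source;
# a p-type transistor is enabled when its gate is at least a threshold
# voltage below its source.

VT = 0.4  # threshold voltage in volts -- an assumed example value

def nfet_enabled(v_gate, v_source):
    """N-type: conducts when gate >= source + VT."""
    return v_gate - v_source >= VT

def pfet_enabled(v_gate, v_source):
    """P-type: conducts when gate <= source - VT."""
    return v_source - v_gate >= VT

VDD, VSS = 1.0, 0.0
print(nfet_enabled(VDD, VSS))  # gate high, source grounded -> True
print(pfet_enabled(VDD, VDD))  # gate at supply, source at supply -> False
print(pfet_enabled(VSS, VDD))  # gate grounded, source at supply -> True
```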
The enabled n-type transistor 104 discharges the node DX 132, which enables the p-type transistor 110 and disables the n-type transistor 108. When the data storage node D 130 of the memory bit cell 100 has a logic low level, the n-type transistor 104 is disabled and the p-type transistor 102 is enabled. The enabled p-type transistor 102 charges the node DX 132, which enables the n-type transistor 108 and disables the p-type transistor 110. As used herein, an "n-type transistor" is also referred to as an "n-type device," an "n-type MOSFET," and an "nfet." Additionally, a "p-type transistor" is also referred to as a "p-type device," a "p-type MOSFET," and a "pfet." Therefore, the n-type transistor 108 is also referred to as the nfet 108 and the p-type transistor 110 is also referred to as the pfet 110. It is noted that the nfet 108 is also labeled as "NFB0" in FIG. 1. The labels used in FIG. 1, such as "NFB0," help to identify transistors and circuit nodes in the circuit diagram of FIG. 1 and equivalent transistors and nodes in semiconductor layout diagrams used in later descriptions such as at least FIG. 2.

[0021] When a write operation is not occurring, each of the write word line (WWL) 160 and the complementary write word line (WWLX) 162 is negated. Accordingly, each of the n-type
transistor 122 and the p-type transistor 120 of the pass gate is disabled, which electrically disconnects the bit line WBL 164 from the node D 130 of the memory bit cell 100. Additionally, each of the n-type transistor 106 and the p-type transistor 112 is enabled, which allows one of the n-type transistor 108 and the p-type transistor 110 to both drive a particular voltage level on the node D 130 based on a voltage level of the node DX 132 and close the data storage loop of the memory bit cell 100. For example, when the node DX 132 stores a logic high level, the n-type transistor 108 is enabled and the p-type transistor 110 is disabled. The n-type transistor 106 is enabled due to the logic high level of WWLX 162, which is negated. The enabled n-type transistors 106 and 108 provide an electrical discharge path between the data storage node D 130 and the ground reference level indicated by "VSS," which maintains the logic low level on the data storage node D 130 and closes the data storage loop. Conversely, when the node DX 132 stores a logic low level, the n-type transistor 108 is disabled and the p-type transistor 110 is enabled. The p-type transistor 112 is enabled due to the logic low level of WWL 160, which is negated. The enabled p-type transistors 110 and 112 provide an electrical charging path between the data storage node D 130 and the power supply reference level indicated by "VDD," which maintains the logic high level on the data storage node D 130 and closes the data storage loop.

[0022] When a write operation is occurring, a row decoder (not shown) receives address information and enables a single row word line of multiple row word lines. In implementations utilizing memory banks, the row decoder (not shown) receives the address information and enables a particular word line of a targeted memory bank, which contains multiple row word lines.
When the memory bit cell 100 is in the row corresponding to the enabled row word line, each of WWL 160 and WWLX 162 of memory bit cell 100 is asserted by external access circuitry. Accordingly, each of the p-type transistor 120 and the n-type transistor 122 of the pass gate is enabled. The enabled transistors 120 and 122 of the pass gate electrically connect the bit line WBL 164 to the node D 130 of the memory bit cell 100. Therefore, the WBL 164 drives a voltage level to be stored on the node D 130. The write word line WWL 160 is also connected to other memory bit cells in a corresponding row of the array. Each of the n-type transistor 106 and the p-type transistor 112 is disabled, which electrically disconnects the data storage nodes D 130 and DX 132 from one another. In this implementation, the memory bit cell 100 is a single-ended write bit cell with a single write port. The bit line WBL 164 is driven with write data by an external sequential element and buffer circuitry that drives the write data on a column of the array. For write access operations, external circuitry drives a particular voltage level, such as a logic high level or a logic low level corresponding to input data, onto the bit line WBL 164 routed throughout a column. It is noted that for memory bit cells not targeted by the write operation, the data storage remains unchanged.
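A logic-level sketch of the hold behavior described earlier for the no-write case, where the level on node DX 132 selects whether the n-type path (transistors 106 and 108) or the p-type path (transistors 110 and 112) holds node D 130. This is a simplified model, not the patent's circuit itself; the function name is illustrative.

```python
# Simplified hold model: with WWL 160 and WWLX 162 negated (no write),
# transistors 106 and 112 are enabled, and the level on node DX 132
# selects which path drives node D 130 to close the storage loop.

def held_value_of_d(dx_level):
    """Return the logic level held on node D for a given level on DX."""
    if dx_level == 1:
        # DX high: nfets 106/108 discharge D to VSS (logic low).
        return 0
    # DX low: pfets 110/112 charge D to VDD (logic high).
    return 1

print(held_value_of_d(1))  # 0
print(held_value_of_d(0))  # 1
```

As expected for a latch built from back-to-back inverting stages, the held value of D is always the complement of DX in this model.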
[0023] For read access operations, in some implementations, external pre-charge transistors are disabled, the read word line is asserted, an external sense amplifier is enabled, and the external read latches are enabled to capture the data read from the targeted memory bit cells. The data stored by the latch element (transistors 102-112) of the memory bit cell 100 is gated from the read bit line RBL0 176 by the asymmetrical read access circuit 180. Similarly, the data stored by the latch element (transistors 102-112) of the memory bit cell 100 is gated from the read bit line RBL1 178 by the asymmetrical read access circuit 182. As used herein, "asymmetrical" refers to circuits that include a number of p-type transistors different from a number of n-type transistors.

[0024] In various implementations, the asymmetrical read access circuit 180 includes more p-type transistors than n-type transistors. In some implementations, the asymmetrical read access circuit 180 includes only p-type transistors. In such implementations, the asymmetrical read access circuit 180 does not include any n-type transistors. In the illustrated implementation, the asymmetrical read access circuit 180 includes two p-type transistors 140 and 142 connected in a series stack topology and zero n-type transistors. Therefore, the asymmetrical read access circuit 180 utilizes a number of p-type transistors, which is 2 in this case, different from a number of n-type transistors, which is 0 in this case. The inputs to the asymmetrical read access circuit 180 are the node DX 132 and the read word line RWL0 170. The output of the asymmetrical read access circuit 180 is the read bit line RBL0 176.

[0025] In various implementations, the asymmetrical read access circuit 182 includes more n-type transistors than p-type transistors. In some implementations, the asymmetrical read access circuit 182 includes only n-type transistors.
In such implementations, the asymmetrical read access circuit 182 does not include any p-type transistors. In the illustrated implementation, the asymmetrical read access circuit 182 includes two n-type transistors 150 and 152 connected in a series stack topology and zero p-type transistors. Therefore, the asymmetrical read access circuit 182 utilizes a number of p-type transistors, which is 0 in this case, different from a number of n-type transistors, which is 2 in this case. The inputs to the asymmetrical read access circuit 182 are the node DX 132 and the read word line RWL1 172. The output of the asymmetrical read access circuit 182 is the read bit line RBL1 178.

[0026] The bit line RBL0 176 is pre-charged to a logic low level such as the ground reference level "VSS." After the pre-charge cycle (or phase) has ended, when the word line RWL0 170 is asserted, the p-type transistor 140 becomes enabled. Whether the p-type transistor 142 is enabled is based on the binary value stored on the node DX 132. When both p-type transistors 140 and 142 are enabled and the node DX 132 stores a logic low level, this series stack of p-type transistors 140 and 142 charges the bit line RBL0 176 to a logic high level.
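A logic-level sketch of the two read ports, combining the RBL0 behavior just described with the opposite-polarity RBL1 port introduced earlier. The function names are illustrative, and the model ignores analog effects such as charge sharing and sense-amplifier thresholds.

```python
# Logic-level model of the dual read ports. RBL0 is pre-charged to the
# ground level (0) and conditionally charged high by the p-type stack
# 140/142; RBL1 is pre-charged to the supply level (1) and conditionally
# discharged by the n-type stack 150/152.

def rbl0_after_read(rwl0_asserted, dx_level):
    """P-stack port: charges RBL0 to 1 only when RWL0 is asserted and
    node DX stores a logic low (so transistor 142 is enabled)."""
    return 1 if (rwl0_asserted and dx_level == 0) else 0

def rbl1_after_read(rwl1_asserted, dx_level):
    """N-stack port: discharges RBL1 to 0 only when RWL1 is asserted and
    node DX stores a logic high (so transistor 152 is enabled)."""
    return 0 if (rwl1_asserted and dx_level == 1) else 1

# With DX low (node D high), an accessed RBL0 ends high; with DX high
# (node D low), an accessed RBL1 ends low.
print(rbl0_after_read(True, 0))  # 1
print(rbl1_after_read(True, 1))  # 0
```

In this simplified model, an accessed port always settles at the level of node D, so external sense circuitry only needs to account for the opposite pre-charge polarities of the two read bit lines.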
[0027] Regarding the other asymmetrical read access circuitry of the memory bit cell 100, the bit line RBL1 178 is pre-charged to a logic high level such as the power supply reference level "VDD." After the pre-charge cycle (or phase) has ended, when the word line RWL1 172 is asserted, the n-type transistor 150 becomes enabled. Whether the n-type transistor 152 is enabled is based on the binary value stored on the node DX 132. When both n-type transistors 150 and 152 are enabled and the node DX 132 stores a logic high level, this series stack of n-type transistors 150 and 152 discharges the bit line RBL1 178 to a logic low level. Therefore, p-type transistors 140 and 142 provide an asymmetrical read access circuit that relies on only p-type transistors. This asymmetrical read access circuit uses no n-type transistors. The n-type transistors 150 and 152 provide an asymmetrical read access circuit that relies on only n-type transistors. This asymmetrical read access circuit uses no p-type transistors. This topology of memory bit cell 100 uses fewer transistors than a bit cell that uses full complementary tristate inverters to implement dual read ports.

[0028] Referring to FIG. 2, a generalized block diagram of one implementation of semiconductor standard cell layout 200 of a memory bit cell that includes asymmetrical read access circuits and dual read ports is shown. Signals and circuitry described earlier are numbered identically. It is noted that the dashed boxes for the asymmetrical read access circuits 180 and 182 are used to highlight the layout elements of these circuits, and the dashed boxes are not part of the layout 200. Here, the p-type transistors (pfets) are at the top of the standard cell layout 200 (or layout 200) and the n-type transistors (nfets) are at the bottom of the standard cell layout 200.
In the illustrated implementation, the standard cell layout 200 is for a dual read port and single write port memory bit cell with single-ended read and single-ended write capability. In various implementations, the standard cell layout 200 is used for the circuit topology of the memory bit cell 100 (of FIG. 1). As shown, the standard cell layout 200 uses metal gate 206 in a vertical direction and diffusion regions 202 and 204 used to define active regions in a horizontal direction. For example, p-type diffusion region 202 defines a p-type active region in the layout 200, whereas the n-type diffusion region 204 defines an n-type active region in the layout 200. It is noted that it is possible to rotate the standard cell layout 200 to have a different orientation.

[0029] Similar to the transistors of memory bit cell 100 (of FIG. 1), in some implementations, the transistors in the layout 200 are planar metal oxide semiconductor (MOS) field effect transistors (FETs). In other implementations, the devices (or transistors) of the layout 200 are non-planar transistors such as tri-gate transistors, Fin field effect transistors (FETs), and gate all around (GAA) transistors. In some implementations, the source/drain regions are implemented with trench silicide contacts. Trench silicide contacts used for source/drain regions, signal routes in different metal layers, contacts and vias, and so forth are not shown in layout 200 for ease of illustration.
As shown, the p-type transistors 102, 110, 112, 120, 140 and 142 are placed in a particular order. Similarly, the n-type transistors 104, 106, 108, 122, 150 and 152 are placed in a particular order. Despite using asymmetrical read access circuitry, the standard cell layout 200 includes a number of p-type transistors equal to a number of n-type transistors, and it provides a contacted gate pitch count that is one more than the number of p-type transistors (or the number of n-type transistors). The number of contacted gate (poly) pitches (CPP) is one metric used to characterize the density of semiconductor layout. In the illustrated implementation, layout 200 has six p-type transistors and six n-type transistors despite using asymmetrical read access circuitry. The layout 200 has a density equivalent to seven CPP.

[0030] Dummy gates are typically used to provide electrical isolation between regions. Although in various implementations a dummy gate uses a metal gate, the gate region is formed over an insulation layer, rather than an active silicon layer such as an n-type or p-type diffusion layer. The isolation layer uses a silicon nitride layer, a silicon oxide layer, such as a silicon dioxide layer, or another type of dielectric layer. Therefore, should voltage levels be applied on the dummy gate and one or more of the regions on either side of the dummy gate, such as source/drain regions, no electrical path is provided and no current flows between the source/drain regions. The fabrication steps for the dummy gate ensure that an active transistor is not formed at the location in the layout of the dummy gate. In some implementations, standard cell layouts use dummy gates at the edges of the cell layout. In these cases, dummy gates are used to separate cells from one another. For example, an edge of a cell has a last active metal gate, followed by active diffusion, and then a dummy gate. In some designs, two adjacent cells share a dummy gate.
However, as shown in the illustrated implementation, the standard cell layout 200 has no dummy gates at the edges. Rather, the standard cell layout 200 uses a dummy gate 270 in the middle of the layout.

[0031] At the left edge of the standard cell layout 200, the write bit line is placed. For example, at the left edge of layout 200, the source/drain region WBL 210 of the p-type transistor 120 is placed. Similarly, at the left edge of layout 200, the source/drain region WBL 212 of the n-type transistor 122 is placed. At the right edge of the standard cell layout 200, the two read bit lines are placed. For example, at the top right edge of layout 200, the drain region RBL0 240 of the p-type transistor 140 is placed. Similarly, at the bottom right edge of layout 200, the drain region RBL1 242 of the n-type transistor 150 is placed. Dummy gates are not placed at the left edge or the right edge of layout 200.

[0032] The source/drain regions 210-242 of the layout 200 are electrically equivalent to signals named in a similar manner and used in the memory bit cell 100 (of FIG. 1). Similarly, the metal gates 250-284 of the layout 200 are electrically equivalent to signals named in a similar manner and used in the memory bit cell 100 (of FIG. 1). However, here, the signals are physically
disconnected at the source/drain regions and at the metal gates until further layers and contacts are placed to electrically connect nodes to one another. Therefore, signals that are named identically to one another in FIG. 2 and named identically with signals described earlier in the memory bit cell 100 (of FIG. 1) are numbered differently in the layout 200 due to the signals identifying different physical elements of layout 200. For example, the data storage nodes D 214 and D 216 are logically equivalent, but the p-type active region forming the source/drain region for node D 214 does not physically abut with the n-type active region forming the source/drain region for node D 216. Therefore, the nodes D 214 and D 216 are not physically connected at the source/drain regions. However, the nodes D 214 and D 216 are physically connected after further metal layers, vias and contacts are placed by semiconductor fabrication steps.

[0033] When the semiconductor fabrication steps place the further metal layers, vias and contacts, which are not shown for ease of illustration, the nodes D 214 and D 216 become physically connected. This physical connection allows the nodes D 214 and D 216 to become electrically connected when voltage levels are applied to the layout 200. Similarly, the write word lines WWL 252 and WWL 256 are logically equivalent, but the metal gate of WWL 252 does not physically abut with the metal gate of WWL 256. Therefore, the write word lines WWL 252 and WWL 256 are not physically connected at the metal gates. However, the write word lines WWL 252 and WWL 256 are physically connected after further layers and contacts are placed by the semiconductor fabrication steps. When the semiconductor fabrication steps place the further metal layers, vias and contacts, the write word lines WWL 252 and WWL 256 become physically connected.
This physical connection allows the write word lines WWL 252 and WWL 256 to become electrically connected when voltage levels are applied to the layout 200.

[0034] Turning now to FIG. 3, a generalized block diagram of one implementation of adjacent memory bit cells 300 that include asymmetrical read access circuits and dual read ports is shown. Signals and circuitry described earlier are numbered identically. In the illustrated implementation, two memory bit cells 380 and 382 are placed in an adjacent manner. In some implementations, the bit cells 380 and 382 are two adjacent bits of two different rows in a same column of an array. In one example, bit cell 380 is bit [4] of a data word stored in row 9 of a multi-row array and bit cell 382 is bit [4] of another data word stored in row 10 of the same multi-row array. Bit cells 380 and 382 share the read bit lines RBL0 176 and RBL1 178. Similarly, the bit cells share the write bit line WBL 164. The bit cell 380 uses the same transistors and topology as memory bit cell 100 (of FIG. 1). Similarly, the bit cell 382 uses the same transistors and topology as memory bit cell 100, but in a mirrored manner. As shown, bit cell 382 includes transistors 302-352 using a same electrical topology as transistors 102-152 of bit cell 380. Similarly, bit cell 382 receives control signals 360-372 in a similar manner as bit cell 380 receives control signals 160-172.
[0035] Referring to FIG. 4, a generalized block diagram of one implementation of semiconductor layout 400 of adjacent memory bit cells that include asymmetrical read access circuits and dual read ports is shown. Signals and circuitry described earlier are numbered identically. Here, the p-type transistors are at the top of the standard cell layout 400 and the n-type transistors are at the bottom of the standard cell layout 400. In the illustrated implementation, the standard cell layout 400 is for two dual read port, single write port memory bit cells with single-ended write. In some implementations, the bit cells are two adjacent bits of two different rows in a same column of an array. In an implementation, the standard cell layout 400 provides layout of the memory bit cells 300 (of FIG. 3). As shown, the standard cell layout 400 (or layout 400) includes transistors 102-152 and 302-352 utilizing source/drain regions 210-242 and 410-436 and receives control signals 250-284 and 450-484 received on metal gates.

[0036] Similar to the layout 200, signals that are named identically to one another in FIG. 4 and named identically with signals described earlier in the memory bit cell 100 (of FIG. 1) and the memory bit cells 300 (of FIG. 3) are numbered differently in the semiconductor layout 400 due to the signals identifying different physical elements of the semiconductor layout 400. For example, the write word lines WWL 452 and WWL 456 are logically equivalent, but the metal gate of WWL 452 does not physically abut with the metal gate of WWL 456. Therefore, the write word lines WWL 452 and WWL 456 are not physically connected at the metal gates. However, the write word lines WWL 452 and WWL 456 are physically connected after further layers and contacts are placed by the semiconductor fabrication steps.
When the semiconductor fabrication steps place the further metal layers, vias and contacts, the write word lines WWL 452 and WWL 456 become physically connected. This physical connection allows the write word lines WWL 452 and WWL 456 to become electrically connected when voltage levels are applied to the layout 400.

[0037] Similar to the standard cell layout 200 (or layout 200), the layout 400 does not use dummy gates at the outermost edges. Rather, the layout 400 uses dummy gates 270 and 470 in separate memory bit cells. In various implementations, dummy gate 470 is formed using similar fabrication steps used to form dummy gate 270. Similar to dummy gate 270, the dummy gate 470 is left floating in some implementations, whereas, in other implementations, one or more of the dummy gates 270 and 470 are connected to one of VDD and VSS. Despite using metal gates, the dummy gates 270 and 470 are formed over a dielectric layer, and consequently, are incapable of conducting current. Therefore, the source/drain region DX 226 is electrically isolated from the source/drain region VDD 230. Similarly, the source/drain region DX 228 is electrically isolated from the source/drain region VSS 232. Further, the source/drain region DX 426 is electrically isolated from the source/drain region VDD 430, and the source/drain region DX 428 is electrically isolated from the source/drain region VSS 432. The layout 400 provides sharing of the drain regions RBL0 240
and RBL1 242 used for the read bit lines. For example, the two p-type transistors 140 and 340 share the drain region RBL0 240. In a similar manner, the two n-type transistors 150 and 350 share the drain region RBL1 242. On both the left edge and the right edge, further sharing can occur with other layouts of other bit cells sharing the drain regions WBL 210, WBL 212, WBL 410 and WBL 412.

[0038] Turning now to FIG. 5, a generalized block diagram of one implementation of pre-charging circuitry 500 is shown. Signal names previously described are numbered identically. For example, the read bit lines RBL0 176 and RBL1 178 are the read bit lines from the memory bit cell 100 (of FIG. 1). As shown, circuitry 500 includes pre-charging circuitry (or circuitry) for two read bit lines. Circuitry 520 pre-charges the read bit line RBL1 178. The read bit line RBL1 178 is connected to an asymmetrical read access circuit (not shown) that uses only n-type transistors. As shown earlier, an example of this asymmetrical read access circuit that uses only n-type transistors is the asymmetrical read access circuit 182 (of FIG. 1). Circuitry 520 includes the pre-charge p-type transistor 502, an inverter 510, and the p-type transistors 512 and 514 in a series stack topology. The pre-charge p-type transistor 502 receives a pre-charge control signal PCH1 504. The transistor 514 receives the control signal LE1 516. Circuitry 540 pre-charges the read bit line RBL0 176. The read bit line RBL0 176 is connected to an asymmetrical read access circuit (not shown) that uses only p-type transistors. As shown earlier, an example of this asymmetrical read access circuit that uses only p-type transistors is the asymmetrical read access circuit 180 (of FIG. 1). Circuitry 540 includes the pre-charge transistor 522, an inverter 530, and the n-type transistors 532 and 534 in a series stack topology.
The pre-charge transistor 522 receives a pre-charge control signal PCH0 524. A further description of the operation of the circuitry 520 is provided in the below discussion. Similar steps are used to operate the circuitry 540.

[0039] During a pre-charge phase, the control signal PCH1 504 is asserted, the p-type transistor 502 is enabled, and the enabled transistor 502 creates an electrically conducting path between the power supply voltage reference level “VDD” and the read bit line RBL1 178. When RBL1 178 is pre-charged to the power supply reference level, the inverter 510 discharges the gate terminal of the p-type transistor 512, which enables the transistor 512. The transistor 512 is used as a keeper transistor. In some implementations, circuitry 520 uses a single keeper transistor, such as transistor 512, with no transistor 514. In other implementations, circuitry 520 uses the series stack as shown with the two p-type transistors 512 and 514 providing one of a variety of split keeper (or dual keeper) schemes. For example, the two p-type transistors 512 and 514 provide one of a variety of delayed onset keeper circuitry. During an evaluate phase, the control signal PCH1 504 is negated,
and the transistor 502 is disabled. The voltage level on the read bit line RBL1 178 is based at least on a voltage level provided by the asymmetrical read access circuitry of a selected bit cell.

[0040] Turning now to FIG. 6, a generalized block diagram of one implementation of a memory bank 600 is shown. In various implementations, a memory is organized as multiple memory banks, and a memory macro block includes both a left bank and a right bank. In some implementations, the bank 600 is one of the left bank or the right bank of the memory macro block. Although “left” and “right” are used to describe the memory banks, other notations may be used such as a “top bank” and a “bottom bank.” As shown, the memory bank 600 includes arrays 610A-610B, row decoders 620A-620B, sense amplifiers 630A-630B between the arrays 610A-610B, read and write timing control logic 640A-640B, and read latches and write latches in block 650. It is noted that, in some implementations, multiple banks are accessed concurrently in a same clock cycle or a same pipeline stage. The access includes one of a read access and a write access. In such implementations, bank address decoders select the corresponding banks to access.

[0041] In various implementations, each of the blocks 610A-610B, 620A-620B, 630A-630B, 640A-640B and 650 in the memory bank 600 is communicatively coupled to another one of the blocks. For example, direct connections are used wherein routing occurs through another block. Alternatively, staging of signals is done in an intermediate block. In various implementations, each of the arrays 610A-610B includes multiple memory bit cells 660 arranged in a tiled format. In some implementations, one or more of the bit cells include asymmetrical read access circuits. For example, one or more of the arrays 610A and 610B provide a dual read port and single write port functionality.
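The pre-charge and evaluate phases of circuitry 520 described in paragraph [0039] can be modeled behaviorally in software. The sketch below is an illustrative assumption, not a circuit-accurate simulation: signal and transistor numbers (PCH1, LE1, RBL1, keeper 512/514) follow the text, while the logic-level abstraction and class name are invented for illustration.

```python
# Hypothetical behavioral model of pre-charge circuitry 520 (FIG. 5).
VDD, GND = 1, 0

class PrechargeCircuit:
    def __init__(self):
        self.rbl = GND          # read bit line RBL1 178
        self.keeper_on = False  # state of keeper transistor 512

    def precharge_phase(self, pch_asserted: bool):
        """PCH1 504 asserted -> p-type transistor 502 pulls RBL1 to VDD."""
        if pch_asserted:
            self.rbl = VDD
            # Inverter 510 discharges the gate of p-type keeper 512,
            # enabling the keeper once RBL1 reaches VDD.
            self.keeper_on = True

    def evaluate_phase(self, cell_pulls_low: bool, le_asserted: bool):
        """PCH1 negated; RBL1 follows the selected bit cell's read stack."""
        if cell_pulls_low:
            self.rbl = GND          # bit cell discharges the read bit line
            self.keeper_on = False
        elif self.keeper_on and le_asserted:
            self.rbl = VDD          # split keeper (512/514) holds the level
        return self.rbl

c = PrechargeCircuit()
c.precharge_phase(pch_asserted=True)
assert c.rbl == VDD
```

The keeper path requires both transistor 512 (enabled via inverter 510) and transistor 514 (enabled via LE1 516), which the `keeper_on and le_asserted` condition mirrors.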
Accordingly, the memory bit cells include a stack of p-type transistors such as p-type transistors 140 and 142 (of FIG. 1) that control whether the stored binary value affects the pre-charged read bit line 176. In addition, the memory bit cells include a stack of n-type transistors such as n-type transistors 150 and 152 (of FIG. 1) that control whether the stored binary value affects the pre-charged read bit line 178.

[0042] The row decoders and word line drivers in blocks 620A-620B receive address information corresponding to an access request. For example, each of the blocks 620A-620B receives the information provided by the access request address 670. Each one of the blocks 620A-620B selects a particular row, or entry, of the multiple rows in an associated one of the arrays 610A-610B. In some implementations, the blocks 620A-620B use an index portion of the address 670 for selecting a given row, or entry, in an associated one of the arrays 610A-610B. Each row, or entry, stores one or more memory lines.

[0043] In the implementation shown, the rows, or entries, in the arrays 610A-610B are arranged in a vertical orientation. However, in other implementations, a horizontal orientation is used for storage of the memory lines. For write access requests, the write latches are located in block 650.
The write data is driven into the arrays 610A-610B. The timing control logic 640A-640B updates the write latches with new data in block 650 and sets up the write word line driver logic. The write data is written into a row of bit cells that is selected by an associated one of the blocks 620A-620B. In some implementations, pre-charge circuitry is included in block 650.

[0044] For read access requests, the block 650 is used to pre-charge the read bit lines routed to the arrays 610A-610B. The timing circuitry in blocks 640A-640B is used for pre-charging and setting up the sense amplifiers in the blocks 630A-630B. The timing circuitry 640A-640B sets up the read word line driver logic. One of the row decoders 620A-620B selects a row to read out data, which will be provided on read bit lines that are sensed by the sense amplifiers. The read latches capture the read data.

[0045] Referring now to FIG. 7, one implementation of a method 700 for efficiently accessing data stored in a memory bit cell is shown. For purposes of discussion, the steps in this implementation (as well as in FIG. 8) are shown in sequential order. However, in other implementations some steps occur in a different order than shown, some steps are performed concurrently, some steps are combined with other steps, and some steps are absent.

[0046] An array of memory bit cells arranged as multiple rows and columns stores data (block 702). In various implementations, the values of the stored data are maintained by data storage loops within the memory bit cells. In addition, the values of the stored data are updated by write operations. In some implementations, the memory bit cells include pass gates and feedback inverters (and feedback tristate inverters) to implement data storage loops and allow updating of the stored values during the write operations.
In some implementations, the memory bit cells use the pass gates and feedback inverters of memory bit cell 100 (of FIG. 1) and memory bit cells 380 and 382 (of FIG. 3).

[0047] Circuitry external to the memory bit cells pre-charges a first read bit line to a ground reference level (block 704). The circuitry pre-charges a second read bit line to a power supply reference level (block 706). If the array receives a first read operation that targets a first row of the array and targets data to be read out on the first read bit line (“yes” branch of the conditional block 708), then a first asymmetrical read access circuit, which includes more p-type transistors than n-type transistors, conveys data stored in a bit cell in the first row to the first read bit line (block 710). In some implementations, the first asymmetrical read access circuit includes only p-type transistors. For example, the memory bit cell is similar to the memory bit cell 100 (of FIG. 1) that includes the asymmetrical read access circuit 180. The asymmetrical read access circuit 180 includes a stack of p-type transistors such as p-type transistors 140 and 142 that control whether the stored binary value affects the pre-charged read bit line 176.
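The pre-charge and conditional read-out steps of method 700 can be sketched as a plain software model. The function below is an illustrative assumption that abstracts both asymmetrical read ports as direct value transfers; it is not a description of the actual circuitry, and the `1`/`0` levels simply stand in for VDD and ground.

```python
# Hypothetical software walk-through of the blocks of method 700 (FIG. 7).
def method_700(array, row, op_port0: bool, op_port1: bool):
    rbl0 = 0      # block 704: first read bit line pre-charged to ground
    rbl1 = 1      # block 706: second read bit line pre-charged to VDD
    value = array[row]
    if op_port0:  # blocks 708 -> 710: p-type asymmetrical circuit
        rbl0 = value          # a stored 1 pulls RBL0 up toward VDD
    if op_port1:  # blocks 712 -> 714: n-type asymmetrical circuit
        rbl1 = value          # a stored 0 pulls RBL1 down toward ground
    return rbl0, rbl1         # block 716: the cell keeps its stored value

# A three-row, one-bit-wide array as an example.
array = [1, 0, 1]
assert method_700(array, 0, op_port0=True, op_port1=True) == (1, 1)
```

Note how the resting level of each bit line matches its pre-charge value when the corresponding read operation is absent, mirroring the “no” branches of conditional blocks 708 and 712.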
[0048] If the array does not receive a first read operation that targets the first row of the array and targets data to be read out on the first read bit line (“no” branch of the conditional block 708), then control flow of method 700 skips block 710 and moves to the conditional block 712. If the array receives a second read operation that targets the first row and targets data to be read out on the second read bit line (“yes” branch of the conditional block 712), then a second asymmetrical read access circuit, which includes more n-type transistors than p-type transistors, conveys data stored in the bit cell in the first row to the second read bit line (block 714). In some implementations, the second asymmetrical read access circuit includes only n-type transistors. For example, the memory bit cell is similar to the memory bit cell 100 (of FIG. 1) that includes the asymmetrical read access circuit 182. The asymmetrical read access circuit 182 includes a stack of n-type transistors, such as n-type transistors 150 and 152, that control whether the stored binary value affects the pre-charged read bit line 178.

[0049] If the array does not receive the second read operation that targets the first row of the array and targets data to be read out on the second read bit line (“no” branch of the conditional block 712), then control flow of method 700 skips block 714 and moves to the block 716. The bit cell maintains a stored binary value (block 716). As described earlier, the bit cell includes a latch element for storing the binary value until the binary value is modified by a write access operation.

[0050] Referring now to FIG. 8, one implementation of a method 800 for efficiently creating semiconductor layout of a memory bit cell is shown. A first metal gate is placed over only p-type diffusion at a first edge of a memory bit cell layout for receiving a first read word line (block 802).
Therefore, the first metal gate is placed over a p-type active region used for creating p-type transistors. A second metal gate is placed over only n-type diffusion at the first edge of the memory bit cell layout for receiving a second read word line different from the first read word line (block 804). Therefore, the second metal gate is placed over an n-type active region used for creating n-type transistors. A dummy gate is placed over both p-type diffusion and n-type diffusion within the cell layout away from the edges (block 806).

[0051] A first read bit line is placed, at the first edge, as a drain region over only the p-type diffusion (block 808). A second read bit line different from the first read bit line is placed, at the first edge, as a drain region over only the n-type diffusion (block 810). A write bit line is placed, at the second edge, as drain regions over both the p-type diffusion and the n-type diffusion (block 812).

[0052] A contacted gate pitch of the layout of a single memory bit cell is provided as one more than the number of p-type transistors (block 814). A first memory bit cell is placed with its first edge abutted to the first edge of a second memory bit cell placed in a mirrored manner of the first memory bit cell, allowing sharing of the first read bit line and the second read bit line by the first memory bit cell and the second memory bit cell (block 816).
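The placement steps of method 800 can be recorded as data for illustration. In the sketch below, the dictionary fields and net names (e.g., "RWL0" for the first read word line) are assumptions introduced for this example; only the block numbers and the contacted-gate-pitch arithmetic of block 814 come from the text.

```python
# Hypothetical data model of the placements made by method 800 (FIG. 8).
def method_800(num_p_transistors: int):
    cell = {
        "gates": [
            # Block 802: first read word line gate, p-type diffusion only.
            {"net": "RWL0", "diffusion": ["p"], "edge": "first"},
            # Block 804: second read word line gate, n-type diffusion only.
            {"net": "RWL1", "diffusion": ["n"], "edge": "first"},
            # Block 806: dummy gate over both diffusions, away from edges.
            {"net": "dummy", "diffusion": ["p", "n"], "edge": None},
        ],
        "drains": [
            {"net": "RBL0", "diffusion": ["p"], "edge": "first"},   # 808
            {"net": "RBL1", "diffusion": ["n"], "edge": "first"},   # 810
            {"net": "WBL", "diffusion": ["p", "n"], "edge": "second"},  # 812
        ],
        # Block 814: contacted gate pitch is one more than the number
        # of p-type transistors.
        "cpp": num_p_transistors + 1,
    }
    return cell

cell = method_800(num_p_transistors=6)
assert cell["cpp"] == 7   # matches the seven-CPP density of layout 200
```

With six p-type transistors, as in layout 200, the model reproduces the seven-CPP density stated earlier; block 816 (mirrored abutment for bit-line sharing) is a placement relationship between two such cells and is left out of the single-cell model.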
[0053] It is noted that one or more of the above-described implementations include software. In such implementations, the program instructions that implement the methods and/or mechanisms are conveyed or stored on a computer readable medium. Numerous types of media which are configured to store program instructions are available and include hard disks, floppy disks, CD-ROM, DVD, flash memory, Programmable ROMs (PROM), random access memory (RAM), and various other forms of volatile or non-volatile storage. Generally speaking, a computer accessible storage medium includes any storage media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible storage medium includes storage media such as magnetic or optical media, e.g., disk (fixed or removable), tape, CD-ROM, DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-Ray. Storage media further includes volatile or non-volatile memory media such as RAM (e.g., synchronous dynamic RAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM, low-power DDR (LPDDR2, etc.) SDRAM, Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, Flash memory, and non-volatile memory (e.g., Flash memory) accessible via a peripheral interface such as the Universal Serial Bus (USB) interface, etc. Storage media includes microelectromechanical systems (MEMS), as well as storage media accessible via a communication medium such as a network and/or a wireless link.

[0054] Additionally, in various implementations, program instructions include behavioral-level descriptions or register-transfer level (RTL) descriptions of the hardware functionality in a high-level programming language such as C, or a hardware description language (HDL) such as Verilog or VHDL, or a database format such as GDS II stream format (GDSII). In some cases the description is read by a synthesis tool, which synthesizes the description to produce a netlist including a list of gates from a synthesis library.
The netlist includes a set of gates, which also represent the functionality of the hardware including the system. The netlist is then placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks are then used in various semiconductor fabrication steps to produce a semiconductor circuit or circuits corresponding to the system. Alternatively, the instructions on the computer accessible storage medium are the netlist (with or without the synthesis library) or the data set, as desired. Additionally, the instructions are utilized for purposes of emulation by a hardware-based emulator from such vendors as Cadence®, EVE®, and Mentor Graphics®.

[0055] Although the implementations above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
A system may comprise an optimizer/scheduler to perform a schedule on a set of instructions, compute a data dependence, a checking constraint and/or an anti-checking constraint for the set of scheduled instructions, and allocate alias registers for the set of scheduled instructions based on the data dependence, the checking constraint and/or the anti-checking constraint. In one embodiment, the optimizer is to release unused registers to reduce the number of alias registers used to protect the scheduled instructions. The optimizer is further to insert a dummy instruction after a fused instruction to break cycles in the checking and anti-checking constraints.
CLAIMS: 1. A method, comprising: performing a schedule on a set of instructions; computing a data dependence for the set of scheduled instructions; computing a checking constraint for the set of scheduled instructions; and allocating alias registers for the set of scheduled instructions based on the data dependence and the checking constraint. 2. The method of claim 1, further comprising: computing an anti-checking constraint for the set of scheduled instructions; and allocating alias registers for the set of scheduled instructions further based on the anti-checking constraint. 3. The method of claim 1, further comprising: releasing an alias register allocated for a first scheduled instruction in the instruction set by rotation in response to the alias register having been checked; and allocating the released alias register to a second scheduled instruction. 4. The method of claim 1, further comprising: selecting a second instruction in the instruction set to perform the schedule in response to determining that a first instruction to be scheduled is to cause alias register overflow. 5. The method of claim 1, further comprising: fusing at least two instructions in the set of instructions to provide a fused instruction; and inserting a dummy instruction in the set of instructions to break one or more cycles in the checking constraints. 6. The method of claim 1, further comprising: inserting a dummy instruction after a fused instruction in the set of instructions, wherein the dummy instruction is to access the same memory as the fused instruction and is to use a different alias register from the fused instruction. 7. A system, comprising: a processor; and an optimizer to optimize a set of original codes to be executed by the processor, schedule the optimized codes into scheduled codes, and allocate a new alias register to the scheduled codes based on at least one of data dependence and constraint of the scheduled codes. 8.
The system of claim 7, wherein the optimizer is further to compute the data dependence and the constraint of the scheduled codes. 9. The system of claim 7, wherein the optimizer is further to rotate an allocated alias register to release the allocated alias register for the new alias register in response to determining that the allocated alias register is not to be checked by the scheduled codes. 10. The system of claim 7, wherein the optimizer is further to optimize the original codes to provide fused codes and, in response to the fused codes comprising C/P bits on multiple logical codes in the fused codes, insert dummy codes to partition the C/P bits such that each of the logical codes has C/P bits. 11. The system of claim 7, wherein the optimizer is further to optimize the original codes to provide fused codes and insert dummy codes after the fused codes to break one or more cycles in the constraint, wherein the dummy codes are to access the same memory as the fused codes and are to use a different alias register from the fused codes. 12. The system of claim 7, wherein the optimizer is further to delay the register allocation for the scheduled codes in response to detecting that one or more cycles in the constraint prevent the alias register allocation for the scheduled codes. 13. The system of claim 12, wherein the optimizer is further to insert dummy codes after the register allocation delayed codes to break the cycles in the constraint, wherein the dummy codes are to access the same memory as the register allocation delayed codes and are to use a different alias register from the register allocation delayed codes. 14. The system of claim 12, wherein the constraint comprises a checking constraint or an anti-checking constraint. 15. The system of claim 7, wherein the optimizer is further to remove the constraint of the scheduled codes in response to the new alias register being allocated. 16.
A machine-readable medium containing instructions which, when executed by a processing system, cause a computing system to: schedule a set of instructions; compute a constraint for the set of scheduled instructions; and allocate a new alias register to one of the scheduled instructions based on the constraint. 17. The machine-readable medium of claim 16, further comprising a plurality of instructions that in response to being executed cause the computing system to: release an allocated alias register that is unused and allocate the released register for the new alias register. 18. The machine-readable medium of claim 16, further comprising a plurality of instructions that in response to being executed cause a computing device to: insert a dummy instruction in the set of scheduled instructions to break one or more cycles in the constraint.
REGISTER ALLOCATION IN ROTATION BASED ALIAS PROTECTION REGISTER

BACKGROUND

Hardware/software co-designed systems may leverage dynamic binary optimization to improve performance. For dynamic binary optimization on memory instructions, memory alias information may be required. Dynamic binary optimization may leverage hardware alias checking for speculative memory optimization in an atomic region. When a load instruction is speculatively reordered before a store instruction with possible memory alias between them, the load instruction may need to set up an alias protection register with its memory address stored in it. In response to the store instruction being executed, the store instruction may check against the alias protection register with its memory address to detect mis-speculations. Mis-speculations may lead to the rollback of the whole region and re-execution of non-optimized or less-optimized code.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. FIG. 1A is a block diagram of an exemplary system according to an embodiment of the invention. FIG. 1B is a block diagram of another exemplary system according to an embodiment of the invention. FIG. 1C is a block diagram of yet another example system according to an embodiment of the invention. FIG. 2A-2K are schematic diagrams of register allocation in rotation based alias protection register according to some embodiments of the invention. FIG. 3 is a flow chart in accordance with some embodiments of the invention.
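As a software illustration of the alias-checking mechanism summarized in the Background, the sketch below models a speculatively hoisted load recording its address in an alias protection register and a later store checking against it. The class and function names are hypothetical, and the exact-address comparison is a simplification of the overlap checking real hardware would perform.

```python
# Hypothetical model of alias protection for a speculatively reordered load.
class AliasRegister:
    def __init__(self):
        self.addr = None

    def protect(self, addr):
        """Set up by the reordered load: record the load's memory address."""
        self.addr = addr

    def check(self, addr):
        """Executed by the later store: does its address alias the load's?"""
        return self.addr is not None and self.addr == addr

def run_region(load_addr, store_addr):
    reg = AliasRegister()
    reg.protect(load_addr)        # load executed early, address recorded
    if reg.check(store_addr):     # mis-speculation: the addresses alias
        return "rollback"         # re-execute non-optimized region code
    return "commit"

assert run_region(0x100, 0x200) == "commit"
assert run_region(0x100, 0x100) == "rollback"
```

This is the behavior the alias register allocation algorithms described below must preserve: every store that might alias a hoisted load must find a live, correctly set alias register to check against.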
DETAILED DESCRIPTION

The following description describes techniques to provide alias register allocation algorithms that reduce register usage in rotation-based alias protection registers. The implementation of the techniques is not restricted to computing systems; they may be used by any execution environment for similar purposes, such as, for example, any other digital/electronic device. In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. However, the invention may be practiced without such specific details. In other instances, control structures and full software instruction sequences have not been shown in detail in order not to obscure the invention.

References in the specification to "one embodiment", "an embodiment", "an example embodiment", etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

Embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors.
A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); and others. The following description may include terms, such as first, second, etc., that are used for descriptive purposes only and are not to be construed as limiting.

FIG. 1A illustrates a block diagram of an exemplary embodiment of a system 100. The system 100 may comprise a processor 102. Processor 102 may comprise any type of processor capable of executing software and/or processing data signals. In an embodiment, processor 102 may comprise a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor or a microcontroller. Although FIG. 1A shows only one such processor 102, there may be one or more processors in the system 100 and one or more processors may include multiple threads, multiple cores, or the like.

The present enhancement is not limited to computing systems. Alternative embodiments of the present invention can be used in any form factor device that uses unified extensible firmware interface (UEFI) Basic Input/Output System (BIOS), such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs such as netbooks or notebooks.
Embedded applications can include a micro controller, a digital signal processor (DSP), system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system. The processors 102 may be coupled to a system logic chip 104. For example, the system logic chip 104 in the illustrated embodiment may be a memory controller hub (MCH). In one embodiment, the MCH 104 may provide a memory path 120 to system memory 106 for instruction and data storage and/or for storage of, e.g., graphics commands, data and textures. The memory path 120 may comprise a memory bus. The MCH 104 may direct data signals between processor 102, system memory 106, and other components in the system 100 and bridge the data signals between processor 102, system memory 106, and system I/O. Memory 106 may be a hard disk, a floppy disk, random access memory (RAM), read only memory (ROM), flash memory, or any other type of medium readable by processor 102. MCH 104 may be coupled to an I/O controller hub (ICH) 108 via a local I/O interconnect. In an embodiment, the local I/O interconnect may be a high-speed I/O bus, such as a peripheral component interconnect (PCI) bus. ICH 108 may provide connections to one or more I/O devices, e.g., via a local I/O interconnect. Some examples may comprise data storage device 118, audio I/O 120, keyboard/mouse I/O 122, and a network controller 116, or other integrated I/O components such as integrated driver electronics (IDE), local area network (LAN) and serial expansion port such as universal serial bus (USB), PCI slots (not shown), wireless transceiver, legacy I/O controller or the like. The data storage device 118 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device. Referring to FIG. 1A, non-volatile memory, such as flash memory 112, may be coupled to ICH 108 via, e.g., a low pin count (LPC) bus. 
The BIOS firmware 114 may reside in flash memory 112 and, at boot up, instructions may be executed from the flash memory, or firmware. Although FIG. 1A illustrates BIOS firmware 114 in flash memory 112, in some embodiments, BIOS firmware 114 may be stored in other non-volatile memory such as a firmware hub, or the like. In an embodiment, BIOS firmware 114 may be implemented by Unified Extensible Firmware Interface (UEFI) firmware or any other firmware. Although FIG. 1A illustrates the system 100, the embodiments according to the invention may be used in any other hardware architecture such as a platform using a plurality of processor cores or a platform using a processor or a coprocessor, a platform using I/O hubs, or memory control embedded within the processors, or the like. FIG. 1B illustrates an alternative embodiment of a system 140 which implements the principles of the present invention. The system 140 may comprise a processor 142. The processor 142 may comprise any type of processor capable of executing software and/or processing data signals. In an embodiment, processor 142 may comprise any type of processors or processor devices as mentioned above with regard to processor 102. In an embodiment, processor 142 may be coupled to system memory 144 via a memory path (not shown) for instruction and data storage and/or for storage of, e.g., graphics commands, data and textures. In another embodiment, processor 142 may be coupled to one or more peripheral component interconnect (PCI) ports 160 via a PCI interconnect; however, in some embodiments, the PCI ports 160 may not be required. Memory 144 may be a hard disk, a floppy disk, random access memory (RAM), read only memory (ROM), flash memory, or any other type of medium readable by processor 142. Although FIG. 1B shows only one such processor 142, there may be one or more processors in the system 140 and one or more processors may include multiple threads, multiple cores, or the like. 
The present enhancement is not limited to computer systems or data processing device systems. Alternative embodiments of the present invention can be used in any form factor device that uses a unified extensible firmware interface (UEFI) Basic Input/Output System (BIOS), such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), handheld PCs such as netbooks or notebooks, or smart devices such as tablets or smart phones or the like. Embedded applications can include a micro controller, a digital signal processor (DSP), system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system. The processors 142 may be coupled to a system logic chip 146. For example, the system logic chip 146 in the illustrated embodiment may be a platform controller hub (PCH). In one embodiment, PCH 146 may provide connections to one or more I/O devices, e.g., via a local I/O interconnect. In an embodiment, the local I/O interconnect may be a high-speed I/O bus, such as a peripheral component interconnect (PCI) bus. PCH 146 may direct data signals or other information between processor 142 and one or more other components in the system 140 and bridge the data signals or information between processor 142 and system I/O. Some examples of the one or more components may comprise data storage device 152, one or more PCI ports 154, a network controller 156, and a USB port 158. In one embodiment, data storage device 152 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device. Although FIG. 
1B shows some examples of the components, PCH 146 may provide connections to any other components, such as audio I/O, keyboard/mouse I/O, and other integrated I/O components such as integrated driver electronics (IDE), local area network (LAN) and other serial expansion ports, wireless transceivers, legacy I/O controllers or the like. Referring to FIG. 1B, non-volatile memory, such as flash memory 148, may be coupled to PCH 146 via, e.g., a low pin count (LPC) bus. BIOS firmware 150 may reside in flash memory 148 and, at boot up, instructions may be executed from the flash memory, or firmware. Although FIG. 1B illustrates BIOS firmware 150 in flash memory 148, in some embodiments, BIOS firmware 150 may be stored in other non-volatile memory such as a firmware hub, or the like. In an embodiment, BIOS firmware 150 may be implemented by Unified Extensible Firmware Interface (UEFI) firmware or any other firmware. Although FIG. 1B illustrates the system 140, the embodiments according to the invention may be used in any other hardware and software architecture such as a platform using a plurality of processor cores or a platform using a processor or a coprocessor, a platform using I/O hubs, or memory control embedded within the processors, or the like. FIG. 1C illustrates another embodiment of a system 160 that may implement the principles of the present invention. The system 160 may comprise a processor 162. The processor 162 may comprise any type of processor capable of executing software and/or processing data signals. The processor 162 may comprise any type of processors or processor devices as mentioned above with regard to processor 102. The system 160 may comprise a memory 163 that may couple to the processor 162 via an interconnect 168 or any other connection such as a bus, memory path, etc. 
Examples of memory 163 may comprise a hard disk, a floppy disk, random access memory (RAM), read only memory (ROM), flash memory, volatile memory devices or non-volatile memory devices, or any other type of medium readable by processor 162. In another embodiment, processor 162 may be coupled to a network component 164 that may comprise, e.g., a wired network connection and/or a wireless network connection or any other network connection. Processor 162 may be further coupled to an I/O controller 165 that may be coupled to one or more I/O devices 166. FIG. 1C illustrates an embodiment of the system 160; in some embodiments, the system 160 may comprise one or more other components that may be implemented in hardware, software, firmware or any combination of them. In another embodiment, examples of the system 160 may comprise any form factor devices or apparatus as mentioned above with regard to FIGS. 1A or 1B. FIG. 2A is a schematic diagram according to an embodiment of the invention. Reference number 210 may refer to original codes or instructions that may have an order of store_0, store_1, and so on as shown in FIG. 2A. In one embodiment, the original codes may be within an atomic region; however, in some embodiments, the atomic region may not be necessary. In one embodiment, original codes 210 may be reordered or scheduled to scheduled codes or instructions 220 that may have a different sequence or order with regard to the original codes. For example, instruction 220a may refer to a memory store instruction store_5 that may be scheduled to be the first instruction to be executed by, e.g., an execution logic. The embodiment of FIG. 2A may use rotation-based alias protection registers that may allow each memory instruction to set up an alias protection register and check against a set of alias protection registers with a bit mask. In one embodiment, a rotation-based alias checking may be utilized. 
The alias protection registers may be organized in a circular buffer or a circular queue rotated based on AHPTR (Alias Head Pointer) 270 that may point to a head of the circular buffer. A memory instruction may specify an alias protection register number ORD 230 relative to current AHPTR 270 (with possible wrapping around). For example, referring to FIG. 2A, based on an order of the original codes, a first alias protection register with the register ORD of "0" for the first instruction store_0 may be set up at the head of the circular buffer, the second instruction store_1 may specify a second alias protection register with the register ORD of "1", and so on. In another embodiment, a memory instruction may have a P bit to indicate that hardware such as processor 102 or 142 or any other execution logic may set up an alias protection register with register number AHPTR+ORD for the current instruction. In one embodiment, registers in the range of [AHPTR+ORD, AHPTR) may be checked against. For example, if AHPTR=2, ORD=1 and the total register number is 5 (e.g., 0-4), register 3, register 4, register 0 and register 1 in the range of [3, 2) may be checked. In one embodiment, wrapping around may be used. A memory instruction may have a C bit to indicate that hardware such as a processor or any other execution logic may check against all the alias protection registers with register number >= AHPTR+ORD (with possible wrapping around). In response to instruction scheduling, alias protection registers with number ORD 230 may be allocated for the scheduled instructions based on an original execution order of the instructions. For example, in FIG. 2A, alias protection registers with ORD 230 may be allocated in the order of the original program execution. In another embodiment, a memory instruction may specify a rotation number ROT that may indicate the alias head pointer is to be rotated by an amount indicated in the ROT. 
In one embodiment, the rotation value ROT may be used to indicate that all the alias protection registers between AHPTR and AHPTR+ROT (with possible wrapping around, including AHPTR, excluding AHPTR + ROT) may be released, e.g., before execution of an instruction. In one embodiment, hardware such as processor 102 or 142 may rotate AHPTR by ROT and clear all the valid bits for the alias protection registers between AHPTR and AHPTR + ROT. In one embodiment, in response to setting up an alias protection register with a P bit, the hardware may set a valid bit for the register. For example, a valid bit with, e.g., a logical "1" may represent a valid alias protection register that may be checked against by, e.g., the hardware. In another embodiment, a valid bit with a logical value, e.g., "0", may indicate that the corresponding alias protection register may not be checked against. For example, numerical reference 240 of Fig. 2A may refer to "valid _all" that may comprise a set of one or more valid bits for a set of one or more alias protection registers. In one embodiment, the number of valid bits in "valid_all" 240 may be the same as the number of alias protection registers and/or the number of scheduled instructions; however, in some embodiments, the number of valid bits may be different from the number of scheduled instructions. For example, based on the order of original codes, the valid bit for the alias protection register with ORD of "5" for a last instruction store_5 may be set at a highest-order bit of valid_all 240, and the valid bit for the alias protection register with ORD of "0" for the first instruction store_0 may be set at a lowest-order bit of valid_all 240 and so on; however, in some embodiments, the valid bits in valid_all 240 may be provided in a different order. In some embodiments, the valid bits in valid_all 240 may have an order based on the ORD for the alias protection registers. 
For example, valid_all field 240a may be "000000", which may represent that none of the alias protection registers may be checked against for instruction "store_5" that is the first instruction in the scheduled sequence. The valid_all field 240b may relate to a subsequent instruction "store_2" in the scheduled codes. The valid_all field 240b may be "100000", wherein the valid bit "1" may correspond to the previous instruction "store_5" and may indicate that the alias protection register for "store_5" may be checked against. And, the valid bits "00000" in 240b may indicate that the alias protection registers for store_2 itself, store_0, load_4, store_1, load_3 may not be checked against. Referring to FIG. 2A, valid_st field 250 may relate to a store instruction and may comprise a set of one or more valid bits. For example, valid_st 250 may be different from valid_all 240 in that a valid bit for an alias protection register for a load instruction may have a logical value of "0" in valid_st 250. In one embodiment, the hardware such as 102 or 142 or other execution logic may maintain the valid bits for, e.g., all the alias protection registers and compute the bit mask 260 for checking before execution of each instruction. A load instruction may not check against another load instruction. As seen from the bit mask 260a for load_3, there may not be a valid bit (e.g., logical value "1") for load_4, or the valid bit for the alias protection register of load_4 may not be asserted. In another embodiment, for the bit mask 260a for load_3, there may not be valid bits for store_0, store_1, and store_2 that are ordered before load_3 in an original execution order. The hardware may maintain separate valid bits for all instructions (e.g., valid_all 240) and for store instructions only (e.g., valid_st 250). Store instructions may check against valid_all 240 and load instructions may only check against valid_st 250. 
FIG. 2A illustrates an embodiment of a formula for hardware such as a processor or any other execution logic to maintain valid_all 240, valid_st 250 and compute the mask 260 based on ORD 230, e.g., in C language semantics. The algorithm for register allocation for rotation-based alias protection registers of FIG. 2A may allocate a register for each instruction in their original program order, shown as "register ORD" 230 in FIG. 2A. The algorithm may be used to guarantee no false negative or false positive in alias checking. Referring to FIG. 2A, in one embodiment, each instruction may have a P/C bit (not shown) but may not have a ROT. In yet another embodiment, hardware such as a processor or any other execution logic may run the scheduled code 220 with ORD/P/C 230 and compute valid_all 240, valid_st 250 and valid_mask 260 to do alias checking. Referring to FIG. 2B, an embodiment of register allocation is illustrated. In one embodiment, the register allocation of FIG. 2B may be used to reduce a number of alias protection registers used in a rotation-based alias protection register scheme and may not generate false positives or false negatives. The embodiment of FIG. 2B may be integrated with the instruction scheduling and optimizations and may be used for dynamic optimizations. In one embodiment, not every memory instruction may set up an alias protection register and not every memory instruction may check against other alias protection registers. For example, in FIG. 2B, none of store_0, store_1 and load_3 may set up an alias protection register, because no instruction may check against them. Store_5 may not check against any other alias protection registers because it is scheduled to be the first instruction in the region. The embodiment of FIG. 2B may utilize three registers instead of six registers. Referring to FIG. 
2B, store_5, store_2 and load_4 may need protection (e.g., their P bit may be set to 1 and their C bit (not shown) may be set to 0) and they may be assigned a register with ORD number 2, 0, and 1, respectively. Store_0, store_1 and load_3 may not need protection and may only check against other alias registers (e.g., their P bits may be set to 0 and their C bit may be set to 1). FIG. 2B illustrates an embodiment to compute valid_all 240, valid_st 250 and mask 260 based on ORD, P and C. FIG. 2C shows another embodiment of register allocation. In the embodiment of FIG. 2C, in response to determining store_1 and load_4 may not access the same memory, e.g., by software analysis, store_1 may not check against load_4. The embodiment of FIG. 2C may use two registers. Referring to FIG. 2C, store_5 and store_2 (e.g., their P bit = 1) may be protected and they may be assigned registers with ORD 1 and 0, respectively. Store_0, load_4, store_1 and load_3 (e.g., their P bits = 0 and C bit = 1) may only check against other alias registers. In the embodiments of FIGS. 2A to 2C, the register allocation may follow an original program order. For example, if a first instruction is to be executed before a second instruction in the original program, the register number for the first instruction may be no larger than that for the second instruction. FIG. 2D illustrates yet another embodiment of register allocation. Referring to FIG. 2D, an embodiment of a data dependence is shown. For example, a dependence A1→A2 may be defined if: 1) instruction A1 is to be executed before instruction A2 in original program order; 2) A1 and A2 may access the same memory; and 3) at least one of A1 and A2 is a store instruction. FIG. 2D further shows an embodiment of a checking constraint. For example, in response to, e.g., an instruction scheduler speculating that A1 and A2 may not conflict at runtime, the instruction scheduler may move A2 before A1. For example, the instruction scheduler may be implemented by software. 
If A1→A2 and instruction A2 is reordered to before A1, A2 may set up an alias protection register to be checked against by A1. A checking constraint A1→c A2 may be defined if: 1) A1→A2; and 2) A2 is reordered to before A1 by scheduling. In one embodiment, instruction A1 may check against instruction A2 in response to A1→c A2. In some embodiments, checking constraints may not be transitive. For example, A1→c A2 and A2→c A3 may not imply A1→c A3. In one embodiment, checking constraints A1→c A2 may determine which instruction may set up an alias protection register and which instructions may check against other instructions. For example, the checking constraint A1→c A2 may determine that instruction A2 may set up a new alias protection register with a P bit and instruction A1 (with a C bit) may check against instruction A2. FIG. 2D illustrates an embodiment of register allocation in the original program order. For example, as shown in FIG. 2D, load_1 and load_3 may set up new alias protection registers with a P bit based on the corresponding checking constraints and/or data dependences as shown in FIG. 2D. Registers 0 and 1 may be allocated to load_1 and load_3, respectively, according to their original program order. Store_2 may check against the register 1 of instruction load_3 that is scheduled before store_2. Store_0 may check against the register 0 of instruction load_1 that is scheduled before store_0. In the embodiment of FIG. 2D, there may not be a checking constraint store_0 →c load_3 (e.g., store_0 may not access the same memory with load_3) and store_0 may not need to check against load_3. FIG. 2E illustrates an embodiment of register allocation not in original program order. For example, referring to FIG. 2E, registers 0 and 1 may be allocated to load_3 and load_1, respectively, in an order opposite to their original program order. Referring to FIG. 2E, register 0 for Load_3 (ORD 0/P) may only be checked by Store_2 (ORD 0/C). In the embodiment of Fig. 
2E, after Store_2, register 0 may not be checked against. Referring to FIG. 2F, AHPTR may be rotated by ROT "1" to release register 0. For example, register 0 may be rotated and released at the beginning of execution of load_1; however, in some embodiments, the register 0 may be released, e.g., in response to the register having been checked against and not being used by any other instruction. In another embodiment, the rotation may be performed after the execution of the store_2. In response to the rotation, AHPTR may be increased by 1 (e.g., current AHPTR=1). The ORD may still be "0" for a subsequent instruction as counted relative to the current AHPTR. FIG. 2E and FIG. 2F may perform the same alias checking. For example, FIG. 2F may reduce the register number to one register with the rotation. In the embodiment of register allocation with rotation of FIG. 2F, the register 0 used by load_3 may be released by register rotation in response to store_2 having checked against register 0. In this embodiment, one register may be used. FIG. 2F shows an embodiment to compute valid_all, valid_st and mask based on ORD, P, C, ROT and register count REG (e.g., 1). In one embodiment, valid_all, valid_st and mask in the formula may be computed as relative to AHPTR. For example: mask(n) = valid_all(n) & ~( ( 1 << ORD(n) ) - 1 ), if n is a store and C(n); = valid_st(n) & ~( ( 1 << ORD(n) ) - 1 ), if n is a load and C(n). In one embodiment, hardware may circularly left shift the mask by AHPTR for checking. In one embodiment, "circularly left shift" may shift the bits in the mask to the left and wrap around the overflow bits to the right. For example, with mask 00001111, circularly left shifting by 2 may result in 00111100. In the embodiment of FIG. 2F, the valid_all, valid_st, and mask may be calculated as relative to AHPTR. For example, if AHPTR=2, mask 00001111 may indicate registers 2 to 5 have a mask value "1" and registers 6, 7, 0 and 1 may each have a mask value "0". 
The embodiment of FIG. 2G may be used to provide register allocation based on both checking constraints and anti-checking constraints integrated with the instruction scheduling. In one embodiment, if A1→c A2, the register for A1 may be no larger than that for A2 in order for A1 to check against A2. In some embodiments, checking constraints and anti-checking constraints may be used in register allocation to avoid false positives in the alias checking. In one embodiment, an anti-checking constraint A1→ac A2 may be defined if: 1) A1→A2; 2) A1 may set up an alias protection register based on a checking constraint A0→c A1; 3) A2 may check against some alias protection registers based on a checking constraint A2→c A3; and 4) scheduling may not reorder A2 before A1. In one embodiment, based on the anti-checking constraint A1→ac A2, the register number for A1 may be smaller than that for A2 to prevent A2 from checking against A1 (e.g., to avoid a possible false positive). Checking constraints may be used to reduce or minimize constraints in register allocation, e.g., to prevent false negatives in the checking, and anti-checking constraints may be used to reduce/minimize additional constraints in register allocation, e.g., to prevent false positives in the checking. The embodiment of FIG. 2G may be used to dynamically restrict scheduling when running out of alias registers. Referring to FIG. 2G, the checking constraints and anti-checking constraints may be built incrementally during the scheduling. The register for an instruction may be allocated only in response to the instruction being scheduled. If A1→c A2 or A1→ac A2, the register allocation for A2 may be delayed until the registers for A1 are allocated. In the embodiment of FIG. 2G, P(A), C(A), ORD(A) and ROT(A) may respectively represent the P bit, C bit, ORD and ROT for an instruction A. 
In one embodiment, an optimizer/scheduler may be used to keep track of AHPTR changes that may happen during the execution of one or more instructions, e.g., during the scheduling. In one embodiment, the optimizer/scheduler may be implemented, e.g., by software. For example, AHPTR_AT(A) may record the AHPTR at the execution of instruction A for the delayed register allocation. R(A) may represent whether the register for A is allocated or not. In the embodiment of FIG. 2G, a register allocation for an instruction may be delayed until after all the instructions that check the instruction are scheduled (e.g., based on checking constraints). In one embodiment, one or more allocated registers may be released, e.g., after the corresponding scheduled instruction (e.g., only in the beginning of the next scheduled instruction). Although the embodiment of FIG. 2G may utilize a list scheduling, some embodiments may be extended to work with any other scheduling techniques such as modulo scheduling. Referring to FIG. 2G, an embodiment of register allocation that may be integrated with instruction scheduling is illustrated. In one embodiment, the embodiment may check whether it has run out of registers (e.g., ORD(A) >= REG). Referring to FIG. 2G, ORD(A) may relate to three variables: REG that may represent a register count, AHPTR and AHPTR_AT(A). AHPTR may always be available. The variable 'REG' may be bounded by a number of instructions with P(A) = 1 and !R(A), wherein P(A) may represent that instruction A may need a new register to set up protection and !R(A) may represent that the register for instruction A has not been allocated yet. For example, the variable 'REG' may be bounded by a number of instructions whose register allocations are delayed. In one embodiment, AHPTR may keep increasing in the scheduled order. AHPTR_AT(A) may record the AHPTR at the execution of instruction A for delayed register allocation. 
In one embodiment, the delayed register allocations may be counted to prevent register overflow. In one embodiment, an optimizer/scheduler may keep track of information such as the variables REG, AHPTR, AHPTR_AT during scheduling to estimate whether there are one or more alias protection registers to be allocated to a scheduled instruction or it has run out of registers. In one embodiment, in response to running out of registers, reordering of any new instruction A (i.e., P(A) = 1) may be prevented. In one embodiment, the remaining instructions may be scheduled in their original execution order to avoid the reordering. FIG. 2H illustrates an embodiment to handle memory optimizations that may use alias registers. The optimization may be speculative if the second memory operation may conflict with some memory operation between them. For example, speculative memory optimization may use alias register protection and check. The optimizations may be applied before instruction scheduling and alias register allocation may be performed during instruction scheduling; however, in some embodiments, instruction optimization may not be necessary. In response to the optimization and during scheduling, the optimized code may be logically viewed as fusing eliminated instructions into other instructions, and the fused instruction/codes may be used for alias checking on all the eliminated instructions. Referring to FIG. 2H, in the embodiment of store-load elimination 282, the code may be logically viewed as fusing load_2 into store_1. In load-load elimination 284, the code may be logically viewed as fusing load_2 into load_1. In the store-store elimination 286, the code may be logically viewed as fusing store_1 into store_2. In one embodiment, the fused instruction may contain one or more logical instructions/codes. During the scheduling for the fused instruction, the constraints on the logical instructions in the fused instruction may be considered. 
For example, in the code shown in FIG. 2I, store-load elimination may be applied from store_0 to load_3. After the optimization and in the scheduling, the constraints on both store_0 and load_3 may be considered when scheduling store_0. The register allocation is shown in FIG. 2I. Referring to FIG. 2I, in the optimization, Store_2 may check Load_3 and Store_0 may check Load_1. In response to Load_3 and Store_0 being merged into Store_0, Store_2 may check Store_0 and Store_0 may check Load_1. Store_0 and Load_1 may need protection (P bit = 1) and may be assigned registers 0 and 1. Store_2 and Store_0 may check (C bit = 1) against register 0. Store_0 may check register 0 before setting protection (P bit = 1) and thus Store_0 may not check itself. In some embodiments, the fused instructions may contain cycles in data dependences, which may lead to cycles in the checking/anti-checking constraints. For example, a cycle in checking/anti-checking constraints may be represented as: store_0 (load_3) →c load_1 →ac store_2 →c store_0 (load_3). In one embodiment, allocating alias protection registers may lead to false negatives and false positives if the checking/anti-checking constraints contain cycles. FIG. 2J shows an embodiment to insert a dummy load to break the constraint cycle. Referring to FIG. 2J, in one embodiment, fused instructions may use one or more alias protection registers to break constraint cycles. In another embodiment, a dummy memory instruction may be inserted immediately after the fused instruction that may access the same memory as the fused instruction but may use a different alias protection register with regard to the fused instruction. In one embodiment, the hardware may implement the dummy memory instructions to perform only the alias protection/check without actual memory access to reduce overhead. In one embodiment, a dummy memory operation may be inserted when a constraint cycle is about to happen if constraint cycles may not happen frequently. 
For example, during the scheduling, information on the P/C bit for each logical instruction in a fused instruction may be tracked. Dummy memory instructions may be inserted to partition the P/C bits, in case the P/C bits are on one or more logical instructions in a fused instruction. In some embodiments, setting a P/C bit on one or more logical instructions in a fused instruction may be avoided if the one or more logical instructions in the fused instruction access the same memory. For example, in the load-load elimination cases shown in FIG. 2H, the C bit on Load_2 may not be set in response to the instructions checked by Load_2 always being checked by Load_1. Similarly, the P bit on Load_1 may not be set in response to the instructions that check Load_1 always checking Load_2. In one embodiment, the C/P bit on at most three logical instructions may be kept, such as the earliest instruction with a C bit, the latest instruction with a P bit and the latest store with a P bit, no matter how many logical instructions are merged into a fused instruction. The embodiment of FIG. 2J illustrates an example of using dummy memory instructions to break the cycle. In some embodiments, dummy memory instructions may break the cycle, but may not remove the checking/anti-checking constraints. The scheduling of a dummy memory instruction may not be performed in case of a cycle in response to alias protection registers not being sufficient for the schedule. For example, in the schedule shown in FIG. 2J, in response to there being no alias protection register left after scheduling load_1, either store_0 or store_2 may not be scheduled if at least one more alias protection register may be needed for either of the schedulings. The embodiment of FIG. 2J may illustrate that if store_0 is to be scheduled, a new alias protection register may be needed for dummy_load that is to be checked by store_2. 
If store_2 is to be scheduled, a new alias protection register may be needed for store_2, which may be checked against by store_0 (load_3). In one embodiment, the scheduling of store_2 may be executed based on the availability of the new alias protection register for store_2. For example, store_2 may not be scheduled in response to determining that the new alias protection register for store_2 is absent or unavailable. In one embodiment, a number of alias registers may be reserved. For example, the number may be equal to the number of eliminated instructions in the fused instructions. If all the remaining instructions are scheduled in their original order (for a fused instruction, the order of its first logical instruction), only the reordered logical instructions may use additional alias registers. With the alias register reservation, the scheduling may be performed without running out of registers. Figure 2K depicts an embodiment of an algorithm that extends the register allocation of Figure 2G to handle register overflow and constraint cycles. Referring to FIG. 2K, the embodiment may reserve a register count for all fused instructions. In one embodiment, alias registers equal in number to the eliminated instructions in the fused instructions may be reserved to avoid running out of registers. In another embodiment, if all the remaining instructions are scheduled in their original order (for a fused instruction, the order of its first logical instruction), only the reordered logical instructions may need additional alias registers. FIG. 3 illustrates an embodiment of a method. The flow of FIG. 3 may be used to perform optimization and scheduling on original codes. In one embodiment, one or more of the embodiments shown in FIGS. 2A to 2K may be used in the flow of FIG. 3. In one embodiment, the flow of FIG. 3 may be used to implement an optimizer/scheduler that may optimize and/or schedule original codes.
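Under one plausible reading of the reservation rule above, a fused instruction merging n logical instructions eliminates n - 1 of them, so the reserved register count can be computed as follows (hypothetical sketch; the list-of-lists representation is an assumption):

```python
def reserved_alias_registers(fused_instructions):
    """fused_instructions: each entry lists the logical instructions merged
    into one fused instruction. Reserve one alias register per eliminated
    (merged-away) logical instruction, per the reservation rule described
    in the text, so scheduling cannot run out of registers."""
    return sum(len(logicals) - 1 for logicals in fused_instructions)
```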
In one embodiment, the optimizer/scheduler may be implemented by software; however, in some embodiments, the optimizer/scheduler may be implemented by hardware, software, firmware and/or any combination thereof. In block 302, the optimizer/scheduler may compute data dependences such as A1→A2 between instructions A1 and A2. In block 304, the optimizer/scheduler may reserve a number of alias registers to prevent register overflow due to one or more fused instructions. In one embodiment, the number of alias registers may be equal to the number of fused instructions; however, in some embodiments, the number of alias registers may have a different value. In block 306, the optimizer/scheduler may select one instruction, e.g., the second instruction A2, to schedule, until all instructions in the original codes are scheduled. In block 308, in response to determining that scheduling the selected second instruction A2 may cause alias register overflow, the optimizer/scheduler may return to block 306, wherein the optimizer/scheduler may select a third instruction A3 other than the second instruction A2. In response to selecting the third instruction A3 in block 306, the optimizer/scheduler may determine whether the third instruction A3 may run out of alias registers (block 308). If so, the optimizer/scheduler may continue to select a different instruction to schedule until it is determined that the selected instruction may not cause alias register overflow. The optimizer/scheduler may schedule the selected instruction in response to determining that the selected instruction may not cause alias register overflow (block 308). In block 310, the optimizer/scheduler may add constraints relating to the scheduled instruction, e.g., A3, to the constraint graph (e.g., as shown in FIG. 2K) and set the corresponding C/P bits.
In one embodiment, the optimizer/scheduler may add checking constraints and/or anti-checking constraints for the scheduled instruction A3 to the constraint graph or any other structure. In block 312, the optimizer/scheduler may insert dummy memory operations or codes to prevent cycles in the constraint graph. In one embodiment, the optimizer/scheduler may remove unnecessary C/P bits if the scheduled instruction A3 is a fused instruction. In another embodiment, if the scheduled instruction has C/P bits on multiple logical instructions, the optimizer/scheduler may insert one or more dummy memory operations or instructions to partition the C/P bits. In another embodiment, if no alias register is needed for the scheduled instruction, the flow may go back to block 306 to select and schedule a next instruction (block 314). In block 316, the optimizer/scheduler may release an allocated alias register through rotation. For example, the release may be performed in response to the allocated alias register having been checked against, with no other instruction remaining to check against the allocated register. In one embodiment, the alias protection register may be released at the beginning of the execution of a next instruction. In another embodiment, the allocation of an alias protection register that is used by a current instruction may be delayed until the register is released at the beginning of the execution of a next instruction. In block 316, AHPTR may be updated in response to the rotation. In block 318, if the constraints in the constraint graph prevent the alias register allocation for the scheduled instruction, e.g., if there are one or more constraints from a subsequent instruction that has not been scheduled, the optimizer/scheduler may delay the register allocation for the currently scheduled instruction. In one embodiment, the alias register for the currently scheduled instruction may be allocated in response to the subsequent instruction being scheduled.
For example, the flow may return to block 306 to select and schedule a next instruction. In block 320, in response to determining that the register allocation for the currently scheduled instruction need not be delayed, the optimizer/scheduler may allocate an alias register for the scheduled instruction. In block 322, in response to allocating the new alias register for the scheduled instruction, the optimizer/scheduler may remove constraints related to the scheduled instruction, and/or may recursively allocate alias registers for any scheduled instruction whose register allocation was delayed due to those constraints. In one embodiment, the embodiments of FIGS. 2A to 2K and FIG. 3 may be used for register allocation for rotation-based alias protection registers. In one embodiment, the embodiments may be used to reduce the number of registers used in rotation-based alias protection. For example, reducing the number of alias registers may provide optimization benefits and performance, and may enable reduction of the alias hardware to save die area and power consumption. While the method of FIG. 3 is illustrated as a sequence of processes, the methods in some embodiments may perform the illustrated processes in a different order. While the embodiments shown in FIG. 3 and/or FIGS. 2A to 2K may be implemented by an optimizer/scheduler, in some embodiments, instruction optimizing and scheduling may be implemented separately by an optimizer and a scheduler, respectively, or one or more logics, such as a register allocation logic, may be used to implement the embodiments of FIG. 3 and/or FIGS. 2A to 2K. In another embodiment, instruction optimizing and scheduling may be implemented by either an optimizer or a scheduler. While the embodiments mentioned herein may relate to store and/or load instructions, in some embodiments, any other memory instructions may be utilized.
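The core loop of the FIG. 3 flow (select an instruction that will not overflow the alias registers, allocate, then release registers by rotation) might be sketched as the following toy loop. This is a hypothetical Python sketch: the dictionary fields `regs_needed` and `regs_released`, the greedy selection policy, and the omission of constraint graphs, dummy insertions, and delayed allocation are all assumptions made for brevity, not the patent's actual mechanism.

```python
def schedule_instructions(instructions, num_alias_regs):
    """Toy model of the FIG. 3 loop with a fixed alias-register budget."""
    scheduled = []
    free_regs = num_alias_regs           # block 304: reserved register budget
    pending = list(instructions)
    while pending:
        # blocks 306/308: pick an instruction whose scheduling does not
        # overflow the alias registers; keep looking past ones that would.
        pick = next((i for i in pending if i["regs_needed"] <= free_regs), None)
        if pick is None:
            raise RuntimeError("no schedulable instruction without overflow")
        pending.remove(pick)
        # blocks 310/320: record the instruction and allocate its registers
        free_regs -= pick["regs_needed"]
        scheduled.append(pick["name"])
        # block 316: rotation releases registers already checked against
        free_regs += pick.get("regs_released", 0)
    return scheduled
```

In this model an instruction that would overflow the budget is simply skipped until rotation has freed enough registers, which is the reordering behavior the flow of blocks 306 and 308 describes.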
While certain features of the invention have been described with reference to embodiments, the description is not intended to be construed in a limiting sense. Various modifications of the embodiments, as well as other embodiments of the invention, which are apparent to persons skilled in the art to which the invention pertains are deemed to lie within the spirit and scope of the invention. |
According to an embodiment of the present disclosure, a leadframe for an integrated circuit (IC) device may comprise a center support structure for mounting an IC chip, a plurality of pins extending from the center support structure, and a bar connecting the plurality of pins remote from the center support structure. Each pin of the plurality of pins may include a dimple. |
CLAIMS

1. A leadframe for an integrated circuit (IC) device, the leadframe comprising:
a center support structure for mounting an IC chip;
a plurality of pins extending from the center support structure; and
a bar connecting the plurality of pins remote from the center support structure;
wherein each pin of the plurality of pins includes a dimple.

2. A leadframe according to Claim 1, further comprising the dimple of each pin disposed adjacent the bar.

3. A leadframe according to Claims 1 or 2, wherein the leadframe is for a quad-flat no-leads IC package.

4. A leadframe according to Claim 3, wherein the leadframe is for a dual-flat no-leads IC package.

5. A leadframe according to any one of the preceding Claims, wherein the leadframe includes a multitude of center support structures arrayed in a matrix for manufacturing multiple IC devices.

6. A leadframe according to any one of the preceding Claims, wherein the leadframe includes a multitude of center support structures arrayed in a matrix for manufacturing multiple IC devices; and wherein each dimple extends from a first side of the bar to a second side of the bar.

7. A leadframe according to any one of the preceding Claims, wherein each dimple is etched into the respective pins in a square shape.

8. A leadframe according to any one of the preceding Claims, wherein each dimple is etched into the respective pins in a square shape with sides having a length of approximately 0.14 mm.

9. A leadframe according to any one of the preceding Claims, wherein each dimple is etched to a depth of approximately half the full height of the respective pin.

10.
A method for manufacturing an integrated circuit (IC) device in a flat no-leads package, the method comprising:
mounting an IC chip onto a center support structure of a leadframe, the leadframe including:
the center support structure;
a plurality of pins extending from the center support structure; and
a bar connecting the plurality of pins remote from the center support structure;
wherein each pin of the plurality of pins includes a dimple;
bonding the IC chip to at least some of the plurality of pins;
encapsulating the leadframe and bonded IC chip creating an IC package; and
cutting the IC package free from the bar by sawing through the encapsulated leadframe at a set of cutting lines intersecting the dimples of the plurality of pins, exposing an end face of each of the plurality of pins and leaving a portion of the dimples that extends from the bottom surface of the IC package to a side surface with the exposed end faces of the pins.

11. A method according to Claim 10, further comprising:
performing an isolation cut to isolate individual pins of the IC package without separating the IC package from the leadframe; and
performing a circuit test of the isolated individual pins after the isolation cut.

12. A method according to Claims 10 or 11, further comprising bonding the IC chip to at least some of the plurality of pins using wire bonding.

13. A method according to Claim 12, further comprising plating the exposed portion of the plurality of pins, including the dimples, on a bottom surface of the IC package before cutting the IC package free from the bar.

14.
A method for installing an integrated circuit (IC) device in a flat no-leads package onto a printed circuit board (PCB), the method comprising:
mounting an IC chip onto a center support structure of a leadframe, the leadframe including:
the center support structure;
a plurality of pins extending from the center support structure; and
a bar connecting the plurality of pins remote from the center support structure;
wherein each pin of the plurality of pins includes a dimple;
bonding the IC chip to at least some of the plurality of pins;
encapsulating the leadframe and bonded IC chip creating an IC package;
cutting the IC package free from the bar by sawing through the encapsulated leadframe at a set of cutting lines intersecting the dimples of the plurality of pins, exposing an end face of each of the plurality of pins and leaving a portion of the dimples that extends from the bottom surface of the IC package to a side surface with the exposed end faces of the pins; and
attaching the flat no-leads IC package to the PCB using a reflow soldering method to join the plurality of pins of the IC package to respective contact points on the PCB.

15. A method according to Claim 14, further comprising:
performing an isolation cut to isolate individual pins of the IC package without separating the IC package from the bar; and
performing a circuit test of the isolated individual pins after the isolation cut.

16. A method according to Claims 14 or 15, further comprising bonding the IC chip to at least some of the plurality of pins using wire bonding.

17. A method according to Claim 16, wherein the reflow soldering process provides fillet heights of approximately 60% of the exposed surface of the pins.

18. A method according to any one of the preceding Claims 14-17, further comprising plating the exposed portion of the plurality of pins on a bottom surface of the IC package, including the dimples, before cutting the IC package free from the bar.

19.
An integrated circuit (IC) device in a flat no-leads package comprising:
an IC chip mounted onto a center support structure of a leadframe and encapsulated with the leadframe to form an IC package having a bottom face and four sides;
a set of pins with faces exposed along a lower edge of the four sides of the IC package; and
a dimple in each of the set of pins disposed along a perimeter of the bottom face of the IC package and extending into the exposed faces of the set of pins;
wherein at least a bottom-facing exposed portion of each of the plurality of pins including the dimple is plated.

20. An IC device according to Claim 19, wherein the plurality of pins are attached to a printed circuit board with fillet heights of approximately 60%.
FLAT NO-LEADS PACKAGE WITH IMPROVED CONTACT PINS

RELATED PATENT APPLICATION

This application claims priority to commonly owned U.S. Provisional Patent Application No. 62/082,357, filed November 20, 2014, which is hereby incorporated by reference herein for all purposes.

TECHNICAL FIELD

The present disclosure relates to integrated circuit packaging, in particular to so-called flat no-leads packaging for integrated circuits.

BACKGROUND

Flat no-leads packaging refers to a type of integrated circuit (IC) packaging with integrated pins for surface mounting to a printed circuit board (PCB). Flat no-leads packages may sometimes be called micro leadframes (MLF). Flat no-leads packages, including for example quad-flat no-leads (QFN) and dual-flat no-leads (DFN), provide physical and electrical connection between an encapsulated IC component and an external circuit (e.g., a printed circuit board (PCB)). In general, the contact pins for a flat no-leads package do not extend beyond the edges of the package. The pins are usually formed by a single leadframe that includes a central support structure for the die of the IC. The leadframe and IC are encapsulated in a housing, typically made of plastic. Each leadframe may be part of a matrix of leadframes that has been molded to encapsulate several individual IC devices. Usually, the matrix is sawed apart to separate the individual IC devices by cutting through any joining members of the leadframe. The sawing or cutting process also exposes the contact pins along the edges of the packages. Once sawn, the bare contact pins may provide a bad or no connection for reflow soldering. The exposed faces of the contact pins may not provide sufficient wettable flanks for a reliable connection. Reflow soldering is a preferred method for attaching surface mount components to a PCB, intended to melt the solder and heat the adjoining surfaces without overheating the electrical components, thereby reducing the risk of damage to the components.
SUMMARY

Hence, a process or method that improves the wettable surface of flat no-leads contact pins for a reflow soldering process to mount the flat no-leads package to an external circuit may provide improved electrical and mechanical performance of an IC in a QFN or other flat no-leads package.

According to an embodiment of the present disclosure, a leadframe for an integrated circuit (IC) device may comprise a center support structure for mounting an IC chip, a plurality of pins extending from the center support structure, and a bar connecting the plurality of pins remote from the center support structure. Each pin of the plurality of pins may include a dimple. The dimple of each pin may be disposed adjacent the bar. In some embodiments, the leadframe may be for a quad-flat no-leads IC package. In some embodiments, the leadframe may be for a dual-flat no-leads IC package. The leadframe may include a multitude of center support structures arrayed in a matrix for manufacturing multiple IC devices. In some embodiments, each dimple may extend from a first side of the bar to a second side of the bar. Each dimple may be etched into the respective pins in a square shape. Each dimple may be etched into the respective pins in a square shape with sides having a length of approximately 0.14 mm. Each dimple may be etched to a depth of approximately half the full height of the respective pin.

According to an embodiment of the present disclosure, a method for manufacturing an integrated circuit (IC) device in a flat no-leads package may include mounting an IC chip onto a center support structure of a leadframe, bonding the IC chip to at least some pins of the leadframe, encapsulating the leadframe and bonded IC chip creating an IC package, and cutting the IC package free from the bar by sawing through the encapsulated leadframe at a set of cutting lines intersecting the dimples of the plurality of pins.
The leadframe may include a center support structure, a plurality of pins extending from the center support structure, and a bar connecting the plurality of pins remote from the center support structure. Each pin of the plurality of pins may include a dimple. Sawing along the set of cutting lines may expose an end face of each of the plurality of pins and leave a portion of the dimples that extends from the bottom surface of the IC package to a side surface with the exposed end faces of the pins. In some embodiments, the method may include performing an isolation cut to isolate individual pins of the IC package without separating the IC package from the lead frame and performing a circuit test of the isolated individual pins after the isolation cut. Some embodiments may include bonding the IC chip to at least some of the plurality of pins using wire bonding. Some embodiments may include plating the exposed portion of the plurality of pins, including the dimples, on a bottom surface of the IC package before cutting the IC package free from the bar.According to another embodiment of the present disclosure, a method for installing an integrated circuit (IC) device in a flat no-leads package onto a printed circuit board (PCB) may include mounting an IC chip onto a center support structure of a leadframe, bonding the IC chip to at least some of the plurality of pins, encapsulating the leadframe and bonded IC chip creating an IC package, cutting the IC package free from the bar by sawing through the encapsulated lead frame at a set of cutting lines intersecting the dimples of the plurality of pins, and attaching the flat no-leads IC package to the PCB using a reflow soldering method to join the plurality of pins of the IC package to respective contact points on the PCB. 
Sawing along the set of cutting lines may expose an end face of each of the plurality of pins and leave a portion of the dimples that extends from the bottom surface of the IC package to a side surface with the exposed end faces of the pins. The leadframe may include a center support structure, a plurality of pins extending from the center support structure, and a bar connecting the plurality of pins remote from the center support structure. Each pin of the plurality of pins may include a dimple. Some embodiments of the method may include performing an isolation cut to isolate individual pins of the IC package without separating the IC package from the bar and performing a circuit test of the isolated individual pins after the isolation cut. Some embodiments of the method may include bonding the IC chip to at least some of the plurality of pins using wire bonding. Some embodiments of the method may provide fillet heights of approximately 60% of the exposed surface of the pins. Some embodiments of the method may include plating the exposed portion of the plurality of pins on a bottom surface of the IC package, including the dimples, before cutting the IC package free from the bar.

According to some embodiments of the present disclosure, an integrated circuit (IC) device in a flat no-leads package may include an IC chip mounted onto a center support structure of a leadframe and encapsulated with the leadframe to form an IC package having a bottom face and four sides, a set of pins with faces exposed along a lower edge of the four sides of the IC package, and a dimple in each of the set of pins disposed along a perimeter of the bottom face of the IC package and extending into the exposed faces of the set of pins. At least a bottom-facing exposed portion of each of the plurality of pins including the dimple may be plated.
In some embodiments, the plurality of pins may be attached to a printed circuit board with fillet heights of approximately 60%.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a schematic showing a cross section side view through an embodiment of a flat no-leads package mounted on a printed circuit board (PCB) according to the teachings of the present disclosure.
Figure 2A is a picture showing part of a typical QFN package in a side view and bottom view. Figure 2B shows an enlarged view of the face of copper contact pins along the edge of a QFN package exposed by sawing through an encapsulated leadframe.
Figure 3 is a picture showing a typical QFN package after a reflow soldering process failed to provide sufficient mechanical and electrical connections to a PCB.
Figures 4A and 4B are pictures showing a partial view of a packaged IC device incorporating teachings of the present disclosure in a flat no-leads package with high wettable flanks for use in reflow soldering.
Figures 5A and 5B are drawings showing an isometric view of a typical QFN package after mounting to a PCB by a reflow soldering process.
Figures 6A and 6B are drawings showing a leadframe matrix including multiple leadframes which may be used to practice the teachings of the present disclosure.
Figures 7A and 7B are drawings showing a portion of the plurality of pins of two adjacent leadframes incorporating teachings of the present disclosure.
Figures 8A-8D show various embodiments of dimples and pins that may be used to practice the teachings of the present disclosure.
Figures 9A and 9B are drawings showing an isometric view of an encapsulated IC device incorporating the teachings of the present disclosure.
Figures 10A and 10B are drawings showing an isometric view of an IC device encapsulated in plastic and attached to a PCB by a reflow soldering process according to teachings of the present disclosure.
Figure 11 is a flowchart illustrating an example method for manufacturing an IC device in a flat no-leads package incorporating teachings of the present disclosure.
Figure 12 illustrates an example process that may be used to practice teachings of the present disclosure.

DETAILED DESCRIPTION

Figure 1 is a side view showing a cross section view through a flat no-leads package 10 mounted on a printed circuit board (PCB) 12. Package 10 includes contact pins 14a, 14b, die 16, leadframe 18, and encapsulation 20. Die 16 may include any integrated circuit, whether referred to as an IC, a chip, and/or a microchip. Die 16 may include a set of electronic circuits disposed on a substrate of semiconductor material, such as silicon. As shown in Figure 1, contact pin 14a is the subject of a failed reflow process in which the solder 20a did not stay attached to the exposed face of contact pin 14a; the bare copper face of contact pin 14a created by sawing the package 10 free from a leadframe matrix (shown in more detail in Figure 6 and discussed below) may contribute to such failures. In contrast, contact pin 14b shows an improved soldered connection 20b created by a successful reflow procedure. This improved connection provides both electrical communication and mechanical support. The face of contact pin 14b may have been plated before the reflow procedure (e.g., with tin plating). Figure 2A is a picture showing part of a typical QFN package 10 in a side view and bottom view. Figure 2B shows an enlarged view of the face 24 of copper contact pins 14a along the edge of QFN package 10 exposed by sawing through the encapsulated leadframe 18. As shown in Figure 2A, the bottom 22 of contact pin 14a is plated (e.g., with tin plating) but the exposed face 24 is bare copper. Figure 3 is a picture of a typical QFN package 10 after a reflow soldering process failed to provide sufficient mechanical and electrical connections to a PCB 12.
As shown in Figure 3, bare copper face 24 of contact pins 14a may provide bad or no connection after reflow soldering. The exposed face 24 of contact pins 14a may not provide sufficient wettable flanks to provide a reliable connection.Figures 4A and 4B are drawings showing an isometric view of a typical QFN package 10 after sawing through the encapsulated leadframe 18. The bottom 22 of each contact pin 14a is plated (e.g., with tin plating), but the exposed face 24 of each contact pin is unplated due to the sawing process. In many QFN packages 10, there is an additional plated central surface such as thermal pad 26.Figures 5A and 5B are drawings showing an isometric view of a typical QFN package 10 after mounting to a PCB 28 by a reflow soldering process. PCB includes leads 30, which are mechanically and electrically connected to the contact pins 14a by solder bead 32. As shown in Figures 5A and 5B, solder beads 32 cover only a small portion of exposed faces 24. As discussed above, this may be because of insufficient wettable flanks for the pins 14a.Figures 6A and 6B are drawings showing a leadframe matrix 40 including multiple leadframes 42a, 42b, 42c, 42d which may be used to practice the teachings of the present disclosure. As shown, each leadframe 42 may include a center support structure 44, a plurality of pins 46 extending from the center support structure, and one or more bars 48 connecting the plurality of pins remote from the center support structure. Leadframe 42 may include a metal structure providing electrical communication through the pins 46 from an IC device (not shown in Figures 6A and 6B) mounted to center support structure 44 as well as providing mechanical support for the IC device. In some applications, an IC device may be glued to center support structure 44. In some embodiments, the IC device may be referred to as a die. 
In some embodiments, pads or contact points on the die or IC device may be connected to respective pins by bonding (e.g., wire bonding, ball bonding, wedge bonding, compliant bonding, thermosonic bonding, or any other appropriate bonding technique). In some embodiments, leadframe 42 may be manufactured by etching or stamping. Figures 7A and 7B are drawings showing a portion of the plurality of pins 46 of two adjacent leadframes 42a, 42b. As shown in Figures 7A and 7B, the pins 46 may each include a dimple 50. In some embodiments of the present disclosure, dimples 50 may be etched into pins 46. In the embodiment of Figures 7A and 7B, dimples 50 may be square with a side length of approximately 0.14 mm and disposed on opposite sides of bar 48. In some embodiments, two opposing dimples 50 may be disposed with centers spaced approximately 0.075 mm from the edge of bar 48. In some embodiments, the centers of opposing dimples 50 may be disposed approximately 0.3 mm apart. Figures 8A-8D show various embodiments of dimples 50 and pins 46 that may be used to practice the teachings of the present disclosure. Figures 9A and 9B are drawings showing an isometric view of an encapsulated IC device 60 packaged in plastic 62 and incorporating the teachings of the present disclosure. The bottom surfaces 52 of the pins 46 and thermal pad 64 have been plated with tin to produce an IC device 60 in a flat no-leads package with high wettable flanks for use in reflow soldering, providing an improved solder connection such as that shown at contact pin 14b in Figure 1. As shown, IC device 60 may comprise a quad-flat no-leads package. In other embodiments, IC device 60 may comprise a dual-flat no-leads package, or any other packaging (e.g., any micro leadframe (MLF)) in which the leads do not extend much beyond the edges of the packaging and which is configured to surface-mount the IC to a PCB. As shown in Figures 9A and 9B, dimples 50 are plated along with bottom surfaces 52 of pins 46.
Although the exposed faces 54 of pins 46 may include some bare copper, dimples 50 provide a plated surface on the side of IC device 60. The plated surface of dimples 50 provides increased wettable flanks and, therefore, may provide improved electrical and/or mechanical connections between IC device 60 and a PCB. In alternative embodiments, dimples 50 and/or bottom surfaces 52 may not be plated at all. In these embodiments, the physical shape of dimples 50 may allow solder to flow into dimples 50 and improve the connections even in the absence of plating. Figures 10A and 10B are drawings showing an isometric view of IC device 60 encapsulated in plastic 62 and attached to a PCB 64 by a reflow soldering process. As shown in Figures 10A and 10B, the pins 46 of IC device 60 are connected to leads 66 on PCB 64 by solder beads 68. In contrast to the IC device 10 shown in Figure 5B, solder beads 68 extend upward along exposed faces 54 of pins 46. The greater physical extent of solder beads 68 upward along exposed faces 54 may provide improved mechanical and/or electrical connections between IC device 60 and PCB 64. Figure 11 is a flowchart illustrating an example method 100 for manufacturing an IC device in a flat no-leads package incorporating teachings of the present disclosure. Method 100 may provide improved connection for mounting the IC device to a PCB. Step 102 may include backgrinding a semiconductor wafer on which an IC device has been produced. Typical semiconductor or IC manufacturing may use wafers approximately 750 μm thick. This thickness may provide stability against warping during high-temperature processing. In contrast, once the IC device is complete, a thickness of approximately 50 μm to 75 μm may be preferred. Backgrinding (also called backlap or wafer thinning) may remove material from the side of the wafer opposite the IC device. Step 104 may include sawing and/or cutting the wafer to separate an IC chip from other components formed on the same wafer.
Step 106 may include mounting the IC chip (or die) on a center support structure of a leadframe. The IC die may be attached to the center support structure by gluing or any other appropriate method. At Step 108, the IC die may be connected to the individual pins extending from the center support structure of the leadframe. In some embodiments, pads and/or contact points on the die or IC device may be connected to respective pins by bonding (e.g., wire bonding, ball bonding, wedge bonding, compliant bonding, thermosonic bonding, or any other appropriate bonding technique). At Step 110, the IC device and leadframe may be encapsulated to form an assembly. In some embodiments, this includes molding into a plastic case. If a plastic molding is used, a post-molding cure step may follow to harden and/or set the housing. Step 112 may include a chemical de-flashing and a plating process to cover the exposed bottom areas of the connection pins. As discussed above, the step of plating may not be incorporated in all embodiments of the present disclosure. In embodiments including plating, dimples in the pins may also be plated. Step 114 may include performing an isolation cut. The isolation cut may include sawing through the pins of each package to electrically isolate the pins from one another. Step 116 may include a test and marking of the IC device once the isolation cut has been completed. Method 100 may be changed by altering the order of the various steps, adding steps, and/or eliminating steps. For example, flat no-leads IC packages may be produced according to teachings of the present disclosure without performing an isolation cut and/or testing of the IC device.
Persons having ordinary skill in the art will be able to develop alternative methods using these teachings without departing from the scope or intent of this disclosure.

Step 118 may include a singulation cut to separate the IC device from the bar, the leadframe, and/or other nearby IC devices in embodiments where leadframe 42 is part of a matrix 40 of leadframes 42a, 42b, etc. The singulation cut may be made through the dimples 50 of the pins 46 of the leadframe 42.

Figure 12 illustrates a process of one embodiment of a singulation cut that may be used at Step 118. Figure 12 is a schematic drawing showing an isometric view of saw 70 cutting through pins 46 along bar 48 encapsulated in plastic molding 62. After any testing and/or marking in Step 116, a singulation cut of width Ws is made through the full package as shown in Figure 12. The saw width, Ws, is wide enough to intersect dimples 50 but not so wide as to obliterate dimples 50 completely. Thus, after the singulation cut is complete, the remaining portion of dimples 50 will extend from bottom faces 52 to exposed faces 54 of pins 46 as shown in Figures 9A and 9B.

Step 120 may include attaching the separated IC device 60, in its package, to a PCB 64 or other mounting device. In some embodiments, the IC device may be attached to a PCB using a reflow soldering process. Figures 10A and 10B show an isometric view of the pin area of an IC device that has been mounted on a printed circuit board and attached by a reflow solder process. The dimples 50 provided by the present disclosure can increase the wettable flanks, or fillet height, to 60% and meet, for example, automotive customer requirements.
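The constraint on the singulation cut can be stated geometrically: if the cut is centered on a row of dimples of diameter D, a saw of width Ws less than D leaves a crescent of plated dimple on each package flank, while Ws at or above D obliterates the dimples. The sketch below is a hypothetical model of that constraint (the function name and the example dimensions are illustrative, not from the disclosure).

```python
# Hypothetical geometry check for the singulation cut described above.
# Assumes the saw kerf is centered on a row of circular dimples of
# diameter D spanning the bar; all dimensions are illustrative.
def remaining_dimple_depth(saw_width_um: float, dimple_diameter_um: float) -> float:
    """Depth of plated dimple left on each exposed pin flank after the cut.

    A saw of width Ws < D leaves a crescent of depth (D - Ws) / 2 on each
    side of the kerf; Ws >= D removes the dimple entirely (depth 0),
    defeating the wettable-flank improvement.
    """
    if saw_width_um >= dimple_diameter_um:
        return 0.0
    return (dimple_diameter_um - saw_width_um) / 2.0
```

For example, a 200 μm saw centered on 300 μm dimples leaves a 50 μm crescent on each flank, whereas a 300 μm saw leaves nothing.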
Thus, according to various teachings of the present disclosure, the "wettable flanks" of a flat no-leads device may be improved, and each solder joint made by a reflow soldering process may provide improved performance and/or increased acceptance rates during visual and/or performance testing. In contrast, a conventional manufacturing process for a flat no-leads integrated circuit package may leave pin connections without sufficient wettable surface for a reflow solder process. Even if the exposed pins are plated before separating the package from the leadframe or matrix, the final sawing step used in a typical process leaves only bare copper on the exposed faces of the pins.
An embedded multi-die interconnect bridge (EMIB) die is configured with power delivery to the center of the EMIB die, and the power is distributed to two dies that are interconnected across the EMIB die.
1. An embedded multi-die interconnect bridge package, comprising:
a power flood plain centrally located on an embedded multi-die interconnect bridge (EMIB) die, wherein the EMIB die is embedded in a semiconductor device package;
a first power-delivery via on the power flood plain, wherein the first power-delivery via is part of a metallization at an interconnect surface of the EMIB die;
a subsequent power-delivery via on the power flood plain and in the metallization;
a power rail within the metallization, wherein the power rail contacts the first and subsequent power-delivery vias;
a first power-distribution via contacting the power rail, wherein the first power-distribution via emerges from the metallization at a first die side; and
a subsequent power-distribution via contacting the power rail, wherein the subsequent power-distribution via emerges from the metallization at a subsequent die side.
2. The EMIB package of claim 1, wherein the metallization includes four metallization levels, metal-1 (M1), M2, M3, and M4, wherein the power rail is disposed at one of M1, M2, M3, and M4, and wherein the metallization further includes a ground rail disposed at one of M1, M2, M3, and M4 not occupied by the power rail.
3. The EMIB package of claim 1, further comprising:
a first ground via in the metallization at the first die side;
a subsequent ground via at the subsequent die side; and
a ground rail within the metallization, the ground rail contacting the respective first and subsequent ground vias.
4. A semiconductor device package, comprising:
a bridge die disposed in a package substrate, wherein the bridge die includes a metallization at an interconnect surface;
a first semiconductor device coupled to the bridge die, wherein the first semiconductor device projects a first footprint on a first portion of the metallization; and
a subsequent semiconductor device coupled to the bridge die, wherein the subsequent semiconductor device projects a subsequent footprint on a subsequent portion of the metallization;
wherein the metallization includes a power flood plain coupling a first power-delivery via to a power rail and to a first-die power-distribution via, the first-die power-distribution via being coupled to the first semiconductor device;
wherein the power flood plain couples a subsequent power-delivery via to the power rail and to a subsequent-die power-distribution via, the subsequent-die power-distribution via being coupled to the subsequent semiconductor device; and
wherein the power flood plain is disposed between the first power-distribution via and the subsequent power-distribution via.
5. The semiconductor device package of claim 4, wherein the first power-distribution via is one of more than one first power-distribution via, wherein the subsequent power-distribution via is one of more than one subsequent power-distribution via, and wherein the first power-distribution vias outnumber the subsequent power-distribution vias.
6. The semiconductor device package of claim 4, further comprising:
a first ground via disposed in the first footprint;
a subsequent ground via disposed in the subsequent footprint; and
a ground rail within the metallization, the ground rail contacting the respective first and subsequent ground vias.
7. The semiconductor device package of claim 4, wherein the metallization includes four metallization levels, metal-1 (M1), M2, M3, and M4, wherein the power rail is disposed at one of M1, M2, M3, and M4, and wherein the metallization further includes a ground rail disposed at one of M1, M2, M3, and M4 not occupied by the power rail.
8. The semiconductor device package of claim 4, wherein the metallization includes four metallization levels, metal-1 (M1), M2, M3, and M4, wherein the power rail is disposed at one of M1, M2, M3, and M4, wherein the metallization further includes a ground rail disposed at one of M1, M2, M3, and M4 not occupied by the power rail, and wherein two of the metallization levels are configured with input/output rails.
9. The semiconductor device package of claim 4, wherein the first semiconductor device is a logic die, and the subsequent semiconductor device is a memory die.
10. The semiconductor device package of claim 4, wherein the power flood plain is a first (VCC1) power flood plain, the semiconductor device package further comprising a second (VCC2) power flood plain, wherein the VCC2 power flood plain is disposed between the first footprint and the subsequent footprint.
11. The semiconductor device package of claim 4, wherein the power flood plain is a first (VCC1) power flood plain, the semiconductor device package further comprising:
a second (VCC2) power flood plain, wherein the VCC2 power flood plain is disposed between the first footprint and the subsequent footprint, and wherein the VCC2 power flood plain includes two VCC2 portions separated by the VCC1 power flood plain;
a first plurality of power traces extending from one of the two VCC2 portions into the first footprint; and
a subsequent plurality of power traces extending from the other of the two VCC2 portions into the subsequent footprint.
12. The semiconductor device package of claim 4, wherein the power flood plain is a first (VCC1) power flood plain, the semiconductor device package further comprising:
a second (VCC2) power flood plain, wherein the VCC2 power flood plain is disposed between the first footprint and the subsequent footprint, and wherein the VCC2 power flood plain includes two VCC2 portions separated by the VCC1 power flood plain;
a first plurality of power traces extending from one of the two VCC2 portions into the first footprint;
a subsequent plurality of power traces extending from the other of the two VCC2 portions into the subsequent footprint; and
a ground flood plain including two portions separated by the VCC1 power flood plain, wherein one of the two ground flood plain portions includes a first ground via contacting a ground rail in the metallization, and wherein the other of the two ground flood plain portions includes a subsequent ground via contacting the ground rail.
13. The semiconductor device package of claim 4, further comprising:
a third semiconductor device coupled to the first semiconductor device across the bridge die, wherein the subsequent semiconductor device and the third semiconductor device share an interconnect surface opposite the first semiconductor device.
14. The semiconductor device package of claim 13, further comprising:
a first ground via disposed in the first footprint;
a subsequent ground via disposed in the subsequent footprint;
a third ground via disposed in a third footprint; and
a ground rail in the metallization, the ground rail contacting the respective first, subsequent, and third ground vias.
15. A method of operating an embedded multi-die interconnect bridge (EMIB) device, comprising:
introducing power to a bridge die at an interconnect surface at a location between electrical connections of a first semiconductor device and a subsequent semiconductor device;
powering the first semiconductor device by directing current through a first power-delivery via, a power rail contacting the first power-delivery via, a first power-distribution via contacting the power rail, and an electrical connection contacting the first semiconductor device; and
powering the subsequent semiconductor device by directing current through a subsequent power-delivery via, the power rail, a subsequent power-distribution via contacting the power rail, and an electrical connection contacting the subsequent semiconductor device.
16. The method of claim 15, wherein current flows peripherally in a first direction within the bridge die along the power rail between the first power-delivery via and the first power-distribution via, and wherein current flows peripherally in a subsequent direction within the bridge die along the power rail between the subsequent power-delivery via and the subsequent power-distribution via.
17. The method of claim 15, wherein the first power-distribution via is one of a first plurality of power-distribution vias, wherein the subsequent power-distribution via is one of a subsequent plurality of power-distribution vias, and wherein the first plurality outnumbers the subsequent plurality.
18. A computing system, comprising:
a power flood plain centrally located on an embedded multi-die interconnect bridge (EMIB) die, wherein the EMIB die is embedded in a semiconductor device package;
a first power-delivery via on the power flood plain, wherein the first power-delivery via is part of a metallization at an interconnect surface of the EMIB die;
a subsequent power-delivery via on the power flood plain and in the metallization;
a power rail within the metallization, wherein the power rail contacts the first and subsequent power-delivery vias;
a first power-distribution via contacting the power rail, wherein the first power-distribution via emerges from the metallization at a first die side;
a subsequent power-distribution via contacting the power rail, wherein the subsequent power-distribution via emerges from the metallization at a subsequent die side;
a first semiconductor device coupled to the first power-distribution via, wherein the first semiconductor device occupies a first footprint on the metallization; and
a subsequent semiconductor device coupled to the subsequent power-distribution via, wherein the subsequent semiconductor device occupies a subsequent footprint on the metallization;
wherein the EMIB die is part of a chipset.
19. The computing system of claim 18, wherein the semiconductor device package is attached to a board, and wherein the board includes a housing that provides both physical protection and dielectric protection to the combination of a first die, the EMIB die, and a subsequent die.
20. The computing system of claim 18, wherein the semiconductor device package is attached to a board, and wherein the first semiconductor device is a logic die, the computing system further comprising:
a display device;
wherein the metallization includes four metallization levels, metal-1 (M1), M2, M3, and M4, wherein the power rail is disposed at one of M1, M2, M3, and M4, and wherein the metallization further includes a ground rail disposed at one of M1, M2, M3, and M4 not occupied by the power rail.
Power delivery for an embedded multi-die interconnect bridge, and method of assembling the same

Priority application
This application claims the priority benefit of U.S. application serial number 15/937411, filed on March 27, 2018, which is incorporated herein by reference in its entirety.

Technical field
The present disclosure relates to power delivery for an embedded multi-die interconnect bridge architecture for semiconductor device packaging.

Background
The miniaturization of semiconductor devices during packaging presents challenges in providing high-speed, small-volume interconnection between dies and power delivery to the dies.

Description of the drawings
In the figures of the accompanying drawings, the disclosed embodiments are illustrated by way of example and not limitation. Similar reference numerals in the accompanying drawings may refer to similar elements. In the accompanying drawings:
FIG. 1 is a top plan view of an embedded multi-die interconnect bridge die according to an embodiment, which exposes an array of microvias for interconnection between two semiconductor devices;
FIG. 1A is a projected and cross-sectional, partially cut-away front view of the metallization of the bridge die depicted in FIG. 1, taken along the section line 1A-1A, according to an embodiment;
FIG. 1B is a projected and cross-sectional, partially cut-away front view of the metallization of the bridge die, taken along the section line 1B-1B in FIG. 1, according to an embodiment;
FIG. 2 is a cross-sectionally cut and projected perspective view of an embedded multi-die interconnect bridge package according to an embodiment;
FIG. 3 is a top plan view of the interconnect bridge die 310 according to an embodiment, in which multiple power domains are stitched into the bridge die to serve a first die and a subsequent die;
FIG. 3A is a projected and cross-sectional, partially cut-away front view of the metallization of the bridge die depicted in FIG. 3, taken along the section line 3A-3A, according to an embodiment;
FIG. 4 is a top plan view of an interconnect bridge die according to an embodiment, in which an enhanced EMIB power-delivery architecture with a single power domain is stitched into the bridge die;
FIG. 4A is a projected and cross-sectional, partially cut-away front view of the metallization of the bridge die depicted in FIG. 4, taken along the section line 4A-4A, according to an embodiment;
FIG. 5 is a top plan view of an embedded multi-die interconnect bridge package according to an embodiment;
FIG. 6 is a process flow diagram according to an embodiment; and
FIG. 7 is included to show examples of higher-level device applications for the disclosed embodiments.

Detailed description
An embedded multi-die interconnect bridge (EMIB) architecture includes at least two semiconductor devices interconnected across an EMIB die. Power delivery to the interconnected semiconductor devices is achieved by delivering power to a central area of the EMIB die, which may be referred to as the "flood plain". The power distributed to each interconnected semiconductor device through the metallization of the EMIB die passes through the flood plain, and it may be supplemented by peripheral power introduced to the EMIB die.

According to several embodiments, the power and ground present in the EMIB metallization are delivered to the flood plain, and current flow is apportioned among several metallization layers to usefully trade off induction-loop concerns against electromagnetic-noise concerns. According to several embodiments, in addition to power delivered in the flood plain at the center of the EMIB die, supplementary power is added at the periphery of the EMIB die.

In an embodiment, the bridge is partially embedded in the package.
In an embodiment, the bridge is not embedded, but has a configuration that bridges between two dies.

FIG. 1 is a top plan view 100 of an embedded multi-die interconnect bridge (EMIB) die 110 according to an embodiment, which exposes an array of microvias for interconnection between two semiconductor devices. The EMIB die 110, or "bridge die" 110, includes an interconnect surface 112 on which several microvia arrays are arranged. In one embodiment, a microvia is characterized as a via with a diameter of less than 1 mm. In one embodiment, the microvias have diameters in a range from 25 microns (micrometers) to 500 microns.

A portion of the first die footprint 114 (hereinafter, the first die 114 or the first die side 114) is projected in a dashed line on the interconnect surface 112 of the bridge die 110, and the first microvia array 116 is configured within the bridge die 110 at the interconnect surface 112 to intersect the first die 114. A portion of the subsequent die footprint 118 (hereinafter, subsequent die 118 or subsequent die side 118) is depicted with a dashed line projected onto the interconnect surface 112 of the bridge die 110, and the subsequent microvia array 120 is configured within the bridge die at the interconnect surface 112 to intersect the subsequent die 118.

The power connections on the interconnect surface 112 include power-delivery microvias 124 and 126 and power-distribution microvias 164 and 174. The power-delivery microvias are illustrated as solid dark shapes, and they are located in the flood plain 122 portion of the bridge die 110. As shown in the figure, the flood plain 122 is located between the first microvia array 116 and the subsequent microvia array 120. The power-delivery microvias 124 and 126 are located between a power source (not shown) and the power rail 128 (see FIG. 1A).
The power-distribution microvias 164 and 174 are located between the power rail 128 and the connections to the respective first die 114 and subsequent die 118. The electrical ground connections on the interconnect surface 112 include ground microvias 166 and 176. The ground microvias are shown hatched. The signal I/O electrical connections on the interconnect surface 112 include signal I/O vias 168 and 178. Throughout this disclosure, the signal I/O microvias are illustrated as unshaded.

The power flood plain 122 is located on the interconnect surface 112 of the bridge die 110 between the respective first die 114 and subsequent die 118. The power flood plain 122 is centrally located, meaning that it is located between the footprints formed by the respective first die 114 and subsequent die 118; however, the power flood plain 122 need not be positioned exactly across the geometric center of the interconnect surface 112 of the bridge die 110. In one embodiment, the power flood plain 122 is located at the approximate bilateral center of the bridge die 110 (the Y direction describes the approximate axis of symmetry), so that power to be delivered to the respective first die 114 and subsequent die 118 is introduced into the bridge die 110 at the flood plain 122, and current is carried peripherally to the respective dies 114 and 118 through the metallization of the bridge die 110 (see FIG. 1A). Power is introduced to the bridge die 110 at the power bump 130 near the periphery of the bridge die 110, and the current is routed to the power-delivery microvias 124 and 126, the power rail 128, and the power-distribution microvias 164 and 174.

In an embodiment, the first die 114 requires more power than the subsequent die 118. In this embodiment, the first die 114 may be referred to as the mother die 114, and the subsequent die 118 may be referred to as the child die 118.
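Where the mother die draws more power than the child die, the number of power-distribution microvias serving each die can scale with demand. The sketch below is an assumed allocation rule for illustration only (the function and the wattage figures are hypothetical, not from the disclosure); with a 2:1 demand ratio over twelve distribution vias it reproduces the eight-versus-four split described for section line 1A-1A.

```python
# Hypothetical sketch: apportion power-distribution microvias between the
# mother die and child die in proportion to their power demand. The rule
# and the example power figures are assumptions for illustration.
def allocate_vias(total_vias: int, mother_power_w: float, child_power_w: float):
    """Split total_vias proportionally to each die's power demand."""
    total_power = mother_power_w + child_power_w
    mother_vias = round(total_vias * mother_power_w / total_power)
    return mother_vias, total_vias - mother_vias

# A mother die drawing twice the child die's power, with 12 vias to assign:
mother, child = allocate_vias(12, mother_power_w=2.0, child_power_w=1.0)
```

With these illustrative numbers the mother die receives eight distribution microvias and the child die four, consistent with the counts in the first and subsequent microvia arrays described in the text.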
In an example embodiment, the first die 114 is a logic die, such as a processor manufactured by Intel Corporation of Santa Clara, California, and the subsequent die 118 is a memory die.

Note the section line 1A-1A. In an embodiment, six power-delivery microvias 124 are located in the power flood plain 122, eight power-distribution microvias 164 are located in the first microvia array 116, and four power-distribution microvias 174 are located in the subsequent microvia array 120. Where more power is required at the mother die 114, and as seen along the section line 1A-1A, the eight power-distribution microvias 164 in the first microvia array 116 serve the mother die 114. Where less power is required at the child die 118, the four power-distribution microvias 174 in the subsequent microvia array 120 serve the child die 118.

FIG. 1A is a projected and cross-sectional, partially cut-away front view of the metallization 101 of the bridge die 110 depicted in FIG. 1, taken along the section line 1A-1A, according to an embodiment. The metallization 101 extends vertically (Z direction) in the illustration and, in one embodiment, includes metal-1 (M1), M2, M3, and M4. The metallization lines, including vertical (Z-direction) microvias and horizontal (X-direction) traces, contain non-shorting couplings between several metallization structures, but the overall structure of the metallization is projected in the drawings. In other words, the metallizations shown in several illustrations are projections and are not necessarily connected.

As illustrated, the power-delivery microvias 124 and 126 depicted in FIG. 1 along the section line 1A-1A can be mapped vertically (Z direction) to selected power bond pads 123 and 125 in the metallization 101. The power-delivery microvia positions 124 and 126 called out by the two reference lines depicted in FIG. 1 correspond to the power bond pads 123 and 125 called out by the corresponding two reference lines, and they are also illustrated as solid dark shapes. The power bond pads 123 and 125 called out by the reference lines are, however, part of a power header 127 not shown in FIG. 1.

According to an embodiment, the power routing through the metallization 101 starts at the power bump 130 (FIG. 1), and the current is routed through the power flood plain 122 (details not shown) to the power header 127, through the illustrated power-delivery microvias 124 and 126, to the power rail 128 at M1 in the metallization 101.

For the first die 114, the current flows from the power header 127 in the flood plain 122 of the bridge die 110 to M1 (power rail 128), and is then distributed peripherally to the first die 114 through the first-die power-distribution microvias 164. For the subsequent die 118, current flows from the power header 127 in the power flood plain 122 of the bridge die 110 to M1 (power rail 128), and is then distributed peripherally to the subsequent die 118 through the subsequent power-distribution microvias 174. As illustrated, the power-distribution microvias 164 and 174 emerge from the metallization 101 to couple to the semiconductor device structures 163 and 173. By examining both FIG. 1 and FIG. 1A, it can be seen that the power-distribution microvias 164 and 174 emerge at the respective die first side 114 and die subsequent side 118 of the bridge die 110.

As illustrated, enhanced EMIB power delivery is achieved using additional bridge pads and EMIB via connections in the inner layers of the metallization 101 of the bridge die 110 at the flood plain 122, as well as more microvia connections to the electrical bump fields of the surface-mounted first die 114 and subsequent die 118. In the illustrated embodiment, 24 first-die power-distribution microvias 164 are located in the first microvia array 116, and 12 subsequent-die power-distribution microvias 174 are located in the subsequent microvia array 120. For the first-die power-distribution microvias 164, the bond pads 163 provide an interface surface to the first die 114, such as for electrical bumps. For the subsequent-die power-distribution microvias 174, the bond pads 173 provide an interface surface to the subsequent die 118, such as for electrical bumps.

As illustrated in this embodiment, the two metallization levels M2 and M4 of the metallization 101 are configured with input/output signal tracks. The signal I/O microvia 168 contacts the M2 signal I/O track in the first die microvia array 116. The signal I/O microvia 178 contacts the M4 signal I/O track in the subsequent die microvia array 120.

FIG. 1B is a projected and cross-sectional, partially cut-away front view of the metallization 102 of the bridge die 110, taken along the section line 1B-1B in FIG. 1, according to an embodiment. It depicts the enhanced VSS return path, which provides a more flexible EMIB power-delivery solution. The metallization 102 extends vertically (Z direction) in the illustration and, in one embodiment, includes M1, M2, M3, and M4 as illustrated in FIG. 1A. As illustrated, the ground microvias 166 and 176 are depicted along the section line 1B-1B in FIG. 1. The ground microvias 166 and 176 can be mapped to the corresponding ground bond pads 165 and 175 in the metallization 102, and the ground bond pads 165 and 175 are also hatched. The ground microvias 166 and 176, also called out by the two reference lines depicted in FIG. 1, correspond to the ground bond pads 165 and 175, likewise called out by two reference lines and illustrated hatched. In the first die microvia array 116, there are a total of eight ground bond pads 166 that intersect the section line 1B-1B.
In the subsequent die microvia array 120, there are a total of four ground bond pads 176 that intersect the section line 1B-1B. The current flow path of the ground connection includes the ground rail at M3, where VSS is collected.

In an embodiment, the power rail 128 and the ground rail 136 are spaced as far apart as possible, such as the power rail 128 at M1 and the ground rail 136 at Mn (in the figure, Mn is M4). In the illustrated embodiment, however, the power rail 128 and the ground rail 136 are separated by only a single metallization trace at M2. In an embodiment where it is desired that the induced current loop between the power rail and the ground rail be minimized, the power rail and the ground rail are positioned adjacent to each other, such as at M2 and M3. In an embodiment where it is desired that current noise between the power rail and the ground rail be minimized, the power rail and the ground rail are positioned spaced apart, such as at M1 and M4.

FIG. 2 is a cross-sectionally cut and projected perspective view of an embedded multi-die interconnect bridge package 200 according to an embodiment. The bridge die 210 contains a metallization 201 corresponding to portions of the metallizations 101 and 102 depicted in FIGS. 1A and 1B. The EMIB package 200 includes a first semiconductor device 214 and a subsequent semiconductor device 218 coupled through the metallization 201 of the bridge die 210. Power is delivered to the bridge die 210 at the flood plain 222 between the surface-mounted die bump fields to allow enhanced power delivery to the semiconductor devices 214 and 218.
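The rail-placement tradeoff described for FIG. 1B can be captured as a simple selection rule: adjacent levels shrink the induced current loop, while maximally separated levels reduce coupled noise. The sketch below is illustrative only (the objective names are invented; the level pairings follow the M2/M3 and M1/M4 examples in the text).

```python
# Illustrative sketch of the power-rail / ground-rail placement tradeoff
# in a four-level metallization. Objective names are hypothetical labels.
LEVELS = ["M1", "M2", "M3", "M4"]

def place_rails(objective: str):
    """Return (power_rail_level, ground_rail_level) for a given goal.

    'min_induction_loop': adjacent levels give the smallest loop area
    between the power rail and the ground return path.
    'min_noise': maximum vertical separation minimizes coupled current
    noise between the rails.
    """
    if objective == "min_induction_loop":
        return ("M2", "M3")
    if objective == "min_noise":
        return ("M1", "M4")
    raise ValueError("objective must be 'min_induction_loop' or 'min_noise'")
```

The two placements are mutually exclusive in a four-level stack, which is why the text presents them as alternative embodiments rather than a single preferred layout.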
The surface-mounted die bump fields are roughly contained in the respective first die microvia array 216 and subsequent die microvia array 220.

In one embodiment, the metallization 201 is depicted in simplified form, in which the power rail 228 is one of several layers of the metallization 201, and the EMIB power delivery between the respective first die microvia array 216 and subsequent die microvia array 220 and the microvias 224 and 226 (each appearing once in the illustration) illustrates additional power and ground pads on the bridge. The power is routed centrally to the bridge die 210 at the power contacts 223 and 225 above the flood plain 222 of the bridge die 210; the current is directed to the power-delivery microvias 224 and 226, to the power rail 228, and is then distributed to the respective first die 214 and subsequent die 218 through the respective power-distribution microvias 264 and 274.

In an embodiment, the bridge die 210 is coupled to the first semiconductor device 214 through the first microvia array 216. The first microvia array 216 includes power-distribution microvias 264, each of which is coupled to the first semiconductor device 214 through a power bump 232 (three appearing in the example). In addition, in the illustrated cross-section, I/O electrical bumps 234 (two appearing in the example) also couple the first semiconductor device 214 to the bridge die 210.

In an embodiment, the bridge die 210 is coupled to the subsequent semiconductor device 218 through the subsequent microvia array 220. The subsequent microvia array 220 includes power-distribution microvias 274, each of which is coupled to the subsequent semiconductor device 218 through a power bump 232 (two appearing in the example).
In addition, in the illustrated cross-section, I/O electrical bumps 234 (three appearing in the example) also couple the subsequent semiconductor device 218 to the bridge die 210.

In one embodiment, the packaging material 240 includes a plurality of build-up layers disposed between the respective power-delivery microvias 224 and 226 and power-distribution microvias 264 and 274 and the electrical bumps 232. A part of the packaging material 240 includes a top dielectric layer 241, which can be formed by planarizing the material after filling a laser-drilled via groove.

Power is routed from the power contacts 223 and 225 to the respective first semiconductor device 214 and subsequent semiconductor device 218 by introducing power centrally into the bridge die 210 and carrying it peripherally to the respective semiconductor devices 214 and 218 through the metallization 201 of the bridge die 210.

In an embodiment, the power is routed to the power flood plain 222 above the bridge die 210 through the power-delivery via 238 in the encapsulation material 240, and the power is routed (not shown) to the power contacts 223 and 225. Electric current flows from the power contacts 223 and 225 through the power-delivery microvias 224 and 226.

To deliver power to the first semiconductor device 214, power enters the bridge die 210 in the flood plain 222 at the power contact 223. The current is led from the power contact 223 through the power-delivery microvia 224 to the power rail 228, through the first-die power-distribution microvia 264, and to the power bump 232 adjacent to the first semiconductor device 214.

To deliver power to the subsequent semiconductor device 218, power enters the bridge die 210 in the flood plain 222 at the power contact 225.
The current is led from the power contact 225 through the power transmission microvia 226 to the power rail 228, through the subsequent die power distribution microvia 274, and to the power electrical bump 232 adjacent to the subsequent semiconductor device 218. In an embodiment, supplementary power is directed to the first semiconductor device 214, such as through the power transmission through-hole 244 in the packaging material 240 of the EMIB package 200 to the supplementary power bump 242 on the periphery of the first die 214. In the case where the first die 214 requires more power than the subsequent die 218, the first die 214 may be referred to as the mother die 214 and the subsequent die 218 may be referred to as the child die 218. In an embodiment, the EMIB package 200 is useful for a handheld computing system, and the computing system includes a board 294 such as a motherboard 294. In an embodiment, the board 294 includes a housing 296 that provides both physical and dielectric protection to the combination of the first die 214, the bridge die 210, and the subsequent die 218. FIG. 3 is a top plan view 300 of the interconnect bridge die 310 according to an embodiment, in which multiple power domains are stitched into the die 310 to serve the first die 314 and the subsequent die 318. The bridge die 310 includes an interconnection surface 312 on which a number of microvia arrays and connection traces are arranged. According to an embodiment, the bridge die 310 exposes microvia arrays and connection traces for a plurality of power domains for interconnection between the two semiconductor devices 314 and 318. The bridge die 310 uses an enhanced EMIB power delivery configuration with multiple power domains stitched into the metallization 301 (see FIG. 3A).
The interconnection surface 312 includes a first power flooding region 316 for the first power source VCC1, second power flooding regions 317 and 317' for the second power source VCC2, and ground or common VSS flooding regions 387 and 387'. The flooding region 316 of the first power source VCC1 includes all of the solid black contacts 330 and the microvias 324 and 326 illustrated in the top plan view 300. The second power source VCC2 flooding regions 317 and 317' include the cross-hatched portions and the subsequent power contact pads 348. A portion of the first die footprint 314 is projected onto the interconnect surface 312 of the bridge die 310 in dashed line, and a first power trace array 363 (one exemplary trace called out) is located on the interconnection surface 312 within the first die footprint 314. A portion of the subsequent die footprint 318 is also projected onto the interconnection surface 312 of the bridge die 310 in dashed line, and a first power trace array 373 (one exemplary trace called out) is located on the interconnection surface 312 within the subsequent die footprint 318. The second power source VCC2 flooding regions 317 and 317' contain all of the cross-hatched portions illustrated in the top plan view 300. The second power source is located on the top surface of the metallization 301 (see FIG. 3A), whereas the first power source, which contains the power transmission microvias 324 and 326, is introduced centrally, penetrates into the metallization 301, and is distributed to the corresponding first semiconductor device 314 and subsequent semiconductor device 318. A second power trace array 365 (one exemplary trace called out) is located on the interconnect surface 312 within the first die footprint 314.
A portion of a second power trace array 375 (one exemplary trace called out) is located on the interconnection surface 312 within the subsequent die footprint 318. The grounded VSS flooding regions 387 and 387' contain circular electrical bumps and microvias 388, which are depicted in shaded cross-section. A ground source trace array 367 (one exemplary trace called out) is located on the interconnect surface 312 within the first die footprint 314. A portion of a ground source trace array 377 (one exemplary trace called out) is located on the interconnect surface 312 within the subsequent die footprint 318. Input/output (I/O) traces are also located on the interconnect surface 312 of the bridge die 310. An I/O trace array 369 (one exemplary trace called out) is located on the interconnection surface 312 within the first die footprint 314. An I/O trace array 379 (one exemplary trace called out) is located on the interconnect surface 312 within the subsequent die footprint 318. As illustrated, the power flooding regions 316, 317, and 317' and the ground source flooding regions 387 and 387' are centrally located on the interconnect surface 312 of the bridge die 310, while the I/O trace arrays 369 and 379 are located only within the corresponding die footprints 314 and 318. In this embodiment, power is delivered centrally to the bridge die 310, and the two semiconductor devices, which may be referred to as a mother die 314 and a child die 318 respectively, can each receive multiple power supplies. FIG. 3A is a projection and cross-sectional, partially cut-away front view of the metallization 301 of the bridge die 310 depicted in FIG. 3, taken along the section line 3A-3A, according to an embodiment.
The metallization 301 extends vertically (Z direction) in the illustration and in one embodiment includes metal-1 (M1), M2, M3, and M4. The projection of the metallization, containing vertical (Z-direction) microvias and horizontal (X-direction) traces, includes non-shorted couplings between several metallization structures; the overall structure of the metallization 301 is projected as drawn. In other words, the metallizations shown in the several illustrations are projections and are not necessarily connected. In the illustrated power delivery scheme, the VCC1 associated with the power delivery microvias 324 and 326 and the VSS associated with the ground microvia 388 are fed down into the metallization 301 from the interconnect surface 312 depicted in FIG. 3, and then fed laterally and vertically to the corresponding first die 314 and subsequent die 318. For the VCC2 from the second power flooding regions 317 and 317', additional die bumps are used on the first die 314 and the subsequent die 318, and the power delivery remains above the M4 level. As illustrated, the power transmission microvias 324 and 326, depicted along the section line 3A-3A in FIG. 3 and in the first power microvia flooding area 316, can be mapped vertically (Z direction) to selected power bond pads 323 and 325 in the metallization 301. In addition, the power traces 363 and 373 called out by reference lines in FIG. 3 correspond to the power distribution microvias 364 and 374 called out by two reference lines, which are also illustrated in solid dark color.
The current flow path of the power traces 363 and 373 starts from the flooding area 316, flows downward (negative Z direction) through the power transmission microvias 324 and 326, crosses the power rail 328 at M1, passes vertically through the corresponding power distribution microvias 364 and 374 to the corresponding power traces 363 and 373, and then reaches the corresponding mother die 314 and daughter die 318. As illustrated, the second power source is delivered from the subsequent power flooding regions 317 and 317' seen in FIG. 3; in FIG. 3A, the section 3A-3A intersects the second power trace array 365 located on the interconnect surface 312 within the first die footprint 314. Part of the second VCC2 power is delivered from the corresponding second power flooding regions 317 and 317' above the Mn metallization level at the power traces 365 and 375, and the second VCC2 power does not penetrate into the metallization through any vertical microvias. Ground source coupling is illustrated by the ground microvias 388, which are depicted as coupling to the ground source traces 367 and 377 from the ground source flooding regions 387 and 387'. As illustrated in this cross-sectional view, the ground source coupling can be mapped vertically (Z direction) to the M3 metallization through the ground source microvia 388. Similarly, as illustrated, according to an embodiment, I/O traces within the metallization 301 are depicted at M2 and M4. FIG. 4 is a top plan view 400 of an embedded multi-die interconnect die 410 according to an embodiment, in which an enhanced EMIB power delivery architecture with a single power domain is stitched into the bridge die 410. The bridge die 410 includes an interconnection surface 412 on which a number of microvia arrays and connection traces are arranged.
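The VCC1 current flow path described for FIG. 3A (flood region, delivery microvia, M1 power rail, distribution microvia, power trace, die) can be sketched as an ordered hop list. The node names below simply reuse the figure's reference numerals as labels; the data structure itself is purely illustrative and is not part of the disclosure.

```python
# Illustrative model of the VCC1 current path through the bridge
# metallization, using the figure reference numerals as node labels.

VCC1_PATHS = {
    "mother_die_314": [
        "flood_region_316",           # central power flood on interconnect surface
        "delivery_microvia_324",      # down (-Z) into the metallization
        "power_rail_328_M1",          # lateral (X) routing at metal-1
        "distribution_microvia_364",  # up (+Z) out of the metallization
        "power_trace_363",            # trace under the first-die footprint
    ],
    "daughter_die_318": [
        "flood_region_316",
        "delivery_microvia_326",
        "power_rail_328_M1",
        "distribution_microvia_374",
        "power_trace_373",
    ],
}

def route(die):
    """Return the ordered hop list from the flood region to the given die."""
    return VCC1_PATHS[die] + [die]

print(" -> ".join(route("mother_die_314")))
```

Both routes share the central flood region and the M1 power rail, which is the structural point of the centrally-fed, peripherally-distributed scheme.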
According to an embodiment, the bridge die 410 uses a single power domain to serve the first die 414 and the subsequent die 418. The bridge die 410 includes the VCC1 flooding area 416, which is tied to the VCC1 grid layer on the bridge through the EMIB power transmission microvias 424 and 426. Starting from the VCC1 flooding area 416 and passing through the power transmission microvias 424 and 426, power is stitched to the first die 414 and the subsequent die 418. The ground contacts 466 are arranged in the ground flooding area 427 and serve as ground microvias 466. In addition, signal I/O traces 428 (see FIG. 4A) are also configured on the interconnection surface 412 and are connected by the metallization 401. FIG. 4A is a projection and cross-sectional, partially cut-away front view of the metallization 401 of the bridge die 410 depicted in FIG. 4, taken along the section line 4A-4A, according to an embodiment. The metallization 401 extends vertically (Z direction) in the illustration and in one embodiment includes metal-1 (M1), M2, M3, and M4. According to an embodiment, the directional arrow 443 indicates the current flow path through the corresponding power transmission microvias 424 and 426 to the power rail 428. The current flows from the power rail 428 to the power distribution microvias 464 and 474 and to the corresponding power traces 463 and 473. In one embodiment, supplementary power 444 is also introduced peripherally above the Mn metallization layer. In one embodiment, the current flow path is indicated by the directional arrow 444, both by introducing power into the bridge die 410 at the center of the flooding area 416 (see FIG.
4), and by introducing the power peripherally at 444 to each power trace 463 and 473. In one embodiment, the illustrated bridge die 410 is analyzed for power delivery against total peripheral power delivery (where the power comes exclusively from the packaging material around the EMIB die); the direct current (DC) voltage droop is reduced by 11% and 17% for the VCC1 power. FIG. 5 is a top plan view of an embedded multi-die interconnect bridge package 500 according to an embodiment. The first semiconductor device 514 is coupled to the bridge die 510 through microvias, such as any of the microvia embodiments depicted in this disclosure. In addition, the subsequent semiconductor device 518 is coupled to the first semiconductor device 514 across the bridge die 510. Further, the third semiconductor device 519 is also coupled to the first semiconductor device 514 across the bridge die 510. In one embodiment, the EMIB package 500 is built into the packaging material 540. The power flooding area is indicated by the power transmission microvias 524 and 526, so that power is introduced centrally on the bridge die 510 and transmitted peripherally through the power distribution microvias 564, 574, and 575 to the corresponding first semiconductor device 514, subsequent semiconductor device 518, and third semiconductor device 519. Grounding is performed through the VSS microvias 566, 576, and 577. Similar to all of the embodiments set forth in the previous figures, the grounding is achieved through a VSS ground rail (not shown) at one of the metallization layers within the metallization.
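The reported 11% and 17% droop reductions can be motivated by a toy DC (IR-drop) calculation: adding a central delivery branch in parallel with the peripheral path lowers the effective supply resistance seen by the die. All current and resistance values below are invented for illustration (chosen so the toy model lands near the 11% figure) and are not taken from the disclosure.

```python
# Toy IR-drop comparison: peripheral-only delivery vs. peripheral plus a
# central (power-flood) branch. All numbers are hypothetical.

def droop(i_load, r_path):
    """DC voltage droop V = I * R for a lumped supply path."""
    return i_load * r_path

I_LOAD = 2.0          # amperes drawn by one die (hypothetical)
R_PERIPHERAL = 0.050  # ohms, package traces around the die shadow (hypothetical)
R_CENTRAL = 0.400     # ohms, central microvias + on-bridge rail (hypothetical)

# The central branch conducts in parallel with the peripheral path.
r_combined = 1.0 / (1.0 / R_PERIPHERAL + 1.0 / R_CENTRAL)

v_before = droop(I_LOAD, R_PERIPHERAL)
v_after = droop(I_LOAD, r_combined)
improvement = (v_before - v_after) / v_before
print(f"droop improvement: {improvement:.0%}")  # ~11% with these values
```

The fractional improvement depends only on the resistance ratio, not on the load current, which is why such comparisons are typically quoted as percentages.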
The signal I/O microvias are shown as unshaded. As shown in the several embodiments, the bridge dies 110, 210, 310, 410, and 510 may each be characterized as power flooding area EMIB dies. FIG. 6 is a process flow diagram 600 according to an embodiment. At 610, the process includes forming a metallization on the bridge die, wherein the power transmission microvias are centrally located on the bridge die, the power rail is configured within the metallization, and the power distribution microvias are peripherally located on the bridge die. At 620, the process includes assembling the bridge die into the power flooding area EMIB package. At 630, the process includes assembling the power flooding area EMIB package into the computing system. FIG. 7 is included to show examples of higher-level device applications for the disclosed embodiments. Examples of the power flooding area EMIB packaging can be found in several parts of the computing system. In an embodiment, the power flooding area EMIB package embodiment may be part of a communication device, such as one attached to a cellular communication tower. In an embodiment, the computing system 700 includes but is not limited to a desktop computer. In one embodiment, the system 700 includes but is not limited to a laptop computer. In one embodiment, the system 700 includes but is not limited to a tablet. In one embodiment, the system 700 includes but is not limited to a notebook computer. In an embodiment, the system 700 includes but is not limited to a personal digital assistant (PDA). In an embodiment, the system 700 includes but is not limited to a server. In an embodiment, the system 700 includes but is not limited to a workstation. In an embodiment, the system 700 includes but is not limited to a cell phone. In an embodiment, the system 700 includes but is not limited to a mobile computing device. In an embodiment, the system 700 includes but is not limited to a smart phone.
In one embodiment, the system 700 includes but is not limited to an Internet appliance. Other types of computing devices may be configured with microelectronic devices that include power flooding area EMIB package embodiments. In an embodiment, the processor 710 has one or more processing cores 712 through 712N, where 712N represents the Nth processor core inside the processor 710 and N is a positive integer. In one embodiment, the electronic device system 700 uses a power flooding area EMIB package embodiment that includes multiple processors, including 710 and 705, where the processor 705 has logic similar or equivalent to that of the processor 710. In one embodiment, the processing core 712 includes, but is not limited to, pre-fetch logic for fetching instructions, decode logic for decoding instructions, execution logic for executing instructions, and so on. In an embodiment, in the system 700, the processor 710 has a cache memory 716 to cache at least one of data and instructions for a multilayer solder resist on a semiconductor device packaging substrate. The cache memory 716 may be organized into a hierarchical structure including one or more levels of cache memory. In one embodiment, the processor 710 includes a memory controller 714 that is operable to perform functions enabling the processor 710 to access and communicate with the memory 730, which includes at least one of a volatile memory 732 and a non-volatile memory 734. In an embodiment, the processor 710 is coupled with the memory 730 and the chipset 720. In one embodiment, the chipset 720 is part of the power flooding area EMIB package embodiment depicted in FIG. 2. The processor 710 may also be coupled to a wireless antenna 778 to communicate with any device configured to perform at least one of transmitting and receiving wireless signals.
In an embodiment, the wireless antenna interface 778 operates according to, but not limited to, the IEEE 802.11 standard and its related family, HomePlug AV (HPAV), ultra-wideband (UWB), Bluetooth, WiMax, or any form of wireless communication protocol. In one embodiment, the volatile memory 732 includes but is not limited to synchronous dynamic random access memory (SDRAM), dynamic random access memory (DRAM), RAMBUS dynamic random access memory (RDRAM), and/or any other type of random access memory device. The non-volatile memory 734 includes but is not limited to flash memory, phase change memory (PCM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), or any other type of non-volatile memory device. The memory 730 stores information and instructions to be executed by the processor 710. In an embodiment, while the processor 710 is executing instructions, the memory 730 may also store temporary variables or other intermediate information. In the illustrated embodiment, the chipset 720 is connected to the processor 710 via point-to-point (PtP or P-P) interfaces 717 and 722. Any of these PtP embodiments can be implemented using a power flooding area EMIB package embodiment as set forth in this disclosure. The chipset 720 enables the processor 710 to be connected to other elements in the power flooding area EMIB package embodiment in the system 700. In an embodiment, the interfaces 717 and 722 operate according to a PtP communication protocol such as Intel® QuickPath Interconnect (QPI). In other embodiments, different interconnections may be used. In an embodiment, the chipset 720 is operable to communicate with the processors 710 and 705N, the display device 740, and the other devices 772, 776, 774, 760, 762, 764, 766, 777, and so on.
The chipset 720 may also be coupled to the wireless antenna 778 to communicate with any device configured to perform at least one of transmitting and receiving wireless signals. The chipset 720 is connected to the display device 740 via the interface 726. The display 740 may be, for example, a liquid crystal display (LCD), a plasma display, a cathode ray tube (CRT) display, or any other form of visual display device. In one embodiment, the processor 710 and the chipset 720 are incorporated into the power flooding area EMIB package embodiment in the system. In addition, the chipset 720 is connected to one or more buses 750 and 755 that interconnect the various elements 774, 760, 762, 764, and 766. The buses 750 and 755 may be interconnected via a bus bridge 772, such as at least one EMIB package embodiment. In an embodiment, the chipset 720 is coupled via the interface 724 to the non-volatile memory 760, the mass storage device(s) 762, the keyboard/mouse 764, the network interface 766, the smart TV 776, the consumer electronics 777, and so on. In an embodiment, the mass storage device 762 includes, but is not limited to, a solid-state drive, a hard disk drive, a universal serial bus (USB) flash memory drive, or any other form of computer data storage medium. In one embodiment, the network interface 766 is implemented by any type of well-known network interface standard, including but not limited to an Ethernet interface, a universal serial bus (USB) interface, a peripheral component interconnect (PCI) Express interface, a wireless interface, and/or any other suitable type of interface. In one embodiment, the wireless interface operates according to, but not limited to, the IEEE 802.11 standard and its related family, HomePlug AV (HPAV), ultra-wideband (UWB), Bluetooth, WiMax, or any form of wireless communication protocol. Although the modules shown in FIG.
7 are depicted as separate blocks within the EMIB package embodiment in the computing system 700, the functions performed by some of these blocks may be integrated within a single semiconductor circuit or implemented using two or more separate integrated circuits. For example, although the cache memory 716 is depicted as a separate block within the processor 710, the cache memory 716 (or selected aspects of 716) may be incorporated into the processor core 712. To illustrate the EMIB packaging embodiments and methods disclosed herein, a non-limiting list of examples is provided here: Example 1 is an embedded multi-die interconnect bridge package, including: a power flooding area centrally located on the embedded multi-die interconnect bridge (EMIB) die, wherein the EMIB die is embedded in a semiconductor device package; a first power transmission microvia on the power flooding area, wherein the first power transmission microvia is part of the metallization at the interconnect surface of the EMIB die; a subsequent power transmission microvia on the power flooding area and within the metallization; a power rail in the metallization, wherein the power rail contacts the first and subsequent power transmission microvias; a first power distribution microvia contacting the power rail, wherein the first power distribution microvia emerges from the metallization at the first side of the die; and a subsequent power distribution microvia contacting the power rail, wherein the subsequent power distribution microvia emerges from the metallization at the subsequent side of the die. In Example 2, the subject matter of Example 1 optionally includes a metallization level not occupied by the power rail. In Example 3, the subject matter of any one or more of Examples 1-2 optionally includes: a first ground microvia in the metallization on the first die side; a subsequent ground microvia on the subsequent die side; and a ground rail contacting the corresponding first and subsequent ground microvias within
the metallization. Example 4 is a semiconductor device package including: a bridge die arranged in a package substrate, wherein the bridge die includes a metallization at an interconnect surface; a first semiconductor device coupled to the bridge die, wherein the first semiconductor device projects a first occupied area onto a first portion of the metallization; and a subsequent semiconductor device coupled to the bridge die, wherein the subsequent semiconductor device projects a subsequent occupied area onto a subsequent portion of the metallization; wherein the metallization includes a power flooding area coupling a first power transmission microvia to a power rail and to a first die power distribution microvia, the first die power distribution microvia being coupled to the first semiconductor device; wherein the power flooding area couples a subsequent power transmission microvia to the power rail and to a subsequent die power distribution microvia, the subsequent die power distribution microvia being coupled to the subsequent semiconductor device; and wherein the power flooding area is arranged between the first power distribution microvia and the subsequent power distribution microvia. In Example 5, the subject matter of Example 4 optionally includes: wherein the first power distribution microvia is one of more than one first power distribution microvias, wherein the subsequent power distribution microvia is one of more than one subsequent power distribution microvias, and wherein the more than one first power distribution microvias are more numerous than the more than one subsequent power distribution microvias. In Example 6, the subject matter of any one or more of Examples 4-5 optionally includes: a first ground microvia arranged in the first occupied area; a subsequent ground microvia arranged in the subsequent occupied area; and a ground rail contacting the corresponding first and subsequent ground microvias within the metallization. In
Example 7, the subject matter of any one or more of Examples 4-6 optionally includes a metallization level not occupied by the power rail. In Example 8, the subject matter of any one or more of Examples 4-7 optionally includes a metallization level not occupied by the power rail, and further includes two metallization levels configured with input/output rails. In Example 9, the subject matter of any one or more of Examples 4-8 optionally includes: wherein the first semiconductor device is a logic die and the subsequent semiconductor device is a memory die. In Example 10, the subject matter of any one or more of Examples 4-9 optionally includes: wherein the power flooding area is arranged between the first occupied area and the subsequent occupied area. In Example 11, the subject matter of any one or more of Examples 4-10 optionally includes a power flooding area, and further includes: a second (VCC2) power flooding area, wherein the VCC2 power flooding area is arranged between the first occupied area and the subsequent occupied area, and wherein the VCC2 power flooding area includes two VCC2 flooding areas separated by the VCC1 power flooding area; a first plurality of power traces extending from one of the two VCC2 power flooding areas into the first occupied area; and a subsequent plurality of power traces extending from the other of the two VCC2 power flooding areas into the subsequent occupied area. In Example 12, the subject matter of any one or more of Examples 4-11 optionally includes a power flooding area, and further includes: a second (VCC2) power flooding area, wherein the VCC2 power flooding area is arranged between the first occupied area and the subsequent occupied area, and wherein the VCC2 power flooding area includes two VCC2 flooding areas separated by the VCC1 power flooding area; a first plurality of power traces extending from one of the two VCC2 power flooding areas into the first occupied area; a subsequent plurality of power traces extending from the other of the
two VCC2 power flooding areas into the subsequent occupied area; and a ground flooding area including two portions separated by the VCC1 power flooding area, wherein one of the two ground flooding area portions contains a first ground microvia contacting a ground rail in the metallization, and the other of the two ground flooding area portions contains a subsequent ground microvia contacting the ground rail. In Example 13, the subject matter of any one or more of Examples 4-12 optionally includes: a third semiconductor device coupled to the first semiconductor device across the bridge die, wherein the subsequent semiconductor device and the third semiconductor device share an interconnection surface opposite the first semiconductor device. In Example 14, the subject matter of Example 13 optionally includes: a first ground microvia arranged in the first occupied area; a subsequent ground microvia arranged in the subsequent occupied area; a third ground microvia; and a ground rail in the metallization contacting the corresponding first, subsequent, and third ground microvias. Example 15 is a method of operating an embedded multi-die interconnect bridge (EMIB) device, including: introducing power to the interconnect surface of a bridge die at a position between the electrical connections of a first semiconductor device and a subsequent semiconductor device; supplying power to the first semiconductor device by directing current through a first power transmission via, a power rail contacting the first power transmission via, a first power distribution via contacting the power rail, and an electrical connection to the first semiconductor device; and supplying power to the subsequent semiconductor device by directing current through a subsequent power transmission via, the power rail, a subsequent power distribution via contacting the power rail, and an electrical connection to the subsequent semiconductor device. In Example 16, the
subject matter of Example 15 optionally includes: wherein current flows peripherally in a first direction in the bridge die along the power rail between the first power transmission microvia and the first power distribution microvia, and wherein current flows peripherally in a subsequent direction in the bridge die along the power rail between the subsequent power transmission microvia and the subsequent power distribution microvia. In Example 17, the subject matter of any one or more of Examples 15-16 optionally includes: wherein the first power distribution via is one of a first plurality of power distribution vias, wherein the subsequent power distribution via is one of a subsequent plurality of power distribution vias, and wherein the first plurality is more numerous than the subsequent plurality. Example 18 is a computing system, including: a power flooding area centrally located on an embedded multi-die interconnect bridge (EMIB) die, wherein the EMIB die is embedded in a semiconductor device package; a first power transmission via on the power flooding area, wherein the first power transmission via is part of the metallization at the interconnect surface of the EMIB die; a subsequent power transmission via on the power flooding area and within the metallization; a power rail within the metallization, wherein the power rail contacts the first and subsequent power transmission vias; a first power distribution via contacting the power rail, wherein the first power distribution via emerges from the metallization at the first side of the die; a subsequent power distribution via contacting the power rail, wherein the subsequent power distribution via emerges from the metallization at the subsequent side of the die; a first semiconductor device coupled to the first power distribution via, wherein the first semiconductor device occupies a first occupied area on the metallization; and a
subsequent semiconductor device coupled to the subsequent power distribution via, wherein the subsequent semiconductor device occupies a subsequent occupied area on the metallization; and wherein the EMIB die is part of a chipset. In Example 19, the subject matter of Example 18 optionally includes: wherein the semiconductor device package is attached to a board, and wherein the board includes a housing providing both physical protection and dielectric protection to the combination of the first die, the EMIB die, and the subsequent die. In Example 20, the subject matter of any one or more of Examples 18-19 optionally includes: wherein the semiconductor device package is attached to a board, and wherein the first semiconductor device is a logic die; the subject matter further includes a display device; and wherein the metallization includes four metallization levels, the four metallization levels including metal-1 (M1), M2, M3, and M4, wherein the power rail is arranged at one of M1, M2, M3, and M4, and the metallization further includes a ground rail arranged at one of M1, M2, M3, and M4 that is not occupied by the power rail. The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as "examples." Such examples may include elements in addition to those shown or described. However, the inventors also contemplate examples in which only those elements shown or described are provided. In addition, the inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof) or with respect to other examples (or one or more aspects thereof) shown or described herein. If the usage in this document is inconsistent with that in any document so incorporated by reference, the usage in this document controls. In this document, as is common in patent documents, the term "a" or "an" is used to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, unless otherwise indicated, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B." In this document, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "comprising" and "including" are open-ended; that is, a formula, ingredient, article, device, or system that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," "third," and so forth are used merely as labels and are not intended to impose numerical requirements on their objects. The method examples described herein can be machine- or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform a method as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer-readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code can be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, such as during execution or at other times.
Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable disks, removable optical disks (for example, compact disks and digital video disks), magnetic tapes, memory cards or memory sticks, random access memory (RAM), Read only memory (ROM), etc.The above description is intended to be illustrative and not restrictive. For example, the above examples (or one or more aspects thereof) can be used in combination with each other. Other embodiments can be used, such as when one skilled in the art has reviewed the above description. The abstract is provided to comply with 37 C.F.R.§1.72(b) to allow readers to quickly ascertain the nature of the technical disclosure. Submit it on the condition that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above detailed description, various features may be combined to simplify the present disclosure. This should not be interpreted as meaning that unclaimed disclosed features are essential to any claim. On the contrary, the subject matter of the invention may lie in less than all the features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, where each claim represents itself as a separate embodiment, and it is envisaged that such embodiments can be combined with each other in various combinations or permutations . The scope of the disclosed embodiments should be determined with reference to the appended claims along with the full scope of equivalents given to such claims. |
Disclosed are systems, apparatus, devices, methods, computer program products, and other implementations, including a method that includes determining location of a device, and controlling monitoring of behavior of one or more processes executing on the device based on the determined location of the device to identify potential one or more security-risky processes from the monitored one or more executing processes. In some embodiments, controlling the monitoring of the behavior of the one or more processes may include one or more of, for example, adjusting frequency of the monitoring of the one or more processes based on the determined location of the device, adjusting level of detail obtained for the monitored behavior of the one or more processes based on the determined location of the device, and/or adjusting features being observed for the monitored one or more processes based on the determined location of the device. |
WHAT IS CLAIMED IS:

1. A method comprising:
determining location of a device; and
controlling monitoring of behavior of one or more processes executing on the device based on the determined location of the device to identify potential one or more security-risky processes from the monitored one or more executing processes.

2. The method of claim 1, wherein the one or more security-risky processes comprise one or more of: a malicious process of a third party, or a process initiated by a user of the device that causes a potential security risk.

3. The method of claim 1, wherein determining the location of the device comprises: determining one or more of: a global geographical position coordinates corresponding to the location of the device, a location context identifier for the device, or another identifier associated with the location of the device.

4. The method of claim 1, wherein determining the location of the device comprises: determining whether the location of the device includes one or more of: a secure public location, a non-secure public location, a secure private location, or a non-secure private location.

5. The method of claim 1, wherein controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device comprises one or more of: adjusting frequency of the monitoring of the one or more processes executing on the device based on the determined location of the device; or adjusting level of detail obtained for the monitored behavior of the one or more processes executing on the device based on the determined location of the device.

6. The method of claim 5, wherein adjusting the frequency of the monitoring of the one or more processes executing on the device comprises: increasing the frequency of observation of at least one of the one or more processes executing on the device in response to a determination that the device is located in a secure location.

7.
The method of claim 5, wherein adjusting the level of detail obtained for the monitored behavior of the one or more processes executing on the device comprises: increasing the level of detail obtained for at least one of the one or more processes executing on the device in response to a determination that the device is located in a secure location.

8. The method of claim 1, wherein controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device comprises: adjusting features being observed for the monitored one or more processes executing on the device based on the determined location of the device.

9. The method of claim 1, further comprising: identifying the monitored behavior of at least one of the one or more processes as potentially malicious behavior based on the determined location of the device.

10. The method of claim 9, wherein identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior comprises: identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior in response to a determination that number of uses of a particular feature exceeds a pre-determined threshold when the device is determined to be in a secure location.

11. The method of claim 10, wherein the particular feature comprises at least one of: image capturing using a camera of the device, or data transfer over a communication link between the device and a remote device.

12. The method of claim 9, wherein identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior comprises: identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior using a machine-learning procedure.

13.
A mobile device comprising:
one or more processors; and
storage media comprising computer instructions that, when executed on the one or more processors, cause operations comprising:
determining location of the device; and
controlling monitoring of behavior of one or more processes executing on the device based on the determined location of the device to identify potential one or more security-risky processes from the monitored one or more executing processes.

14. The device of claim 13, wherein the one or more security-risky processes comprise one or more of: a malicious process of a third party, or a process initiated by a user of the device that causes a potential security risk.

15. The device of claim 13, wherein determining the location of the device comprises: determining one or more of: a global geographical position coordinates corresponding to the location of the device, a location context identifier for the device, or another identifier associated with the location of the device.

16. The device of claim 13, wherein determining the location of the device comprises: determining whether the location of the device includes one or more of: a secure public location, a non-secure public location, a secure private location, or a non-secure private location.

17. The device of claim 13, wherein controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device comprises one or more of: adjusting frequency of the monitoring of the one or more processes executing on the device based on the determined location of the device; or adjusting level of detail obtained for the monitored behavior of the one or more processes executing on the device based on the determined location of the device.

18.
The device of claim 17, wherein adjusting the frequency of the monitoring of the one or more processes executing on the device comprises: increasing the frequency of observation of at least one of the one or more processes executing on the device in response to a determination that the device is located in a secure location.

19. The device of claim 17, wherein adjusting the level of detail obtained for the monitored behavior of the one or more processes executing on the device comprises: increasing the level of detail obtained for at least one of the one or more processes executing on the device in response to a determination that the device is located in a secure location.

20. The device of claim 13, wherein controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device comprises: adjusting features being observed for the monitored one or more processes executing on the device based on the determined location of the device.

21. The device of claim 13, wherein the instructions cause further operations comprising: identifying the monitored behavior of at least one of the one or more processes as potentially malicious behavior based on the determined location of the device.

22. The device of claim 21, wherein identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior comprises: identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior in response to a determination that number of uses of a particular feature exceeds a pre-determined threshold when the device is determined to be in a secure location.

23. The device of claim 22, wherein the particular feature comprises at least one of: image capturing using a camera of the device, or data transfer over a communication link between the device and a remote device.

24.
The device of claim 21, wherein identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior comprises: identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior using a machine-learning procedure.

25. An apparatus comprising:
means for determining location of a device; and
means for controlling monitoring of behavior of one or more processes executing on the device based on the determined location of the device to identify potential one or more security-risky processes from the monitored one or more executing processes.

26. The apparatus of claim 25, wherein the one or more security-risky processes comprise one or more of: a malicious process of a third party, or a process initiated by a user of the device that causes a potential security risk.

27. The apparatus of claim 25, wherein the means for determining the location of the device comprise: means for determining one or more of: a global geographical position coordinates corresponding to the location of the device, a location context identifier for the device, or another identifier associated with the location of the device.

28. The apparatus of claim 25, wherein the means for determining the location of the device comprise: means for determining whether the location of the device includes one or more of: a secure public location, a non-secure public location, a secure private location, or a non-secure private location.

29.
The apparatus of claim 25, wherein the means for controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device comprise one or more of: means for adjusting frequency of the monitoring of the one or more processes executing on the device based on the determined location of the device; or means for adjusting level of detail obtained for the monitored behavior of the one or more processes executing on the device based on the determined location of the device.

30. The apparatus of claim 29, wherein the means for adjusting the frequency of the monitoring of the one or more processes executing on the device comprise: means for increasing the frequency of observation of at least one of the one or more processes executing on the device in response to a determination that the device is located in a secure location.

31. The apparatus of claim 29, wherein the means for adjusting the level of detail obtained for the monitored behavior of the one or more processes executing on the device comprise: means for increasing the level of detail obtained for at least one of the one or more processes executing on the device in response to a determination that the device is located in a secure location.

32. The apparatus of claim 25, wherein the means for controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device comprise: means for adjusting features being observed for the monitored one or more processes executing on the device based on the determined location of the device.

33. The apparatus of claim 25, further comprising: means for identifying the monitored behavior of at least one of the one or more processes as potentially malicious behavior based on the determined location of the device.

34.
The apparatus of claim 33, wherein the means for identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior comprise: means for identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior in response to a determination that number of uses of a particular feature exceeds a pre-determined threshold when the device is determined to be in a secure location.

35. The apparatus of claim 34, wherein the particular feature comprises at least one of: image capturing using a camera of the device, or data transfer over a communication link between the device and a remote device.

36. The apparatus of claim 33, wherein the means for identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior comprise: means for identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior using a machine-learning procedure.

37. A processor readable media programmed with a set of instructions executable on a processor that, when executed, cause operations comprising:
determining location of a device; and
controlling monitoring of behavior of one or more processes executing on the device based on the determined location of the device to identify potential one or more security-risky processes from the monitored one or more executing processes.

38. The processor readable media of claim 37, wherein the one or more security-risky processes comprise one or more of: a malicious process of a third party, or a process initiated by a user of the device that causes a potential security risk.

39.
The processor readable media of claim 37, wherein determining the location of the device comprises: determining one or more of: a global geographical position coordinates corresponding to the location of the device, a location context identifier for the device, or another identifier associated with the location of the device.

40. The processor readable media of claim 37, wherein determining the location of the device comprises: determining whether the location of the device includes one or more of: a secure public location, a non-secure public location, a secure private location, or a non-secure private location.

41. The processor readable media of claim 37, wherein controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device comprises one or more of: adjusting frequency of the monitoring of the one or more processes executing on the device based on the determined location of the device; or adjusting level of detail obtained for the monitored behavior of the one or more processes executing on the device based on the determined location of the device.

42. The processor readable media of claim 41, wherein adjusting the frequency of the monitoring of the one or more processes executing on the device comprises: increasing the frequency of observation of at least one of the one or more processes executing on the device in response to a determination that the device is located in a secure location.

43. The processor readable media of claim 41, wherein adjusting the level of detail obtained for the monitored behavior of the one or more processes executing on the device comprises: increasing the level of detail obtained for at least one of the one or more processes executing on the device in response to a determination that the device is located in a secure location.

44.
The processor readable media of claim 37, wherein controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device comprises: adjusting features being observed for the monitored one or more processes executing on the device based on the determined location of the device.

45. The processor readable media of claim 37, wherein the instructions cause further operations comprising: identifying the monitored behavior of at least one of the one or more processes as potentially malicious behavior based on the determined location of the device.

46. The processor readable media of claim 45, wherein identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior comprises: identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior in response to a determination that number of uses of a particular feature exceeds a pre-determined threshold when the device is determined to be in a secure location.

47. The processor readable media of claim 46, wherein the particular feature comprises at least one of: image capturing using a camera of the device, or data transfer over a communication link between the device and a remote device.

48. The processor readable media of claim 45, wherein identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior comprises: identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior using a machine-learning procedure.
LOCATION BASED PROCESS-MONITORING

BACKGROUND

[0001] Mobile devices may hold a lot of personal information, and may be susceptible to attacks by malicious applications. Some security mechanisms are configured to detect malicious processes on a mobile device through performance of real-time behavioral analysis of mobile processes that enables detection of anomalous behavior. There are also various situations where behavior of a mobile device, even if not caused by malicious processes, may constitute a security risk (for example, frequent camera use in a security-sensitive area, such as a government building, a hospital, a bank, etc.).

[0002] To detect anomalous behavior of processes executing on a mobile device, the various processes executing on the device need to be observed/monitored, and the monitored behavior then needs to be analyzed. If these processes were to be continually monitored or observed, the performance costs (e.g., computational cost, power use, etc.) associated with such continual monitoring might be high.

SUMMARY

[0003] Thus, in some variations, a method is disclosed.
The method includes determining location of a device, and controlling monitoring of behavior of one or more processes executing on the device based on the determined location of the device to identify potential one or more security-risky processes from the monitored one or more executing processes.

[0004] Embodiments of the method may include at least some of the features described in the present disclosure, including one or more of the following features.

[0005] The one or more security-risky processes may include one or more of, for example, a malicious process of a third party, and/or a process initiated by a user of the device that causes a potential security risk.

[0006] Determining the location of the device may include determining one or more of, for example, a global geographical position coordinates corresponding to the location of the device, a location context identifier for the device, and/or another identifier associated with the location of the device.

[0007] Determining the location of the device may include determining whether the location of the device includes one or more of, for example, a secure public location, a non-secure public location, a secure private location, and/or a non-secure private location.

[0008] Controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device may include one or more of, for example, adjusting frequency of the monitoring of the one or more processes executing on the device based on the determined location of the device, and/or adjusting level of detail obtained for the monitored behavior of the one or more processes executing on the device based on the determined location of the device.

[0009] Adjusting the frequency of the monitoring of the one or more processes executing on the device may include increasing the frequency of observation of at least one of the one or more processes executing on the device in response to a determination that
the device is located in a secure location.

[0010] Adjusting the level of detail obtained for the monitored behavior of the one or more processes executing on the device may include increasing the level of detail obtained for at least one of the one or more processes executing on the device in response to a determination that the device is located in a secure location.

[0011] Controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device may include adjusting features being observed for the monitored one or more processes executing on the device based on the determined location of the device.

[0012] The method may further include identifying the monitored behavior of at least one of the one or more processes as potentially malicious behavior based on the determined location of the device.

[0013] Identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior may include identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior in response to a determination that number of uses of a particular feature exceeds a pre-determined threshold when the device is determined to be in a secure location.

[0014] The particular feature may include at least one of, for example, image capturing using a camera of the device, and/or data transfer over a communication link between the device and a remote device.

[0015] Identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior may include identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior using a machine-learning procedure.

[0016] In some variations, a mobile device is disclosed. The mobile device includes one or more processors, and storage media comprising computer instructions.
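The threshold-based identification described above (a feature-use count compared against a pre-determined threshold while the device is in a secure location) can be sketched roughly as follows. The function name, the per-location thresholds, and the use of camera-capture counts are illustrative assumptions, not the disclosed implementation.

```python
# Hedged sketch: flag processes whose use of a particular feature (here,
# camera captures) exceeds a per-location-type threshold. All names and
# threshold values are hypothetical, chosen only to illustrate the idea.

from collections import Counter

# Illustrative thresholds: far fewer camera uses are tolerated in a
# secure location than in a non-secure one.
CAMERA_USE_THRESHOLDS = {"secure": 3, "non_secure": 100}

def flag_risky_processes(feature_counts: Counter, location_type: str) -> list:
    """Return names of processes whose feature-use count exceeds the
    threshold associated with the current location type."""
    threshold = CAMERA_USE_THRESHOLDS[location_type]
    return [proc for proc, uses in feature_counts.items() if uses > threshold]

counts = Counter({"gallery_app": 2, "background_svc": 12})
print(flag_risky_processes(counts, "secure"))      # only the heavy user is flagged
print(flag_risky_processes(counts, "non_secure"))  # nothing exceeds the lax threshold
```

In this sketch the same observed behavior is benign in one location and flagged in another, which is the location-dependence the paragraphs above describe.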
The computer instructions, when executed on the one or more processors, cause operations including determining location of the device, and controlling monitoring of behavior of one or more processes executing on the device based on the determined location of the device to identify potential one or more security-risky processes from the monitored one or more executing processes.

[0017] Embodiments of the device may include at least some of the features described in the present disclosure, including at least some of the features described above in relation to the method.

[0018] In some variations, an apparatus is disclosed. The apparatus includes means for determining location of a device, and means for controlling monitoring of behavior of one or more processes executing on the device based on the determined location of the device to identify potential one or more security-risky processes from the monitored one or more executing processes.

[0019] Embodiments of the apparatus may include at least some of the features described in the present disclosure, including at least some of the features described above in relation to the method and the device, as well as one or more of the following features.

[0020] The means for determining the location of the device may include means for determining one or more of, for example, a global geographical position coordinates corresponding to the location of the device, a location context identifier for the device, and/or another identifier associated with the location of the device.
[0021] The means for determining the location of the device may include means for determining whether the location of the device includes one or more of, for example, a secure public location, a non-secure public location, a secure private location, and/or a non-secure private location.

[0022] The means for controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device may include one or more of, for example, means for adjusting frequency of the monitoring of the one or more processes executing on the device based on the determined location of the device, and/or means for adjusting level of detail obtained for the monitored behavior of the one or more processes executing on the device based on the determined location of the device.

[0023] The means for adjusting the frequency of the monitoring of the one or more processes executing on the device may include means for increasing the frequency of observation of at least one of the one or more processes executing on the device in response to a determination that the device is located in a secure location.

[0024] The means for adjusting the level of detail obtained for the monitored behavior of the one or more processes executing on the device may include means for increasing the level of detail obtained for at least one of the one or more processes executing on the device in response to a determination that the device is located in a secure location.

[0025] The means for controlling the monitoring of the behavior of the one or more processes executing on the device based on the determined location of the device may include means for adjusting features being observed for the monitored one or more processes executing on the device based on the determined location of the device.

[0026] The apparatus may further include means for identifying the monitored behavior of at least one of the one or more processes as potentially malicious behavior based
on the determined location of the device.

[0027] The means for identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior may include means for identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior in response to a determination that number of uses of a particular feature exceeds a pre-determined threshold when the device is determined to be in a secure location.

[0028] The means for identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior may include means for identifying the monitored behavior of the at least one of the one or more processes as potentially malicious behavior using a machine-learning procedure.

[0029] In some variations, a processor readable media programmed with a set of instructions executable on a processor is disclosed. The set of instructions, when executed, cause operations including determining location of a device, and controlling monitoring of behavior of one or more processes executing on the device based on the determined location of the device to identify potential one or more security-risky processes from the monitored one or more executing processes.

[0030] Embodiments of the processor-readable media may include at least some of the features described in the present disclosure, including at least some of the features described above in relation to the method, device, and apparatus.

[0031] Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly or conventionally understood. As used herein, the articles "a" and "an" refer to one or to more than one (i.e., to at least one) of the grammatical object of the article. By way of example, "an element" means one element or more than one element.
"About" and/or "approximately" as used herein when referring to a measurable value such as an amount, a temporal duration, and the like, encompasses variations of ±20% or ±10%, ±5%, or ±0.1% from the specified value, as such variations are appropriate in the context of the systems, devices, circuits, methods, and other implementations described herein. "Substantially" as used herein when referring to a measurable value such as an amount, a temporal duration, a physical attribute (such as frequency), and the like, also encompasses variations of ±20% or ±10%, ±5%, or ±0.1% from the specified value, as such variations are appropriate in the context of the systems, devices, circuits, methods, and other implementations described herein.

[0032] As used herein, including in the claims, "or" or "and" as used in a list of items prefaced by "at least one of" or "one or more of" indicates that any combination of the listed items may be used. For example, a list of "at least one of A, B, or C" includes any of the combinations A or B or C or AB or AC or BC and/or ABC (i.e., A and B and C). Furthermore, to the extent more than one occurrence or use of the items A, B, or C is possible, multiple uses of A, B, and/or C may form part of the contemplated combinations. For example, a list of "at least one of A, B, or C" may also include AA, AAB, AAA, BB, etc.

[0033] As used herein, including in the claims, unless otherwise stated, a statement that a function, operation, or feature is "based on" an item and/or condition means that the function, operation, or feature is based on the stated item and/or condition and may be based on one or more items and/or conditions in addition to the stated item and/or condition.

[0034] Other and further objects, features, aspects, and advantages of the present disclosure will become better understood with the following detailed description of the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0035] FIG.
1 is a schematic diagram of an example operating environment in which a mobile device may operate.

[0036] FIG. 2 is a schematic diagram of an example mobile device.

[0037] FIG. 3 is a flowchart of an example monitoring control procedure.

[0038] FIG. 4 is a schematic diagram of an example computing system.

[0039] Like reference symbols in the various drawings indicate like elements.

DESCRIPTION

[0040] To improve process monitoring operations, observation/monitoring of the device's activities/processes may be controlled (e.g., modified/adjusted, or maintained) based on the location of the device. For example, if the device is determined to be in a high-security area (e.g., a government building, a hospital), the frequency of observation may be increased from the frequency of observation used in non-secure areas (e.g., when the device is at a user's home). Additionally, in some embodiments, processing to determine whether observed behavior of a device's processes/applications/activities constitutes malicious behavior may also be controlled/modified/adjusted based on the location of the device.

[0041] Thus, disclosed herein are methods, devices, systems, apparatus, products, and other implementations, including a method that includes determining location of a device, and controlling monitoring of behavior of one or more processes (also referred to as activities or applications) executing on the device based on the determined location (e.g., secure or non-secure locations) of the device to identify potential one or more security-risky processes from the monitored one or more executing processes. Security-risky processes may include, for example, a malicious process of a third party, and/or a process initiated by a user of the device that causes a potential security risk.
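The location-driven control just described, where both the observation frequency and the level of detail recorded are raised in secure areas, can be sketched as a small policy lookup. The policy class, the interval values, and the detail labels are all assumptions for illustration rather than the patented mechanism.

```python
# Minimal sketch of location-driven monitoring control: pick an observation
# interval and detail level from the device's current location category.
# Names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class MonitoringPolicy:
    interval_s: float   # seconds between behavior observations
    detail: str         # how much detail is recorded per observation

# Illustrative policies: observe more often, and in more detail, in secure areas.
POLICIES = {
    "secure": MonitoringPolicy(interval_s=1.0, detail="full"),
    "non_secure": MonitoringPolicy(interval_s=30.0, detail="summary"),
}

def policy_for(location_type: str) -> MonitoringPolicy:
    """Return the monitoring policy for a location category, defaulting to
    the lighter non-secure policy when the category is unrecognized."""
    return POLICIES.get(location_type, POLICIES["non_secure"])

print(policy_for("secure"))
print(policy_for("unknown"))  # falls back to the non-secure policy
```

A monitoring loop would then consult `policy_for` each time the location determination changes, rather than running at a single fixed rate, which is how the continual-monitoring cost noted in the BACKGROUND can be reduced.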
For example, as noted, if a device is located in a security-sensitive area, an otherwise legitimate process, such as capturing images by an on-board camera, may constitute a security-risky process at that particular location. In some embodiments, controlling the monitoring operations may include, for example, adjusting the frequency of the monitoring operations based on the determined location of the device (e.g., to more frequently monitor external communication processes when the device is determined to be in a secure area), adjusting the level of detail obtained for the monitored behavior of the one or more processes executing on the device based on the determined location of the device, etc.

[0042] In some embodiments, the determined location of the mobile device (based on which process monitoring may be controlled) may be provided as global geographical position coordinates corresponding to the location of the device, and/or as a location context identifier for the device (e.g., indicating a floor of a building where the device is located, or some other type of an identifiable geographic region where the device may be located). The current location of the device may also be identified according to whether the device is located in a secure or non-secure area, and/or whether the device is in a public or private location.

[0043] With reference to FIG. 1, shown is a schematic diagram of an example operating environment 100 in which a mobile device 108, e.g., a mobile device configured to perform process monitoring controlled based on the device's location, operates.
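By way of illustration only, the location-dependent adjustment of monitoring frequency and level of detail described above could be realized as a simple policy lookup. This is a minimal sketch; the policy values, location-class names, and function names below are hypothetical and not part of this disclosure:

```python
# Hypothetical monitoring policy keyed by location class. Interval values and
# detail levels are illustrative assumptions, not values from the disclosure.
MONITOR_POLICY = {
    "secure": {"interval_s": 5, "detail": "full"},       # e.g., government building
    "private": {"interval_s": 60, "detail": "summary"},  # e.g., user's home
    "public": {"interval_s": 30, "detail": "summary"},
}

def monitoring_config(location_class: str) -> dict:
    """Return the monitoring interval and level of detail for a location class."""
    # Fall back to the most conservative (frequent, detailed) policy when the
    # location class cannot be determined.
    return MONITOR_POLICY.get(location_class, MONITOR_POLICY["secure"])
```

A monitoring loop would then simply re-read this configuration whenever the positioning module reports a new location class.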
The mobile device (also referred to as a wireless device or as a mobile station) 108 may be configured, in some embodiments, to operate and interact with multiple types of other communication systems/devices, including local area network devices (or nodes), such as WLANs for indoor communication, femtocells, Bluetooth-based transceivers, and other types of indoor communication network nodes, wide area wireless network nodes, satellite communication systems, etc., and as such the mobile device 108 may include one or more interfaces to communicate with the various types of communications systems. As used herein, communication systems/devices with which the mobile device 108 may communicate are also referred to as access points (APs).

[0044] As noted, and as will be discussed in greater detail below, the mobile device is configured to also determine the location of a device and to control monitoring of behavior of one or more activities / processes executing on the device based on the determined location of the device to identify one or more potential security-risky activities / processes from the monitored one or more executing activities / processes. In other words, the device is configured to control (e.g., modify, maintain, etc.) its monitoring functionality based on the determined location of the device such that the monitoring of various processes, activities, and operations executing on the device varies depending on where the device is located.

[0045] As also noted, the environment 100 may contain one or more different types of wireless communication systems or nodes. Such nodes, also referred to as wireless access points (or WAPs), may include LAN and/or WAN wireless transceivers, including, for example, WiFi base stations, femtocell transceivers, Bluetooth transceivers, cellular base stations, WiMax transceivers, etc. Thus, for example, and with continued reference to FIG.
1, the environment 100 may include Local Area Network Wireless Access Points (LAN-WAPs) 106a-e that may be used for wireless voice and/or data communication with the mobile device 108. The LAN-WAPs 106a-e may also be utilized as independent sources of position data, e.g., through implementation of multilateration-based procedures based, for example, on time-of-arrival techniques. The LAN-WAPs 106a-e can be part of a Wireless Local Area Network (WLAN), which may operate in buildings and perform communications over smaller geographic regions than a WWAN. Additionally, in some embodiments, the LAN-WAPs 106a-e could also be pico or femto cells. In some embodiments, the LAN-WAPs 106a-e may be part of, for example, WiFi networks (802.11x), cellular piconets and/or femtocells, Bluetooth networks, etc. The LAN-WAPs 106a-e can also include a Qualcomm indoor positioning system (QUIPS™). A QUIPS implementation may, in some embodiments, be configured so that a mobile device can communicate with a server that provides the device with data (such as assistance data, e.g., maps, RF heat-maps, connectivity information, etc.) for a particular floor or some other region where the mobile device is located. Although five (5) LAN-WAP access points are depicted in FIG. 1, any number of such LAN-WAPs may be used, and, in some embodiments, the environment 100 may include no LAN-WAP access points at all, or may include a single LAN-WAP access point. Furthermore, each of the LAN-WAPs 106a-e depicted in FIG. 1 may be a moveable node, or may be otherwise capable of being relocated.

[0046] As further shown in FIG. 1, the environment 100 may also include one or more types of Wide Area Network Wireless Access Points (WAN-WAPs) 104a-c, which may be used for wireless voice and/or data communication, and may also serve as another source of independent information through which the mobile device 108 may determine its position/location.
The WAN-WAPs 104a-c may be part of a wide area wireless network (WWAN), which may include cellular base stations and/or other wide area wireless systems, such as, for example, WiMAX (e.g., 802.16). A WWAN may include other known network components which are not shown in FIG. 1. Typically, each of the WAN-WAPs 104a-c within the WWAN may operate from a fixed position, and provide network coverage over large metropolitan and/or regional areas. Although three (3) WAN-WAPs are depicted in FIG. 1, any number of such WAN-WAPs may be used. In some embodiments, the environment 100 may include no WAN-WAPs at all, or may include a single WAN-WAP. Additionally, each of the WAN-WAPs 104a-c depicted in FIG. 1 may be a moveable node, or may otherwise be capable of being relocated.

[0047] Communication to and from the mobile device 108 (to exchange data, enable position determination of the device 108, etc.) may thus also be implemented, in some embodiments, using various wireless communication networks such as a wide area wireless network (WWAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), and so on. The terms "network" and "system" may be used interchangeably. A WWAN may be a Code Division Multiple Access (CDMA) network, a Time Division Multiple Access (TDMA) network, a Frequency Division Multiple Access (FDMA) network, an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) network, a WiMax (IEEE 802.16) network, and so on. A CDMA network may implement one or more radio access technologies (RATs) such as cdma2000, Wideband-CDMA (W-CDMA), and so on. Cdma2000 includes IS-95, IS-2000, and/or IS-856 standards. A TDMA network may implement Global System for Mobile Communications (GSM), Digital Advanced Mobile Phone System (D-AMPS), or some other RAT. GSM and W-CDMA are described in documents from a consortium named "3rd Generation Partnership Project" (3GPP).
Cdma2000 is described in documents from a consortium named "3rd Generation Partnership Project 2" (3GPP2). 3GPP and 3GPP2 documents are publicly available. A WLAN may also be implemented, at least in part, using an IEEE 802.11x network, and a WPAN may be a Bluetooth network, an IEEE 802.15x network, or some other type of network. The techniques described herein may also be used for any combination of WWAN, WLAN and/or WPAN.

[0048] When deriving position using the access points 104a-c and/or 106a-e, the mobile device 108 may utilize, for example, time-of-arrival techniques, optionally with the assistance of a positioning server 110 and a network 112. The positioning server (also referred to as a location manager) 110 may communicate with the mobile device 108 through the network 112.

[0049] In some embodiments, and as further depicted in FIG. 1, the mobile device 108 may also be configured to at least receive information from satellites of a Satellite Positioning System (SPS) 102a-b, which may be used as an independent source of position information for the mobile device 108. The mobile device 108 may thus include one or more dedicated SPS receivers specifically designed to receive signals for deriving geo-location information from the SPS satellites. Thus, in some embodiments, the mobile device 108 may communicate with any one or a combination of the SPS satellites 102a-b, the WAN-WAPs 104a-c, and/or the LAN-WAPs 106a-e. In some embodiments, each of the aforementioned systems can provide an independent estimate of the position for the mobile device 108 using different techniques.
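The time-of-arrival / multilateration techniques mentioned above can be illustrated with a minimal two-dimensional trilateration sketch. The function below is an illustrative assumption about how a position fix might be computed from three known anchor (access point) positions and measured ranges; it is not a specification of any particular implementation:

```python
def trilaterate(anchors, dists):
    """Estimate a 2-D position from three known anchor positions and measured
    distances (e.g., ranges derived from time-of-arrival or RTT measurements).

    Linearizes the three circle equations by subtracting the first from the
    other two, then solves the resulting 2x2 linear system.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21  # non-zero when the anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With noisy measurements, a least-squares solution over more than three anchors (or a filter combining several independent estimates, as discussed below) would typically be preferred.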
In some embodiments, the mobile device may combine the solutions derived from each of the different types of access points to improve the accuracy of the position data.

[0050] In embodiments in which the mobile device 108 can receive satellite signals, the mobile device may utilize a receiver (e.g., a GNSS receiver) implemented for use with the SPS to extract position data from a plurality of signals transmitted by the SPS satellites 102a-b. Transmitted satellite signals may include, for example, signals marked with a repeating pseudo-random noise (PN) code of a set number of chips, and such signals may originate from ground-based control stations, user equipment and/or space vehicles. Satellite positioning systems may include such systems as the Global Positioning System (GPS), Galileo, Glonass, Compass, the Quasi-Zenith Satellite System (QZSS) over Japan, the Indian Regional Navigational Satellite System (IRNSS) over India, Beidou over China, etc., and/or various augmentation systems (e.g., a Satellite Based Augmentation System (SBAS)) that may be associated with or otherwise enabled for use with one or more global and/or regional navigation satellite systems. By way of example but not limitation, an SBAS may include an augmentation system(s) that provides integrity information, differential corrections, etc., such as, e.g., Wide Area Augmentation System (WAAS), European Geostationary Navigation Overlay Service (EGNOS), Multi-functional Satellite Augmentation System (MSAS), GPS Aided Geo Augmented Navigation or GPS and Geo Augmented Navigation system (GAGAN), and/or the like.

[0051] In some embodiments, the techniques/procedures presented herein are not restricted to global systems (e.g., GNSS) for SPS.
For example, the techniques provided herein may be applied to or otherwise enabled for use in various regional systems, such as, e.g., Quasi-Zenith Satellite System (QZSS) over Japan, Indian Regional Navigational Satellite System (IRNSS) over India, Beidou over China, etc., and/or various augmentation systems (e.g., a Satellite Based Augmentation System (SBAS)) that may be associated with or otherwise enabled for use with one or more global and/or regional navigation satellite systems. By way of example but not limitation, an SBAS may include an augmentation system(s) that provides integrity information, differential corrections, etc., such as, e.g., Wide Area Augmentation System (WAAS), European Geostationary Navigation Overlay Service (EGNOS), Multi-functional Satellite Augmentation System (MSAS), GPS Aided Geo Augmented Navigation or GPS and Geo Augmented Navigation system (GAGAN), and/or the like. Thus, as used herein, an SPS may include any combination of one or more global and/or regional navigation satellite systems and/or augmentation systems, and SPS signals may include SPS, SPS-like, and/or other signals associated with such one or more SPS.

[0052] As used herein, a mobile device or station (MS) refers to a device such as a cellular or other wireless communication device, personal communication system (PCS) device, personal navigation device (PND), Personal Information Manager (PIM), Personal Digital Assistant (PDA), a tablet device, a laptop or some other suitable mobile device which may be capable of receiving wireless/cellular communication and/or navigation signals, such as navigation positioning signals.
The term "mobile station" (or "mobile device") is also intended to include devices which communicate with a personal navigation device (PND), such as by short-range wireless, infrared, wireline connection, or other connection, regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device or at the PND. Also, "mobile station" is intended to include all devices, including wireless communication devices, computers, laptops, tablet, etc., which are capable of communication with a server, such as via the Internet, WiFi, or other network, regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device, at a server, or at another device associated with the network. Any operable combinations of the above are also considered a "mobile station."[0053] With reference now to FIG. 2, a schematic diagram illustrating various components of an example mobile device 200, which may be similar to the mobile device 108 of FIG. 1, is shown. For the sake of simplicity, the various features / components / functions illustrated in the box diagram of FIG. 2 are connected together using a common bus to represent that these various features / components / functions are operatively coupled together. Other connections, mechanisms, features, functions, or the like, may be provided and adapted as necessary to operatively couple and configure a portable wireless device. Furthermore, one or more of the features or functions illustrated in the example of FIG. 2 may be further subdivided, or two or more of the features or functions illustrated in FIG. 2 may be combined. Additionally, one or more of the features or functions illustrated in FIG. 2 may be excluded. [0054] As shown, the mobile device 200 may include one or more local area network transceivers 206 that may be connected to one or more antennas 202. 
The one or more local area network transceivers 206 comprise suitable devices, hardware, and/or software for communicating with and/or detecting signals to/from one or more of the LAN-WAPs 106a-e depicted in FIG. 1, and/or directly with other wireless devices within a network. In some embodiments, the local area network transceiver(s) 206 may comprise a WiFi (802.11x) communication transceiver suitable for communicating with one or more wireless access points; however, in some embodiments, the local area network transceiver(s) 206 may be configured to communicate with other types of local area networks, personal area networks (e.g., Bluetooth), etc. Additionally, any other type of wireless networking technology may be used, for example, Ultra Wide Band, ZigBee, wireless USB, etc.

[0055] The mobile device 200 may also include, in some implementations, one or more wide area network transceiver(s) 204 that may be connected to the one or more antennas 202. The wide area network transceiver 204 may comprise suitable devices, hardware, and/or software for communicating with and/or detecting signals from one or more of, for example, the WAN-WAPs 104a-c illustrated in FIG. 1, and/or directly with other wireless devices within a network. In some implementations, the wide area network transceiver(s) 204 may comprise a CDMA communication system suitable for communicating with a CDMA network of wireless base stations. In some implementations, the wireless communication system may comprise other types of cellular telephony networks, such as, for example, TDMA, GSM, etc. Additionally, any other type of wireless networking technology may be used, including, for example, WiMax (802.16), etc.

[0056] In some embodiments, an SPS receiver (also referred to as a global navigation satellite system (GNSS) receiver) 208 may also be included with the mobile device 200. The SPS receiver 208 may be connected to the one or more antennas 202 for receiving satellite signals.
The SPS receiver 208 may comprise any suitable hardware and/or software for receiving and processing SPS signals. The SPS receiver 208 may request information as appropriate from the other systems, and may perform the computations necessary to determine the position of the mobile device 200 using, in part, measurements obtained by any suitable SPS procedure.

[0057] In some embodiments, the mobile device 200 may also include one or more sensors 212 coupled to a processor 210 (also referred to as a controller). For example, the sensors 212 may include motion sensors (also referred to as inertial sensors) to provide relative movement and/or orientation information which is independent of motion data derived from signals received by the wide area network transceiver(s) 204, the local area network transceiver(s) 206, and/or the SPS receiver 208. By way of example but not limitation, the motion sensors may include an accelerometer 212a, a gyroscope 212b, a geomagnetic (magnetometer) sensor 212c (e.g., a compass), an altimeter (e.g., a barometric pressure altimeter; not shown), and/or other sensor types. In some embodiments, the accelerometer 212a may be implemented based on a micro-electromechanical system (MEMS). Other types of accelerometers may be used in place of, or in addition to, the MEMS-based accelerometer. Additionally, a 3D accelerometer, comprising three perpendicularly placed accelerometers, may be implemented. In some embodiments, the gyroscope 212b may include a gyroscope based on MEMS technology, and may be a single-axis gyroscope, a double-axis gyroscope, or a 3-D gyroscope configured to sense motion about, for example, three orthogonal axes. Other types of gyroscopes may be used in place of, or in addition to, the MEMS-based gyroscope.
In some embodiments, a magnetometer, configured to measure a magnetic field intensity and/or direction (and thus may be configured to measure absolute orientation with respect to magnetic north), may also be implemented based, for example, on MEMS technology. Such MEMS-based magnetometers may be configured to detect motion caused by the Lorentz force produced by a current through a MEMS conductor. Other types of magnetometers may also be used. An altimeter may, for example, be configured to provide altitude data and thus may facilitate determining a floor in an indoor structure (e.g., an office building, a shopping mall, etc.) where the device may be located. Based on data representative of altitude measurements performed by the altimeter, navigation tasks, such as obtaining assistance data (including maps) for a particular floor in the indoor structure, may be performed.

[0058] The output of the one or more sensors 212 may be combined in order to provide motion information. For example, the estimated position of the mobile device 200 may be determined based on a previously determined position and the distance traveled from that previously determined position as determined from the motion information derived from measurements by at least one of the one or more sensors. In some embodiments, the estimated position of the mobile device may be determined based on probabilistic models (e.g., implemented through a particle filter realized using the mobile device 200) using the outputs of the one or more sensors 212. As further shown in FIG. 2, in some embodiments, the one or more sensors 212 may also include a camera 212d (e.g., a charge-coupled device (CCD)-type camera), which may produce still or moving images (e.g., a video sequence) that may be displayed on a user interface device, such as a display or a screen. Image data may also be used, in some embodiments, for navigation and location determination operations.
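The sensor-based propagation of a previously determined position, described in paragraph [0058], amounts to simple dead reckoning. The following sketch is an illustrative assumption (function and parameter names are hypothetical) of one such update step, using a step distance (e.g., from accelerometer step detection) and a heading (e.g., from the gyroscope/magnetometer):

```python
import math

def dead_reckon(last_pos, step_length_m, heading_rad):
    """Propagate the last known (x, y) position by a traveled distance and a
    heading derived from inertial sensors.

    heading_rad is measured clockwise from the +y ("north") axis, so a heading
    of 0 moves the position along +y and pi/2 moves it along +x.
    """
    x, y = last_pos
    return (x + step_length_m * math.sin(heading_rad),
            y + step_length_m * math.cos(heading_rad))
```

A probabilistic variant (e.g., the particle filter mentioned above) would apply this same motion model to many weighted position hypotheses rather than a single point.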
[0059] The processor(s) (also referred to as a controller) 210 may be connected to the local area network transceiver(s) 206, the wide area network transceiver(s) 204, the SPS receiver 208, and/or the one or more sensors 212. The processor may include one or more microprocessors, microcontrollers, and/or digital signal processors that provide processing functions, as well as other computation and control functionality. The processor 210 may also include storage media (e.g., memory) 214 for storing data and software instructions for executing programmed functionality within the mobile device. The memory 214 may be on-board the processor 210 (e.g., within the same IC package), and/or the memory may be external memory to the processor. Further details regarding an example embodiment of a processor or computation system, which may be similar to the processor 210, are provided below in relation to FIG. 4.

[0060] A number of software modules and data tables may reside in memory 214 and be utilized by the processor 210 in order to manage communications with remote devices/nodes (such as the various access points depicted in FIG. 1), position determination functionality, and/or device control functionality. As will be described in greater detail below, the processor 210 may also be configured, e.g., using software-based implementations, to control (e.g., modify, maintain, etc.) monitoring operations performed so as to monitor behavior of one or more activities / processes executing on the mobile device based on the determined location of the device in order to identify one or more potential security-risky activities / processes from the monitored one or more executing activities / processes.
In some implementations, the monitoring operations are controlled based on a determination of whether the determined location is a private location or a public location, and/or whether the location is a secure location (e.g., a location with a heightened security sensitivity, such as a government building, where certain processes / activities are not permitted or are permitted at a reduced level only). Thus, in such implementations, location determination may include identifying whether the current position of the mobile device is within areas that are defined as secure areas, public areas, private areas, etc.

[0061] As illustrated in FIG. 2, memory 214 may include a positioning module 216, an application module 218, a received signal strength indicator (RSSI) module 220, a round trip time (RTT) module 222, a process monitoring module 226, and/or an analysis module 228. It is to be noted that the functionality of the modules and/or data structures may be combined, separated, and/or be structured in different ways depending upon the implementation of the mobile device 200. For example, the RSSI module 220, the RTT module 222, and/or any of the other modules, may each be realized, at least partially, as a hardware-based implementation, and may thus include such devices as a dedicated antenna (e.g., a dedicated RTT and/or RSSI antenna), a dedicated processing unit to process and analyze signals received and/or transmitted via the antenna(s) (e.g., to determine the signal strength of received signals, determine timing information in relation to an RTT cycle), etc.

[0062] The application module 218 may be a process running on the processor/controller 210 of the mobile device 200, which requests position information from the positioning module 216. Applications typically run within an upper layer of the software architecture, and may include indoor navigation applications, shopping applications, location-aware service applications, etc.
The positioning module 216 may derive the position of the mobile device 200 using information derived from various receivers and modules of the mobile device 200. For example, to determine the mobile device's position based on RTT measurements, reasonable estimates of processing time delays introduced by each access point may first be obtained and used to calibrate/adjust the measured RTTs. The measured RTTs may be determined by the RTT module 222, which can measure the timings of signals exchanged between the mobile device 200 and the access points to derive round trip time (RTT) information. Once measured, the RTT values may be passed to the positioning module 216 to assist in determining the position of the mobile device 200.

[0063] Other information that may be determined from communications received by the mobile device 200 (e.g., using one of its transceivers) includes the received signal power, which may be represented in the form of RSSI (determined using the RSSI module 220). The RSSI module 220 may thus also provide data regarding the signals to the positioning module 216. When using RSSI measurements to determine a mobile device's position, appropriate calibration/adjustment procedures may need to be performed. A determined position of the mobile device 200 may then be provided to the application module 218.

[0064] In some embodiments, the monitoring functionality of the device 200 may be implemented, at least in part, through a process monitoring module 226. The process monitoring module 226 implements (e.g., via software in this example, although the implementation can be realized alternatively or additionally via hardware) a process, running on the processor/controller 210 of the mobile device 200, to monitor one or more of the various activities / processes performed by the device 200.
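The location-dependent treatment of observed process behavior, noted earlier (e.g., that a camera capture may be legitimate at home but security-risky in a secure facility), could be expressed as a small rule table. This is a hypothetical sketch for illustration only; the rule set, process names, and location classes are assumptions, and a real analysis module would use far richer rules or a learned model:

```python
# Hypothetical classification rules mapping (process, location class) pairs
# to a risk label. Anything not listed is treated as benign.
RULES = {
    ("camera_capture", "secure"): "security-risky",
    ("external_upload", "secure"): "security-risky",
}

def classify(process_name: str, location_class: str) -> str:
    """Classify an observed process as security-risky or benign, taking the
    device's location context into account."""
    return RULES.get((process_name, location_class), "benign")
```

Note how the same process receives different labels depending on where the device is located, which is the behavior the disclosure describes.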
As will be described in greater detail below, the process monitoring module 226 implementation is configured to control monitoring functionality, e.g., track behavior of various activities and/or determine appropriate action based on the data procured through such monitoring, based, at least in part, on the determined location of the device (the location of the device may be determined, for example, using the positioning module 216). The process monitoring module 226 may thus be configured, for example, to adjust one or more of the monitoring frequency of one or more of the monitored processes, the level of detail obtained with respect to the monitored one or more processes, which activities / processes to monitor, etc., based on information relating to the location of the device (e.g., actual coordinates of the device, a determination of whether the device is located in a secure or non-secure area, etc.).

[0065] In some embodiments, the memory 214 may also include an analysis module 228 configured to identify whether one or more of the device's activities / processes being monitored may be a malicious or benign process. As will be discussed in greater detail below, the analysis module may identify the activity / process type using activity-classification rules / processes. In some implementations, the analysis module 228 may include a machine learning implementation in which classification of the type of the activity / process behavior (e.g., as being malicious or benign) is learned dynamically over time.

[0066] In some embodiments, the mobile device 200 may also be configured to receive supplemental information that includes auxiliary position and/or motion data which may be determined from other sources (e.g., the one or more sensors 212). Such auxiliary position data may be incomplete or noisy, but may be useful as another source of independent information to enable position determination and/or other functionality. As illustrated in FIG.
2 (using dashed lines), mobile device 200 may optionally store auxiliary position/motion data 224 in memory which may be derived from information received from other sources as described below. Supplemental information may also include, but not be limited to, information that can be derived or based upon Bluetooth signals, beacons, RFID tags, and/or information derived from maps (e.g., receiving coordinates from a digital representation of a geographical map by, for example, a user interacting with a digital map).

[0067] The mobile device 200 may further include a user interface 250 which provides any suitable interface systems, such as a microphone/speaker 252, a keypad 254, and a display 256 that allows user interaction with the mobile device 200. The microphone/speaker 252 provides for voice communication services (e.g., using the wide area network transceiver(s) 204 and/or the local area network transceiver(s) 206). The keypad 254 comprises any suitable buttons for user input. The display 256 comprises any suitable display, such as, for example, a backlit LCD display, and may further include a touch screen display for additional user input modes.

[0068] With reference now to FIG. 3, a flowchart of an example procedure 300 to control, at a device, monitoring operations for various processes performed by the device is shown. As illustrated, the example procedure 300 includes determining 310 the location of a device (e.g., a device such as the device 108 or 200 of FIGS. 1 and 2, respectively). Generally, determining the device's location includes obtaining data to enable/facilitate location determination, and determining the location of the device based, at least in part, on the obtained data. In some embodiments, the location determination of the device may be performed by a module whose functionality is similar to that of the positioning module 216 depicted in FIG. 2. Accordingly, a mobile device at which the procedure 300 of FIG.
3 may be performed may be configured to receive signals from one or more remote transmitters, such as any of the satellites and/or access points 102, 104, and/or 106 of FIG. 1, and to determine the receiving device's position based, for example, on multilateration techniques. For example, the positioning engine process 310 may be configured to determine RSSI or RTT parameters (e.g., using an RTT module, such as the RTT module 222 implemented in the example embodiment of the mobile device 200, and/or an RSSI module, such as the RSSI module 220 of the mobile device 200) associated with received signals from one or more remote transmitters, and, based on the known locations of the remote transmitters, to determine the position of the mobile device. In another example, the device's position may be determined based on signal profile identification techniques, e.g., by comparing determined parameter values of, for example, RSSI and/or RTT, to stored profiles that are associated with pre-determined positions.

[0069] In embodiments in which the device's location is determined based on such metrics as RSSI and/or RTT, measurements of signals received from one or more remote transmitters, e.g., access points (each of which may be identified by an access point identifier, such as a unique MAC address associated with the access point), can be used to determine an estimate of the device's location. For example, a database (which may be stored locally at a memory module housed on the device at which the procedure 300 is implemented), containing geographic locations, processing delays, power profiles, RTT profiles, and other such information for multiple access points with known geographical positions, may be accessed and relevant data (e.g., for particular transmitters / access points from which signals at the receiver were received) may be obtained. The database data so obtained may be used to facilitate location determination of the device.
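The role of the stored per-access-point processing delays can be illustrated with a minimal RTT-to-range conversion: the access point's turnaround delay is subtracted from the measured round-trip time, and the remaining time of flight is halved and scaled by the speed of light. This sketch is an illustrative assumption (function and parameter names are hypothetical), not a description of any particular RTT module:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def rtt_to_distance(measured_rtt_s, ap_processing_delay_s):
    """Convert a measured round-trip time (seconds) to a one-way distance
    (meters), after removing the access point's processing/turnaround delay
    obtained from a calibration database."""
    time_of_flight = (measured_rtt_s - ap_processing_delay_s) / 2.0
    # Clamp to zero: noise can make the calibrated time of flight negative.
    return max(0.0, time_of_flight * SPEED_OF_LIGHT)
```

Ranges obtained this way for several access points with known positions can then feed a multilateration step to produce the position estimate.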
For example, the relative distances of the receiver receiving the signals from the transmitters / access points transmitting the signals may be determined based, at least in part, on known locations for those transmitters / access points stored on the accessed database, and an estimation of the location of the device may be computed/derived (e.g., using multilateration procedures, such as a trilateration procedure). As noted, in some embodiments, the position of the mobile device may also be determined, for example, by comparing the actual measured values of signal strength (or RSSI) and RTT obtained from one or more access points to stored profiles to identify a profile matching (approximately or precisely) the set of metric values determined by the mobile device. A location estimate associated with a matching stored profile may then be deemed to be an estimate of the current location of the device receiving the transmitters' / access points' signals.

[0070] In some embodiments, the mobile device on which the procedure 300 may be implemented may be operating inside an indoor environment where satellite signals and/or signals from WWAN access points are generally more difficult to receive, and therefore the location of the mobile device may be determined from signals received from one or more WLAN nodes (e.g., WiFi devices, Bluetooth devices, femtocells, etc.), which may be similar to the WLAN access points 106a-e depicted in FIG. 1.

[0071] In some embodiments, the access points providing the signals based on which location determination procedures may be performed may be part of a QUIPS™ (Qualcomm Indoor-Positioning System) implementation. In such embodiments, positioning determination may be performed as follows. Initially, an LCI discovery process is performed (an LCI, or location context identifier, refers to an identifier associated with such geographical areas as, for example, floors of a building).
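The profile-matching (fingerprinting) approach described above, in which measured RSSI values are compared against stored profiles associated with pre-determined positions, can be sketched as a nearest-profile search. The data layout and function names here are illustrative assumptions only:

```python
def match_profile(measured, profiles):
    """Return the location label whose stored RSSI profile is closest, in
    squared error over the access points shared with the measurement, to the
    measured RSSI values.

    measured: dict mapping AP identifier (e.g., MAC address) -> RSSI in dBm.
    profiles: dict mapping location label -> {AP identifier: RSSI in dBm}.
    """
    def distance(profile):
        shared = set(measured) & set(profile)
        return sum((measured[ap] - profile[ap]) ** 2 for ap in shared)
    return min(profiles, key=lambda label: distance(profiles[label]))
```

The location estimate associated with the best-matching profile would then be taken as the estimate of the device's current location, as the text describes.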
The discovery process causes transmission of a request to a server that identifies all LCIs. The discovery process results in determination of a coarse position of the mobile device based, for example, on the MAC IDs that are seen/detected by the mobile device. The server communicates a set of candidate LCIs to the mobile device with a list of access points. Following the LCI discovery process, an LCI disambiguation process is performed, in which one or more criteria (such as the number of access points currently visible from each LCI, e.g., the number of access points currently visible from each floor, maximum RSSI values from each LCI, median RSSI values from each LCI, etc.) may be applied to select an LCI from the candidate list. The chosen LCI represents a position estimate that is finer (i.e., has a lower uncertainty) than the position estimate resulting from the LCI discovery process. Once an LCI from the set of candidate LCIs has been chosen, a positioning process based on, for example, RSSI and/or RTT may be performed. For example, targeted scans of access point(s), limited to those associated with the selected LCI, provide the RSSI or RTT values required to determine a position approximation for the mobile device's location. [0072] Thus, a user carrying a mobile device may travel within an area (e.g., an indoor environment). To provide the user with a location-based service, an estimated position of the mobile device within the travelled area may be determined. As noted, a current estimated location of a mobile device may be determined, for example, via a position fix obtained using one or more location determination procedures, such as multilateration techniques, based on signals received from remote transmitters (e.g., access points) via one or more transceivers.
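The LCI disambiguation criteria described above (number of currently visible access points per candidate LCI, tie-broken by an RSSI statistic such as the median) might be sketched as follows; the data shapes for candidates and scan results are assumptions made for illustration:

```python
from statistics import median

def choose_lci(candidates, scans):
    """Pick the candidate LCI with the most currently visible access points,
    breaking ties by the median RSSI observed from that LCI's access points.

    candidates: {lci_id: set of AP identifiers belonging to that LCI}
    scans:      {AP identifier: measured RSSI in dBm}
    """
    def score(lci):
        visible = [scans[ap] for ap in candidates[lci] if ap in scans]
        # Tuple comparison: visibility count first, then median RSSI.
        return (len(visible), median(visible) if visible else float("-inf"))
    return max(candidates, key=score)
```

A targeted RSSI/RTT scan limited to the chosen LCI's access points would then follow, as the text describes.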
For example, an implementation may process signals received from one or more access points, such as WiFi-based wireless access points (e.g., when operating in an indoor environment), via, for example, transceivers such as the transceiver 206 of the example device of FIG. 2. Based on such received signals, an initial location of the device may be determined. If the device cannot subsequently receive signals or other information from which a substantially accurate location of the device can be determined (e.g., because the receiver may be traveling in an indoor environment where such signals are not available), estimates of the device's location may be determined based on the last position determined using signals / information from remote devices / systems, the distance traveled (as provided by various on-board sensors such as an accelerometer, gyroscope, etc.), and/or a determined orientation (determined using one or more of the measurements of the sensors coupled to the receiver). In some embodiments, determination of the location of the device may be further facilitated by assistance data (e.g., local maps) provided from an assistance data database. For example, the position of the receiver determined using signals / information from remote devices / systems, as well as estimated locations of the device determined using, for example, a particle filter implementation, may be provided in terms of a local coordinate system corresponding to local maps provided from the assistance data database. Furthermore, the determined estimated location of the device may be presented on such local maps obtained from the assistance data database. [0073] As noted, in some embodiments, determining the location of the mobile device may include identifying the type of area where the device is located, e.g., determining various characteristics of the location in which the device is located that are germane to control of the monitoring operations performed by the device.
For example, in some embodiments, the location where the device is located may be determined to be one or more of, for example, a secure public location, a non-secure public location, a secure private location, and/or a non-secure private location. Other location types to define or characterize the determined location of the mobile device may also be used. For example, the characterization of secure/non-secure can be more finely defined using a scale (e.g., secure level 10 could be the highest level of security). As noted, the characterization of a location as being secure or non-secure may refer, in some embodiments, to the level of security sensitivity associated with the characterized location. For example, a secure area, be it private (e.g., premises of a private business) or public (e.g., premises of a government office), may refer to an area with low security tolerance for communication of data outside of the associated area. [0074] Determination of the type of area the mobile device is determined to be located in may be facilitated by assistance data associated with the area. Assistance data (e.g., maps, data records arranged as a database with data about the general region where the mobile device is located, etc.) may thus be accessed to identify information corresponding to the determined location of the device. When the position of the device is determined (e.g., based on signals from remote transmitters, sensors' measurements, etc.) as geographic coordinates, as an LCI, or as some other value indicative of the location, assistance data information corresponding to the determined location is identified (and may be retrieved). Such assistance data information may include the type of area the device is located in. For example, in some embodiments, assistance data may include a map with grid points associated with geographic coordinates.
That assistance data map may be divided into various sections, each of which includes such information as the type of area that map section is defined to be. In this example, upon determination of the mobile device's geographic coordinates, the assistance data for the general region (e.g., map of a building, map of a county, map of some other geographic area, etc.) is accessed (the assistance data may have been stored locally at the device, or may need to be accessed at a remote server). The part of the assistance data corresponding to the determined location is then identified. For example, a point/grid on an assistance data map that corresponds to the determined location of the mobile device (e.g., determined based on signals received from remote transmitters, sensors' measurements, etc.) is identified on the map, and the area type associated with that identified grid/point is obtained. As noted, in some embodiments, the area-type information may indicate whether the area is considered to be private or public, and/or whether it is secure/non-secure. [0075] Turning back to FIG. 3, having determined the location of the device and/or the area-type where the mobile device is located, monitoring of behavior of one or more processes / activities executing on the device is controlled 320 based on the determined location of the device to, for example, identify one or more potentially security-risky processes from the monitored executing processes / activities.
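The map-section lookup of paragraph [0074] might be reduced to the following sketch; the section bounds, coordinate system, and area-type labels are invented placeholders, not part of any real assistance-data schema:

```python
# Hypothetical assistance-data map: sections keyed by coordinate bounds
# (x0, y0, x1, y1), each tagged with the area type that section is defined to be.
SECTIONS = [
    ((0, 0, 50, 50), "private_secure"),
    ((50, 0, 100, 50), "public_non_secure"),
]

def area_type_for(x, y, sections=SECTIONS, default="unknown"):
    """Return the area type of the map section containing the determined
    position (x, y), or a default when the position is off the map."""
    for (x0, y0, x1, y1), area_type in sections:
        if x0 <= x < x1 and y0 <= y < y1:
            return area_type
    return default
```

In practice the determined position (geographic coordinates or an LCI) would first be transformed into the local coordinate system of the assistance-data map.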
Processing corresponding to the operations 320 may thus include obtaining (e.g., receiving) the determined location, and controlling the monitoring functionality based on the obtained location. [0076] Controlling the monitoring functionality (e.g., the monitoring operations performed on the device for the purpose of, among other things, identifying and/or controlling security-risky processes on the device) may include modifying/adjusting the monitoring operations, maintaining the monitoring operations at their current level and/or configuration, etc. As noted, the determined location based on which monitoring functionality is controlled may be provided as global geographic position coordinates (e.g., relative to some global map that includes the area where the device is determined to be located), as position coordinates in a local map (e.g., a map of a building, a map of a floor of a building, a map of a shopping mall, a map of some specific region, etc.), as a location context identifier, or as some other value / identifier representative of a location of the mobile device. For example, in some embodiments, the determined location may be provided as a location-type identifier, including such location-type identifiers as secure, non-secure, public, or private. In such embodiments, the monitoring functionality may be controlled to operate in one of several modes corresponding to the discrete number of location-type identifiers.
In other words, the monitoring behavior may, in some embodiments, have one particular functionality / configuration corresponding to locations determined to be private secure locations, another functionality / configuration corresponding to locations determined to be public secure locations, yet another functionality / configuration corresponding to locations determined to be private non-secure locations, and a further functionality / configuration corresponding to locations determined to be public non-secure locations. [0077] Controlling monitoring functionality to monitor behavior of executing device processes or activities may include one or more of, for example:
• Adjusting the frequency of the monitoring of one or more processes / activities executing on a mobile device based on the determined location of the device;
• Adjusting the level of detail obtained for the monitored behavior of the one or more processes / activities executing on the device based on the determined location of the device; and/or
• Adjusting the features being observed, including which of the one or more processes / activities executing on the device are to be monitored, based on the determined location of the device.
[0078] In some implementations, adjusting the frequency of monitoring of the processes / activities executing on the device based on the determined location of the device may include increasing or decreasing the frequency of observation of processes / activities running on the device. For example, the monitoring frequency may need to be increased when the device is determined to be in a public and/or secure area. On the other hand, the frequency of monitoring the device's processes can be decreased if the device is determined to be located at the user's home. Frequency modification may be made for individual processes / activities that are to be monitored so that different monitoring frequencies may be used for different activities / processes.
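Taken together, the adjustments listed above amount to selecting one of several monitoring-mode data sets keyed by location type. The field names and values below are assumptions made for illustration, not taken from any real implementation:

```python
# Hypothetical monitoring-mode data sets, one per location type: a sampling
# frequency, a level of detail, and the set of features to observe.
POLICIES = {
    "public_secure":      {"freq_hz": 10.0, "detail": "full",       "features": {"camera", "microphone", "data_transfer"}},
    "private_secure":     {"freq_hz": 5.0,  "detail": "full",       "features": {"camera", "data_transfer"}},
    "public_non_secure":  {"freq_hz": 1.0,  "detail": "event_only", "features": {"data_transfer"}},
    "private_non_secure": {"freq_hz": 0.2,  "detail": "event_only", "features": set()},
}

def select_policy(location_type):
    """Choose the monitoring mode for the determined location type, falling
    back to the most conservative (highest-scrutiny) mode when unknown."""
    return POLICIES.get(location_type, POLICIES["public_secure"])
```

Per-activity frequency overrides, as described in paragraph [0078], could be layered on top of such a table.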
Thus, for example, the monitoring frequency for data transfer activity may be increased (e.g., to some pre-determined level, or by some pre-determined factor) in response to a determination that the mobile device is presently located in an area defined to be a private secure area (e.g., an area where unauthorized data transfer poses a risk). In contrast, the monitoring frequency for another activity / process (e.g., camera use) at the present private-secure location may be modified by a different value or factor, and may result in an increased or decreased monitoring frequency for that activity / process. [0079] As noted, controlling the monitoring functionality may also include, in some embodiments, adjusting the level of detail obtained for the monitored behavior of the one or more processes / activities executing on the mobile device based on the determined location of the device. For example, when the device is determined to be located in a secure area, the type of monitoring detail obtained through the monitoring may be heightened, and may include information on when a particular observed event happened, how many times that observed event occurred, etc. On the other hand, when the device is determined to be located in a non-secure area, it may be enough to record information indicating whether a particular observed event happened or not, without recording any further details for that event (e.g., when it happened, etc.). Here too, modification / adjustment of the level of detail obtained for the monitored activities / processes may be made for individual processes / activities that are to be monitored so that different levels of detail may be used for different activities / processes. [0080] As also noted, controlling the monitoring functionality may include, in some embodiments, adjusting what features are being observed based on the determined location of the device.
For example, in response to a determination that the device is located in a secure area, the behavior of the device's camera activities may be monitored (e.g., because capturing images in a secure area may be considered to be a security breach). However, if it is determined that the device is located in a non-secure area, then the device's camera activity might not have to be monitored at all. [0081] Modifying / adjusting the monitoring functionality may include selecting one of several data sets defining various modes of monitoring functionality. For example, each data set may define one or more frequency values specifying the frequency at which various executing activities / processes need to be monitored, one or more detail parameters specifying how much detail needs to be procured for various activities / processes in the particular monitoring mode, one or more features (e.g., activities / processes) that need to be monitored in the particular monitoring mode, etc. Monitoring operations may then be performed in accordance with the selected data set. In some implementations, modifying / adjusting monitoring functionality may include modifying values of certain parameters / values defining monitoring functionality, including modifying such parameters that define the frequency at which various executing activities / processes need to be monitored, the level of detail that needs to be obtained through the monitoring operation, and which features need to be monitored. [0082] In some embodiments, determination of whether monitored behavior of at least one of one or more processes executing on the device is to be classified as security-risky or not may also be based on the determined location. For example, rules (e.g., mapping rules) and other types of processes to determine whether at least one activity is benign or suspicious (e.g., a potentially malicious process) can be controlled based on a mobile device's location.
Thus, a particular device activity or process (e.g., data communication on a WiFi link, or some other process) may be determined to be benign (e.g., a non-risky activity/process) when observed/monitored at a first location (which may be a location in a non-secure area such as a user's home environment) because the classification rules / processes, when applied at the first location, result in a determination that the particular activity is not risky. On the other hand, if the particular activity is observed/monitored at a second location (e.g., a location associated with a secure area characterization), the classification rules / processes to classify activities (e.g., as risky or non-risky) may be such that, when applied to data obtained through monitoring of device processes / activities at the second location (e.g., a secure area location), they result in a determination that the particular activity is risky. [0083] Another example of identification of monitored device processes / activities as risky or non-risky based on the determined location of the mobile device is a situation in which a device is controlled to take a picture while sitting on a beach and uploading the picture to some social media site. This activity, when performed at a non-secure area such as the beach, may result in a determination (through application of location-dependent classification rules/processes) that the activity is benign, non-risky behavior.
Conversely, taking a picture while in a government building or other secure area may be determined (e.g., using one or more of the location-dependent classification rules / processes) to be non-benign (i.e., security-risky) behavior. [0084] In some embodiments, the activity-classification rules / processes (e.g., to identify at least one activity behavior, from one or more activities/processes, as malicious / benign, and/or risky / non-risky) may include a machine learning implementation in which the determination of behavior type can be dynamically learned over time. Thus, the mobile device (or a remote system) may include a dynamically configurable learning / analysis module (which may be similar to the analysis module 228 of the device 200 depicted in FIG. 2) operable to recognize acceptable behavior as a function of location (e.g., learning that taking photos or recording audio may not be benign behavior in certain locations). In some implementations, such a machine learning module may be configured to iteratively analyze training input data (e.g., a determined location associated with the input and an example monitored activity) and the training input data's corresponding output (e.g., classification information indicative of whether the input activity is deemed to be malicious or benign activity, or some other classification, at the input's associated location). Using the training data, such a machine learning module may be configured to derive functions, models, rules, processes, etc., that cause subsequent inputs of, for example, determined locations and certain activities/processes executable on the mobile device, to produce outputs (e.g., a classification of malicious / benign, risky / non-risky, etc.) that are consistent with the learning machine module's learned behavior. In some embodiments, the dynamically configurable learning machine module may enable configuration of the learning machine module for real-life data.
For example, in response to actual inputs (such as a location and an activity), the learning machine module may prompt a user or technician to indicate whether the observed behavior should be deemed to be malicious or benign (or some other classification) at the present determined location. In some embodiments, the learning machine module may make an initial determination of whether, for the given location and activity / process, the activity / process should be determined to be malicious or benign (or some other classification), and prompt a user or a technician to confirm or correct that initial determination. [0085] In some embodiments, the learning machine implementation may be realized as a neural network system. A neural network includes interconnected processing elements (effectively the system's neurons). The connections between processing elements in the neural network have weights that cause the output from one processing element to be weighted before being provided as input to the next interconnected processing elements. The weight values between connections can be varied, thereby enabling the neural network to adapt (or learn) in response to the training data it receives. In some embodiments, the learning machine may be implemented as a support vector machine configured to generate, for example, classification functions or general regression functions. In some embodiments, the learning machine may be implemented using support vector machines, decision tree techniques, regression techniques to derive best-fit curves, and/or other types of machine learning procedures / techniques. [0086] In some embodiments, data obtained through the monitoring of device processes / activities (including determination, based on observed / monitored behavior of various device activities / processes, of whether those activities are benign or risky) may be used to cause certain actions to be implemented.
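The location-dependent classification of paragraphs [0082]-[0083], together with the resulting actions, might be reduced to a rule table like the following; every activity name, location type, and action string here is an invented placeholder, and a learned classifier would replace the static table with a model trained on (location, activity, label) examples:

```python
# Activities deemed security-risky at particular location types (illustrative).
RISKY_AT = {
    "camera_capture": {"private_secure", "public_secure"},
    "data_upload":    {"private_secure"},
}

ACTIONS = {"risky": "alert_admin_and_disable", "benign": "allow"}

def classify(activity, location_type):
    """The same activity can be benign at one location type and risky at
    another (the beach-photo vs. government-building example)."""
    return "risky" if location_type in RISKY_AT.get(activity, set()) else "benign"

def action_for(activity, location_type):
    """Map the classification to an action to be implemented."""
    return ACTIONS[classify(activity, location_type)]
```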
For example, a determination that a particular activity / process is risky behavior (and thus may be a malicious process) may result in performance of one or more actions, including alerting an administrator of a potential security breach, disabling the activity / process (e.g., preventing or inhibiting the device from receiving or transmitting data via wireless communication links, disabling the device's camera), etc.[0087] Performing the procedures to determine location of a mobile device and control monitoring functionality of the mobile device based on the determined location may be facilitated by a processor-based computing system. With reference to FIG. 4, a schematic diagram of an example computing system 400 is shown. The computing system 400 may be housed in, for example, a handheld mobile device such as the devices 108 and 200 of FIGS. 1 and 2, respectively. The computing system 400 includes a processor-based device 410 such as a personal computer, a specialized computing device, and so forth, that typically includes a central processor unit 412. In addition to the CPU 412, the system includes main memory, cache memory and bus interface circuits (not shown). The processor-based device 410 may include a mass storage device 414, such as a hard drive and/or a flash drive associated with the computer system. The computing system 400 may further include a keyboard, or keypad, 416, and a monitor 420, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, that may be placed where a user can access them (e.g., a mobile device's screen). 
[0088] The processor-based device 410 is configured to, for example, implement procedures to determine the location of a mobile device (which may also include determining the area-type, e.g., secure, non-secure, etc., in which the device is located), and/or control monitoring of behavior of one or more processes / activities executing on the device based on the determined location of the device to identify one or more potentially security-risky processes from the monitored executing processes / activities. The mass storage device 414 may thus include a computer program product that, when executed on the processor-based device 410, causes the processor-based device to perform operations to facilitate the implementation of the above-described procedures. The processor-based device may further include peripheral devices to enable input/output functionality. Such peripheral devices may include, for example, a CD-ROM drive and/or flash drive, or a network connection, for downloading related content to the connected system. Such peripheral devices may also be used for downloading software containing computer instructions to enable general operation of the respective system/device. Alternatively and/or additionally, in some embodiments, special purpose logic circuitry, e.g., an FPGA (field programmable gate array), a DSP processor, or an ASIC (application-specific integrated circuit) may be used in the implementation of the computing system 400. Other modules that may be included with the processor-based device 410 are speakers, a sound card, and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computing system 400.
The processor-based device 410 may include an operating system. [0089] Computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the term "machine-readable medium" refers to any non-transitory computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a non-transitory machine-readable medium that receives machine instructions as a machine-readable signal. [0090] Memory may be implemented within the processing unit or external to the processing unit. As used herein, the term "memory" refers to any type of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to any particular type of memory or number of memories, or type of storage media upon which memory is stored. [0091] If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer.
By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, semiconductor storage, or other storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer; disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [0092] In addition to storage on computer-readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver receiving signals indicative of instructions and data. The instructions and data are configured to cause one or more processing units to implement the functions outlined in the claims. That is, the communication apparatus includes transmission media with signals indicative of information to perform disclosed functions. At a first time, the transmission media included in the communication apparatus may include a first portion of the information to perform the disclosed functions, while at a second time the transmission media included in the communication apparatus may include a second portion of the information to perform the disclosed functions. [0093] Although particular embodiments have been disclosed herein in detail, this has been done by way of example for purposes of illustration only, and is not intended to be limiting with respect to the scope of the appended claims, which follow.
In particular, it is contemplated that various substitutions, alterations, and modifications may be made without departing from the spirit and scope of the invention as defined by the claims. Other aspects, advantages, and modifications are considered to be within the scope of the following claims. The claims presented are representative of the embodiments and features disclosed herein. Other unclaimed embodiments and features are also contemplated. Accordingly, other embodiments are within the scope of the following claims. |
In one embodiment, the present invention includes a method for determining from a data block in a buffer a number of first operands in a first portion of the buffer and a number of second operands in a second portion of the buffer. Based on these numbers, a cyclic redundancy checksum (CRC) operation may be iteratively performed on the first and second operands to obtain a checksum result. The first and second operands are of a different length, and the checksum operation may be executed using processor instructions corresponding to the different lengths. Other embodiments are described and claimed. |
1. A method comprising:
determining, based on a data block in a buffer, a number of first operands in a first portion of the buffer and a number of second operands in a second portion of the buffer;
performing a cyclic redundancy checksum (CRC) operation on the first operands, wherein the first operands have a first length; and
iteratively performing the CRC operation on the second operands, wherein the second operands have a second length, the second length being greater than the first length.
2. The method of claim 1, further comprising performing the CRC operation on the first operands responsive to a first user-level instruction for the CRC operation, the first user-level instruction corresponding to the first length.
3. The method of claim 2, further comprising performing the CRC operation on the second operands responsive to a second user-level instruction for the CRC operation, the second user-level instruction corresponding to the second length.
4. The method of claim 1, further comprising performing the CRC operation on the first operands in a first block of a hardware engine of a general-purpose processor, and performing the CRC operation on the second operands in a second block of the hardware engine.
5. The method of claim 1, further comprising determining, based on the data block, a number of third operands in a third portion of the buffer, wherein the third operands have the first length.
6. The method of claim 5, further comprising iteratively performing the CRC operation on the third operands responsive to the first user-level instruction for the CRC operation, the first user-level instruction corresponding to the first length.
7. A system comprising:
first means for performing a checksum operation on data in a buffer means according to a first checksum instruction for source data of a first width until a natural alignment boundary of source data of a second width is reached; and
second means for performing the checksum operation on data in the buffer means according to a second checksum instruction for source data of the second width.
8. The system of claim 7, further comprising means for determining a length of a header corresponding to a first portion of the data in the buffer means, the first portion extending from a beginning of the buffer means to the natural alignment boundary.
9. The system of claim 8, wherein the means for determining the length of the header is further for determining a length of a body corresponding to a second portion of the data in the buffer means, the second portion beginning at the natural alignment boundary.
10. The system of claim 9, wherein the first means for performing the checksum operation comprises a first logic block of an execution unit of a processor for iteratively executing the first checksum instruction.
11. The system of claim 10, wherein the second means for performing the checksum operation comprises a second logic block of the execution unit for iteratively executing the second checksum instruction.
12. The system of claim 10, further comprising means for storing a running remainder of the checksum operation, wherein the means for storing is for providing at least a portion of the contents of a destination register, along with source data of the first width, to the first logic block.
13. An apparatus comprising:
a sequencer for partitioning a data block of arbitrary size stored in a buffer into at least a first portion and a second portion, the first portion comprising source data of a first width and the second portion comprising source data of a second width; and
an execution unit coupled to the buffer for sequentially performing a cyclic redundancy check (CRC) operation on the source data of the first width and a residue value from a destination location, performing the CRC operation on the source data of the second width and the residue value from the destination location, and providing at least a portion of an output of the execution unit corresponding to the residue value to the destination location.
14. The apparatus of claim 13, wherein the execution unit is to perform the CRC operation on the source data of the first width responsive to a first user-level instruction and on the source data of the second width responsive to a second user-level instruction.
15. The apparatus of claim 14, wherein the execution unit includes first exclusive-OR (XOR) tree logic for performing the CRC operation responsive to the first user-level instruction and second XOR tree logic for performing the CRC operation responsive to the second user-level instruction, wherein the execution unit is located in a general-purpose processor.
16. The apparatus of claim 13, further comprising:
a first register coupled between the buffer and the execution unit for receiving the source data of the first width and the source data of the second width from the buffer; and
a second register coupled to the execution unit for receiving the residue value from the execution unit and providing the residue value to the execution unit, wherein the second register corresponds to the destination location.
17. The apparatus of claim 13, wherein the first portion extends from a beginning of the buffer to a first natural alignment boundary of the source data of the second width.
18. The apparatus of claim 17, wherein the sequencer is to partition the data block into a third portion comprising source data of the first width, wherein the third portion extends from a last natural alignment boundary of the source data of the second width to an end of the buffer.
Validating Data With Processor Instructions

Technical Field

Embodiments of the present invention relate to data processing and, more particularly, to the determination of checksums such as cyclic redundancy checks (CRCs).

Background

In a data processing system, data transmitted between a first location and a second location should be received accurately so that additional processing performed on the data at the second location is equally accurate. To be able to detect errors in data transmission, data validation is often performed. One example of data validation uses a checksum attached to a packet to be transmitted. For example, a CRC checksum can be generated by a transmission source and appended to the data to be transmitted. The checksum can be calculated according to one of a number of different algorithms and then compared with a similar checksum generated at the receiving end from the received data. If the two checksums are identical, the receiving system can be confident that the transmitted data is error-free. If the generated checksum differs from the transmitted checksum, however, an error is indicated. Checksums of this type are used throughout networking technologies to detect transmission errors; other uses include database integrity, application-level data integrity checks, and more.

Different applications implement CRC information in different ways. For example, CRC calculations can be performed in hardware or in software. To implement CRC calculations in hardware, a dedicated hardware engine is typically provided within the system, and data to be subjected to a CRC calculation is sent to that engine, which computes the CRC that is then appended to the data for transmission, for example, from the system. There are various drawbacks to using such an offload engine, including the overhead of sending data to the engine.
In addition, because state-related overhead data often must be transferred along with the payload, it is difficult to perform stateless hardware offloading, increasing complexity and slowing the progress of useful work.

Because many systems lack such an offload engine, CRC calculations are typically performed in software, usually with a lookup table scheme. However, such software calculation of CRC values is a notoriously slow, computationally intensive operation, and the memory footprint of the lookup table can be large, which can further affect performance. These slow calculations can therefore degrade network performance and consume processing resources. As an example, performing a CRC calculation can take 5 to 15 processor cycles per byte of data. Software CRC performance is thus too slow for general use in high-speed networks.

Brief Description of the Drawings

FIG. 1 is a flow chart of a method in accordance with one embodiment of the present invention.
FIG. 2 is a block diagram of a portion of a processor for performing a checksum operation in accordance with one embodiment of the present invention.
FIG. 3 is a block diagram of another portion of a processor in accordance with one embodiment of the present invention.
FIG. 4 is a block diagram of a system in accordance with one embodiment of the present invention.
FIG. 5 is a flow diagram of a method for generating a checksum value in accordance with one embodiment of the present invention.
FIG. 6 is a block diagram of a network configuration in which embodiments of the present invention may be used.

Detailed Description

In various embodiments, a checksum operation can be implemented using an instruction set architecture (ISA) extension to calculate a checksum value.
More specifically, user-level instructions can be provided within the ISA to enable a programmer to directly perform a desired checksum operation, such as a CRC operation, in a general purpose processor, such as a central processing unit (CPU), via the instructions. The CRC operation may be a 32-bit CRC operation (i.e., a CRC32 operation that generates a 32-bit running remainder, as discussed further below) and, in various embodiments, may correspond, for example, to the CRC used in the Institute of Electrical and Electronics Engineers (IEEE) 802.3 Ethernet protocol (published 2002) or in other protocols.

In various implementations, various opcode instructions can be provided to perform CRC calculations on data of different sizes. For example, in some embodiments, different opcodes may support CRC calculations on groups of 8, 16, 32, and 64 bits, although the scope of the invention is not limited in this respect. In this way, CRC calculations can be performed quickly in hardware without a lookup table or the like. Moreover, these calculations can be performed using architecturally visible general purpose processor registers, via integer operations performed according to the different opcodes. As a result, the CRC can be calculated in the processor without the overhead and complexity of offload hardware such as network offload hardware, and a greater amount of data transfer can be sustained (for example, in terms of input/output (I/O) operations per second). Note that although described herein primarily in connection with CRC operations, embodiments of the present invention may also be used to perform other checksum operations.

Moreover, to make effective use of these user-level instructions, embodiments of the present invention may also divide or segment the data to be subjected to a checksum operation.
As an example, a data block of any size to be subjected to a checksum operation can be partitioned into multiple data sets, each having a different base width. These base widths may correspond to the widths of the different opcode instructions, such as 8, 16, 32, or 64 bits. The partitions can be selected so that most of the data falls in the partition corresponding to the widest instruction, enabling efficient operation. In addition, the divisions between different portions (e.g., between the partition of the smallest width and the partition of the largest width) may correspond to natural alignment boundaries of the widest width. In this way, the checksum operation can be implemented in hardware with a minimum number of data iterations.

Referring now to FIG. 1, shown is a flow diagram of a method in accordance with one embodiment of the present invention. Method 100 can be used to obtain a checksum using user-level instructions implemented on processor hardware, such as an execution unit of a CPU. As shown in FIG. 1, method 100 can begin by performing a series of exclusive-OR (XOR) operations on the data in source and destination registers (block 110). Note that the XOR operations may correspond to a polynomial arithmetic operation and, more specifically, to a polynomial division by a selected polynomial value. Although the value can take many different formats in different embodiments, in an implementation for performing CRC32 operations the polynomial can correspond to 11EDC6F41H, although the scope of the invention is not limited in this respect. The data in the source register may correspond, for example, to data present in the processor pipeline that has been received by the processor, or to data present in the processor pipeline that is to be transferred therefrom.
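The polynomial division that the series of XOR operations implements can be sketched in software. The bit-serial model below is for illustration only (non-reflected form, using the polynomial 11EDC6F41H named above); each conditional XOR of the divisor is the work a hardware XOR tree flattens into one combinational pass.

```python
# Polynomial mod-2 division: XOR the 33-bit divisor into the running
# remainder wherever the top bit is set. Illustrative sketch only.
POLY = 0x11EDC6F41  # the polynomial 11EDC6F41H named in the text

def poly_mod(data: bytes) -> int:
    """Remainder of the message (times x^32) divided by POLY, bit by bit."""
    rem = 0
    for byte in data:
        for i in range(7, -1, -1):
            rem = (rem << 1) | ((byte >> i) & 1)
            if rem & (1 << 32):   # top bit set: subtract (XOR) the divisor
                rem ^= POLY
    for _ in range(32):           # append 32 zero bits (multiply by x^32)
        rem <<= 1
        if rem & (1 << 32):
            rem ^= POLY
    return rem
```

Because the map is linear over GF(2), the remainder of an XOR of two equal-length messages equals the XOR of their remainders, which is what lets hardware fold blocks into a running remainder.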
As an example, a set of data in a buffer corresponding to a desired group size (e.g., 16 bits, 32 bits, etc.) may be provided to a source register, which may be a general purpose register of the processor. Alternatively, in some embodiments, source data can be obtained from memory. The destination register may correspond to a storage location for the running remainder obtained from the XOR operations, and may also be a general purpose register of the processor.

In various embodiments, the XOR operations may be performed in dedicated hardware within the processor pipeline. For example, an execution unit of the processor (such as an integer execution unit) can be extended with circuitry that implements a series of XOR operations. For example, the circuit may correspond to an XOR tree that performs a polynomial division with the desired polynomial as the divisor. In various embodiments, the polynomial used in the XOR operations can be hardwired into the logic gates of the XOR tree. In addition, the XOR tree can be configured to implement the required pre- and post-processing, such as bit reflection, via XOR operations, and the XOR tree logic can include multiple partitions, each configured to handle operations on a different data size.

Still referring to FIG. 1, the result corresponding to the running remainder obtained from the XOR operations may next be stored in the destination register (block 120). Note that after the system is initialized, the destination register can be set to a predetermined value, for example, all ones, all zeros, or another such value. During checksum operation, the running remainder is then continuously updated with the result of the current checksum operation. More specifically, the remainder of the polynomial division implemented by the current checksum operation can be stored in the destination register.

Next, it can be determined whether additional source data is present (decision block 130).
For example, in some embodiments the buffer may contain data that the system has received and whose checksum is to be verified. Data can be fed into the source register in blocks to implement the checksum operation. Accordingly, it may be determined at decision block 130 whether additional source data is present in the buffer. As will be described further below, the source data in the buffer can be divided into segments having different base widths, where each base width corresponds to a different flavor of user-level checksum instruction. If additional source data is present, the next data block is provided to the source register and control passes back to block 110, as described above.

If it is determined at decision block 130 that there is no additional source data, control passes to block 140. There, the result of the checksum operation can be provided as the current value (e.g., running remainder) stored in the destination register (block 140). As mentioned above, this checksum value can be used in many different ways. For example, when receiving data, the calculated checksum can be compared with the received checksum to confirm that the data was received accurately. When transmitting, a checksum can be appended to the data to be transmitted so that the data can be verified at the receiving end. A checksum also has other uses, such as in hash functions or for generating numbers according to a pseudo-random numbering scheme.

Depending on the desired architecture, a processor for implementing checksum operations in accordance with an embodiment of the present invention can take many different forms. Referring now to FIG. 2, shown is a block diagram of a portion of a processor for performing a checksum operation in accordance with one embodiment of the present invention. As shown in FIG. 2, a portion of processor 300 is shown.
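Before turning to the hardware details, the flow of method 100 (blocks 110-140) can be modeled briefly in software. The sketch assumes the bit-reflected CRC-32C convention: 82F63B78H is the bit-reversed form of the polynomial 11EDC6F41H mentioned above; the helper names and the customary final inversion at the end are illustrative assumptions, not part of the patented method itself.

```python
# Software model of method 100: repeatedly fold source blocks into the
# running remainder held in a "destination register", then report it.
POLY_REFLECTED = 0x82F63B78  # bit-reversed 11EDC6F41H (assumed convention)

def crc_step(remainder: int, byte: int) -> int:
    """One 8-bit accumulation (blocks 110-120): XOR in a source byte,
    then reduce bit by bit, as an XOR tree would in a single pass."""
    remainder ^= byte
    for _ in range(8):
        remainder = (remainder >> 1) ^ (POLY_REFLECTED if remainder & 1 else 0)
    return remainder

def checksum(buffer: bytes, init: int = 0xFFFFFFFF) -> int:
    remainder = init              # predetermined initial value (all ones)
    for byte in buffer:           # decision block 130: more source data?
        remainder = crc_step(remainder, byte)
    return remainder ^ 0xFFFFFFFF # block 140, with a customary final inversion
```

Under these assumed conventions, checksum(b"123456789") should reproduce the published CRC-32C check value, E3069283H.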
More specifically, processor 300 includes an XOR tree 310, a first register 320, and a second register 330, all of which may be part of a processor pipeline. In various embodiments, the XOR tree 310 can be configured in different ways. For example, XOR tree 310 can be implemented with multiple three-input XOR gates in a first stage, the outputs of which are coupled to similar XOR gates in a second stage, and so on; in this embodiment, each stage of the XOR tree can be one-third the size of the previous stage. Of course, other configurations are possible.

As also shown in FIG. 2, processor 300 includes a buffer 340, which may likewise be located within the processor pipeline (e.g., as a buffer, queue, etc.). Alternatively, buffer 340 can be a cache memory associated with processor 300. Buffer 340 may be an arbitrarily sized buffer for temporarily storing data to be subjected to a checksum operation. In some embodiments, the data may correspond, for example, to the size of a network protocol unit. As further shown in FIG. 2, a sequencer 335 can be coupled to buffer 340. Sequencer 335 can include logic for segmenting the data within buffer 340 into different portions in accordance with one embodiment of the present invention, wherein each portion is destined for a checksum operation of a given data width.

In the embodiment of FIG. 2, the first register 320 may correspond to a source register and the second register 330 may correspond to a destination register. In various embodiments, these registers may be general purpose registers within processor 300. Of course, processor 300 can include many other registers, logic, functional units, and so forth; the portion shown in FIG. 2 is for ease of illustration only. As shown in FIG.
2, in accordance with an embodiment of the present invention, at least a portion of the first register 320 and a portion of the second register 330 are provided to the XOR tree 310 to perform a checksum operation. In the embodiment shown in FIG. 2, which illustrates an 8-bit CRC accumulation, a single byte of data (B0) is provided from the first register 320 to the XOR tree 310, while a 4-byte portion of the second register 330 is provided to the XOR tree 310. This 4-byte portion may correspond to the running remainder of a CRC32 operation. Using this data, the XOR tree 310 can perform an XOR-based computation to generate a result that includes a remainder portion. The remainder portion may be stored back to the second register 330 as the running remainder, as shown in FIG. 2. In this way, CRC operations can be performed efficiently, with minimal processor resources and in minimal cycle time. In the embodiment of FIG. 2, for an 8-bit accumulate operation, additional portions of the first register 320 may be incrementally provided to the XOR tree 310 along with the current contents of the second register 330 (i.e., the 32-bit running remainder). Thus, to obtain a CRC checksum for 64-bit data in the first register 320, eight iterations of the XOR operation can be performed in the XOR tree 310, each iteration using a single byte of data from the first register 320 and the current running remainder in the second register 330.

Note that different hardware may be present to handle CRC calculations of different bit widths. For example, the logic can include different XOR tree structures to handle these CRC calculations. Referring now to FIG. 3, shown is a block diagram of another portion of a processor in accordance with one embodiment of the present invention. As shown in FIG. 3, processor 400 includes a different XOR tree 410 (e.g., in addition to XOR tree 310 of FIG. 2) that is coupled to receive data from the first register 320 and the second register 330.
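For comparison with the software methods discussed in the Background, the single-byte accumulation that the XOR tree of FIG. 2 computes in one combinational pass is exactly what a table-driven software CRC performs with a 256-entry lookup. The sketch below (assuming, for illustration, a reflected CRC-32C step) shows that the two formulations agree.

```python
# The byte-at-a-time step of FIG. 2 done two ways: bit-serial (the work
# an XOR tree flattens into one pass) and via the 256-entry lookup table
# that software methods need but the hardware avoids.
POLY_REFLECTED = 0x82F63B78  # bit-reversed CRC-32C polynomial (assumption)

def step_bitwise(remainder: int, byte: int) -> int:
    remainder ^= byte
    for _ in range(8):
        remainder = (remainder >> 1) ^ (POLY_REFLECTED if remainder & 1 else 0)
    return remainder

# Build the lookup table directly from the bit-serial definition.
TABLE = [step_bitwise(0, b) for b in range(256)]

def step_table(remainder: int, byte: int) -> int:
    return (remainder >> 8) ^ TABLE[(remainder ^ byte) & 0xFF]
```

The equivalence follows from the GF(2) linearity of the shift-and-reduce rounds: the high 24 bits pass through unchanged as a shift, while the low byte indexes the precomputed table.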
As also shown in FIG. 3, a buffer 340 can be present to provide data for the CRC calculation, and sequencer 335 can control the division of the data in buffer 340 into different segments. Note that in the embodiment of FIG. 3, XOR tree 410 is configured to process a 64-bit CRC accumulation. Thus, the entire contents of the first register 320 (i.e., bytes B0-B7) can be coupled to the XOR tree 410 at once, for processing together with the data in the second register 330 in an XOR operation. The result data is stored back to the second register 330, the desired portion of which corresponds to the running remainder. Although described with respect to these particular implementations in FIGS. 2 and 3, it should be understood that the scope of the present invention is not limited in this respect, and in other embodiments different hardware configurations for performing CRC operations may be present.

Referring now to Table 1 below, shown is a listing of instructions of an instruction set architecture (ISA) for supporting CRC operations in accordance with various embodiments of the present invention. As shown in Table 1, each instruction, which can be referenced by its opcode, performs a CRC32 operation using a source register and a destination register. As shown in the table, there can be different flavors, each of which performs the CRC operation on a destination operand and a source operand of a given size. Thus, referring to the first row of Table 1, that instruction performs a CRC32 operation with an 8-bit source operand and a 32-bit destination operand. Similarly, the instruction of the second row of Table 1 performs a CRC32 operation with a 16-bit source operand and a 32-bit destination operand.
In a similar manner, the instruction of the third row of Table 1 performs a CRC32 operation with a 32-bit source operand and a 32-bit destination operand. Because the first three instructions operate on data blocks of at most 32 bits, these instructions are valid in both the 64-bit mode of operation and the legacy (i.e., 32-bit) mode of operation. In contrast, the fourth and fifth rows of Table 1 represent CRC operations to be performed on 8-bit and 64-bit source operands, respectively, together with 64-bit destination operands; accordingly, these last two instructions can be executed only in the 64-bit mode of operation.

Table 1
  Instruction       Source operand    Destination operand    Valid modes
  CRC32 (flavor 1)  8-bit             32-bit                 64-bit and legacy
  CRC32 (flavor 2)  16-bit            32-bit                 64-bit and legacy
  CRC32 (flavor 3)  32-bit            32-bit                 64-bit and legacy
  CRC32 (flavor 4)  8-bit             64-bit                 64-bit only
  CRC32 (flavor 5)  64-bit            64-bit                 64-bit only

In various embodiments, a programmer may use these user-level instructions as, for example, native instructions to implement a CRC operation in accordance with, for example, the flow diagram of FIG. 1.

Embodiments can be implemented in many different system types. Referring now to FIG. 4, shown is a block diagram of a multiprocessor system in accordance with one embodiment of the present invention. As shown in FIG. 4, the multiprocessor system is a point-to-point interconnect system and includes a first processor 470 and a second processor 480 coupled via a point-to-point interconnect 450. Each of processors 470 and 480 can be a multicore processor including first and second processor cores (i.e., processor cores 474a and 474b, and processor cores 484a and 484b). Although not shown for ease of illustration, first processor 470 and second processor 480 (and more specifically the cores therein) may include XOR tree logic within their execution units to execute user-level CRC instructions in accordance with an embodiment of the present invention. The first processor 470 further includes a memory controller hub (MCH) 472 and point-to-point (P-P) interfaces 476 and 478. Similarly, the second processor 480 includes an MCH 482 and P-P interfaces 486 and 488. As shown in FIG.
4, MCHs 472 and 482 couple the processors to respective memories, namely memory 432 and memory 434.

First processor 470 and second processor 480 can be coupled to a chipset 490 via P-P interconnects 452 and 454, respectively. As shown in FIG. 4, chipset 490 includes P-P interfaces 494 and 498. Chipset 490 also includes an interface 492 for coupling chipset 490 to a high performance graphics engine 438. In one embodiment, a point-to-point interconnect 439 can couple these components. Chipset 490 can in turn be coupled to a first bus 416 via an interface 496.

As shown in FIG. 4, various input/output (I/O) devices 414 can be coupled to the first bus 416, along with a bus bridge 418 that couples the first bus 416 to a second bus 420. I/O device 414 can include at least one component capable of providing intercommunication between the multiprocessor system and a network (not shown in FIG. 4) in compliance with any applicable protocol. In one embodiment, I/O device 414 can include any combination of digital and/or analog hardware and/or software of an I/O subsystem that can process one or more network protocol units to be transmitted and/or received over a network. In one embodiment, the I/O subsystem may include, for example, a network interface card (NIC), which may include a Media Access Control (MAC) layer of, for example, a data link layer (DLL) as defined in the Open Systems Interconnection (OSI) model of networking protocols. The OSI model is defined by the International Organization for Standardization (ISO), located at 1 rue de Varembé, Case postale 56, CH-1211 Geneva 20, Switzerland.

Still referring to FIG. 4, in one embodiment the second bus 420, coupled to the first bus 416 via the bus bridge 418, can be a low pin count (LPC) bus. Various devices may be coupled to the second bus 420 including, for example, a keyboard/mouse 422, a communication device 426, and a data storage unit 428, which in one embodiment may include code 430.
Additionally, an audio I/O 424 can be coupled to the second bus 420. Note that other architectures are possible; for example, instead of the point-to-point architecture of FIG. 4, a system can implement a multi-drop bus or another such architecture.

As noted above, in various embodiments the multiprocessor system of FIG. 4 can be coupled to a network, which can be any network, such as the Internet, an intranet, a local area network (LAN), a storage area network (SAN), a wide area network (WAN), a metropolitan area network (MAN), or a wireless network. The network may exchange traffic with I/O device 414 using, for example, the Ethernet standard (described in the IEEE 802.3 protocol and related standards) or any other communication standard, and the traffic may include checksums in accordance with an embodiment of the present invention.

Note that data entering the system can have any size, for example, that of a network protocol unit. When received by the system, the data can be temporarily stored in a buffer, such as an arbitrarily sized buffer. To efficiently perform checksum calculations such as CRC operations on the data, embodiments may divide the data into predetermined chunk sizes for efficient checksum operation. Referring now to FIG. 5, shown is a flow diagram of a method for generating a CRC value for a data block of any size in a buffer of any size, in accordance with one embodiment of the present invention. As shown in FIG. 5, method 500 can begin by initializing a CRC value (block 505). In one embodiment, the CRC value can be stored in a destination register.
Although the CRC value may have different initial values in various embodiments, in one embodiment the initial CRC value may correspond to all logic ones; for a CRC32 operation, for example, the initial CRC value may correspond to FFFFFFFFH, although the scope of the present invention is not limited in this respect.

Still referring to FIG. 5, various lengths can then be determined for the data block in the buffer. More specifically, a head length (HL) can first be calculated (block 510). The HL may correspond to the initial amount of data in the buffer before the first natural alignment boundary for the wide flavor of the CRC operation occurs. For example, in an implementation in which the user-level CRC instructions take different flavors, the widest flavor may operate on 64-bit operands, and the first natural alignment boundary may correspond to the first position in the buffer that is naturally aligned for 64-bit data. Thus, the HL calculated in block 510 may correspond, for example, to the number of bytes from the beginning of the buffer to the first natural alignment boundary for 64-bit data. Where the wide flavor of the instruction corresponds to 64 bits, the head length can be at most 7 bytes.

Next, a body length (BL) can be calculated (block 515), corresponding to the amount of data in the buffer, beginning at the first natural alignment boundary, over which further wide flavors of the CRC operation can still be performed. For a 64-bit wide operation, the body thus ends at a natural alignment boundary within 63 bits of the end of the buffer. After the body length is calculated in block 515, control passes to block 520, where a tail length (TL) can be calculated. The tail length may correspond to the remaining data in the buffer, from the last natural alignment boundary to the end of the buffer. Different entities can perform the above operations.
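The head, body, and tail lengths of blocks 510-520 reduce to simple alignment arithmetic. A sketch, assuming a byte-addressed buffer and an 8-byte (64-bit) wide flavor (the function name is illustrative):

```python
# Split a buffer of `length` bytes starting at byte address `addr` into
# head (up to the first 8-byte boundary), body (whole 8-byte chunks),
# and tail (after the last boundary), per blocks 510-520 of FIG. 5.
def split_lengths(addr: int, length: int, width: int = 8):
    head = min((-addr) % width, length)        # HL: 0..width-1 bytes
    body = ((length - head) // width) * width  # BL: a multiple of width
    tail = length - head - body                # TL: 0..width-1 bytes
    return head, body, tail
```

For example, a 26-byte block starting at byte address 3 splits into a 5-byte head, a 16-byte body, and a 5-byte tail.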
In one embodiment, a sequencer, which may be a software-implemented state machine or hardware such as the sequencer 335 shown in FIGS. 2 and 3, may analyze the buffer to generate the various lengths. After the calculation of the lengths of the different buffer portions is complete, an offset can be set to zero (block 525). In one embodiment, a zero offset may correspond to the beginning of the buffer.

Still referring to FIG. 5, it can then be determined whether HL is greater than zero (decision block 530). If it is, data is still present in the first portion of the buffer, and control passes to block 535. There, a narrow flavor of the CRC operation, corresponding to a user-level CRC instruction, can be executed (block 535). More specifically, the CRC operation can be performed using the source data at the buffer offset location (i.e., corresponding to the start of the buffer in the first iteration) and the destination data, where the destination data may correspond to the running remainder value in the destination register (i.e., in the first iteration, the CRC value after initialization). Although CRC operations can be implemented in various ways, in one implementation the CRC operation can be performed in dedicated hardware of the processor pipeline for performing CRC operations on narrow data, such as single-byte source data.

After the CRC operation completes, control passes to block 540, where the offset can be set equal to the current offset plus the size of the narrow data format, e.g., 1 byte (block 540). Next, HL can be decremented by one (block 545). These operations thus advance through the buffer to the next portion of the source data. Control then passes back to decision block 530 to determine whether the head length is still greater than zero. If so, blocks 535, 540, and 545 are executed in a loop until the source data in the first portion of the buffer is exhausted.
When the source data in the first portion of the buffer is used up (i.e., the first natural alignment boundary has been reached), decision block 530 determines that HL is not greater than zero, and control passes to decision block 550.

At decision block 550, it may be determined whether the body length is greater than zero. If so, the wide-format data (e.g., 64 bits) present in the second portion of the buffer is to be processed, and control passes to block 555. There, a wide flavor of the CRC operation, corresponding to a user-level CRC instruction, can be executed (block 555). More specifically, the CRC operation can be performed using the source data at the current buffer offset location (i.e., in the first iteration, the first natural alignment boundary for 64-bit data) and the destination data, where the destination data may correspond to the current running remainder value in the destination register (i.e., in the first iteration, the CRC value present after the narrow-format executions completed). Although the CRC operation can be implemented in various ways, in one implementation the CRC operation can be performed in dedicated hardware of the pipeline for performing CRC operations on wide data (e.g., 8-byte source data). Note that in various embodiments this dedicated hardware may differ from the hardware that handles the narrow format. After the CRC operation completes, control passes to block 560, where the offset can be set equal to the current offset plus the size of the wide data format, e.g., 8 bytes (block 560). Next, BL can be decremented by one (block 565).

Control then passes back to decision block 550. When the loop of blocks 555, 560, and 565 has executed one or more times so that the body length has decremented to zero, decision block 550 determines that no body data remains, and control passes to decision block 570, where it can be determined whether the tail length is greater than zero (decision block 570).
If it is greater than zero, control passes to block 575, where a narrow flavor of the CRC operation, which may likewise correspond to a user-level CRC instruction, can be executed (block 575). More specifically, the CRC operation can be performed using the source data at the buffer offset location (i.e., in the first iteration, the last natural alignment boundary of the wide data before the end of the buffer) and the destination data, where the destination data may correspond to the running remainder value in the destination register (i.e., in the first iteration, the current CRC value at the end of the wide-format CRC operations). In one implementation, the CRC operation may be performed in dedicated hardware of the processor pipeline for performing CRC operations on narrow data. After the CRC operation completes, control passes to block 580, where the offset can be set equal to the current offset plus the size of the narrow data format, e.g., 1 byte (block 580). Next, TL can be decremented by one (block 585).

Control then passes back to decision block 570. When it is determined at decision block 570 that the tail length is not greater than zero, no additional data remains in the buffer, and control passes to block 590. There, the CRC value can be provided, for example, to a predetermined location for use according to the needs of a particular application (block 590). The CRC value thus corresponds to a checksum of the full amount of data in the buffer. In one embodiment, the destination register in which the CRC value is incrementally accumulated during execution of method 500 may provide the value for the desired use. Examples of such use include appending the calculated checksum to data to be transmitted from the system, or comparing the generated checksum against a checksum received with incoming data.
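The three loops of method 500 can be modeled end to end. The sketch below assumes a 1-byte narrow flavor, an 8-byte wide flavor, and a reflected CRC-32C step (82F63B78H, an assumed convention; the text names the polynomial 11EDC6F41H); the function names are illustrative. Because the wide step is definitionally eight narrow steps, the partitioned traversal returns the same running remainder as a plain byte-by-byte pass, whatever the starting alignment.

```python
# Software model of method 500: narrow steps over the head, wide steps
# over the body, narrow steps over the tail. The "wide" step here folds
# 8 bytes in sequence; the hardware of FIG. 3 does that work in one
# XOR-tree pass.
POLY_REFLECTED = 0x82F63B78  # assumed reflected CRC-32C polynomial

def narrow_step(rem: int, byte: int) -> int:
    rem ^= byte
    for _ in range(8):
        rem = (rem >> 1) ^ (POLY_REFLECTED if rem & 1 else 0)
    return rem

def wide_step(rem: int, qword: bytes) -> int:
    for b in qword:               # 8 bytes consumed per wide step
        rem = narrow_step(rem, b)
    return rem

def method_500(buf: bytes, addr: int = 0, init: int = 0xFFFFFFFF) -> int:
    hl = min((-addr) % 8, len(buf))   # block 510: head length in bytes
    bl = (len(buf) - hl) // 8         # block 515: body length in wide units
    tl = len(buf) - hl - 8 * bl       # block 520: tail length in bytes
    rem, off = init, 0                # blocks 505, 525
    for _ in range(hl):               # blocks 530-545: narrow loop
        rem = narrow_step(rem, buf[off]); off += 1
    for _ in range(bl):               # blocks 550-565: wide loop
        rem = wide_step(rem, buf[off:off + 8]); off += 8
    for _ in range(tl):               # blocks 570-585: narrow loop
        rem = narrow_step(rem, buf[off]); off += 1
    return rem                        # block 590: running remainder
```

The partitioning affects only how many instruction iterations are needed, not the resulting remainder, which is why the wide flavor can safely cover as much of the buffer as alignment permits.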
Alternatively, the checksum can be used as a hash function, as a generated pseudo-random number, and so forth.

In a particular embodiment, method 500 can be used with two different user-level instructions (corresponding to a narrow format and a wide format) to implement CRC operations on different data sizes. In one embodiment, the narrow format may correspond to a single byte and the wide format to 8 bytes, although the scope of the invention is not limited in this respect. In other embodiments, for example, additional segments of the buffer may be handled with additional flavors of the CRC operation (e.g., 16-bit or 32-bit blocks). In the embodiment shown in FIG. 5, with method 500, the narrow flavor of the CRC instruction is executed up to the first natural alignment boundary so that the wide flavor of the CRC instruction can then be used efficiently on the body of data in the buffer, after which the narrow flavor is executed on any data remaining beyond the last natural alignment boundary of the buffer. Although described with respect to this particular implementation in FIG. 5, it should be understood that the scope of the present invention is not limited in this respect.

With embodiments of the present invention, data validation can be performed using one or more CRC instructions that are more time-efficient than a purely software-based approach. That is, in accordance with an embodiment of the present invention, a processor can perform the calculation of a CRC value in fewer cycles than a software-based method requires. These CRC instructions can also be more cache-efficient, because they can occupy less instruction cache space (i.e., have a smaller instruction cache footprint) than software-based methods. In addition, no lookup table is needed, avoiding data cache pollution effects.
Furthermore, achieving the CRC operation using fewer processor cycles reduces power consumption. Thus, some embodiments may be implemented in a portable or wireless system that typically operates on battery power, although the scope of the invention is not limited in this respect. Referring now to FIG. 6, shown is a block diagram of a network configuration in which embodiments of the present invention may be utilized. As shown in FIG. 6, network system 600 can link various entities. In particular, enterprise network 605 can be coupled to a storage area network (SAN) 650 via a metropolitan area network (MAN) 640. Although shown with this particular implementation in the embodiment of FIG. 6, it should be understood that the scope of the invention is not limited thereto. Still referring to FIG. 6, enterprise network 605 can include various components, including systems such as personal computers (PCs) 610a and 610b coupled to switch 625 via links 612. Enterprise network 605 may be an Ethernet-based enterprise network and may also include a data center 620, coupled to switch 625 via link 618, which may include one or more servers 615a and 615b. In one embodiment, links 612 and 618 may be Ethernet links, such as 1 Gigabit Ethernet (GbE) links, although other such links are also possible. In one embodiment, switch 625 can include a MAC, a switch fabric, and the like. Switch 625 can in turn be coupled to a multiple service provisioning platform (MSPP) 630 via link 628, where link 628 can also be an Ethernet link. In various embodiments, MSPP 630 may include different components including, for example, a transceiver, a multiplexer/demultiplexer, a framer, a MAC, and the like. MSPP 630 is coupled to MAN 640, for example, via an optical link, such as an optical carrier (OC)-192 level optical link. Still referring to FIG. 6, the MAN 640 can be coupled to the SAN 650 via link 645.
The SAN 650 can include various components including, for example, an adapter 652, a controller 654, and a plurality of storage devices 656, which can be a redundant array of independent disks (RAID) or another such storage mechanism. Adapter 652 is capable of communicating with storage devices 656 in accordance with various protocols, such as Small Computer System Interface (SCSI), Fibre Channel (FC), and/or Serial Advanced Technology Attachment (S-ATA). To confirm the validity of data communicated through network system 600, various components within the system can perform data validation, such as CRC calculations in accordance with an embodiment of the present invention. Thus, for example, the processors within servers 615a and 615b, computers 610a and 610b, and controller 654 of SAN 650 can each be adapted to perform CRC operations based on user-level checksum instructions such as those provided in embodiments of the present invention. Although described with this particular implementation in the embodiment of FIG. 6, it should be understood that the scope of the present invention is not limited thereto. Embodiments can be implemented in code and can be stored on a storage medium having instructions stored thereon that can be used to program a system to execute the instructions.
The storage medium may include, but is not limited to, any type of disk, including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), rewritable compact disks (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); magnetic or optical cards; or any other type of medium suitable for storing electronic instructions. While the invention has been described in terms of a limited number of embodiments, many modifications and variations will be apparent to those skilled in the art. All such modifications and variations that fall within the spirit and scope of the invention are intended to be embraced by the appended claims.
A memory subsystem having memory cells formed on integrated circuit dies is disclosed. After receiving a command from a host system to store data, the memory subsystem queues the command to allocate pages of memory cells in a plurality of dies based on a determination that each of the dies is available to perform a data programming operation for the command. Based on the page allocation, the memory subsystem generates a portion of a media layout to at least map logical addresses of the data identified in the command to the allocated pages and receives the data from the host system. The memory subsystem stores the data into the pages using a multi-pass programming technique, where an atomic multi-pass programming operation may use at least two pages in separate planes in one or more dies to program at least a portion of the data. |
1. A method comprising: receiving, in a memory subsystem, a command from a host system identifying a size of data to be stored in the memory subsystem; queuing the command in the memory subsystem having memory cells formed on a plurality of integrated circuit dies; allocating pages of memory cells in the plurality of integrated circuit dies based on determining that each of a plurality of dies of the plurality of integrated circuit dies is available to execute a data programming operation for the command; generating a portion of a media layout to map at least a logical address of the data identified in the command to the allocated pages; receiving the data from the host system in the memory subsystem in response to the command after the generating of the portion of the media layout; and storing the data into the pages using a multi-pass programming technique, wherein an atomic multi-pass programming operation is configured to use at least two pages in separate dies of the plurality of integrated circuit dies or in separate planes of one die to program at least a portion of the data.
2. The method of claim 1, wherein the portion of the data is programmed into the at least two pages in an atomic operation.
3. The method of claim 2, wherein a first page of the at least two pages is in a first integrated circuit die; and a second page of the at least two pages is in a second integrated circuit die.
4. The method of claim 3, wherein the multi-pass programming operation includes a first programming pass for the first page and a second programming pass for the second page.
5. The method of claim 4, wherein the first pass is programmed in a first mode; and the second pass is programmed in a second mode.
6. The method of claim 5, wherein the first mode and the second mode are different ones of: single-level cell (SLC) mode; multi-level cell (MLC) mode; triple-level cell (TLC) mode; and quad-level cell (QLC) mode.
7.
The method of claim 6, wherein the allocating comprises minimizing a mismatch between a storage capacity of the pages programmed using the multi-pass programming technique and the size of the data identified in the command.
8. The method of claim 7, wherein the pages are allocated from a set of blocks configured to be erased together.
9. The method of claim 8, wherein the allocating is based on a programming mode of memory cells identified for a next available page in the block set.
10. A non-transitory computer storage medium storing instructions that, when executed in a memory subsystem, cause the memory subsystem to perform a method, the method comprising: receiving, in the memory subsystem, a command from a host system identifying a size of data to be stored in the memory subsystem; queuing the command in the memory subsystem having memory cells formed on a plurality of integrated circuit dies; allocating pages of memory cells in the plurality of integrated circuit dies based on determining that each of a plurality of dies of the plurality of integrated circuit dies is available to execute a data programming operation for the command; generating a portion of a media layout to map at least a logical address of the data identified in the command to the allocated pages; receiving the data from the host system in the memory subsystem in response to the command after the generating of the portion of the media layout; and storing the data into the pages using a multi-pass programming technique, wherein an atomic multi-pass programming operation is configured to use at least two pages in separate ones of the plurality of integrated circuit dies, or in separate planes in one die, to program at least a portion of the data.
11.
The non-transitory computer storage medium of claim 10, wherein the data is received together from the host system in one communication; and the at least two pages comprise a first page in a first integrated circuit die and a second page in a second integrated circuit die.
12. The non-transitory computer storage medium of claim 11, wherein the multi-pass programming operation comprises a first pass of programming the first page and a second pass of programming the second page.
13. The non-transitory computer storage medium of claim 12, wherein the first pass is programmed in a first mode; the second pass is programmed in a second mode; and the first mode and the second mode are different ones of: a mode in which one bit is stored in each memory cell; a mode in which two bits are stored in each memory cell; a mode in which three bits are stored in each memory cell; and a mode in which four bits are stored in each memory cell.
14. The non-transitory computer storage medium of claim 13, wherein the allocating comprises minimizing a mismatch between a storage capacity of the pages programmed using the multi-pass programming technique and the size of the data identified in the command.
15. The non-transitory computer storage medium of claim 13, wherein the method further comprises: storing a page map having entries each identifying a page in a block and a memory cell programming mode for the page; wherein the allocating is based on the memory cell programming mode identified in the page map.
16. A memory subsystem comprising: a plurality of integrated circuit dies having memory cells; and at least one processing device configured to: receive a command from a host system identifying a size of data to be stored in the memory subsystem; queue the command; allocate pages of memory cells in the plurality of integrated circuit dies based on determining that each of a plurality of dies of the plurality of integrated circuit dies is available to execute a data programming operation for the command; generate
a portion of a media layout to map at least a logical address of the data identified in the command to the allocated pages; receive the data from the host system in the memory subsystem in response to the command after the pages have been allocated for the data; and store the data into the pages using a multi-pass programming technique, wherein an atomic multi-pass programming operation is configured to use at least two pages in separate planes in one or more of the plurality of integrated circuit dies to program at least a portion of the data.
17. The memory subsystem of claim 16, wherein the at least one processing device is further configured to store a page map having entries each identifying a page in a block and a memory cell programming mode for the page; wherein the allocation is based on the memory cell programming mode identified in the page map.
18. The memory subsystem of claim 17, wherein the multi-pass programming operation comprises: performing a first pass of programming on a first page in a first integrated circuit die in a first mode; and performing a second pass of programming on a second page in a second integrated circuit die in a second mode.
19. The memory subsystem of claim 18, wherein the first mode and the second mode are different ones of: a mode in which each memory cell stores one bit; a mode in which each memory cell stores two bits; a mode in which each memory cell stores three bits; and a mode in which each memory cell stores four bits.
20. The memory subsystem of claim 17, wherein the pages are allocated to minimize a mismatch between a storage capacity of the pages programmed using the multi-pass programming technique and the size of the data identified in the command.
Multi-Pass Data Programming in Memory Subsystems with Multiple Dies and Planes

RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application No. 62/861,786, filed June 14, 2019 and entitled "Multi-Pass Data Programming in a Memory Sub-System having Multiple Dies and Planes," and U.S. Patent Application Serial No. 16/866,326, filed May 4, 2020 and entitled "Multi-Pass Data Programming in a Memory Sub-System having Multiple Dies and Planes," the entire disclosures of which applications are hereby incorporated herein by reference.

TECHNICAL FIELD

At least some embodiments disclosed herein relate generally to memory systems and, more specifically but not by way of limitation, to dynamic data placement for multi-pass data programming in memory subsystems having multiple integrated circuit dies and planes of memory cells.

BACKGROUND

A memory subsystem may include one or more memory devices that store data. The memory devices may be, for example, non-volatile memory devices and volatile memory devices. In general, a host system may utilize a memory subsystem to store data at and retrieve data from a memory device.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are shown by way of example and not limitation in the figures of the accompanying drawings, in which like reference numerals refer to similar elements. FIG.
1 illustrates an example computing system including a memory subsystem in accordance with some embodiments of the present disclosure. FIG. 2 shows a dynamic data placer configured to determine a media layout in a way that reduces and/or avoids conflicts in concurrent media accesses when writing data. FIG. 3 shows an example of a memory subsystem with dynamic data placement. FIG. 4 shows an example of a data structure configured to support dynamic data placement. FIG. 5 shows an example of dynamic media layout determination. FIG. 6 illustrates a block set allocated across integrated circuit dies for multi-pass programming of data. FIG. 7 shows a method of dynamic data placement for multi-pass programming of data across integrated circuit dies. FIG. 8 is a block diagram of an example computer system in which embodiments of the present disclosure can operate.

DETAILED DESCRIPTION

At least some aspects of the present disclosure relate to dynamic data placement in a memory subsystem for avoiding conflicts between concurrent streams of sequential writes in a logical address space. The memory subsystem may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of memory devices and memory modules are described below in conjunction with FIG. 1. In general, a host system may utilize a memory subsystem that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem. A media layout specifies the mapping between the addresses used in commands received in the memory subsystem from the host system and the physical memory locations in the memory media of the memory subsystem. A fixed media layout can result in media access conflicts among active write streams, increased buffer lifetime, and/or increased buffering requirements.
Buffer lifetime corresponds to the time data is buffered in the memory subsystem before the data is committed, written, stored, or programmed into the memory media of the memory subsystem. Multiple streams of write commands can be generated, for example, by the host system to which the memory subsystem is connected, by a garbage collection process running in the memory subsystem, and/or by one or more write streams from the host system (e.g., streams writing in different zones of a namespace configured in the memory subsystem). A memory medium may have multiple memory devices capable of writing data in parallel. Thus, at least some of the write command streams may be executed in parallel in the memory subsystem when committing data into the memory media of the memory subsystem. However, one memory device can support only one write operation at a time. An access conflict occurs when two write commands are mapped by the media layout to operate on the same memory device. Each conflict increases the corresponding buffer lifetime. The media layout can be randomized by mapping logical addresses to random memory locations in the memory media of the memory subsystem. A randomized media layout reduces conflicts. However, when a predetermined media layout is used, collisions can still occur even when the number of write streams is equal to or less than the number of memory devices that can independently perform write operations in parallel. At least some aspects of the present disclosure address the above and other deficiencies through dynamic data placement. For example, the determination of the portion of the media layout for the logical addresses used in incoming write commands can be deferred until the write commands can be executed without a conflict.
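The deferral just described can be sketched as follows (all names are invented for illustration; this is not the patent's implementation): a scheduler holds write commands in a queue and binds each one to a physical location only when an idle die exists, so two concurrent commands can never be mapped onto the same busy die.

```python
from collections import deque

def schedule_writes(pending, free_dies, media_layout, next_page):
    """Pop queued write commands and bind each to a distinct free die.

    pending:      deque of (stream_id, lba) write commands awaiting layout
    free_dies:    set of die ids currently idle
    media_layout: dict mapping lba -> (die, page), filled in lazily
    next_page:    dict mapping die -> next free page index on that die
    Returns the list of (lba, die) pairs scheduled this cycle.
    """
    scheduled = []
    while pending and free_dies:
        stream_id, lba = pending.popleft()
        die = free_dies.pop()          # any idle die; no conflict is possible
        # The logical-to-physical mapping is created only now, at schedule
        # time, rather than being fixed in advance.
        media_layout[lba] = (die, next_page[die])
        next_page[die] += 1
        scheduled.append((lba, die))
    return scheduled
```

With, say, four idle dies and three queued commands, all three commands land on three different dies in one scheduling cycle, and commands left in the queue simply wait for the next cycle instead of colliding.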
When the memory media is configured on integrated circuit dies (e.g., as NAND memory cells), the media layout determination may be based on the identification of the integrated circuit dies that are available for performing write operations at the time of input/output scheduling. The media layout is determined such that the logical addresses of commands to be executed in parallel are mapped to different integrated circuit dies that are available for concurrent/parallel operations without conflict. Thus, media access conflicts among write commands from different active streams can be avoided entirely. When the number of active write streams is less than the number of integrated circuit dies in the memory subsystem, no media access conflicts occur when the dynamic media layout is used. In general, a write stream contains a set of commands that write, trim, and rewrite a set of data together as a group. Within the group, data can be written into the logical space sequentially, randomly, or pseudo-sequentially. Preferably, the data of the group is written into an erase block set, where the memory cells in the erase block set store data for the stream but not data from other streams. The erase block set can be erased to remove the data of the stream without erasing the data of other streams. In some cases, conflicts can occur when logical addresses of different streams are mapped into the same erase block set, where the data of the different streams cannot be erased individually. Such conflicts can also be avoided through the dynamic media layout technique. Optionally, the data to be stored in the memory subsystem can be dynamically placed across multiple integrated circuit dies and planes of memory cells for multi-pass programming.
Multi-pass programming can then be performed with an optimal or improved match between the storage capacity allocated for the next atomic write operation and the size of the data to be stored in the allocated capacity. FIG. 1 illustrates an example computing system 100 that includes a memory subsystem 110 in accordance with some embodiments of the present disclosure. Memory subsystem 110 may include media such as one or more volatile memory devices (e.g., memory device 102), one or more non-volatile memory devices (e.g., memory device 104), or a combination of such media. Memory subsystem 110 may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of storage devices include solid state drives (SSDs), flash drives, universal serial bus (USB) flash drives, embedded multimedia controller (eMMC) drives, universal flash storage (UFS) drives, secure digital (SD) cards, and hard disk drives (HDDs). Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), and various types of non-volatile dual in-line memory modules (NVDIMMs). Computing system 100 may be a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., an airplane, drone, train, automobile, or other conveyance), an Internet of Things (IoT) enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such a computing device that includes memory and a processing device. Computing system 100 may include a host system 120 coupled to one or more memory subsystems 110. FIG. 1 shows an example of a host system 120 coupled to one memory subsystem 110.
As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, and the like. Host system 120 may include a processor chipset (e.g., processing device 118) and a software stack executed by the processor chipset. The processor chipset may include one or more cores, one or more caches, a memory controller (e.g., controller 116, such as an NVDIMM controller), and a storage protocol controller (e.g., a Peripheral Component Interconnect Express (PCIe) controller, a Serial Advanced Technology Attachment (SATA) controller). The host system 120 uses the memory subsystem 110, for example, to write data to and read data from the memory subsystem 110. Host system 120 may be coupled to memory subsystem 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a Double Data Rate (DDR) memory bus, Small Computer System Interface (SCSI), a Dual In-line Memory Module (DIMM) interface (e.g., a DIMM socket interface that supports Double Data Rate (DDR)), Open NAND Flash Interface (ONFI), Double Data Rate (DDR), Low Power Double Data Rate (LPDDR), or any other interface. The physical host interface may be used to transfer data between host system 120 and memory subsystem 110. When memory subsystem 110 is coupled with host system 120 via a PCIe interface, host system 120 may further utilize an NVM Express (NVMe) interface to access components (e.g., memory device 104).
The physical host interface may provide an interface for passing control, address, data, and other signals between memory subsystem 110 and host system 120. FIG. 1 shows one memory subsystem 110 as an example. In general, host system 120 can access multiple memory subsystems via the same communication connection, multiple separate communication connections, and/or a combination of communication connections. The processing device 118 of the host system 120 may be, for example, a microprocessor, a central processing unit (CPU), a processing core of a processor, an execution unit, or the like. In some instances, controller 116 may be referred to as a memory controller, a memory management unit, and/or an initiator. In one example, controller 116 controls communications over a bus coupled between host system 120 and memory subsystem 110. In general, controller 116 may send commands or requests to memory subsystem 110 for desired access to memory devices 102, 104. Controller 116 may further include interface circuitry to communicate with memory subsystem 110. The interface circuitry may convert responses received from memory subsystem 110 into information for host system 120. The controller 116 of the host system 120 may communicate with the controller 115 of the memory subsystem 110 to perform operations such as reading data, writing data, or erasing data at the memory devices 102, 104, and other such operations. In some instances, the controller 116 is integrated within the same package as the processing device 118. In other instances, the controller 116 is separate from the package of the processing device 118. Controller 116 and/or processing device 118 may include hardware such as one or more integrated circuits (ICs) and/or discrete components, a buffer memory, a cache memory, or a combination thereof.
Controller 116 and/or processing device 118 may be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The memory devices 102, 104 may include any combination of different types of non-volatile memory components and/or volatile memory components. Volatile memory devices (e.g., memory device 102) may be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory components include NAND-type flash memory and write-in-place memory, such as three-dimensional cross-point ("3D cross-point") memory. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without being pre-erased. NAND-type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Each of memory devices 104 may include one or more arrays of memory cells. One type of memory cell, for example, a single-level cell (SLC), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple-level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs), can store multiple bits per cell. In some embodiments, each of the memory devices 104 may include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of such. In some embodiments, a particular memory device may include an SLC portion of memory cells, as well as an MLC portion, a TLC portion, or a QLC portion.
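For illustration, the bits-per-cell figures above translate directly into the storage capacity a page offers in each programming mode, which is what the claimed capacity-to-data-size matching works with. The helper below is a hypothetical sketch (the 16 KiB cell count per page in the example is an assumption, not from the disclosure):

```python
# Bits stored per memory cell in each of the programming modes named above.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}

def page_capacity_bytes(cells_per_page, mode):
    """Bytes a page can hold when its cells are programmed in `mode`."""
    return cells_per_page * BITS_PER_CELL[mode] // 8
```

For example, a page of 16384 cells holds 2 KiB in SLC mode but 8 KiB in QLC mode, so the mode chosen for each allocated page changes how closely the total allocated capacity can match the size of the data in a write command.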
The memory cells of memory device 104 may be grouped into pages, which may refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages may be grouped to form blocks. Although non-volatile memory devices such as 3D cross-point and NAND-type memories (e.g., 2D NAND, 3D NAND) are described, memory device 104 may be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric transistor random access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), spin transfer torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM). Memory subsystem controller 115 (or controller 115 for simplicity) may communicate with memory device 104 to perform operations such as reading data, writing data, or erasing data at memory device 104, and other such operations (e.g., in response to commands scheduled on a command bus by controller 116). Controller 115 may include hardware such as one or more integrated circuits (ICs) and/or discrete components, a buffer memory, or a combination thereof. The hardware may include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. Controller 115 may be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. Controller 115 may include a processing device 117 (processor) configured to execute instructions stored in local memory 119.
In the illustrated example, the local memory 119 of the controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120. In some embodiments, local memory 119 may include memory registers storing memory pointers, fetched data, and the like. Local memory 119 may also include read-only memory (ROM) for storing microcode. Although the example memory subsystem 110 in FIG. 1 is shown as including the controller 115, in another embodiment of the present disclosure, a memory subsystem 110 does not include a controller 115 and may instead rely on external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem). In general, controller 115 may receive commands or operations from host system 120 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access to memory device 104. Controller 115 may be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., physical block address) associated with memory device 104. Controller 115 may also include host interface circuitry to communicate with host system 120 via the physical host interface. The host interface circuitry may convert commands received from the host system into command instructions to access memory device 104, and convert responses associated with memory device 104 into information for host system 120. Memory subsystem 110 may also include additional circuitry or components that are not shown.
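The logical-to-physical address translation mentioned above can be modeled minimally as a lookup table from host LBAs to physical locations. The sketch below is hypothetical (the class and method names are invented) and deliberately ignores wear leveling, garbage collection, and ECC:

```python
class FlashTranslationLayer:
    """Toy model of the controller's logical-to-physical mapping."""

    def __init__(self):
        self.l2p = {}  # lba -> (die, block, page)

    def bind(self, lba, die, block, page):
        # Record where the data for this host LBA physically lives.
        self.l2p[lba] = (die, block, page)

    def resolve(self, lba):
        """Translate a host LBA to its physical location, or None if unmapped."""
        return self.l2p.get(lba)
```

A real controller keeps such tables in local memory and persists them to the media; the point here is only the direction of the translation, from the address space the host sees to die/block/page coordinates.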
In some embodiments, memory subsystem 110 may include a cache or buffer (e.g., DRAM) and address circuitry (e.g., row and column decoders) that can receive an address from controller 115 and decode the address to access memory device 104. In some embodiments, memory device 104 includes a local media controller 105 that operates in conjunction with memory subsystem controller 115 to perform operations on one or more memory cells of memory device 104. An external controller (e.g., memory subsystem controller 115) may manage memory device 104 externally (e.g., perform media management operations on memory device 104). In some embodiments, memory device 104 is a managed memory device, which is a raw memory device combined with a local controller (e.g., local controller 105) that performs media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. Computing system 100 includes a dynamic data placer 113 in memory subsystem 110 that dynamically determines the media layout for placing data associated with logical addresses in the media units/memory devices 102-104. In some embodiments, controller 115 in memory subsystem 110 includes at least a portion of dynamic data placer 113. In other embodiments, or in combination, controller 116 and/or processing device 118 in host system 120 includes at least a portion of dynamic data placer 113. For example, controller 115, controller 116, and/or processing device 118 may include logic circuitry implementing dynamic data placer 113. For example, controller 115, or the processing device 118 (processor) of host system 120, may be configured to execute instructions stored in memory for performing the operations of the dynamic data placer 113 described herein. In some embodiments, dynamic data placer 113 is implemented in an integrated circuit chip disposed in memory subsystem 110.
In other embodiments, dynamic data placer 113 is part of the operating system, a device driver, or an application of host system 120. Dynamic data placer 113 may determine, at the time of input/output scheduling in memory subsystem 110, the portion of the media layout that maps the logical addresses of data to be written into memory locations in the media units/memory devices 102 to 104 that are available to commit/program the data. When a media unit/memory device (e.g., 102 or 104) is available to commit/program data, a write command is scheduled for execution in memory subsystem 110; and dynamic data placer 113 generates a portion of the media layout for the write command and maps the logical address used in the write command to identify a memory location in the available media unit/memory device (e.g., 102 or 104). Execution of the write command causes the memory subsystem 110 to commit/program the data associated with the write command into the media unit/memory device (e.g., 102 or 104). Since the media unit/memory device (e.g., 102 or 104) is known to be available to commit/program data independent of the operations of other media units/memory devices (e.g., 102 or 104), there is no media access conflict. When multiple media units/memory devices (e.g., 102 and 104) are available, logical addresses used in commands from multiple write streams may be mapped, by the dynamically generated portion of the media layout, to the multiple media units/memory devices (e.g., 102 and 104) respectively, such that there are no media access conflicts when executing the commands from the multiple write streams. Additional details regarding the operation of dynamic data placer 113 are described below. FIG. 2 shows dynamic data placer 113 configured to determine media layout 130 in a manner that reduces and/or avoids conflicts in concurrent media accesses when writing data. For example, dynamic data placer 113 and media layout 130 may be implemented in computer system 100 of FIG.
1. In FIG. 2, multiple write commands 123A to 123N are scheduled for parallel execution. The number of write commands 123A to 123N scheduled for parallel execution is based on the number of media units/memory devices 109A to 109N (e.g., memory devices 102 and/or 104 shown in FIG. 1) available for parallel operation. Write commands 123A to 123N may each come from multiple write streams. Write commands 123A to 123N use logical block addressing (LBA) addresses 131 . . . 133 to specify locations for the write operations. In scheduling write commands 123A to 123N, dynamic data placer 113 generates a mapping of LBA addresses 131 . . . 133 to physical addresses 141 . . . 143. Having determined that media units/memory devices 109A to 109N are available for parallel write operations, dynamic data placer 113 maps each of LBA addresses 131 . . . 133 to a different one of media units/memory devices 109A . . . 109N. Thus, physical addresses 141 . . . 143 for LBA addresses 131 . . . 133 correspond to memory regions 151 . . . 153 in different media units/memory devices 109A . . . 109N. Since none of the physical addresses 141 . . . 143 are for memory regions in the same media unit (e.g., 109A or 109N), no conflict occurs when the write commands 123A . . . 123N are executed in parallel. Therefore, media access conflicts are eliminated. In general, write operations across different media units/memory devices 109A to 109N may not complete in unison. Thus, when a subset of media units/memory devices 109A . . . 109N becomes available for the next write operation, another subset of media units/memory devices 109A . . . 109N may still be busy with their operations and unavailable for the next write operation. Some of the media units/memory devices 109A . . . 109N may be busy performing other operations, such as read operations or erase operations, and are therefore unavailable to perform write operations.
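The conflict-free mapping described above can be sketched in a few lines. This is an illustrative model, not firmware code; the function name and data shapes are assumptions made for the example.

```python
# Hypothetical sketch: each scheduled write command's logical address is
# mapped to a different available media unit, so no two parallel commands
# touch the same unit and no media access conflict can occur.

def place_writes(lba_addresses, available_units):
    """Map each LBA to a distinct available media unit.

    Returns a dict {lba: unit_id}. Raises if more commands are scheduled
    than units are available, since that would force a media access conflict.
    """
    if len(lba_addresses) > len(available_units):
        raise ValueError("more commands than available media units")
    return {lba: unit for lba, unit in zip(lba_addresses, available_units)}

# Three commands, three free dies: every LBA lands on a different unit.
layout = place_writes([131, 132, 133], ["die_A", "die_B", "die_C"])
assert len(set(layout.values())) == len(layout)  # no shared unit, no conflict
```

Because the placer only maps as many commands as there are free units, executing the resulting batch in parallel never serializes on a busy die.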
In general, when scheduling one or more write commands for the available subset of media units/memory devices 109A . . . 109N, dynamic data placer 113 generates a portion of media layout 130 to map the LBA addresses of the scheduled write commands to physical addresses of memory regions in the available subset of media units/memory devices 109A . . . 109N. Thus, the scheduled commands can be executed without a media access conflict. FIG. 3 shows an example of a memory subsystem with dynamic data placement. For example, the memory subsystem of FIG. 3 may be implemented in the memory subsystem 110 of FIG. 1 using the dynamic data placer 113 of FIG. 2. However, the techniques of FIGS. 1 and 2 are not limited to the implementation of the memory subsystem shown in FIG. 3. For example, the techniques may be implemented in a flat block device, a namespace-enabled device, or a zoned-namespace-enabled device (e.g., the memory subsystem shown in FIG. 3). Accordingly, the disclosure presented herein is not limited to the example of FIG. 3. In FIG. 3, namespace 201 is configured on the media storage capacity of memory subsystem 110. Namespace 201 provides a logical block addressing space that can be used by host system 120 to specify memory locations for read or write operations. Namespace 201 may be allocated over a portion of the media storage capacity of memory subsystem 110, or over the entire media storage capacity of memory subsystem 110. In some cases, multiple namespaces may be allocated on separate, non-overlapping portions of the media storage capacity of memory subsystem 110. In FIG. 3, the namespace 201 is configured with a plurality of zones 211, 213 . . . 219. Each zone (e.g., 211) in the namespace allows random read access to the LBA addresses in the zone (e.g., 211) and sequential write access to the LBA addresses in the zone (e.g., 211), but does not allow random write access to random LBA addresses in the zone (e.g., 211).
Therefore, writing data into the zone (e.g., 211) is performed in a predetermined sequential order in the LBA address space of namespace 201. When a zone (e.g., 211) in namespace 201 is configured, it is possible (e.g., for simplicity) to predetermine the media layout for the zone (e.g., 211). The LBA addresses in the zone (e.g., 211) may be pre-mapped into the media 203 of memory subsystem 110. However, as discussed above, such a predetermined media layout can cause media access conflicts when there are multiple parallel write streams. Randomizing the mapping from LBA addresses in the zone (e.g., 211) to memory locations in media 203 can reduce, but not eliminate, the conflicts. Preferably, dynamic data placer 113 is configured in memory subsystem 110 to create portions of media layout 130 at the time write commands are scheduled for execution, thereby completely eliminating conflicts. For example, media 203 of memory subsystem 110 may have multiple integrated circuit dies 205 . . . 207. Each of the integrated circuit dies (e.g., 205) may have multiple planes 221 . . . 223 of memory cells (e.g., NAND memory cells). Each of the planes (e.g., 221) may have multiple blocks 231 . . . 233 of memory cells (e.g., NAND memory cells). Each of the blocks (e.g., 231) may have multiple pages 241 . . . 243 of memory cells (e.g., NAND memory cells).
The memory cells in each page (e.g., 241) are configured to be programmed to store/write/commit data together in an atomic operation; and the memory cells in each block (e.g., 231) are configured to have their data erased together in an atomic operation. When a write command (e.g., 123A) for storing data in one zone (e.g., 211) and another write command (e.g., 123N) for storing data in another zone (e.g., 213) are scheduled for parallel execution, with two integrated circuit dies (e.g., 205 and 207) available for concurrent operation, the dynamic data placer 113 maps the LBA addresses (e.g., 131 and 133) of the write commands (e.g., 123A and 123N) into pages located in the different dies (e.g., 205 and 207). Therefore, media access conflicts can be avoided. FIG. 4 shows an example of data structures configured to support dynamic data placement. For example, the media layout 130 of FIG. 2 or 3 may be implemented using the data structures of FIG. 4. In FIG. 4, zone map 301 is configured to provide media layout information for a zone (e.g., 211) in a namespace (e.g., 201). Zone map 301 may have multiple entries. Each entry in the zone map 301 identifies information about a zone (e.g., 211), such as the starting LBA address 311 of the zone (e.g., 211), the block set identifier 313 of the zone (e.g., 211), the cursor value 315 of the zone (e.g., 211), the state 317 of the zone (e.g., 211), and so on. Host system 120 writes data into a zone (e.g., 211) beginning at the zone starting LBA address 311. Host system 120 writes data into the zone (e.g., 211) sequentially in LBA space. After an amount of data has been written into the zone (e.g., 211), the current starting LBA address for writing subsequent data is identified by cursor value 315. Each write command for the zone moves the cursor value 315 to a new starting LBA address for the next write command for the zone.
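The sequential-write discipline and the advancing cursor just described can be modeled minimally. The class and field names below are illustrative assumptions, not structures from the patent.

```python
# Minimal model of a zone in a zoned namespace: reads at any LBA in the zone
# are allowed, but a write must land exactly at the write cursor, which then
# advances to the next starting LBA (the behavior of cursor value 315 above).

class Zone:
    def __init__(self, start_lba, capacity):
        self.start_lba = start_lba
        self.capacity = capacity
        self.cursor = start_lba          # next LBA that may be written
        self.data = {}

    def read(self, lba):                 # random read access is allowed
        return self.data.get(lba)

    def write(self, lba, value):         # only sequential writes are allowed
        if lba != self.cursor:
            raise ValueError("random write access is not allowed in a zone")
        if lba >= self.start_lba + self.capacity:
            raise ValueError("zone is full")
        self.data[lba] = value
        self.cursor += 1                 # advance to the next starting LBA

z = Zone(start_lba=1000, capacity=4)
z.write(1000, "a"); z.write(1001, "b")   # sequential writes succeed
assert z.read(1000) == "a" and z.cursor == 1002
```

Attempting `z.write(1003, "c")` here would raise, because LBA 1003 is ahead of the cursor: out-of-order writes into the zone are rejected.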
State 317 may have a value indicating that the zone (e.g., 211) is empty, full, implicitly open, explicitly open, closed, and the like. In FIG. 4, logical-to-physical block map 303 is configured to facilitate the translation of LBA addresses (e.g., 331) into physical addresses in the media (e.g., 203). The logical-to-physical block map 303 may have multiple entries. An LBA address (e.g., 331) may be used as, or translated into, an index to an entry in the logical-to-physical block map 303. The index can be used to look up the entry for the LBA address (e.g., 331). Each entry in the logical-to-physical block map 303 identifies, for an LBA address (e.g., 331), the physical address of a block of memory in the media (e.g., 203). For example, the physical address of a block of memory in the media (e.g., 203) may include a die identifier 333, a block identifier 335, a page map entry identifier 337, and so on. Die identifier 333 identifies a particular integrated circuit die (e.g., 205 or 207) in media 203 of memory subsystem 110. Block identifier 335 identifies a particular block of memory (e.g., NAND flash memory) within the integrated circuit die (e.g., 205 or 207) identified using die identifier 333. Page map entry identifier 337 identifies an entry in page map 305. Page map 305 may have multiple entries. Each entry in page map 305 may include a page identifier 351 that identifies a page of memory cells within a block of memory cells (e.g., NAND memory cells). For example, page identifier 351 may include the wordline number of the page and the sub-block number of the page within a block of NAND memory cells. Additionally, the entry for the page may include a programming mode 353 of the page. For example, a page may be programmed in SLC mode, MLC mode, TLC mode, or QLC mode. When configured in SLC mode, each memory cell in the page stores one bit of data. When configured in MLC mode, each memory cell in the page stores two bits of data.
When configured in TLC mode, each memory cell in the page stores three bits of data. When configured in QLC mode, each memory cell in the page stores four bits of data. Different pages in an integrated circuit die (e.g., 205 or 207) may have different data programming modes. In FIG. 4, block set table 307 stores data controlling aspects of the dynamic media layout for a zone (e.g., 211). Block set table 307 may have multiple entries. Each entry in the block set table 307 identifies the number/count 371 of integrated circuit dies (e.g., 205 and 207) in which the data of the zone (e.g., 211) is stored. For each of the integrated circuit dies (e.g., 205 and 207) used for the zone (e.g., 211), the entry of the block set table 307 has a die identifier 373, a block identifier 375, a page map entry identifier 377, and so on. The die identifier 373 identifies a particular integrated circuit die (e.g., 205 or 207) in the media 203 of the memory subsystem 110 in which subsequent data of the zone (e.g., 211) can be stored. Block identifier 375 identifies a particular block (e.g., 231 or 233) of memory (e.g., NAND flash memory) within the integrated circuit die (e.g., 205 or 207) identified using die identifier 373, in which block (e.g., 231 or 233) the subsequent data of the zone (e.g., 211) can be stored. Page map entry identifier 377 identifies an entry in page map 305 that identifies a page (e.g., 241 or 243) that can be used to store the subsequent data of the zone (e.g., 211). For example, memory subsystem 110 receives multiple streams of write commands. In one embodiment, each respective stream of the multiple streams is configured to write data sequentially in a logical address space; in another embodiment, a stream of the multiple streams is configured to write data pseudo-sequentially, or randomly, in the logical address space. Each write stream includes a set of commands tagged to write, trim, and rewrite a set of data together as a group.
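The zone map, logical-to-physical block map, page map, and block set table entries described above can be sketched as plain records. The field names track the reference numerals in the text; the concrete Python types, and the small `resolve` helper, are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ZoneMapEntry:          # one entry of zone map 301
    start_lba: int           # 311: zone starting LBA address
    block_set_id: int        # 313: identifies the zone's block set
    cursor: int              # 315: current starting LBA for the next write
    state: str               # 317: "empty", "full", "open", "closed", ...

@dataclass
class L2PEntry:              # one entry of logical-to-physical block map 303
    die_id: int              # 333: integrated circuit die
    block_id: int            # 335: block within that die
    page_map_entry_id: int   # 337: index into page map 305

@dataclass
class PageMapEntry:          # one entry of page map 305
    page_id: int             # 351: e.g., wordline + sub-block number
    mode: str                # 353: "SLC", "MLC", "TLC" or "QLC"

@dataclass
class BlockSetDieEntry:      # per-die portion of a block set table 307 entry
    die_id: int              # 373
    block_id: int            # 375: block for the zone's subsequent data
    page_map_entry_id: int   # 377: next usable page in that block

def resolve(lba, l2p, page_map):
    """Translate an LBA to (die, block, page, mode) via the two tables."""
    e = l2p[lba]
    p = page_map[e.page_map_entry_id]
    return e.die_id, e.block_id, p.page_id, p.mode

page_map = [PageMapEntry(page_id=241, mode="SLC")]
l2p = {331: L2PEntry(die_id=205, block_id=231, page_map_entry_id=0)}
assert resolve(331, l2p, page_map) == (205, 231, 241, "SLC")
```

The indirection through the page map entry mirrors the description: the physical address stored for an LBA names a die and a block, and the page map supplies the page identity and its programming mode.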
Within a group, data can be written into the logical space sequentially, randomly, or pseudo-sequentially. Preferably, the data in the group is written into an erase block set, wherein the memory cells in the erase block set store data of the stream, but not data from other streams. The erase block set can be erased to remove the data of the stream without erasing the data of other streams. For example, each of the write streams is permitted to write sequentially at LBA addresses in a zone (e.g., 211) in a namespace (e.g., 201) allocated on media 203 of memory subsystem 110, but is prohibited from writing data out of sequence in the LBA address space. Dynamic data placer 113 of memory subsystem 110 identifies multiple media units (e.g., 109A to 109N) in the memory subsystem that are available for writing data concurrently. Dynamic data placer 113 selects first commands from the multiple streams for concurrent execution in the multiple media units that are available for writing data. In response to the first commands being selected for concurrent execution in the multiple media units, dynamic data placer 113 dynamically generates and stores a portion of media layout 130 that maps the logical addresses identified by the first commands in the logical address space to physical addresses of memory units in the multiple media units. Memory subsystem 110 then executes the first commands concurrently by storing data into the memory units according to the physical addresses. For example, at the time of scheduling the first commands for execution, second commands may be executing in a subset of the memory units of the media of memory subsystem 110. Therefore, the subset of memory units used to execute the second commands is not available for the first commands.
After the first commands are scheduled and the portion of the media layout for the logical addresses used in the first commands is determined, the first commands may be executed concurrently in the multiple media units and/or concurrently with the execution of the second commands in the remaining media units of memory subsystem 110. For example, after identifying multiple memory units (e.g., integrated circuit dies) available for executing next commands, dynamic data placer 113 may identify, from the block set table 307, the physical addresses that can be used to store the data of the next commands. The physical addresses can be used to update the corresponding entries in the logical-to-physical block map 303 for the LBA addresses used in the next commands. For example, when an integrated circuit die (e.g., 205) is free to write data, dynamic data placer 113 may determine a command of a zone whose data can be written/programmed into the memory cells in the integrated circuit die (e.g., 205). From the block set table 307, dynamic data placer 113 locates the entry for the zone (e.g., 211), locates the block identifier 375 and the page map entry identifier 377 associated with the identifier 373 of the integrated circuit die (e.g., 205), and uses the die identifier 373, block identifier 375, and page map entry identifier 377 to update the corresponding fields of the entry in the logical-to-physical block map 303 for the LBA address 331 used in the command of the zone (e.g., 211). Thus, the command of the zone (e.g., 211) can be executed for LBA address 331 without a media access conflict. FIG. 5 shows an example of dynamic media layout determination. In the example of FIG. 5, two concurrent write streams 420 and 430 are shown. Stream 420 has items 421, 423, 425 . . . 429 to be written into memory cells of integrated circuit dies 441, 443, 445 . . . Stream 430 has items 431, 433, 435 . . . 439 to be written into memory cells of integrated circuit dies 441, 443, 445 . . .
If item 421 of stream 420 and item 431 of stream 430 were allocated to be written into the same die (e.g., 441), a conflict would occur, because the die (e.g., 441) cannot concurrently write item 421 of stream 420 and item 431 of stream 430. Thus, a dynamic data placer (e.g., 113 in FIGS. 1, 2, or 3) assigns items 421 and 431 of the concurrent streams 420 and 430 to pages 451 and 454 in different dies 441 and 443, as shown in FIG. 5. Similarly, items 423 and 433 of the concurrent streams 420 and 430 are allocated to pages 453 and 452 in different dies 443 and 441. For example, when item 425 of stream 420 is allocated to page 455 in die 445, the concurrent item 435 is allocated to be written into a page in another die; and page 457 in die 445 may be allocated to item 439 of stream 430, which is not written/programmed concurrently with item 425 of stream 420. Therefore, conflicts are avoided. The dynamic media layout changes the order in which items are written relative to the order of the dies 441, 443, 445 . . . For example, items 421 to 429 of stream 420 are written into dies 441, 443, 445 . . . in one order, while items 431 to 439 of stream 430 are written into dies 441, 443, 445 . . . in a different order, such that streams 420 and 430 do not access the same die at the same time. In FIG. 5, the data of the different streams 420 and 430 are tagged to be written into different erase block sets.
For example, page 451 in die 441, storing the data of item 421 of stream 420, and page 452 in die 441, storing the data of item 433 of stream 430, are in separate erase block sets, such that page 451 can be erased without erasing page 452, which stores the data of stream 430, and page 452 can be erased without erasing page 451, which stores the data of stream 420. In at least some embodiments disclosed herein, dynamic data placer 113 may place data provided by host system 120 across multiple integrated circuit dies (e.g., 205 to 207) and planes of memory cells (e.g., 221 to 223), programmed in multiple passes, for storage in memory subsystem 110. The flexibility of programming data in multiple passes across multiple integrated circuit dies (e.g., 205 to 207) and planes (e.g., 221 to 223) allows dynamic data placer 113 to improve the match between the storage capacity dynamically allocated for the next atomic write operation and the size of the data to be stored in the allocated storage capacity. The improved match may reduce or eliminate the need for zero padding in data programming operations, reduce the time data is buffered in the memory subsystem, reduce wear amplification and storage amplification, and improve memory performance. For example, memory subsystem 110 may have NAND flash memory. An atomic write/program operation programs a page (e.g., 241) of memory cells together to store data. If the size of the data to be programmed/written into the page is less than the size of the page, zeros (or other values) may be padded/added to the data so that the entire page (e.g., 241) is programmed together. However, the padded zeros (or other values) reduce the utilization of the storage capacity of the page (e.g., 241) and may increase wear amplification and storage amplification.
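The per-stream erase block sets described above can be modeled as a toy example: each stream's data goes into its own set of blocks, so erasing one stream's set cannot remove another stream's data. The class and names are illustrative, not from the patent.

```python
# Toy model of per-stream erase block sets. Erase is atomic over the whole
# set, so keeping each stream in its own set lets one stream's data be
# removed without touching any other stream's data.

class EraseBlockSet:
    def __init__(self):
        self.pages = {}                  # page_id -> data

    def program(self, page_id, data):
        self.pages[page_id] = data

    def erase(self):                     # atomic erase of the whole set
        self.pages.clear()

block_sets = {"stream_420": EraseBlockSet(), "stream_430": EraseBlockSet()}
block_sets["stream_420"].program(451, "item 421")
block_sets["stream_430"].program(452, "item 433")

block_sets["stream_420"].erase()         # removes only stream 420's data
assert block_sets["stream_420"].pages == {}
assert block_sets["stream_430"].pages == {452: "item 433"}
```

Note that both pages here could sit on the same physical die (as pages 451 and 452 both do in die 441); the isolation comes from the block-set membership, not from die placement.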
On the other hand, if memory subsystem 110 receives more data than can be programmed into a page (e.g., 241), a portion of the received data may be buffered in memory subsystem 110 for the next atomic write operation. However, buffering the excess data in memory while waiting for the next operation increases the time and the amount of data buffered in memory subsystem 110, and thus increases the capacity requirements of the power fail-over circuit that, during a power failure event, supplies power to the volatile buffer memory (e.g., 119) of memory subsystem 110 until the data in the buffer memory (e.g., 119) can be flushed into non-volatile memory. Atomic write operations can be implemented in NAND devices in various ways. Using single-pass programming techniques, an atomic write operation in a NAND device can program/store data into a single-plane page, a dual-plane page, a quad-plane page, or a multi-plane page. Using multi-pass programming techniques, an atomic write operation in a NAND device can program/store data into a page in single-level cell (SLC) mode, program/store data into a page in multi-level cell (MLC) mode, program/store data into a page in three-level cell (TLC) mode, or program/store data into a page in four-level cell (QLC) mode. Pages programmed in atomic write operations may have different sizes in different modes.
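A first-fit sketch of matching host data to mode-dependent page sizes follows. The mode-to-size table and the function are illustrative assumptions (the sizes match the examples given in the text); a real placer would also weigh die availability and wear.

```python
# Hypothetical greedy allocator: as dies become available, take the size of
# each die's next available page (which depends on its programming mode)
# until the host data is fully covered, and report any leftover padding.

PAGE_SIZE_KB = {"SLC": 64, "MLC": 128, "TLC": 128}   # assumed example sizes

def allocate(data_kb, available):
    """available: list of (die_id, mode) in the order dies become free."""
    plan, covered = [], 0
    for die_id, mode in available:
        if covered >= data_kb:
            break
        plan.append((die_id, mode, PAGE_SIZE_KB[mode]))
        covered += PAGE_SIZE_KB[mode]
    padding = covered - data_kb
    return plan, padding

# 192KB of host data over three dies doing first-pass SLC (3 x 64KB):
# the allocated capacity matches the data size exactly, so no padding.
plan, pad = allocate(192, [(205, "SLC"), (206, "SLC"), (207, "SLC")])
assert len(plan) == 3 and pad == 0
```

A nonzero `padding` value corresponds to the zero padding discussed above, which wastes page capacity and increases wear and storage amplification.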
For example, using a multi-pass programming approach, an SLC page may have a size of 64 kilobytes (KB), an MLC or TLC page may have a size of 128KB, and a QLC page may have a size of 64KB. When pages of data for different write streams using different programming modes are interleaved in a NAND device, the size of the next available page can vary among the blocks of NAND memory cells (e.g., 221 to 223) on the different integrated circuit dies (e.g., 205 to 207) of the NAND device. When a NAND device supports multi-pass programming techniques, a given amount of data can be programmed in different passes with different combinations of programming modes and locations of memory pages. For example, when memory subsystem 110 receives 192KB of data from the host system, the NAND device may be configured to program the data using three first passes of SLC programming on three single-plane pages in three integrated circuit dies respectively, with each of the integrated circuit dies performing an atomic operation of first-pass SLC programming of 64KB of data. Alternatively, the NAND device may be configured to program the data using a first pass of SLC programming on a single-plane page in one integrated circuit die and a second pass of TLC or MLC programming on another single-plane page in the same integrated circuit die or in another integrated circuit die. Using the various programming options, dynamic data placer 113 may dynamically determine data placement in integrated circuit dies 205 to 207 based on the availability of integrated circuit dies 205 to 207 to perform data programming operations and on the data programming modes (e.g., 353) of the next available pages (e.g., 241) in the blocks (e.g., 231) of the integrated circuit dies (e.g., 205) available to perform data programming operations. For example, when memory subsystem 110 receives one or more commands from the host system to store an amount of host data of a given size, dynamic data placer 113 queues the one or more commands (e.g., in local memory 119) and determines a portion of media layout 130 for the physical placement of the data in integrated circuit dies 205 to 207. When an integrated circuit die (e.g., 205) is available to perform a data programming operation, dynamic data placer 113 allocates a portion of the host data (to be retrieved from host system 120) for a data programming operation in the integrated circuit die (e.g., 205). The amount of data allocated to the integrated circuit die (e.g., 205) is based on the data programming mode (353) of the page (e.g., 241) in the available block (e.g., 231). The operation of allocating data to the next available integrated circuit die is repeated until all of the host data is allocated to a set of integrated circuit dies (e.g., 205 and 207), where the integrated circuit dies (e.g., 205 and 207) are each used to store a portion of the host data using one atomic data write operation. Memory capacity (e.g., pages) allocated from multiple integrated circuit dies (e.g., 205 and 207) can be combined for multi-pass programming. In response to completion of the physical storage allocation, memory subsystem 110 may allocate buffer space for transferring the host data, and transfer different portions of the data into the different integrated circuit dies (e.g., 205 and 207) according to the dynamically determined physical storage allocation, causing the integrated circuit dies (e.g., 205 and 207) to perform the respective data programming operations to store their portions of the data. FIG. 6 shows a block set 281 allocated across integrated circuit dies 205 to 207 for multi-pass programming of data. In FIG.
6, integrated circuit die A 205 has planes 221 to 223 and blocks (e.g., 231 to 233); and integrated circuit die B 207 has planes 261 to 263 and blocks (e.g., 271 to 273). Block set 281 is allocated for a stream. Data of the stream is stored in block set 281; and data of other streams is not stored in block set 281. Therefore, when block set 281 is erased, only the data of the stream is erased. All data of the stream can be erased by erasing block set 281. Block set 281 may be identified using an entry in the block set table 307 shown in FIG. 4. In general, block set 281 may be allocated over a subset of the integrated circuit dies (e.g., 205, 207 . . . ) in media 203. For each of its blocks (e.g., 271), the entry in the block set table 307 identifies the die (e.g., 207) using a die identifier (e.g., 373), identifies the block (e.g., 271) within the die (e.g., 207) using a block identifier (e.g., 375), and uses a page map entry identifier 377 to identify the next page in the block that can be used to store data. Page map entry identifier 377 identifies an entry in page map 305. The entry in page map 305 shows the page identifier 351 and the programming mode 353 of the page within the block (e.g., 271). Within block set 281, dynamic data placer 113 may allocate a page that can be used to program data from one die (e.g., 205), and repeat the allocation from another die (e.g., 207). Dynamic data placer 113 may allocate separate pages from different dies for multi-pass programming, selecting the dies for allocation so as to reduce or eliminate padding, until all of the host data to be transferred together from host system 120 through one communication with the memory subsystem is allocated. FIG. 7 shows a method of dynamic data placement for multi-pass programming of data across integrated circuit dies. The method of FIG.
7 may be performed by processing logic, which may include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method of FIG. 7 is performed, at least in part, by the dynamic data placer 113 of FIGS. 1, 2, or 3. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes may be modified. Accordingly, the illustrated embodiments should be understood as examples only, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. Additionally, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible. At block 401, memory subsystem 110 receives a command from host system 120 that identifies the size of data to be stored in memory subsystem 110. At block 403, the command is queued in memory subsystem 110, which has memory cells formed on a plurality of integrated circuit dies 205 to 207. At block 405, dynamic data placer 113 allocates pages of memory cells in multiple dies (e.g., 205 and 207) of the plurality of dies 205 to 207, based on determining that each of the multiple dies (e.g., 205 and 207) is available to perform a data programming operation for the command. At block 407, dynamic data placer 113 generates a portion of media layout 130 to map at least the logical address of the data identified in the command to the allocated pages. At block 409, after generating the portion of the media layout and/or after allocating the pages, memory subsystem 110 receives the data from the host system in response to the command. At block 411, memory subsystem 110 stores the data into the pages using a multi-pass programming technique, where an atomic multi-pass programming operation may use at least two pages in separate dies, or in separate planes, of the multiple integrated circuit dies (e.g., two pages in two planes of a single die) to program at least a portion of the data. For example, based on per-plane page maps and die availability, data received from the host system can be mapped in a flexible manner for programming across single, dual, or quad planes in a single die or in dual dies. A single-die mapping can fit the minimum size of a stream. For example, portions of the data may be programmed into the at least two pages in atomic operations. Each of the dies is instructed to perform a write operation; none of the dies is instructed to perform a repeated write operation for the command. The at least two pages may include a first page in a first integrated circuit die and a second page in a second integrated circuit die. The multi-pass programming operation may include a first programming pass for the first page and a second programming pass for the second page. The first pass may program in a first mode, and the second pass may program in a second mode. For example, the first mode and the second mode are different ones of: a single-level cell (SLC) mode, a multi-level cell (MLC) mode, a three-level cell (TLC) mode, and a four-level cell (QLC) mode. For example, the allocation of pages may be performed to minimize the mismatch between the storage capacity of the pages programmed using the multi-pass programming technique and the size of the data identified in the command. Optionally, the pages may be allocated from a set of blocks that are configured to be erased together. For example, dynamic data placer 113 may store page map 305 with entries that each identify a page in a block and a memory cell programming mode (e.g., 353) for the page. Dynamic data placer 113 may allocate the pages based on the memory cell programming modes (e.g., 353) identified in page map 305.
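The FIG. 7 flow (blocks 401 to 411) can be condensed into a small sketch. The function and variable names are ours, not the patent's, and single-mode 64KB pages are an assumed simplification; the real placer would consult the page map for mode-dependent sizes.

```python
# Condensed, hypothetical sketch of the FIG. 7 method, with the allocation
# step reduced to one fixed page size per die for clarity.

def handle_write(command_size_kb, free_dies, page_size_kb=64):
    # 401/403: receive and queue a command identifying the data size
    # 405: allocate pages on dies that are available for a programming op
    pages_needed = -(-command_size_kb // page_size_kb)   # ceiling division
    allocated = free_dies[:pages_needed]
    if len(allocated) < pages_needed:
        return None                       # not enough dies available yet
    # 407: generate the media-layout portion mapping logical addresses
    #      of the command's data to the allocated pages
    layout = {lba: die for lba, die in enumerate(allocated)}
    # 409/411: data would now be received from the host and programmed
    #          into the allocated pages in multiple passes
    return layout

layout = handle_write(128, [205, 207, 209])
assert layout == {0: 205, 1: 207}         # two 64KB pages on two dies
```

Deferring the layout generation (block 407) until after the availability check (block 405) is the point of the method: the mapping is only created for dies known to be free, so the subsequent programming step cannot collide with commands already running elsewhere.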
The programming mode (e.g., 353) indicates the size of the available page; and dynamic data placer 113 allocates the pages so that the allocated storage capacity matches the size of the data to be received from host system 120. In some implementations, the communication channel between the processing device 118 and the memory subsystem 110 includes a computer network, such as a local area network, a wireless local area network, a wireless personal area network, a cellular communication network, or a broadband high-speed always-connected wireless communication connection (e.g., a current or next generation mobile network link); and the processing device 118 and the memory subsystem 110 may be configured to communicate with each other using data storage management and usage commands similar to those in the NVMe protocol. Memory subsystem 110 may generally have non-volatile storage media. Examples of non-volatile storage media include memory cells formed in integrated circuits and magnetic material coated on rigid disks. Non-volatile storage media can maintain the data/information stored therein without consuming power. Memory cells may be implemented using various memory/storage technologies, such as NAND logic gates, NOR logic gates, phase change memory (PCM), magnetic random access memory (MRAM), resistive random access memory, cross-point storage, and memory devices (e.g., 3D XPoint memory). A cross-point memory device uses transistor-less memory elements, each of which has a memory cell and a selector stacked together as a column. Memory element columns are connected via two layers of wires, with one layer above the columns of memory elements and the other layer below the columns of memory elements. Each memory element can be individually selected at a cross point of one wire on each of the two layers.
Cross-point memory devices are fast and non-volatile, and can be used as a general-purpose memory pool for processing and storage. A controller (e.g., 115) of the memory subsystem (e.g., 110) may execute firmware to perform operations in response to communications from the processing device 118. In general, firmware is a type of computer program that provides control, monitoring, and data manipulation of engineered computing devices. Some embodiments involving the operation of the controller 115 may be implemented using computer instructions executed by the controller 115, such as the firmware of the controller 115. In some instances, hardware circuitry may be used to implement at least some of the functions. The firmware may be initially stored in a non-volatile storage medium or another non-volatile device, and loaded into volatile DRAM and/or an in-processor cache for execution by the controller 115. A non-transitory computer storage medium may be used to store instructions for the firmware of a memory subsystem (e.g., 110). When the instructions are executed by the controller 115 and/or the processing device 117, the instructions cause the controller 115 and/or the processing device 117 to perform the methods discussed above. FIG. 8 illustrates an example machine of a computer system 500 within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein. In some embodiments, computer system 500 may correspond to a host system (e.g., host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., memory subsystem 110 of FIG. 1), or may be used to perform the operations of dynamic data placer 113 (e.g., to execute instructions to perform operations corresponding to the dynamic data placer 113 described with reference to FIGS. 1-7). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet.
The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or decentralized) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is described, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. Example computer system 500 includes a processing device 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), static random access memory (SRAM), etc.), and a data storage system 518, which communicate with each other via a bus 530 (which may include multiple buses). Processing device 502 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processing device 502 may also be one or more special-purpose processing devices, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like.
Processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein. Computer system 500 may further include a network interface device 508 to communicate over a network 520. Data storage system 518 may include a machine-readable storage medium 524 (also known as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methodologies or functions described herein. The instructions 526 may also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500, the main memory 504 and the processing device 502 also constituting machine-readable storage media. The machine-readable storage medium 524, data storage system 518, and/or main memory 504 may correspond to memory subsystem 110 of FIG. 1. In one embodiment, the instructions 526 include instructions to implement functionality corresponding to dynamic data placer 113 (e.g., the dynamic data placer 113 described with reference to FIGS. 1-7). While the machine-readable storage medium 524 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" shall be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory.
These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure may refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium such as a read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices, etc. In this description, various functions and operations are described as being performed by or caused by computer instructions to simplify description.
However, those skilled in the art will recognize that what is meant by such expressions is that the functions result from execution of the computer instructions by one or more controllers or processors (e.g., microprocessors). Alternatively, or in combination, the functions and operations can be implemented using special-purpose circuitry, with or without software instructions, such as using an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
An integrated circuit contains a central processing unit ("CPU"), a graphic control hub ("GCH"), a memory control hub ("MCH"), and a phase lock loop ("PLL"). The GCH, MCH, and PLL are coupled to the CPU. The MCH controls memory transactions. The PLL is configured to allow the CPU to operate at more than one power consumption state. |
We claim: 1. An integrated circuit comprising:a central processing unit ("CPU"); a graphic control hub ("GCH") coupled to said CPU; a memory control hub ("MCH") coupled to said CPU and configured to control memory transactions; and a phase lock loop ("PLL") coupled to said CPU and configured to allow said CPU to operate at more than one power consumption state. 2. The integrated circuit of claim 1, wherein said CPU is configured to operate at more than one clock frequency for conserving power consumption.3. The integrated circuit of claim 1, wherein said PLL provides more than one clock frequency.4. The integrated circuit of claim 1, further comprising:a memory interface coupled to said MCH and configured to communicate with various external memory devices; and an input and output ("I/O") interface coupled to said MCH and configured to control I/O traffic. 5. The integrated circuit of claim 1, wherein said integrated circuit is further coupled to an I/O controller and a clock device.6. The integrated circuit of claim 1, wherein said CPU is capable of operating at more than one voltage level in response to clock signals from said PLL.7. The integrated circuit of claim 1, wherein said MCH is capable of operating at more than one frequency mode in response to clock signals from said PLL.8. The integrated circuit of claim 1, wherein said MCH is capable of operating at more than one voltage level in response to clock signals from said PLL.9. The integrated circuit of claim 1, wherein said GCH is capable of operating at more than one frequency mode in response to clock signals from said PLL.10. The integrated circuit of claim 1, wherein said GCH is capable of operating at more than one voltage level in response to clock signals from said PLL.11. The integrated circuit of claim 1, wherein said MCH controls Rambus(TM) Dynamic Random Access Memory ("RDRAM").12. 
A method comprising:suspending a phase lock loop ("PLL") that is embedded in an integrated circuit ("IC") from providing a first clock frequency; suspending a central processor unit ("CPU") that is embedded in said IC from execution in response to said suspension of PLL; suspending a graphic control hub ("GCH") that is embedded in said IC from execution in response to said suspension of PLL; resuming said PLL for providing a second clock frequency; and resuming said CPU in response to said second clock frequency. 13. The method of claim 12, further comprising:suspending a memory control hub ("MCH") that is embedded in said IC from execution in response to said suspension of PLL; and resuming said MCH in response to said second clock frequency. 14. The method of claim 12, wherein said suspending PLL further comprises entering a suspension state in response to results of temperature and current calibration.15. A method comprising:suspending a phase lock loop ("PLL") that is embedded in an integrated circuit ("IC") from providing a first voltage level; suspending a central processor unit ("CPU") that is embedded in said IC from execution in response to said suspension of PLL; resuming said PLL for providing a second voltage level; and resuming said CPU in response to said second voltage level. 16. The method of claim 15, further comprising:suspending a memory control hub ("MCH") that is embedded in said IC from execution in response to said suspension of PLL; and resuming said MCH in response to said second voltage level. 17. The method of claim 15, wherein said suspending PLL further comprises entering a suspension state in response to results of temperature and current calibration.18. The method of claim 15, further comprising:suspending a graphic control hub ("GCH") that is embedded in said IC from execution in response to said suspension of PLL; and resuming said GCH in response to said second voltage level. 19. 
A device comprising:a central processing unit ("CPU") deposited on an integrated circuit ("IC"); a graphic control hub ("GCH") deposited on said IC and coupled to said CPU for image processing; a memory control hub ("MCH") deposited on said IC and coupled to said CPU for controlling data transactions; and a phase lock loop ("PLL") deposited on said IC and coupled to said CPU, said PLL configured to switch said CPU to operate at one of several clock frequencies. 20. The device of claim 19, wherein said PLL is further configured to switch said CPU to operate at one of several voltage levels for conserving power consumption.21. The device of claim 19, wherein said PLL is further configured to switch said GCH to operate at one of several voltage levels for conserving power consumption.22. The device of claim 19, wherein said PLL is further configured to switch said GCH to operate at one of several clock frequencies for conserving power consumption.23. The device of claim 19, wherein said PLL is further configured to switch said MCH to operate at one of several voltage levels for conserving power consumption.24. The device of claim 19, wherein said PLL is further configured to switch said MCH to operate at one of several clock frequencies for conserving power consumption. |
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates generally to the field of computer systems. More specifically, the present invention relates to the conservation of power consumption in a computer system.

2. Description of the Related Art

As more systems become portable, increased reliance will necessarily be placed on portable power supplies, particularly batteries. Reducing power consumption by processors becomes increasingly important as the industry moves to maximize battery life. Even in stationary systems, excessive power consumption translates into higher operational costs. Additionally, increasingly stringent governmental requirements and environmental standards militate toward reducing the power consumed in a computer system where possible. A typical high performance system consumes a large amount of power because the system generally uses high-speed microprocessors and co-processors. System reliability and battery life are problematic for a system that consumes excessive power. For example, a typical high frequency microprocessor may increase in temperature rapidly when the microprocessor consumes full power and operates at peak performance. However, many applications, such as word processing, do not require the microprocessor to operate at full power, because a typical high performance microprocessor can provide more computing power than a typical word processor requires.
Accordingly, it is not necessary to keep a high performance system operating at full power at all times, because running at full power not only reduces the battery life, but also affects overall system reliability. Therefore, it is wasteful to keep a system running at full power at all times.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the invention, which, however, should not be taken to limit the invention to the specific embodiments, but are for explanation and understanding only.

FIG. 1 illustrates one embodiment of a single PLL based CPU system.

FIG. 2 is a state diagram illustrating one embodiment of power consumption states.

FIG. 3 is a state diagram illustrating one embodiment of power consumption states having four states.

FIG. 4 is a block diagram illustrating a system that is able to enter different power consumption states.

FIG. 5 is a block diagram illustrating one embodiment of a system clock.

FIG. 6 is a timing diagram illustrating a process for switching between power consumption states.

FIG. 7 is a flowchart illustrating a process of switching power consumption states.

FIG. 8 is a flowchart illustrating a process of entering a low power consumption state from a high power consumption state.

DETAILED DESCRIPTION

A method and an apparatus for conserving system power consumption are described. In the following description, numerous specific details are set forth for purposes of explanation, in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention can be practiced without these specific details.
In other instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the present invention.Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Principally for reasons of common usage, it has proven convenient at times to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. 
Unless specifically stated otherwise in the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system or similar electronic computing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. 
It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.

OVERVIEW

A mechanism for conserving system power consumption using multiple power consumption states is disclosed. In one embodiment, the system dynamically transitions between a high power consumption state and a low power consumption state, which is also known as a Geyserville transition, according to the computing power required by the applications. For example, the central processing unit ("CPU") transitions from a high power consumption state to a low power consumption state when the CPU only needs to support a simple application, such as, for example, a word processor. In an alternative embodiment, a single phase lock loop ("PLL") is used to generate various clock signals, which are used by a CPU, a graphic control hub ("GCH"), and a memory control hub ("MCH"). In this embodiment, the PLL, CPU, GCH, and MCH are integrated in an integrated circuit ("IC"). In another embodiment, the CPU is configured to operate at more than one clock frequency. In an alternative embodiment, the CPU can operate at more than one voltage level. FIG. 1 illustrates one embodiment of a single PLL based CPU system 100. Computer system 100 includes a processor 112, a clock 130, a memory 104, a memory controller 150, a graphic controller 152, and an input and output ("I/O") controller 140. Graphic controller 152 is coupled to a display 121. I/O controller 140 is coupled to a keyboard 122, a hard copy device 124, and a cursor control device 123. Processor 112 includes, but is not limited to, a microprocessor such as an Intel Architecture Microprocessor, manufactured by Intel Corporation of Santa Clara, Calif., the corporate assignee of the present invention.
Processor 112 may also be another processor such as the PowerPC(TM), Alpha(TM), etc.In one embodiment, memory controller 150 controls memory 104 and memory 104 may be a random access memory (RAM) or other dynamic storage device for storing information and instructions. Memory 104 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 112. Computer system 100 may also comprise a read only memory (ROM) and/or other static storage device for storing static information and instructions for processor 112.Graphic controller 152 controls display 121, such as cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. In one embodiment, I/O controller 140 is coupled to processor 112 via memory controller 150. I/O controller 140 controls input and output devices such as keyboard 122, cursor control device 123, and hard copy device 124. Cursor control 123 may be a mouse, trackball, trackpad, stylus, or cursor direction keys for communicating direction information and command selections to processor 112, and for controlling cursor movement on display 121.Hard copy device 124 may be used for printing instructions, data, or other information on a medium such as paper, film, or similar types of media. Furthermore, a sound recording and playback device such as a speaker and/or microphone may optionally be coupled to I/O controller 140 for audio interfacing with computer system 100. Clock 130 is used to provide various clock signals to different components, such as processor 112, memory controller 150, etc.In one embodiment, processor 112, graphic controller 152, and memory controller 150 may be integrated onto a single chip. In another embodiment, processor 112, graphic controller 152, I/O controller 140, and memory controller 150 may be integrated onto a single chip. 
Note that any or all of the components of system 100 and associated hardware may be used in the present invention. However, it can be appreciated that other configurations of the computer system may include some or all of the devices. FIG. 2 is a state diagram 200 illustrating one embodiment of power consumption states. State diagram 200 contains a high power state 202 and a low power state 204. High power state 202 indicates high clock frequency and high operating voltage, while low power state 204 indicates low clock frequency and low operating voltage. For example, high power state 202 may operate at 700 megahertz (MHz) with an operating voltage of 1.8 volts (v), while low power state 204 operates at 400 MHz with an operating voltage of 1.3v. To conserve power consumption, a system or a CPU may, in one embodiment, transition dynamically between high power state 202 and low power state 204 according to the computing power required by the applications. In another embodiment, a system dynamically switches between high power state 202 and low power state 204 without user intervention. For example, multiple transitions between high power state 202 and low power state 204 may take place between keystrokes. During high power state 202, in one embodiment, the CPU consumes full power and is able to perform full functions. However, during low power state 204, in one embodiment, the CPU consumes lower power and is only able to perform some functions. Note that high power state 202 may consume double or triple the amount of power consumed in low power state 204. Power consumption can be calculated in terms of voltage and frequency. The mathematical equation for power consumption is as follows:

P ∝ C·V²·f

where P represents power, C represents a constant, V represents voltage, and f represents frequency.
For example, if high power state 202 operates at 700 MHz with 1.8v, the power consumption PH for the high power state would be

PH ∝ C·V²·f = C × (1.8)² × 700 = 2268C

If low power state 204 operates at 400 MHz with 1.3v, the power consumption PL for the low power state would be

PL ∝ C·V²·f = C × (1.3)² × 400 = 676C

Thus, PH consumes more than three times the power that PL consumes. FIG. 3 is a state diagram 300 illustrating one embodiment of power consumption states having four states. State diagram 300 contains C0 302, C1 304, C2 306, and C3 308 states. Additional states may be added, but they are not important to understanding the present invention. In one embodiment, C0 302 state is an active power consumption state where a CPU performs a full range of functions and consumes full power. During C0 302 state, power management for conserving power is not employed. C1 304 state is, in one embodiment, an auto-halt power consumption state where advanced power management ("APM") for conserving power may be performed. A CPU running in C1 304 state commonly consumes less power than the CPU running in C0 302 state. For example, during C1 304 state, instructions are commonly not executed and the instruction cache is commonly empty. In one embodiment, C2 306 state is a stop-grant power consumption state where less power is consumed than in either C0 302 state or C1 304 state. For example, during C2 306 state, the clock signals for the CPU may be stopped. In another embodiment, the CPU is partially shut down. For example, the main portion of the CPU is shut down while the snoop portion of the CPU remains active for monitoring the front side bus. To enter C2 306 state, the CPU can be in either C1 304 state or C0 302 state. Likewise, C2 306 state can move directly to C0 302 state without entering C1 304 state first. In one embodiment, C3 308 state is known as a deep sleep state where some components of a system, including the CPU, are shut down.
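The power computation above can be checked with a few lines; the constant C is factored out, so the function below returns power up to that constant:

```python
def relative_power(voltage_v, freq_mhz):
    """Power consumption up to the constant C: P is proportional to C * V^2 * f."""
    return voltage_v ** 2 * freq_mhz

p_high = relative_power(1.8, 700)  # high power state 202: about 2268C
p_low = relative_power(1.3, 400)   # low power state 204: about 676C
ratio = p_high / p_low             # PH is more than three times PL
```

Evaluating the two states reproduces the 2268C and 676C figures and confirms the greater-than-threefold difference claimed in the description.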
In this embodiment, the CPU is completely shut down so that the clock frequency can be changed in C3 308 state. To enter C3 308 state, the CPU is, in one embodiment, configured to enter C2 306 state before entering C3 308 state. In an alternative embodiment, the CPU can switch directly from C0 302 state to C3 308 state. FIG. 4 is a block diagram 400 illustrating a system that is able to enter different power consumption states. Block diagram 400 includes a clock device 420, a processing unit ("PU") 401, memory devices 422, and an input and output control hub ("ICH") 416. PU 401 further includes a CPU 402, a PLL 404, a graphic control hub ("GCH") 406, a memory control hub ("MCH") 408, a memory interface ("MI") 410, and an input/output ("I/O") interface 412. Other blocks or devices may be added to block diagram 400, but they are not pertinent to understanding the present invention. In one embodiment, clock device 420 provides clock signals to various devices including PU 401. In another embodiment, clock device 420 provides multiple clock frequencies to facilitate multiple power consumption states. For example, clock device 420 provides a 700 MHz clock signal to PU 401 during the high power consumption state, while clock device 420 provides a 400 MHz clock signal to PU 401 during the low power consumption state. In yet another embodiment, clock device 420 supplies clock signals to memory 422. In one embodiment, memory 422 contains multiple high-performance memory banks. In one embodiment, high-performance DRAMs (dynamic random access memory), such as, for example, Rambus(TM) DRAM ("RDRAM"), may be used for memory 422. In an alternative embodiment, high-speed SRAM (static random access memory) may be used for memory 422. In one embodiment, ICH 416 controls data transactions between PU 401 and external devices, such as, for example, the main memory, the system bus, and various input devices. In this embodiment, ICH 416 does not transition between power consumption states.
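The transition rules for the four states — C2 reachable from C0 or C1, C2 returning directly to C0, and C3 normally entered only from C2 (directly from C0 in the alternative embodiment) — can be sketched as a small state machine. The exit edges from C3 below are an assumption, since the text does not enumerate them:

```python
# Sketch of the C0-C3 transition rules described above. The C3 exit edges
# are assumptions; the description does not spell out how C3 is left.
ALLOWED_TRANSITIONS = {
    "C0": {"C1", "C2", "C3"},  # C0 -> C3 only in the alternative embodiment
    "C1": {"C0", "C2"},
    "C2": {"C0", "C3"},        # C2 may return directly to C0
    "C3": {"C2", "C0"},        # assumed resume paths
}

class PowerStateMachine:
    def __init__(self):
        self.state = "C0"  # start in the active state

    def request(self, target):
        """Move to `target` if the edge is allowed; report success."""
        if target in ALLOWED_TRANSITIONS[self.state]:
            self.state = target
            return True
        return False

fsm = PowerStateMachine()
entered_c3 = fsm.request("C2") and fsm.request("C3")  # C0 -> C2 -> C3
```

The table-driven form makes the embodiment choices explicit: enabling or disabling the direct C0-to-C3 edge is a one-entry change.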
I/O interface 412 is used to communicate between PU 401 and ICH 416. In one embodiment, I/O interface 412 contains its own PLL device so that when PLL 404 stops providing clock signals I/O interface 412 can still be alive for monitoring the traffic between PU 401 and ICH 416.PLL 404 receives clock signals from clock device 420 and redistributes clock signals to various components including CPU 402, GCH 406, and MCH 408. During C3 state, in one embodiment the clock signal from PLL 404 to CPU 402 may be stopped for conserving power. When the clock signal stops, CPU 402 stops execution, which normally conserves power consumption. Once CPU 402 stops execution, in one embodiment the execution can be resumed by new clock signals. In one embodiment, the new clock signal from PLL 404 may have a different clock frequency, such as a slower clock frequency, for conserving power consumption. In another embodiment, at C3 state, CPU 402 may be powered down by PLL 404 and subsequently powered up with a different voltage level.In one embodiment, GCH 406 receives clock signals from PLL 404 and controls graphic implementations. In one embodiment, MCH 408 also receives clock signals from PLL 404 and it controls memory access via MI 410. In one embodiment, MI 410 is tailored to specific memories used in memory 422. For example, if RDRAM is used in memory 422, MI 410 may be a Rambus(TM) ASIC cell ("RAC"), which is used to communicate between PU 401 and RDRAM. PU 401 is, in one embodiment, integrated into a single integrated circuit ("IC") for conserving power consumption.In one operation, PLL 404 is, in one embodiment, powered down during C3 state. Once PLL 404 is powered down, PLL 404 suspends clock distribution in PU 401. After the clock signals from PLL 404 are suspended, various components, such as, for example, CPU 402, GCH 406 and MCH 408, are shut down. 
Once CPU 402 is suspended, CPU 402 can be subsequently resumed with a lower clock frequency, which may require less power to operate.

FIG. 5 is a block diagram 500 illustrating one embodiment of a clock configuration. In one embodiment, block diagram 500 contains a clock generator 504, a Direct Rambus(TM) Clock Generator ("DRCG") 508, RDRAM 530, and a clock distributor 520. DRCG 508 further contains a PLL 502 and a phase aligner 510. Clock distributor 520 also contains a PLL 522 and a phase aligner 512. Other blocks may be added to block diagram 500, but they are not important to understanding the invention.

In one embodiment, clock generator 504 sends clock signals to PLL 502 and PLL 522 via clock buses 544 and 546, respectively. In one embodiment, PLL 502 is used to distribute clock signals within DRCG 508, and DRCG 508 further distributes clock signals to RDRAM 530. In order to regulate the clock signals between DRCG 508 and clock distributor 520, phase aligners 510 and 512 are used to synchronize the clock signals.

In one operation, during the C3 state, the reference clock from clock generator 504 to DRCG 508, which is carried by clock bus 544, is active, in one embodiment. However, phase aligner 512 is suspended so that clock distributor 520 stops distributing clock signals. In one embodiment, when the clock generator suspends clock distribution to RDRAM 530, RDRAM 530 still receives clock signals from DRCG 508, which are used for memory refresh. After the frequency and voltage transitions, phase aligners 510 and 512 are resumed and a new power consumption state may be entered.

FIG. 6 is a timing diagram 600 illustrating a process for switching between power consumption states, such as a Geyserville transition.
A Geyserville transition is a power consumption transition that switches from a high power consumption state, or C0 state, to a low power consumption state, or C3 state.

In one embodiment, the CPU writes a Geyserville transition request, also known as a Geyserville write ("GWt"), to the Geyserville control register to initiate a Geyserville transition. When the CPU issues GWt 640 on the CPU front side bus ("FSB") 601 at clock cycle 670, the FSB snoop is locked. GWt 640 is then forwarded to hub interface 604, on which MCH receives GWt 624. Next, GWt 624 is further forwarded to the ICH, where a Geyserville transition sequence is introduced. When the stop-CPU-clock signal is activated on CPU FSB 601 at clock cycle 671, a goto-Geyserville ("Go_Gy") signal 626 is issued on hub interface 604.

Once Go_Gy signal 626 is active, the transition from C0 state 660 to C2 state 662 takes place. At clock cycle 672, a maintenance procedure 607 is performed. In one embodiment, maintenance procedure 607 performs temperature and current calibration, memory refresh, and calibration broadcast. After execution of maintenance procedure 607, a command of acknowledged Geyserville ("Ack_Gy") 628 is initiated on hub interface 604.

After Ack_Gy 628 is issued on hub interface 604, the MCH sends permission to perform the Geyserville transition. At clock cycle 673, the output of the phase detector or aligner is stopped. In one embodiment, the DRCG feedback path is kept alive. Next, the frequency and voltage transitions take place before the end of clock cycle 673. After the voltage transition, which may take longer than the frequency transition, the bus ratio is changed and the FSB snoop is then resumed. At clock cycle 674, the devices transition into the nap state from the power-down state.

FIG. 7 is a flowchart 700 illustrating a process of switching power consumption levels. A process begins at the start block and proceeds to block 702. At block 702, the process suspends the PLL from providing a first clock frequency.
After block 702, the process proceeds to block 704. At block 704, the process suspends the CPU. After block 704, the process proceeds to block 706, where the process suspends the GCH. After block 706, the process proceeds to block 708. At block 708, the process resumes the PLL with a second clock frequency. After block 708, the process proceeds to block 710, where the process resumes the CPU in response to the second clock frequency. After block 710, the process ends at the end block.

FIG. 8 is a flowchart 800 illustrating a process of entering a low power consumption level from a high power consumption level. A process begins at the start block and proceeds to block 802. At block 802, the process initiates a transition and locks FSB snoops. After block 802, the process moves to block 804, where the process starts the transition sequence. After block 804, the process proceeds to block 806. At block 806, the process performs temperature and current calibrations, memory refresh, and calibration broadcast. After block 806, the process proceeds to block 808, where the process exits the nap state or C2 state. After block 808, the process proceeds to block 812. At block 812, the process suspends the output of the phase aligner. After block 812, the process proceeds to block 814, where the process starts the frequency and voltage transitions. After block 814, the process proceeds to block 816. At block 816, the process waits for the transitions to complete. After block 816, the process proceeds to block 818, where the process enables FSB snoops. After block 818, the process proceeds to block 820, where the process enters the nap state or C2 state. After block 820, the process ends.

In the foregoing detailed description, the method and apparatus of the present invention have been described with reference to specific exemplary embodiments thereof.
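The block sequence of flowchart 700 above can be sketched as an ordered trace. This is an illustrative Python sketch; the function name and log format are hypothetical, and each step simply records the corresponding flowchart block.

```python
# Trace of flowchart 700: suspend the PLL and dependent units, then
# resume the PLL at a second frequency and restart the CPU from it.

def switch_clock_frequency(log):
    log.append("suspend PLL (first clock frequency)")   # block 702
    log.append("suspend CPU")                           # block 704
    log.append("suspend GCH")                           # block 706
    log.append("resume PLL (second clock frequency)")   # block 708
    log.append("resume CPU (second clock frequency)")   # block 710
    return log

trace = switch_clock_frequency([])
```

The ordering matters: the CPU is only resumed after the PLL is already running at the second frequency, matching blocks 708 and 710.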
However, it will be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the present invention. The present specification and figures are accordingly to be regarded as illustrative rather than restrictive.

Thus, a method and a system for conserving power consumption have been described.
A processor provides a register for storing an address space number (ASN). Operating system software may assign different ASNs to different processes. The processor may include a TLB to cache translations, and the TLB may record the ASN from the ASN register in a TLB entry being loaded. Thus, translations may be associated with processes through the ASNs. Generally, a TLB hit will be detected in an entry if the virtual address to be translated matches the virtual address tag and the ASN recorded in the entry matches the ASN stored in the ASN register. Additionally, the processor may use an indication from the translation table entries to indicate whether or not a translation is global. If a translation is global, then the ASN comparison is not included in detecting a hit in the TLB. Thus, translations which are used by more than one process may not occupy multiple TLB entries. Instead, a hit may be detected on the TLB entry storing the global translation even though the recorded ASN may not match the current ASN. In one embodiment, if ASNs are disabled, the TLB may be flushed on context switches. However, the indication from the translation table entries used to indicate that the translation is global may be used (when ASNs are disabled) by the TLB to selectively invalidate non-global translations on a context switch while not invalidating global translations.
What is claimed is:

1. A processor comprising: a first register configured to store a first value indicative of a first process being executed by said processor; a second register coupled to a translation lookaside buffer (TLB), wherein said second register is configured to store an enable indication indicative of whether or not said first value in said first register is enabled for use; and said TLB coupled to said first register, said TLB including at least a first entry, wherein said first entry is configured to store at least: (i) a portion of a first virtual address; (ii) a second value indicative of a second process being executed by said processor at a time that said first entry is loaded with said first virtual address; and (iii) a first indication from a translation table entry corresponding to said first virtual address; wherein said TLB is configured to selectively include, dependent upon said first indication and said enable indication being in an enabled state, a comparison of said first value to said second value in determining if a second virtual address hits in said first entry; and wherein said TLB is coupled to receive a signal indicating that a base address of a translation table is being updated, and wherein said TLB is configured to selectively invalidate said first entry dependent upon said first indication, said enable indication being in a disabled state, and said signal.

2. The processor as recited in claim 1 wherein said TLB is configured to include said comparison if said enable indication is in said enabled state and said first indication is in a first state.

3. The processor as recited in claim 2 wherein said TLB is configured not to include said comparison if said first indication is in a second state even if said enable indication is in said enabled state.

4. The processor as recited in claim 1 wherein, if said enable indication is in said disabled state, said TLB is configured not to include said comparison.

5.
The processor as recited in claim 1 further comprising a third register configured to store said base address of said translation table.

6. The processor as recited in claim 1 further comprising a fourth register coupled to said TLB, wherein said fourth register is configured to store a second enable indication, and wherein said TLB is configured to selectively invalidate said first entry further dependent upon said second enable indication.

7. The processor as recited in claim 6 wherein said TLB is configured to invalidate said first entry if said second enable indication is in said enabled state and said first indication is in a first state.

8. The processor as recited in claim 7 wherein said TLB is configured to invalidate said first entry if said second enable indication is in said disabled state.

9. The processor as recited in claim 7 wherein said TLB is configured not to invalidate said first entry if said second enable indication is in said enabled state and said first indication is in a second state.

10. The processor as recited in claim 1 wherein said TLB is configured to determine that said second virtual address hits in said first entry responsive to: (i) said portion of said first virtual address equaling a corresponding portion of said second virtual address; (ii) said first value equaling said second value; and (iii) said first indication being in a first state.

11. The processor as recited in claim 1 wherein said TLB is configured to determine that said second virtual address hits in said first entry responsive to: (i) said portion of said first virtual address equaling a corresponding portion of said second virtual address; and (ii) said first indication being in a second state.

12.
A method comprising: presenting a first virtual address to a translation lookaside buffer (TLB) for translation; determining if said first virtual address is a hit in a first entry of said TLB, said first entry storing at least: (i) a portion of a second virtual address; (ii) a first value indicative of a first process being executed at a time that said first entry is loaded with said second virtual address; and (iii) a first indication from a translation table entry corresponding to said second virtual address, said determining selectively including comparing said first value to a second value indicative of a second process being executed during said determining, and wherein said selectively including is dependent upon said first indication and an enable indication being in an enabled state; updating a register storing a base address of a translation table; and selectively invalidating said first entry dependent on said first indication if said enable indication is in a disabled state.

13. The method as recited in claim 12 wherein said selectively including comprises including said comparing if said enable indication is in an enabled state and said first indication is in a first state.

14. The method as recited in claim 13 wherein said selectively including comprises excluding said comparing if said first indication is in a second state even if said enable indication is in said enabled state.

15. The method as recited in claim 12 wherein said selectively including comprises excluding said comparing if said enable indication is in a disabled state.

16. The method as recited in claim 12 wherein said determining comprises determining a hit responsive to: (i) said portion of said second virtual address equaling a corresponding portion of said first virtual address; (ii) said first value equaling said second value; and (iii) said first indication being in a first state.

17.
The method as recited in claim 16 wherein said determining comprises determining said hit responsive to: (i) said portion of said second virtual address equaling said corresponding portion of said first virtual address; and (ii) said first indication being in a second state. |
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention is related to the field of processors and, more particularly, to address translation mechanisms within processors.

2. Description of the Related Art

Processors typically support virtual address translation. Generally, address translation is a process in which a virtual address (generated from one or more address operands of an instruction) is translated to a physical address which identifies a memory location in a memory to which the processor is coupled. Address translation allows for numerous benefits.

For example, by providing address translation, a virtual address space exceeding the actual physical memory space of the computer system may be supported. The application programmer (to which the virtual address space is visible and the physical address space is typically invisible) may be insulated from the different amounts of memory that may be supplied in different computer systems. The operating system on the computer system may allocate physical memory to various virtual addresses, and may store instructions and data for other virtual addresses on a slower backup storage (e.g. disk storage). Generally, a block of contiguous virtual addresses is mapped to a corresponding block of physical addresses by a translation table entry in a translation table maintained by the operating system. The block of contiguous addresses is referred to as a page.

As another example, the translation table entry may include protection information for the page. As the processor translates addresses of memory requests, the processor may verify that the type of request being executed is permitted according to the protection information. If the request is not permitted, the processor may generate an exception instead of completing the request.
Thus, the operating system may control the manner in which each process accesses each page.

An additional advantage of virtual addressing may be enjoyed by multitasking operating systems. Various processes which may be concurrently executing within the computer system may produce the same virtual addresses. However, the virtual addresses of one process may be allocated to different physical pages than the same virtual addresses of another process. Thus, the instructions and data belonging to one process may be protected from access and update by another process.

Typically, the operating system maintains one or more translation tables in memory. The translation tables are a predefined data structure including a plurality of translation table entries, each translation table entry storing a translation which maps a page of virtual addresses to a corresponding page of physical addresses. The processor searches the translation tables for a translation for each virtual address generated by the processor. Depending upon the definition of the translation table structure, several memory accesses may be performed prior to finding the correct translation table entry in the translation table.

In order to speed the translation process, most processors implement translation lookaside buffers (TLBs). The TLBs are implemented within the processor and cache translation information from previously used translation table entries. Prior to searching the translation tables in memory for a translation of a virtual address, the processor searches the TLBs. Typically, a portion of the virtual address is compared to virtual address tags stored in the TLB. If a hit in the TLB is detected (i.e. a virtual tag match is detected), the corresponding physical address stored in the TLB is used.

Unfortunately, since the same virtual address may have different translations for different processes, the TLBs typically must be flushed during each process switch (or context switch).
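The role of the TLB described above — caching previously walked translation table entries so repeated translations avoid the multi-access table walk — can be sketched as follows. This is an illustrative Python sketch; the page size, single-level table layout, and names are assumptions for exposition, not the structure of any particular architecture.

```python
# Minimal model of TLB-accelerated translation: the TLB is a cache keyed
# by virtual page number (VPN); misses walk the in-memory table once.

PAGE_SHIFT = 12          # 4 KiB pages, an illustrative choice

def translate(vaddr, tlb, page_table, walks):
    vpn = vaddr >> PAGE_SHIFT
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    if vpn not in tlb:                  # TLB miss: walk the translation table
        walks.append(vpn)               # (several memory accesses in hardware)
        tlb[vpn] = page_table[vpn]
    return (tlb[vpn] << PAGE_SHIFT) | offset

page_table = {0x12345: 0x00042}         # VPN -> physical frame number
tlb, walks = {}, []
pa1 = translate(0x12345ABC, tlb, page_table, walks)   # miss: one walk
pa2 = translate(0x12345DEF, tlb, page_table, walks)   # hit: no new walk
```

Both translations land in frame 0x42; only the first one costs a table walk, which is the performance argument the paragraph above makes.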
If the process which is switched out is switched back in a short time later, the translations corresponding to that process must still be reloaded from memory into the TLB (even though they might not have been deleted if it weren't for the flushing during the context switch). Processor performance may be lost due to the time required to reload the TLB with the translations corresponding to the process. A method for reducing the number of TLB invalidations due to context switches is therefore desired.

SUMMARY OF THE INVENTION

The problems outlined above are in large part solved by a processor as described herein. The processor provides a register for storing an address space number (ASN). Operating system software may assign different ASNs to different processes, and thus the ASN may identify a process. The processor may include a TLB to cache translations, and the TLB may record the ASN from the ASN register in a TLB entry being loaded. Thus, translations may be associated with processes through the ASNs. Generally, a TLB hit will be detected in an entry if the virtual address to be translated matches the virtual address tag and the ASN recorded in the entry matches the ASN stored in the ASN register. Accordingly, the TLB need not be invalidated on context switches.

Additionally, the processor may use an indication from the translation table entries to indicate whether or not a translation is global. If a translation is global, then the ASN comparison is not included in detecting a hit in the TLB (and thus determining if the cached translation may be used to translate the virtual address). In other words, the ASN comparison does not affect the detection of a hit on a global translation. Thus, translations which are used by more than one process may not occupy multiple TLB entries. Instead, a hit may be detected on the TLB entry storing the global translation even though the recorded ASN may not match the current ASN.
TLB entry usage may thus be more efficient.

In one embodiment, ASNs may be enabled through an enable indication. If ASNs are disabled, the TLB may be flushed on context switches. However, the indication from the translation table entries used to indicate that the translation is global may be used (when ASNs are disabled) by the TLB to selectively invalidate non-global translations on a context switch while not invalidating global translations on the context switch.

Broadly speaking, a processor is contemplated. The processor comprises a first register and a TLB coupled to the first register. The first register is configured to store a first value indicative of a first process being executed by the processor. The TLB includes at least a first entry, wherein the first entry is configured to store at least: (i) a portion of a first virtual address; (ii) a second value indicative of a second process being executed by the processor at a time that the first entry is loaded with the first virtual address; and (iii) a first indication from a translation table entry corresponding to the first virtual address. The TLB is configured to selectively include, dependent upon the first indication, a comparison of the first value to the second value in determining if a second virtual address hits in the first entry.

Additionally, a method is contemplated. A first virtual address is presented to a TLB for translation. The TLB determines if the first virtual address is a hit in a first entry of the TLB. The first entry stores at least: (i) a portion of a second virtual address; (ii) a first value indicative of a first process being executed at a time that the first entry is loaded with the second virtual address; and (iii) a first indication from a translation table entry corresponding to the second virtual address. The determination selectively includes comparing the first value to a second value indicative of a second process being executed during the determination.
The selective including is dependent upon the first indication.

BRIEF DESCRIPTION OF THE DRAWINGS

Other objects and advantages of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings, in which:

FIG. 1 is a block diagram of one embodiment of a processor.

FIG. 2 is a block diagram of one embodiment of a translation lookaside buffer.

FIG. 3 is a block diagram of one embodiment of a translation lookaside buffer entry and corresponding circuitry for detecting a hit.

FIG. 4 is a flowchart illustrating operation of one embodiment of a translation lookaside buffer in invalidating entries.

FIG. 5 is a block diagram of one embodiment of a page table entry.

FIG. 6 is a block diagram of one embodiment of a page directory entry.

FIG. 7 is a block diagram of a first embodiment of a computer system including the processor shown in FIG. 1.

FIG. 8 is a block diagram of a second embodiment of a computer system including the processor shown in FIG. 1.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Turning now to FIG. 1, a block diagram illustrating one embodiment of a processor 10 is shown. Other embodiments are possible and contemplated. In the embodiment of FIG.
1, processor 10 includes an instruction cache 12 (which includes an instruction translation lookaside buffer, or ITLB 20), an execution core 14, a data cache 16 (which includes a data TLB, or DTLB 24), an external interface unit 18, a register file 22, and a set of control registers 26-32. Instruction cache 12 is coupled to external interface unit 18 and execution core 14. Execution core 14 is further coupled to register file 22 and data cache 16. Data cache 16 is further coupled to external interface unit 18. External interface unit 18 is further coupled to an external interface. Control registers 26-30 are coupled to ITLB 20 and DTLB 24, and control registers 26-32 may be coupled to execution core 14 (not shown in FIG. 1 for simplicity in the drawing).

Generally speaking, processor 10 is configured to use address space numbers (ASNs) to identify the processes to which translations cached in ITLB 20 and/or DTLB 24 belong. ASNs may be implemented by one or both of the TLBs, as desired. The discussion below will refer to TLBs which implement ASNs, unless otherwise noted.

More particularly, ASNs may be used to identify different processes. The operating system may assign different ASNs to different processes and may load the ASN corresponding to a particular process into control register 30 when performing a context switch to the particular process. The TLBs may record the ASN stored in control register 30 in each TLB entry as the entry is filled with a translation. Thus, the translation is associated with the particular process through the ASN. When determining if a translation for a virtual address is stored in the TLB, the TLB may qualify the virtual address comparison to the virtual tags in the TLB with a comparison of the corresponding ASNs recorded in the TLB to the ASN stored in control register 30.
A hit on a TLB entry may be detected if the ASN stored in the TLB entry matches the ASN stored in control register 30 and the virtual address matches the virtual tag in the entry. Since translations are associated with processes through the ASNs, the TLB need not be invalidated on context switches, since the ASN comparison may prevent a process from using translations for another process. The translations corresponding to a process may still be stored in the TLB the next time that process is activated, and hits may be detected without having to reload the TLB (if the entries weren't overwritten with translations accessed by an intervening process). Performance may thus be improved.

It may be desirable to allow multiple processes to have access to certain translations (global translations). For example, translations related to operating system services may be used by any process. Additionally, several processes may be related to a particular application program and thus may be provided shared access to certain pages. Rather than having multiple entries allocated in the TLB for the same global translation with different ASNs, processor 10 may use an indication from the translation to determine whether or not the ASNs are included in detecting a TLB hit for that translation. Thus, the TLBs may qualify the comparison of ASNs with the value of the indication. If the indication indicates that the ASNs are not included (because the translation is indicated as global by the indication), then a hit may be detected on a TLB entry for a first process even though the TLB entry may have been loaded while a different process was executing. Thus, the global translation is not reloaded into the TLB with the ASN of the first process. Instead, a hit is detected on the previously loaded translation information. Allocating multiple TLB entries to the same global translation may thus be avoided, allowing more efficient use of the TLB.
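The hit rule described above amounts to a virtual-tag match qualified by an ASN comparison, where the ASN comparison is skipped for entries marked global (and, as discussed later, when ASN use is disabled). A minimal Python sketch, with illustrative field names that are not taken from the patent:

```python
# Toy TLB entry and hit check: tag match, qualified by ASN compare
# unless the entry's global (G) indication says to skip it.

from dataclasses import dataclass

@dataclass
class TlbEntry:
    vtag: int        # virtual page tag
    asn: int         # ASN recorded when the entry was loaded
    g: bool          # global indication from the translation table entry
    pfn: int         # physical frame number

def tlb_hit(entry, vtag, current_asn, asn_enabled=True):
    if entry.vtag != vtag:
        return False
    if entry.g or not asn_enabled:
        return True                      # ASN comparison not included
    return entry.asn == current_asn      # non-global: ASNs must match

kernel = TlbEntry(vtag=0xC0000, asn=1, g=True, pfn=0x100)
user = TlbEntry(vtag=0x40000, asn=1, g=False, pfn=0x200)
```

With this rule, the global `kernel` entry hits for any current ASN, while the non-global `user` entry hits only for the process whose ASN loaded it — exactly the sharing behavior the paragraph above describes.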
The indication used to determine whether or not ASNs are included in the hit determination is referred to in one embodiment below as the G bit (or global bit). If the G bit is set, then the translation is global and ASNs are not included in the hit determination. If the G bit is clear, the translation is not global and ASNs are included in the hit determination. However, other embodiments are possible using different bits.

In one embodiment, the use of ASNs may be optional and may be enabled via an ASN enable indication (ASNE indication) stored in control register 26. If the ASNE indication is in an enabled state, the TLBs may use ASNs as described above. If the ASNE indication is in a disabled state, then ASNs are ignored in the determination of TLB hits. Additionally, if the ASNE indication is in a disabled state, TLB entries may be invalidated during context switches. However, the G bit from each translation may be used when ASNs are disabled to selectively invalidate a TLB entry corresponding to that translation during context switches. If the G bit is set, then the TLB entry is not invalidated, and if the G bit is clear, then the TLB entry is invalidated. The G bit may be used to selectively invalidate TLB entries even if the TLB does not implement ASNs.

In one embodiment, the use of the G bit (for either determining if the ASNs are included or for selectively inhibiting TLB invalidation) may be enabled via an enable indication as well (the PGE indication stored in control register 28). If the PGE indication is in a disabled state and the ASNE indication is in an enabled state, ASNs are always included in determining TLB hits. If the PGE indication is in a disabled state and the ASNE indication is in a disabled state, all TLB entries are invalidated during a context switch (i.e. the TLB is flushed).
If the PGE indication is in an enabled state and the ASNE indication is in a disabled state, TLB entries are selectively invalidated based on the G bit from each translation. If the PGE indication is in an enabled state and the ASNE indication is in an enabled state, the ASNs are selectively included in the TLB hit determination based on the value of the G bit of the corresponding translation.

It is noted that TLB entries are referred to herein as being loaded (or reloaded) from a translation table entry or loaded (or reloaded) with a translation. Loading (or reloading) a TLB entry refers to storing translation information corresponding to the translation into the TLB entry. The translation information may comprise a subset or superset of the translation in the translation table entry, and may include information derived from the translation in the translation table entry and from other information (e.g. the ASN from control register 30).

It is noted that enable indications may be described herein as bits, with the enabled state being the set state of the bit and the disabled state being the cleared state of the bit. However, other encodings are possible, including encodings in which multiple bits are used and encodings in which the enabled state is the clear state and the disabled state is the set state. Accordingly, the remainder of this description may refer to the ASNE indication in control register 26 as the ASNE bit, with the enabled state being set and the disabled state being clear. Furthermore, the PGE indication in control register 28 may be referred to herein as the PGE bit, with the enabled state being set and the disabled state being clear. However, other encodings of these indications are contemplated, as set forth above.

Control register 32 is used to store the page directory base address which processor 10 uses, when a TLB miss is detected, to search for a translation corresponding to the virtual address for which the TLB miss is detected.
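The ASNE/PGE combinations above reduce to a small context-switch invalidation policy: with ASNs enabled nothing is invalidated; with ASNs disabled the TLB is either selectively invalidated by G bit (PGE set) or fully flushed (PGE clear). A minimal Python sketch with hypothetical names, assuming the policy is applied when the page directory base register is updated:

```python
# Context-switch invalidation policy from the ASNE/PGE discussion above.
# Returns the entries that survive the switch; entries are toy dicts
# with a "g" key standing in for the G bit.

def context_switch_invalidate(entries, asne, pge):
    if asne:
        return entries                         # ASNs in use: no invalidation
    if pge:
        return [e for e in entries if e["g"]]  # keep only global entries
    return []                                  # full TLB flush

tlb = [{"vtag": 1, "g": True}, {"vtag": 2, "g": False}]
```

The global entry surviving the ASNE-clear/PGE-set case is the point of the G bit: shared operating-system translations need not be reloaded after every context switch.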
Generally, the page directory base address specifies the base address of the translation table in memory, and the virtual address is used in conjunction with the base address to access translation table entries in the translation table. Different processes may have different translation tables, and thus control register 32 may be updated during a context switch. In one embodiment, an update of control register 32 is the event which causes TLB entries to be invalidated when ASNs are not in use (since those TLB entries may have been loaded from a translation table having a different base address than the base address being stored into control register 32). Thus, execution core 14 may signal ITLB 20 and DTLB 24 when an instruction which updates control register 32 is executed, and receipt of the signal may cause the TLB to selectively invalidate entries (if the ASNE bit is clear and the PGE bit is set) or to flush all entries (if the ASNE bit is clear and the PGE bit is clear). If the ASNE bit is set, then no invalidations may be performed in response to the signal.

It is noted that control registers 26-32 may be implemented as architected control registers. Alternatively, one or more of the control registers may be implemented as model specific registers. Furthermore, control registers may be combined if desired.

Generally, instruction cache 12 is a high speed cache memory for storing instruction bytes. Execution core 14 fetches instructions from instruction cache 12 for execution. Instruction cache 12 may employ any suitable cache organization, including direct-mapped, set associative, and fully associative configurations. If an instruction fetch misses in instruction cache 12, instruction cache 12 may communicate with external interface unit 18 to fill the missing cache line into instruction cache 12.
Additionally, instruction cache 12 may include ITLB 20 to provide physical address translations for virtual addresses fetched from instruction cache 12.

Execution core 14 executes the instructions fetched from instruction cache 12. Execution core 14 fetches register operands from register file 22 and updates destination register operands in register file 22. Similarly, execution core 14 fetches memory operands from data cache 16 and updates destination memory locations in data cache 16, subject to the cacheability of the memory operands and hitting in data cache 16. Additionally, execution core 14 may be configured, responsive to executing certain instructions, to update the contents of one or more of control registers 26-32.

Execution core 14 may employ any suitable construction. For example, execution core 14 may be a superpipelined core, a superscalar core, or a combination thereof. Execution core 14 may employ out of order speculative execution or in order execution, according to design choice.

Register file 22 may include the registers specified by the processor architecture employed by processor 10. For example, register file 22 may include 64 bit registers which may be accessed as 64 bit, 32 bit, 16 bit, or 8 bit registers as indicated by the operating mode of processor 10 and any overrides for a particular instruction. In one embodiment, the registers included in register file 22 may include the LEAX, LEBX, LECX, LEDX, LEDI, LESI, LESP, and LEBP registers. Register file 22 may further include the LEIP register. Alternatively, execution core 14 may employ a form of register renaming in which any register within register file 22 may be mapped to an architected register. The number of registers in register file 22 may be implementation dependent for such an embodiment.

Data cache 16 is a high speed cache memory configured to store data.
Data cache 16 may employ any suitable cache organization, including direct-mapped, set associative, and fully associative configurations. If a data fetch or update misses in data cache 16, data cache 16 may communicate with external interface unit 18 to fill the missing cache line into data cache 16. Additionally, if data cache 16 employs a writeback caching policy, updated cache lines which are being cast out of data cache 16 may be communicated to external interface unit 18 to be written back to memory. Data cache 16 may include DTLB 24 to provide physical address translations for virtual addresses presented to data cache 16.

External interface unit 18 communicates with portions of the system external to processor 10. External interface unit 18 may communicate cache lines for instruction cache 12 and data cache 16 as described above.

It is noted that processor 10 may include an integrated level 2 (L2) cache, if desired. Furthermore, external interface unit 18 may be configured to communicate with a backside cache in addition to communicating with the system.

Turning now to FIG. 2, a block diagram of one embodiment of a TLB 40 is shown. Other embodiments are possible and contemplated. TLB 40 may be used to implement one or both of ITLB 20 and DTLB 24, depending upon which of the TLBs use ASNs. As illustrated in FIG. 2, TLB 40 includes a translation storage 42 and a control circuit 44. Translation storage 42 is coupled to receive a virtual address (VA) from the cache corresponding to TLB 40 (e.g. data cache 16 if TLB 40 is DTLB 24 and instruction cache 12 if TLB 40 is ITLB 20) and is coupled to receive the ASN from control register 30 and an update address and information (from an update circuit (not shown) or from execution core 14 if microcode is used to load TLB entries). Translation storage 42 is coupled to provide a physical address (PA) to the cache and is further coupled to control circuit 44.
Control circuit 44 is coupled to provide hit and exception information to the cache, and is coupled to receive the virtual address from the cache, the ASNE bit from control register 26, the PGE bit from control register 28, the ASN from control register 30, and a WR_PDBR signal from execution core 14.

In response to a virtual address received from the cache, TLB 40 determines whether or not the virtual address is a hit in translation storage 42 and provides the corresponding physical address if a hit is detected. More particularly, the virtual address may be used to select one or more entries in translation storage 42 which may be eligible to store a translation corresponding to the virtual address (depending upon the structure of the translation storage 42). The virtual address is also provided to control circuit 44, which compares at least a portion of the virtual address to a virtual address tag stored in the selected entry (or entries). Additionally, if ASNs are enabled (as indicated by the ASNE bit), control circuit 44 may compare the ASN from register 30 to the ASN stored in the selected entry (or entries). Furthermore, the ASN comparison may be selectively applied to the selected entry (or entries) if the PGE bit is set. If a hit is detected, control circuit 44 may assert a hit signal to the cache. Additionally, if more than one entry is selected from translation storage 42 in response to the virtual address, control circuit 44 may signal translation storage 42 of the entry from which to read the physical address. Translation storage 42 provides the physical address to the cache.

Generally, translation storage 42 is a memory comprising entries. For example, entries 46A-46C are illustrated in FIG. 2, and additional entries may be provided as desired. Each entry 46A-46C is configured to store translation information corresponding to a particular translation. Translation storage 42 may comprise any suitable structure.
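The hit determination just described (virtual address tag match, qualified by an ASN match when ASNs are enabled, with the entry's G bit and the PGE bit able to mask the ASN comparison) can be sketched in software. This is an illustrative model, not the patented circuit; the field names and function signature are ours.

```python
# Model of the per-entry TLB hit determination described above.
# 'entry' stands in for a TLB entry 46A-46C; asne/pge model the
# ASNE and PGE control bits. All names here are illustrative.

def tlb_hit(entry, va_tag, current_asn, asne, pge):
    """Return True if the entry hits for the looked-up virtual address."""
    if not entry["valid"] or entry["va_tag"] != va_tag:
        return False            # invalid entry or VA tag mismatch: miss
    if not asne:
        return True             # ASNs disabled: VA match alone decides
    if pge and entry["g"]:
        return True             # global translation: ASN comparison masked
    return entry["asn"] == current_asn

entry = {"valid": True, "va_tag": 0x1234, "asn": 7, "g": False}
assert tlb_hit(entry, 0x1234, 7, asne=True, pge=True)       # ASN matches
assert not tlb_hit(entry, 0x1234, 3, asne=True, pge=True)   # ASN mismatch
entry["g"] = True
assert tlb_hit(entry, 0x1234, 3, asne=True, pge=True)       # global page overrides ASN
```

Note that clearing `pge` restores the ASN comparison even for entries with the G bit set, matching the selective application of the ASN comparison described above.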
For example, translation storage 42 may be a direct mapped, set associative, or fully associative memory. In one particular embodiment, translation storage 42 may be a fully associative memory implemented as a content-addressable memory (CAM). For example, the portion of the entry storing virtual address information may be compared to the input virtual address in the CAM. Translation storage 42 may provide a hit signal for each entry, based on the CAM of the virtual address, to control circuit 44. Additionally, the portion of the entry storing the ASN may be a CAM, and translation storage 42 may provide a compare signal for each entry indicating whether or not the stored ASN matches the current ASN from control register 30. In set associative or direct mapped embodiments, a portion of the virtual address may be an index to select an entry (direct mapped) or entries (set associative) which may store translation information for the virtual address. In such an embodiment, the virtual address tag stored in each entry and compared to the input virtual address may exclude the index bits.

In addition to detecting hits for input virtual addresses, control circuit 44 may be configured to handle invalidations of entries in translation storage 42 if an update to control register 32 is detected. Execution core 14 provides a WR_PDBR signal which execution core 14 asserts in response to executing an instruction which updates control register 32. Based on the settings of the ASNE bit and PGE bit, control circuit 44 determines which of the entries to invalidate. If the ASNE bit is set, control circuit 44 does not invalidate any TLB entries (since the ASNs differentiate between translations belonging to various processes). If the ASNE bit is clear and the PGE bit is set, control circuit 44 may invalidate only those TLB entries for which the G bit in the corresponding translation is clear.
Finally, if the ASNE bit is clear and the PGE bit is clear, control circuit 44 may invalidate all entries (i.e. flush the TLB).

If a virtual address provided by the cache misses in TLB 40, processor 10 searches the translation tables in memory to find the translation corresponding to the virtual address. If a translation is found, translation storage 42 is updated with the information. The searching of the translation tables and the update may be handled using a variety of mechanisms. For example, hardware (i.e. an update circuit) may be designed which searches the table and provides the update information to translation storage 42. Alternatively, a microcode routine may be executed by execution core 14 to perform the search and provide the update. The update information includes the virtual address which caused the TLB miss and may include a portion or all of the information from the translation as well as any information derived from the translation, if applicable. Additionally, in the present embodiment, the ASN from register 30 is provided (shown separate from the other update information in FIG. 2). Control circuit 44 may select an entry to be updated and indicate the selected entry to translation storage 42. Any suitable replacement strategy may be used, depending upon the structure of translation storage 42.

If control circuit 44 detects a hit in an entry of translation storage 42, control circuit 44 may also examine the other attributes from the translation which are stored in the entry to ensure that the operation being attempted is permitted by the other attributes (e.g. protection information and privilege level information may be part of the other attributes). If the operation is not permitted, control circuit 44 may signal an exception in addition to the hit signal.

It is noted that, although control circuit 44 is shown separate from translation storage 42, a portion of control circuit 44 may be integrated into translation storage 42 (e.g.
the comparators for comparing the virtual address and ASNs).

Turning next to FIG. 3, a block diagram of one embodiment of a TLB entry 46A and corresponding control circuitry from control circuit 44 for detecting a hit in entry 46A is shown. The circuitry shown in FIG. 3 is exemplary only, and other embodiments may use other circuitry (including Boolean equivalents to the circuitry shown). Furthermore, the circuitry shown may not be dedicated to entry 46A (e.g. in a direct mapped or set associative embodiment of the TLB, eligible entries may be read from translation storage 42 and the circuitry may operate upon the output of translation storage 42). Other embodiments are possible and contemplated.

In the embodiment of FIG. 3, entry 46A includes a virtual address field 50, an ASN field 52, a G bit 54, a valid bit 56, a physical address field 58, and an other attributes field 60. Virtual address field 50 stores at least a portion of the virtual address corresponding to the entry. More particularly, virtual address field 50 may not include the index portion of the virtual address if translation storage 42 is a set associative or direct mapped storage. Additionally, the portion of the virtual address which defines an offset within the smallest translation page may not be stored (since the offset portion is not translated). Physical address field 58 stores the corresponding physical address defined for the virtual address according to the corresponding translation. Again, physical address field 58 may not store the offset portion, since the offset portion is provided untranslated from the virtual address. ASN field 52 stores the ASN which was stored in control register 30 when entry 46A was loaded with the present translation. G bit 54 is the G bit from the translation entry (see, e.g. FIGS. 5 and 6 below).
Valid bit 56 indicates whether or not entry 46A is storing valid translation information, and other attributes field 60 stores other attributes from the translation which may be used for protection checking, etc.

The circuitry shown in FIG. 3 includes a comparator 62 coupled to receive the contents of virtual address field 50 and to receive the input virtual address, a comparator 64 coupled to receive the contents of ASN field 52 and to receive the ASN from control register 30, an AND gate 66 coupled to receive the PGE bit from control register 28 and the G bit from entry 46A, an OR gate 68 coupled to receive and invert the ASNE bit from control register 26, to receive the output of comparator 64, and to receive the output of AND gate 66, and an AND gate 70 coupled to receive the output of OR gate 68, the output of comparator 62, and the valid bit from entry 46A. The output of AND gate 70 is the hit signal for entry 46A, and indicates that a hit (asserted) or miss (deasserted) is detected.

Comparator 62 compares the virtual address from field 50 to the input virtual address, and asserts its output signal if the addresses are equal. In one embodiment, various sizes of pages may be supported by processor 10. Thus, virtual address field 50 and the input virtual address to comparator 62 may comprise the page portion of the virtual address (less any index bits, if applicable) for the smallest page size. If the translation is for a larger page size, the address bits within virtual address field 50 and the input virtual address to comparator 62 which are actually offset bits within the larger page may be masked. Alternatively, comparator 62 may be implemented as several comparators comparing the page portion for the largest page size and the remaining portions according to the other supported page sizes. Output signals of the comparators may be masked and combined according to the page size of the translation stored in entry 46A.
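The page-size masking just described for comparator 62 amounts to ignoring the address bits that are merely an offset within the translation's page. A sketch, using the x86 page sizes discussed later in this description (4 KB, 2 MB, 4 MB); the helper names and table are illustrative:

```python
# Illustrative model of masking virtual-address bits for variable page
# sizes, as described for comparator 62. Offset bits within the page
# are ignored by shifting them out before comparing.

PAGE_SHIFT = {4 << 10: 12, 2 << 20: 21, 4 << 20: 22}  # page size -> offset bits

def va_tag_match(stored_va, input_va, page_size):
    """Compare the page portions of two virtual addresses, masking the
    bits that are an offset within a page of 'page_size' bytes."""
    shift = PAGE_SHIFT[page_size]
    return (stored_va >> shift) == (input_va >> shift)

# Two addresses in the same 2 MB page differ only in offset bits:
assert va_tag_match(0x00200000, 0x003FFFFF, 2 << 20)
# ...but they fall in different 4 KB pages:
assert not va_tag_match(0x00200000, 0x003FFFFF, 4 << 10)
```

In hardware this masking is applied to comparator output bits rather than by shifting, but the effect on the hit determination is the same.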
Other attributes field 60 may include information identifying the page size of the translation.

Comparator 64 compares the ASN from ASN field 52 to the ASN from control register 30, and asserts its output signal if the ASNs are equal. The output signal is an input to OR gate 68.

OR gate 68 determines whether or not the output signal of comparator 64 affects the hit determination. More particularly, the output signal of comparator 64 passes through OR gate 68 if the ASNE bit is set (and thus the inversion of the ASNE bit is clear) and either the PGE bit is clear or the G bit 54 is clear (deasserting the output of AND gate 66). Accordingly, the ASN comparison is selectively included in the hit determination. Viewed in another way, the ASN comparison may be selectively masked out of the hit determination.

AND gate 66 provides the enabling function of the PGE bit for G bit 54. If the PGE bit is clear, the G bit is masked off by AND gate 66. If the PGE bit is set, the value of the G bit 54 is passed through AND gate 66.

AND gate 70 generates the hit signal responsive to the output of comparator 62, the output of OR gate 68, and the valid bit 56. Thus, the hit signal is asserted (indicating hit) if comparator 62 detects a virtual address match for the portion being compared, entry 46A is valid, and the output of OR gate 68 is asserted.

It is noted that the circuitry included in FIG. 3 provides for both an ASNE bit and a PGE bit to enable the ASN comparison and the overriding of the comparison via the G bit. Other embodiments may eliminate one or both of the enable indications, and the circuitry in FIG. 3 would be changed accordingly. For example, if the ASNE bit is not used, OR gate 68 may eliminate the input for the ASNE bit. Similarly, if the PGE bit is not used, AND gate 66 may be eliminated and the G bit 54 may be input to OR gate 68.

It is noted that, while the circuitry shown in FIG.
3 is described as being part of control circuit 44, parts of the circuitry may be integrated into translation storage 42. For example, comparators 62 and/or 64 may be integrated into translation storage 42.

Turning next to FIG. 4, a flowchart is shown illustrating operation of one embodiment of control circuit 44 for invalidating entries in translation storage 42. Other embodiments are possible and contemplated. While the operations shown in FIG. 4 are illustrated in a particular order for ease of understanding, any equivalent order may be used. Furthermore, operations may be performed in parallel by circuitry within control circuit 44.

Control circuit 44 detects a change in the ASNE bit (decision block 80). If a change in the ASNE bit is detected, control circuit 44 flushes the TLB (operation 82). The TLB is flushed in this case because improper translation may occur if not flushed. For example, if the ASNE bit were set (enabling ASNs) and is cleared, the TLB would cease comparing ASNs to qualify TLB hits. However, since the ASNE bit was enabled, it is possible that translations not belonging to the current process are stored in the TLB. Thus, to ensure that translations not belonging to the current process are not used by the current process, the TLB may be flushed. Similarly, if the ASNE bit were cleared and is set, the translations currently in the TLB may not have valid ASNs attached to them (since ASNs were not in use).

If control circuit 44 is not informed of a write to control register 32 (e.g. via an assertion of the WR_PDBR signal; decision block 84), no invalidations may be required. On the other hand, if control circuit 44 is informed of a write to control register 32, control circuit 44 may determine if ASNs are enabled via the ASNE bit (decision block 86). If ASNs are enabled, then again no invalidations may be required. However, if ASNs are not enabled, control circuit 44 may determine if global translations are enabled (e.g.
if the PGE bit is set; decision block 88). If global pages are not enabled, control circuit 44 flushes the TLB (operation 82). If global pages are enabled, control circuit 44 selectively invalidates TLB entries for which the G bit is clear (operation 90). In other words, TLB entries for which the G bit is set are inhibited from invalidation.

Turning now to FIGS. 5 and 6, a block diagram of a first embodiment of various translation table entries is shown. Other embodiments are possible and contemplated. The embodiment shown may be used in embodiments of processor 10 designed according to the x86 processor architecture (also known as IA-32). A page table entry 100 used when physical address extension mode is not enabled and a page table entry 102 used when physical address extension mode is enabled are shown in FIG. 5, and a page directory entry 104 used when physical address extension mode is not enabled and page size extension is enabled and a page directory entry 106 used when physical address extension mode is enabled are shown in FIG. 6. Each of the translation table entries 100, 102, 104, and 106 includes a page base address field 110, an available field (AVL) 112, a G bit 114, a D bit 116, an A bit 118, a PCD bit 120, a PWT bit 122, a U/S bit 124, a R/W bit 126, and a P bit 128.

Page base address field 110 is the physical address of the page allocated for virtual addresses translated by the corresponding translation 100, 102, 104, or 106. Page table entries 100 and 102 are used for a 4 kilobyte page size, and thus specify the physical address bits exclusive of the least significant 12 bits. Page directory entry 104 is used for a 4 Megabyte page size and thus specifies the physical address bits exclusive of the least significant 22 bits. Page directory entry 106 is used for a 2 Megabyte page size and thus specifies the physical address bits exclusive of the least significant 21 bits.
The least significant bits not included in the page base address field 110 are provided untranslated from the virtual address.

Available field 112 is not interpreted by processor 10 and may be used by software (e.g. the operating system) for any purpose. G bit 114 has been described above for both the case of ASNs enabled and ASNs disabled. D bit 116 is set by processor 10 if a byte within the page identified by the page base address field 110 has been modified by processor 10 due to execution of instructions. The A bit 118 is set by processor 10 if the page has been accessed by processor 10. PCD bit 120 indicates whether or not the page is cacheable (e.g. whether or not bytes from the page may be stored in instruction cache 12 or data cache 16). PWT bit 122 indicates whether or not the page is to be treated write-through by data cache 16. U/S bit 124 indicates whether the page is assigned user privilege level or supervisor privilege level. R/W bit 126 indicates whether the page is read-only or read-write. P bit 128 indicates whether or not the translation is valid.

For the embodiment illustrated in FIGS. 5 and 6, access to the translation tables may be as follows: For page table entry 100, the page directory base address stored in control register 32 points to the base address of a page directory which stores page directory entries (similar in form to page table entry 100 except that the G bit 114 is ignored and the D bit 116 is set to zero). A portion of the virtual address is used as an index into the page directory and a page directory entry is selected. The page base address field 110 of the selected page directory entry is the base address of a page table which stores page table entries 100. Another portion of the virtual address is used as an index into the page table to select a corresponding page table entry 100.
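The two-level walk just described for page table entry 100 can be sketched in software. This is an illustrative model under the standard non-PAE x86 split (10-bit directory index, 10-bit table index, 12-bit offset); the dict-based memory model and names are ours, and attribute bits are omitted:

```python
# Model of the two-level translation table walk for page table entry 100.
# 'memory' maps a table base address to a list of 1024 entries; each
# entry holds only the next-level base address (attribute bits omitted).

def walk(memory, pdbr, va):
    """Translate a 32-bit virtual address via page directory + page table."""
    dir_index = (va >> 22) & 0x3FF     # top 10 bits index the page directory
    table_index = (va >> 12) & 0x3FF   # next 10 bits index the page table
    offset = va & 0xFFF                # low 12 bits pass through untranslated

    page_table_base = memory[pdbr][dir_index]        # selected page directory entry
    page_base = memory[page_table_base][table_index] # selected page table entry
    return page_base | offset

memory = {
    0x1000: [0x2000] + [0] * 1023,      # page directory located at the PDBR
    0x2000: [0xABC000] + [0] * 1023,    # page table for directory entry 0
}
assert walk(memory, pdbr=0x1000, va=0x00000123) == 0xABC123
```

The PAE variants (entries 102 and 106) add a page directory pointer table as a third level ahead of this walk, as described below.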
For page table entry 102, the translation table access is similar to page table entry 100 except that a page directory pointer table which stores page directory pointers is accessed prior to the page directory. The page directory base address stored in control register 32 points to the page directory pointer table, and a portion of the virtual address is used to select a page directory pointer which is the base address of the page directory from which a page directory entry is selected. For page directory entry 104, the page directory base address stored in control register 32 points to the base address of a page directory which stores page directory entries 104. A portion of the virtual address is used as an index into the page directory and a corresponding page directory entry 104 is selected. For page directory entry 106, the translation table access is similar to page directory entry 104, except that the page directory pointer table is used as described above for page table entry 102.

Computer Systems

Turning now to FIG. 7, a block diagram of one embodiment of a computer system 200 including processor 10 coupled to a variety of system components through a bus bridge 202 is shown. Other embodiments are possible and contemplated. In the depicted system, a main memory 204 is coupled to bus bridge 202 through a memory bus 206, and a graphics controller 208 is coupled to bus bridge 202 through an AGP bus 210. Finally, a plurality of PCI devices 212A-212B are coupled to bus bridge 202 through a PCI bus 214. A secondary bus bridge 216 may further be provided to accommodate an electrical interface to one or more EISA or ISA devices 218 through an EISA/ISA bus 220. Processor 10 is coupled to bus bridge 202 through a CPU bus 224 and to an optional L2 cache 228.
Together, CPU bus 224 and the interface to L2 cache 228 may comprise an external interface to which external interface unit 18 may couple.

Bus bridge 202 provides an interface between processor 10, main memory 204, graphics controller 208, and devices attached to PCI bus 214. When an operation is received from one of the devices connected to bus bridge 202, bus bridge 202 identifies the target of the operation (e.g. a particular device or, in the case of PCI bus 214, that the target is on PCI bus 214). Bus bridge 202 routes the operation to the targeted device. Bus bridge 202 generally translates an operation from the protocol used by the source device or bus to the protocol used by the target device or bus.

In addition to providing an interface to an ISA/EISA bus for PCI bus 214, secondary bus bridge 216 may further incorporate additional functionality, as desired. An input/output controller (not shown), either external from or integrated with secondary bus bridge 216, may also be included within computer system 200 to provide operational support for a keyboard and mouse 222 and for various serial and parallel ports, as desired. An external cache unit (not shown) may further be coupled to CPU bus 224 between processor 10 and bus bridge 202 in other embodiments. Alternatively, the external cache may be coupled to bus bridge 202 and cache control logic for the external cache may be integrated into bus bridge 202. L2 cache 228 is further shown in a backside configuration to processor 10. It is noted that L2 cache 228 may be separate from processor 10, integrated into a cartridge (e.g. slot 1 or slot A) with processor 10, or even integrated onto a semiconductor substrate with processor 10.

Main memory 204 is a memory in which application programs are stored and from which processor 10 primarily executes. A suitable main memory 204 comprises DRAM (Dynamic Random Access Memory).
For example, a plurality of banks of SDRAM (Synchronous DRAM) or Rambus DRAM (RDRAM) may be suitable.

PCI devices 212A-212B are illustrative of a variety of peripheral devices such as, for example, network interface cards, video accelerators, audio cards, hard or floppy disk drives or drive controllers, SCSI (Small Computer Systems Interface) adapters, and telephony cards. Similarly, ISA device 218 is illustrative of various types of peripheral devices, such as a modem, a sound card, and a variety of data acquisition cards such as GPIB or field bus interface cards.

Graphics controller 208 is provided to control the rendering of text and images on a display 226. Graphics controller 208 may embody a typical graphics accelerator generally known in the art to render three-dimensional data structures which can be effectively shifted into and from main memory 204. Graphics controller 208 may therefore be a master of AGP bus 210 in that it can request and receive access to a target interface within bus bridge 202 to thereby obtain access to main memory 204. A dedicated graphics bus accommodates rapid retrieval of data from main memory 204. For certain operations, graphics controller 208 may further be configured to generate PCI protocol transactions on AGP bus 210. The AGP interface of bus bridge 202 may thus include functionality to support both AGP protocol transactions as well as PCI protocol target and initiator transactions. Display 226 is any electronic display upon which an image or text can be presented. A suitable display 226 includes a cathode ray tube ("CRT"), a liquid crystal display ("LCD"), etc.

It is noted that, while the AGP, PCI, and ISA or EISA buses have been used as examples in the above description, any bus architectures may be substituted as desired. It is further noted that computer system 200 may be a multiprocessing computer system including additional processors (e.g. processor 10a shown as an optional component of computer system 200).
Processor 10a may be similar to processor 10. More particularly, processor 10a may be an identical copy of processor 10. Processor 10a may be connected to bus bridge 202 via an independent bus (as shown in FIG. 7) or may share CPU bus 224 with processor 10. Furthermore, processor 10a may be coupled to an optional L2 cache 228a similar to L2 cache 228.

Turning now to FIG. 8, another embodiment of a computer system 300 is shown. Other embodiments are possible and contemplated. In the embodiment of FIG. 8, computer system 300 includes several processing nodes 312A, 312B, 312C, and 312D. Each processing node is coupled to a respective memory 314A-314D via a memory controller 316A-316D included within each respective processing node 312A-312D. Additionally, processing nodes 312A-312D include interface logic used to communicate between the processing nodes 312A-312D. For example, processing node 312A includes interface logic 318A for communicating with processing node 312B, interface logic 318B for communicating with processing node 312C, and a third interface logic 318C for communicating with yet another processing node (not shown). Similarly, processing node 312B includes interface logic 318D, 318E, and 318F; processing node 312C includes interface logic 318G, 318H, and 318I; and processing node 312D includes interface logic 318J, 318K, and 318L. Processing node 312D is coupled to communicate with a plurality of input/output devices (e.g. devices 320A-320B in a daisy chain configuration) via interface logic 318L. Other processing nodes may communicate with other I/O devices in a similar fashion.

Processing nodes 312A-312D implement a packet-based link for inter-processing node communication. In the present embodiment, the link is implemented as sets of unidirectional lines (e.g. lines 324A are used to transmit packets from processing node 312A to processing node 312B and lines 324B are used to transmit packets from processing node 312B to processing node 312A).
Other sets of lines 324C-324H are used to transmit packets between other processing nodes as illustrated in FIG. 8. Generally, each set of lines 324 may include one or more data lines, one or more clock lines corresponding to the data lines, and one or more control lines indicating the type of packet being conveyed. The link may be operated in a cache coherent fashion for communication between processing nodes or in a noncoherent fashion for communication between a processing node and an I/O device (or a bus bridge to an I/O bus of conventional construction such as the PCI bus or ISA bus). Furthermore, the link may be operated in a noncoherent fashion using a daisy-chain structure between I/O devices as shown. It is noted that a packet to be transmitted from one processing node to another may pass through one or more intermediate nodes. For example, a packet transmitted by processing node 312A to processing node 312D may pass through either processing node 312B or processing node 312C as shown in FIG. 8. Any suitable routing algorithm may be used. Other embodiments of computer system 300 may include more or fewer processing nodes than the embodiment shown in FIG. 8.

Generally, the packets may be transmitted as one or more bit times on the lines 324 between nodes. A bit time may be the rising or falling edge of the clock signal on the corresponding clock lines. The packets may include command packets for initiating transactions, probe packets for maintaining cache coherency, and response packets for responding to probes and commands.

Processing nodes 312A-312D, in addition to a memory controller and interface logic, may include one or more processors. Broadly speaking, a processing node comprises at least one processor and may optionally include a memory controller for communicating with a memory and other logic as desired. More particularly, each processing node 312A-312D may comprise one or more copies of processor 10.
External interface unit 18 may include the interface logic 318 within the node, as well as the memory controller 316.

Memories 314A-314D may comprise any suitable memory devices. For example, a memory 314A-314D may comprise one or more RAMBUS DRAMs (RDRAMs), synchronous DRAMs (SDRAMs), static RAM, etc. The address space of computer system 300 is divided among memories 314A-314D. Each processing node 312A-312D may include a memory map used to determine which addresses are mapped to which memories 314A-314D, and hence to which processing node 312A-312D a memory request for a particular address should be routed. In one embodiment, the coherency point for an address within computer system 300 is the memory controller 316A-316D coupled to the memory storing bytes corresponding to the address. In other words, the memory controller 316A-316D is responsible for ensuring that each memory access to the corresponding memory 314A-314D occurs in a cache coherent fashion. Memory controllers 316A-316D may comprise control circuitry for interfacing to memories 314A-314D. Additionally, memory controllers 316A-316D may include request queues for queuing memory requests.

Generally, interface logic 318A-318L may comprise a variety of buffers for receiving packets from the link and for buffering packets to be transmitted upon the link. Computer system 300 may employ any suitable flow control mechanism for transmitting packets. For example, in one embodiment, each interface logic 318 stores a count of the number of each type of buffer within the receiver at the other end of the link to which that interface logic is connected. The interface logic does not transmit a packet unless the receiving interface logic has a free buffer to store the packet. As a receiving buffer is freed by routing a packet onward, the receiving interface logic transmits a message to the sending interface logic to indicate that the buffer has been freed.
Such a mechanism may be referred to as a "coupon-based" system. I/O devices 320A-320B may be any suitable I/O devices. For example, I/O devices 320A-320B may include network interface cards, video accelerators, audio cards, hard or floppy disk drives or drive controllers, SCSI (Small Computer Systems Interface) adapters and telephony cards, modems, sound cards, and a variety of data acquisition cards such as GPIB or field bus interface cards. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications. |
An inductive touch sensor comprises an inductor disposed in or on a deformable substrate. When a force is applied to the deformable substrate, the physical shape of the inductor will change and thereby change its inductance value. The change in the inductance value can be detected and used to indicate actuation of an associated touch key of the inductive touch sensor. A plurality of inductive touch sensors may be used to form a touch panel. |
CLAIM What is claimed is: 1. An inductive touch sensor, comprising: a flexible substrate; an inductor in mechanical communication with the flexible substrate, the inductor and the flexible substrate having a first position and a second position, wherein the flexible substrate and the inductor assume the second position when a force is applied thereto; and wherein the inductor has a first inductance value when in the first position and a second inductance value when in the second position. 2. The inductive touch sensor according to claim 1, wherein the first inductance value is greater than the second inductance value. 3. The inductive touch sensor according to claim 1, wherein the first inductance value is less than the second inductance value. 4. The inductive touch sensor according to claim 1, wherein the flexible substrate has openings therein to allow distances between coil turns of the inductor to increase and decrease depending upon whether or not the force is being applied to the flexible substrate and inductor. 5. The inductive touch sensor according to claim 4, wherein the distances between the coil turns increase when the force is applied. 6. The inductive touch sensor according to claim 4, wherein the distances between the coil turns decrease when the force is applied. 7. The inductive touch sensor according to claim 1, wherein the flexible substrate and inductor are substantially flat when the force is not applied thereto, and concave when the force is applied thereto. 8. The inductive touch sensor according to claim 1, wherein the flexible substrate and inductor are convex when the force is not applied thereto, and less convex when the force is applied thereto. 9. The inductive touch sensor according to claim 1, wherein the flexible substrate and inductor are convex when the force is not applied thereto, and substantially flat when the force is applied thereto. 10. 
The inductive touch sensor according to claim 1, wherein the inductor is embedded in the flexible substrate. 11. The inductive touch sensor according to claim 1, wherein the inductor is coterminous with the flexible substrate. 12. The inductive touch sensor according to claim 1, further comprising: a support substrate; and ridge spacers between the support substrate and the flexible substrate, wherein the support substrate, ridge spacers and flexible substrate form a cavity. 13. The inductive touch sensor according to claim 12, wherein the cavity is filled with a flexible material. 14. The inductive touch sensor according to claim 12, further comprising a conductive ground plane in the cavity and on an inside face of the support substrate, wherein the conductive ground plane influences the second inductance value of the inductor when in the second position. 15. The inductive touch sensor according to claim 12, further comprising a magnetic material in the cavity and on an inside face of the support substrate, wherein the magnetic material influences the second inductance value of the inductor when in the second position. 16. The inductive touch sensor according to claim 15, wherein the magnetic material is selected from the group consisting of ferrite and powdered iron. 17. The inductive touch sensor according to claim 1, further comprising an electronic circuit coupled to the inductor for measuring inductance values thereof. 18. The inductive touch sensor according to claim 17, wherein the electronic circuit is a mixed signal integrated circuit device. 19. 
An inductive touch sensor panel, comprising: a flexible substrate divided into a plurality of touch key areas arranged in a matrix; a plurality of inductors in mechanical communication with the flexible substrate, each of the plurality of inductors associated with a respective one of the plurality of touch key areas, each of the plurality of inductors and the plurality of touch key areas having a first position and a second position, wherein each touch key area and inductor assume the second position when a force is applied thereto; wherein the inductor has a first inductance value when in the first position and a second inductance value when in the second position; a support substrate; and ridge spacers between the support substrate and each of the plurality of touch key areas, wherein the support substrate, ridge spacers and the plurality of touch key areas form a plurality of cavities. 20. An inductive touch sensor, comprising: a flexible substrate; a support substrate; ridge spacers between the support substrate and the flexible substrate, wherein the support substrate, ridge spacers and flexible substrate form a cavity; an inductor comprising a coiled spring, a first end of the inductor is in mechanical and electrical communications with the flexible substrate and a second end of the inductor is in mechanical and electrical communications with the support substrate; the inductor has a first inductance value when in a first position and a second inductance value when in a second position; and the inductor assumes the second position when a force is applied to the flexible substrate. |
INDUCTIVE TOUCH SENSOR USING A FLEXIBLE COIL TECHNICAL FIELD The present disclosure relates to inductive touch sensors, and more particularly, to an inductive touch sensor using a flexible coil. BACKGROUND Inductive touch sensor technology may be used as an alternative to capacitive touch sensor technology. Current technology inductive touch sensors comprise a target (surface being touched or pressed), a spacer and an inductance coil. When the target is actuated (e.g., touched) the coil inductance changes value. Detection of this change in the inductance value of the coil indicates actuation of the inductive touch sensor. Manufacturing of an inductive touch panel, comprising a plurality of inductive touch sensors, requires assembly of a sensor etched and sandwiched on a printed circuit board (PCB), generally at final assembly of a product. The spacer must be placed between the PCB which contains the inductance coils, one for each key or button, and the targets for each key or button. Current manufacturing technologies consist of producing the PCB, the spacer, laminating the spacer to the PCB and then mounting the PCB/spacer assembly to the target panel. Tight tolerances are required between the target and the inductive coil that will change its inductance value. SUMMARY What is needed is a simplified and inexpensive way to manufacture an inductive touch sensor that can be used in a touch panel. According to an embodiment, an inductive touch sensor may comprise: a flexible substrate; an inductor in mechanical communication with the flexible substrate, the inductor and the flexible substrate having a first position and a second position, wherein the flexible substrate and the inductor assume the second position when a force is applied thereto; and wherein the inductor has a first inductance value when in the first position and a second inductance value when in the second position. 
According to a further embodiment, the first inductance value is greater than the second inductance value. According to a further embodiment, the first inductance value is less than the second inductance value. According to a further embodiment, the flexible substrate has openings therein to allow distances between coil turns of the inductor to increase and decrease depending upon whether or not the force is being applied to the flexible substrate and inductor. According to a further embodiment, the distances between the coil turns increase when the force is applied. According to a further embodiment, the distances between the coil turns decrease when the force is applied. According to a further embodiment, the flexible substrate and inductor are substantially flat when the force is not applied thereto, and concave when the force is applied thereto. According to a further embodiment, the flexible substrate and inductor are convex when the force is not applied thereto, and less convex when the force is applied thereto. According to a further embodiment, the flexible substrate and inductor are convex when the force is not applied thereto, and substantially flat when the force is applied thereto. According to a further embodiment, the inductor is embedded in the flexible substrate. According to a further embodiment, the inductor is coterminous with the flexible substrate. According to a further embodiment, a support substrate and ridge spacers between the support substrate and the flexible substrate are added, wherein the support substrate, ridge spacers and flexible substrate form a cavity. According to a further embodiment, the cavity is filled with a flexible material. According to a further embodiment, a conductive ground plane in the cavity and on an inside face of the support substrate is added, wherein the conductive ground plane influences the second inductance value of the inductor when in the second position. 
According to a further embodiment, a magnetic material is added in the cavity and on an inside face of the support substrate, wherein the magnetic material influences the second inductance value of the inductor when in the second position. According to a further embodiment, the magnetic material is selected from the group consisting of ferrite and powdered iron. According to a further embodiment, an electronic circuit is coupled to the inductor for measuring inductance values thereof. According to a further embodiment, the electronic circuit is a mixed signal integrated circuit device. According to another embodiment, an inductive touch sensor panel may comprise: a flexible substrate divided into a plurality of touch key areas arranged in a matrix; a plurality of inductors in mechanical communication with the flexible substrate, each of the plurality of inductors associated with a respective one of the plurality of touch key areas, each of the plurality of inductors and the plurality of touch key areas having a first position and a second position, wherein each touch key area and inductor assume the second position when a force is applied thereto; wherein the inductor has a first inductance value when in the first position and a second inductance value when in the second position; a support substrate; and ridge spacers between the support substrate and each of the plurality of touch key areas, wherein the support substrate, ridge spacers and the plurality of touch key areas form a plurality of cavities. 
According to yet another embodiment, an inductive touch sensor may comprise: a flexible substrate; a support substrate; ridge spacers between the support substrate and the flexible substrate, wherein the support substrate, ridge spacers and flexible substrate form a cavity; an inductor comprising a coiled spring, a first end of the inductor is in mechanical and electrical communications with the flexible substrate and a second end of the inductor is in mechanical and electrical communications with the support substrate; the inductor has a first inductance value when in a first position and a second inductance value when in a second position; and the inductor assumes the second position when a force is applied to the flexible substrate. BRIEF DESCRIPTION OF THE DRAWINGS A more complete understanding of the present disclosure thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings wherein: Figure 1 illustrates schematic plan views of an inductor that is formed as a spiral coil of an inductive touch key for non-actuated and actuated conditions, according to a specific example embodiment of this disclosure; Figure 2 illustrates schematic plan views of an inductor that is formed as a spiral coil of an inductive touch key for non-actuated and actuated conditions, according to another specific example embodiment of this disclosure; Figure 3 illustrates schematic elevational cutaway views of the inductor of the inductive touch key shown in Figures 1 or 2, according to specific example embodiments of this disclosure;Figure 4 illustrates schematic elevational cutaway views of the inductor of the inductive touch key shown in Figures 1 or 2, according to specific example embodiments of this disclosure; Figure 5 illustrates schematic elevational cutaway views of an inductive touch key using the inductor and substrate shown in Figure 4, according to a specific example embodiment of this disclosure; Figure 6 
illustrates schematic elevational views of an inductor that is formed as a spring coil of an inductive touch key for non-actuated and actuated conditions, according to yet another specific example embodiment of this disclosure; Figure 7 illustrates schematic elevational cutaway views of an inductive touch key using the inductor shown in Figure 6, according to specific example embodiments of this disclosure; Figure 8 illustrates a schematic frontal view of an inductive touch keypad showing an inductive sense coil that is typical for all keys of the keypad, according to specific example embodiments of this disclosure; and Figure 9 illustrates a schematic block diagram of an electronic system having an inductive touch keypad as shown in Figure 8, an inductive touch analog front end and a digital processor, according to specific example embodiments of this disclosure. While the present disclosure is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific example embodiments is not intended to limit the disclosure to the particular forms disclosed herein, but on the contrary, this disclosure is to cover all modifications and equivalents as defined by the appended claims. DETAILED DESCRIPTION An inductive touch sensor comprises an inductor disposed in or on a deformable substrate. When a force is applied to the deformable substrate the physical shape of the inductor changes and thereby changes its inductance value. The change in the inductance value can be detected and used to indicate actuation of an associated touch key of the inductive touch sensor. A plurality of inductive touch sensors may be used to form a touch panel. Referring now to the drawings, the details of an example embodiment are schematically illustrated. 
Like elements in the drawings will be represented by like numbers, and similar elements will be represented by like numbers with a different lower case letter suffix. Referring to Figure 1, depicted are schematic plan views of an inductor that is formed as a spiral coil of an inductive touch key for non-actuated and actuated conditions, according to a specific example embodiment of this disclosure. An inductor coil 102a is shown in a non-actuated state, and the inductor coil 102b is shown in an actuated state, as more fully described hereinafter. The inductor coil 102 is wound in a substantially flat spiral configuration in or on a deformable substrate 104 (see Figures 3 and 4). When a force, e.g., finger push, is disposed on the inductor coil 102 and the deformable substrate 104, the electrically conductive turns of the inductor coil 102 will become farther apart (separate) as shown in the (b) drawing of Figure 1. The deformable substrate 104 is shown deformed, e.g., stretched, in two dimensions (X and Y axes). A third dimension (Z-axis is shown in Figures 3 and 4) also stretches the substrate 104 to further increase the distance between the coil turns. By increasing the separation between the coil turns, the inductance value of the inductor coil 102 will decrease. This change in the inductance value can be measured and used to indicate activation of the inductor coil 102 by an external force, e.g., finger push. Electrical connections 522 and 524 are adapted to couple the inductor coil 102 to electronic measurement circuits (not shown) for determination of the inductance value thereof. It is contemplated and within the scope of this disclosure that slots may be cut in the deformable substrate 104 so as to facilitate separation of the coil turns. 
Referring to Figure 2, depicted are schematic plan views of an inductor that is formed as a spiral coil of an inductive touch key for non-actuated and actuated conditions, according to another specific example embodiment of this disclosure. The inductor coil 102a is shown in a non-actuated state, and the inductor coil 102c is shown in an actuated state, as more fully described hereinafter. The inductor coil 102 is wound in a substantially flat spiral configuration in or on a deformable substrate 204 (see Figures 3 and 4). When a force, e.g., finger push, is disposed on the inductor coil 102 and the deformable substrate 204, the electrically conductive turns of the inductor coil 102 will become farther apart (separate) as shown in the (b) drawing of Figure 2. The deformable substrate 204 is shown deformed, e.g., stretched, in one dimension (X axis). A third dimension (Z-axis is shown in Figures 3 and 4) also stretches the substrate 204 to further increase the distance between the coil turns. By increasing the separation between the coil turns, the inductance value of the inductor coil 102 will decrease. This change in the inductance value can be measured and used to indicate activation of the inductor coil 102 by an external force, e.g., finger push. It is contemplated and within the scope of this disclosure that slots may be cut in the deformable substrate 204 so as to facilitate separation of the coil turns. Referring to Figure 3, depicted are schematic elevational cutaway views of the inductor of the inductive touch key shown in Figures 1 or 2, according to specific example embodiments of this disclosure. The inductor coil 102 is shown embedded into the flexible substrate 104. When substrate 104 does not have a force on its face then there is no deformation thereof and the turns of the coil 102a are spaced at a distance d1 therebetween. The coil 102a configuration will have a first inductance value. 
When a force 306 is applied to a face of the substrate 104, deflection thereof occurs and the turns of the coil 102b become farther apart. As shown in (b), the turns of the coil 102b are spaced at a distance d2 therebetween, where d2 > d1. Now the coil 102b has a second inductance value that is less than the first inductance value. This is easily measured by electronic circuits. It is contemplated and within the scope of this disclosure that the coil 102 may be fabricated on a surface of the substrate 104 (coterminous), or the coil 102 may be fabricated without any substrate at all. The coil 102 may be self supporting and deformably springy so as to return to its un-actuated shape. So long as the shape of the coil 102 changes wherein the distance between the turns thereof change, so will the inductance value thereof change. As shown in Figure 3 the substrate 104 is normally flat when no force 306 is applied to its surface (face), and becomes concave when the force 306 is applied thereto. Referring to Figure 4, depicted are schematic elevational cutaway views of the inductor of the inductive touch key shown in Figures 1 or 2, according to specific example embodiments of this disclosure. The inductor coil 102 is shown embedded into a convex curved flexible substrate 204. When the convex curved substrate 204 does not have a force on its face then there is no deformation thereof, the face thereof remains convex, and the turns 310 of the coil 102a are spaced at a distance d3 therebetween. The coil 102a configuration will have a third inductance value. When a force 306 is applied to a convex face of the substrate 204, deflection thereof occurs and the turns 310 of the coil 102b become closer together. As shown in (b), the turns 310 of the coil 102b are spaced at a distance d4 therebetween, where d4 < d3. Now the coil 102b has a fourth inductance value that is greater than the third inductance value. This is easily measured by electronic circuits. 
It is contemplated and within the scope of this disclosure that the coil 102 may be fabricated on a surface of the convex substrate 204, or the coil 102 may be fabricated without any substrate at all. The coil 102 may be self supporting and deformably springy so as to return to its un-actuated shape. So long as the shape of the coil 102 changes wherein the distance between the turns thereof change, so will the inductance value thereof change. As shown in Figure 4 the substrate 204 is normally convex curved when no force 306 is applied to its surface (face), and may become substantially flat (e.g., less convex curved) when the force 306 is applied thereto. The configuration shown in Figure 4 is easily adapted to raised, tactile touch keys on a keypad (see Figure 8). Referring to Figure 5, depicted are schematic elevational cutaway views of an inductive touch key using the inductor and substrate shown in Figure 4, according to a specific example embodiment of this disclosure. An inductive touch key, generally represented by the numeral 500, comprises inductor coil 102 having turns 310 embedded in a convex curved flexible substrate 504 that is attached to ridged supports 518 and 520. These ridged supports 518 and 520 space the substrate 504 from a support substrate 512, e.g., printed circuit board (PCB), that may be common to a plurality of inductive touch keys 800 (see Figure 8). A deformable space 508 is disposed between the convex curved flexible substrate 504 and the support substrate 512. The deformable space 508 may be air or gas (empty), or it may be filled with a deformable material, e.g., foam, silicone gel, etc. Optionally, a magnetic material 510, e.g., ferrite, powdered iron, etc., having properties that influence the inductance value of the coil 102, may be located in the space 508. 
A conductive ground plane 514 may be disposed on a face of the support substrate 512 and connected to ground or a power source common 526 with, for example, a printed circuit board via 516. The purpose of this conductive ground plane 514 is to influence (increase) the inductance value of the coil 102 as the turns 310 of the coil 102 are moved closer to it, drawing (b) showing force 306 applied to a face of the convex curved flexible substrate 504. It is contemplated and within the scope of this disclosure that the changes in spacing between the turns 310 of the coil 102, the ground plane 514, and/or the magnetic material 510 influence the inductance value of the coil 102 as it changes position relative to the support substrate 512 due to the force 306 being applied to the face of the convex curved flexible substrate 504, compare drawing (a) to drawing (b) of Figure 5. Electrical connections 522 and 524 are used to couple the inductor coil 102 to electronic measurement circuits (see Figure 9) for determining the inductance value thereof. Referring to Figure 6, depicted are schematic elevational views of an inductor that is formed as a spring coil of an inductive touch key for non-actuated and actuated conditions, according to yet another specific example embodiment of this disclosure. An inductor coil 702a is shown in a non-actuated state, and the inductor coil 702b is shown in an actuated state, as more fully described hereinafter. The inductor coil 702 is wound in a deformable spring shape. When a force 306, e.g., finger push, is disposed on the inductor coil 702 the electrically conductive turns of the inductor coil 702 will become closer together as shown in the (b) drawing of Figure 6. By decreasing the separation between the coil turns, the inductance value of the inductor coil 702 will increase. This change in the inductance value can be measured and used to indicate activation of the inductor coil 702 by an external force, e.g., finger push. 
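As a hedged aside (not part of the disclosure), the long-solenoid approximation L = mu0*N^2*A/l makes the Figure 6 behavior concrete: compressing the spring-shaped coil shortens its length l and therefore raises its inductance. All dimensions below are invented example values.

```python
import math

# Illustrative sketch of why compressing a spring-shaped coil increases
# its inductance, using the long-solenoid approximation L = mu0*N^2*A/l.
# (The approximation is rough for a short keypad spring, but the trend
# it predicts matches the behavior described for Figure 6.)

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def solenoid_inductance(turns, radius_m, length_m):
    area = math.pi * radius_m ** 2
    return MU0 * turns ** 2 * area / length_m

released = solenoid_inductance(turns=20, radius_m=2e-3, length_m=5e-3)
pressed = solenoid_inductance(turns=20, radius_m=2e-3, length_m=3e-3)
assert pressed > released  # compression (shorter l) raises the inductance
```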
Referring to Figure 7, depicted are schematic elevational cutaway views of an inductive touch key using the inductor shown in Figure 6, according to specific example embodiments of this disclosure. An inductive touch key, generally represented by the numeral 700, comprises a spring shaped inductor coil 702, a convex curved flexible substrate 704 that is attached to ridged supports 718 and 720. These ridged supports 718 and 720 space the substrate 704 from a support substrate 712, e.g., printed circuit board (PCB), that may be common to a plurality of inductive touch keys 800 (see Figure 8). A deformable space 708 is disposed between the convex curved flexible substrate 704 and the support substrate 712. The deformable space 708 may be air or gas (empty), or it may be filled with a deformable material, e.g., foam, silicone gel, etc. A conductive ground plane 714 may be disposed on a face of the support substrate 712 and connected to ground or a power source common 724 with, for example, a printed circuit board via 716. The purpose of this conductive ground plane 714 is to connect one end of the coil 702 and the other end of the coil 702 is connected with a conductor 722 that may be a flexible conductor disposed on an inside surface of the convex curved flexible substrate 704. As a force 306 is applied to the face of the convex curved flexible substrate 704, the coil 702 is compressed, thereby increasing its inductance value, compare drawing (a) to drawing (b) of Figure 7. Electrical connections 722 and 724 are used to couple the inductor coil 702 to electronic measurement circuits (see Figure 9) for determining the inductance value thereof. Referring to Figure 8, depicted is a schematic frontal view of an inductive touch keypad showing an inductive sense coil that is typical for all keys of a keypad, according to specific example embodiments of this disclosure. 
A keypad, generally represented by the numeral 800, is configured as a matrix of inductive touch sensor keys 804 comprising a plurality of inductive touch sensors 802. In each one of the plurality of inductive touch sensors 802 is a coil 802 having an inductance value that changes when a force 306 is applied thereto, as more fully described hereinabove. Referring to Figure 9, depicted is a schematic block diagram of an electronic system having an inductive touch keypad as shown in Figure 8, an inductive touch analog front end and a digital processor, according to specific example embodiments of this disclosure. A digital processor 950, e.g., a microprocessor, microcomputer, digital signal processor (DSP), application specific integrated circuit (ASIC), programmable logic array (PLA), etc., is coupled to an inductive touch analog front end (AFE) 952 and a matrix of inductive touch sensor keys 800, e.g., pushbuttons, targets, etc. The digital processor 950 and AFE 952 may be part of a mixed signal (analog and digital circuits) integrated circuit device. The inductive touch AFE 952 facilitates, with a single low-cost integrated circuit device, all active functions used in determining when there is actuation of inductive sensors, e.g., by pressing and deflecting a target key that changes the inductance value of an associated inductive sensor. The inductive touch AFE 952 measures the inductance value of each sensor of the matrix of inductive touch sensor keys 800 and converts the inductance values into respective analog direct current (dc) voltages that are read and converted into digital values by the digital processor 950. It is contemplated and within the scope of this disclosure that standard analog components may be used to make a discrete analog front end (AFE), and that one having ordinary skill in electronic circuit design and the benefit of this disclosure could readily design such a discrete AFE. 
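A sketch of the scan-and-detect loop performed by the digital processor and AFE described above (the baselines, threshold and function names are assumptions for illustration, not the disclosed circuitry):

```python
# Illustrative sketch: the processor selects each key, reads the AFE's
# dc voltage for that key's coil, and flags actuation when the reading
# deviates from the key's un-actuated baseline by more than a threshold.

BASELINE = {"K1": 2.50, "K2": 2.50}  # volts, un-actuated readings (invented)
THRESHOLD = 0.20                     # volts of deviation taken as actuation

def scan(read_afe_voltage, keys=BASELINE):
    """Return the list of keys whose coil reading indicates actuation."""
    pressed = []
    for key, baseline in keys.items():
        v = read_afe_voltage(key)  # AFE converts inductance -> dc volts
        if abs(v - baseline) > THRESHOLD:
            pressed.append(key)
    return pressed

# Simulated AFE readings: key K2's coil inductance has changed under a
# force, shifting its converted voltage well past the threshold.
readings = {"K1": 2.52, "K2": 2.10}
assert scan(readings.get) == ["K2"]
```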
The digital processor 950 supplies clock and control functions to the inductive touch AFE 952, reads the analog voltage detector output of the inductive touch AFE 952, and selects each key of the matrix of inductive touch sensor keys 800. When actuation of a key of the matrix of inductive touch sensor keys 800 is determined, the digital processor 950 will take an appropriate action as programmed therein. While embodiments of this disclosure have been depicted, described, and are defined by reference to example embodiments of the disclosure, such references do not imply a limitation on the disclosure, and no such limitation is to be inferred. The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent art and having the benefit of this disclosure. The depicted and described embodiments of this disclosure are examples only, and are not exhaustive of the scope of the disclosure. |
A method of forming a nanowire is disclosed. A nanowire having a first dimension is deposited on a first dielectric layer that is formed on a substrate. A sacrificial gate stack having a sacrificial dielectric layer and a sacrificial gate electrode layer is deposited over a first region of the nanowire leaving exposed a second region and a third region of the nanowire. A first spacer is deposited on each side of the sacrificial gate stack. A second dielectric layer is deposited over the first dielectric layer to cover the second region and third region. The sacrificial gate stack is removed. The first region of the nanowire is thinned by at least one thermal oxidation process and oxide removal process to thin said first region from said first dimension to a second dimension. |
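The iterative oxidation-and-strip thinning summarized above can be sketched numerically. This is an illustrative model with invented parameters; the ~0.45 silicon-consumed-per-oxide-grown ratio is the textbook value for thermal oxidation of silicon, not a figure from this disclosure.

```python
# Illustrative sketch: each cycle grows a thermal oxide (consuming
# silicon on both exposed sides of the wire) and then strips the oxide,
# e.g. with a buffered oxide etch, shrinking the wire's dimension until
# the target is reached.

SI_CONSUMED_PER_NM_OXIDE = 0.45  # nm of Si consumed per nm of oxide grown

def thin_nanowire(diameter_nm, target_nm, oxide_per_cycle_nm=4.0):
    """Return (final diameter, cycle count) for the oxidize/strip loop."""
    cycles = 0
    while diameter_nm > target_nm:
        # silicon consumed on both sides of the wire in one oxidation
        diameter_nm -= 2 * SI_CONSUMED_PER_NM_OXIDE * oxide_per_cycle_nm
        cycles += 1  # oxide is then removed and the cycle repeats
    return max(diameter_nm, 0.0), cycles

final, n = thin_nanowire(diameter_nm=40.0, target_nm=4.0)
assert final <= 4.0 and n > 1  # a ten-fold reduction takes several cycles
```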
CLAIMS We claim: 1. A method of reducing a dimension of a nanowire comprising: depositing a nanowire on a first dielectric layer formed on a substrate, said nanowire having a first dimension; depositing a sacrificial gate stack having a sacrificial dielectric layer and a sacrificial gate electrode layer over a first region of said nanowire leaving exposed a second region and a third region of said nanowire; depositing a first spacer on each side of said sacrificial gate stack; depositing a second dielectric layer over said first dielectric layer to cover said second region and third region; removing said sacrificial gate stack; and thinning said first region of said nanowire by at least one thermal oxidation process and oxide removal process to thin said first region from said first dimension to a second dimension. 2. The method of claim 1 wherein said depositing said second dielectric layer is a blanket deposition wherein said second dielectric layer is further polished to expose said sacrificial gate electrode. 3. The method of claim 1 further comprising: depositing a second spacer on each side of said first spacer prior to said depositing said second dielectric layer. 4. The method of claim 1 further comprising: forming an epitaxial film over said second region and third region of said nanowire prior to said depositing said second dielectric layer. 5. The method of claim 1 wherein said thinning said first region further comprises: sequentially growing oxide layers on said first region by said thermal oxidation and etching away said oxide layers using a buffered oxide etchant until said second dimension reaches a desired value. 6. The method of claim 1 wherein said second dimension is at least ten-fold smaller than said first dimension. 7. The method of claim 1 further comprising: forming a silicide layer over each of said second region and third region of said nanowire prior to depositing said dielectric layer. 8. 
The method of claim 1 further comprising: implanting dopants into each of said second region and third region of said nanowire to form source/drain regions prior to said depositing said second dielectric layer.

9. A method of fabricating a nanowire comprising: depositing a nanowire on a first dielectric layer formed on a substrate, said nanowire having a first dimension; depositing a sacrificial dielectric layer over a first region of said nanowire and an etchable sacrificial layer over said sacrificial dielectric layer, leaving exposed a second region and a third region of said nanowire, said first region defining a channel region for said nanowire; depositing a first spacer on each side of said sacrificial dielectric layer and said etchable sacrificial layer; depositing a second dielectric layer over said first dielectric layer to cover said second region and third region; etching away said etchable sacrificial layer and said sacrificial dielectric layer; and thinning said first region of said nanowire by at least one thermal oxidation process and oxide removal process to thin said first region from said first dimension to a second dimension.

10. The method of claim 9 further comprising: depositing a second spacer on each side of said first spacer prior to said depositing said second dielectric layer.

11. The method of claim 9 further comprising: forming an epitaxial film over said second region and third region of said nanowire prior to said depositing said second dielectric layer.

12. The method of claim 9 wherein said thinning said first region further comprises: sequentially growing oxide layers on said first region by said thermal oxidation and etching away said oxide layers using a buffered oxide etchant.

13. The method of claim 9 wherein said second dimension is at least ten times smaller than said first dimension.

14.
The method of claim 9 further comprising: forming a silicide layer over each of said second region and third region of said nanowire prior to said depositing said second dielectric layer.

15. The method of claim 9 further comprising: implanting dopants into each of said second region and third region of said nanowire to form source/drain regions prior to said depositing said second dielectric layer.

16. A method of fabricating an electronic device comprising: depositing a nanowire on a first dielectric layer formed on a substrate, said nanowire having a first dimension; depositing a sacrificial dielectric layer over a first region of said nanowire and an etchable sacrificial layer over said sacrificial dielectric layer, leaving exposed a second region and a third region of said nanowire, said first region defining a channel region for said electronic device; depositing a first spacer on each side of said sacrificial dielectric layer and said etchable sacrificial layer; forming a source/drain region in each of said second region and said third region; depositing a second dielectric layer over said first dielectric layer to cover said second region and third region; etching away said etchable sacrificial layer and said sacrificial dielectric layer; thinning said first region of said nanowire by at least one thermal oxidation process and oxide removal process to thin said first region from said first dimension to a second dimension; and depositing a device gate stack over said first region, said device gate stack including a third dielectric layer and a gate electrode.

17. The method of claim 16 further comprising: forming a contact to said source/drain region.

18. The method of claim 16 further comprising: depositing a second spacer on each side of said first spacer prior to said depositing said second dielectric layer.

19.
The method of claim 16 further comprising: forming an epitaxial film over said second region and third region of said nanowire prior to said depositing said second dielectric layer.

20. The method of claim 16 wherein said forming said source/drain region further comprises: forming an epitaxial film over each of said second region and third region of said nanowire; implanting a dopant into said second region and said third region; and forming a silicide layer over said epitaxial film.

21. The method of claim 16 further comprising: forming a silicide layer over each of said second region and third region of said nanowire prior to said depositing said second dielectric layer.

22. The method of claim 16 further comprising: implanting dopants into each of said second region and third region of said nanowire to form source/drain regions prior to said depositing said second dielectric layer.

23. The method of claim 16 wherein said thinning said first region further comprises: sequentially growing oxide layers on said first region by said thermal oxidation and etching away said oxide layers using a buffered oxide etchant.

24. The method of claim 16 wherein said second dimension is at least ten times smaller than said first dimension.

25. The method of claim 16 wherein said etchable sacrificial layer comprises silicon or polysilicon.

26. An electronic device comprising: a nanowire formed on a first dielectric layer formed on a substrate, said nanowire having a channel region, a first source/drain region, and a second source/drain region, said channel region being substantially smaller than each of said first source/drain region and said second source/drain region; a device gate stack formed over said channel region; a first spacer formed on each side of said device gate stack; and a second dielectric layer formed over said first dielectric layer, said first source/drain region, and said second source/drain region.

27.
An electronic device as in claim 26 further comprising: a second spacer formed on each side of said first spacer.

28. An electronic device as in claim 26 further comprising: an epitaxial layer formed over each of said first source/drain region and said second source/drain region to increase dimensions of said first source/drain region and said second source/drain region.

29. An electronic device as in claim 26 wherein said second dielectric layer further comprises: contact vias to allow contact to each of said first source/drain region and said second source/drain region.
METHOD OF FABRICATING AN ULTRA-NARROW CHANNEL SEMICONDUCTOR DEVICE

BACKGROUND

Field

[0001] A method of fabricating an ultra-small nanowire and a semiconductor device having an ultra-narrow channel formed in the nanowire.

Description Of The Related Art

[0002] Advances in semiconductor devices and the ongoing quest for miniaturization of the semiconductor devices lead to a demand for better fabrication processes for nanoscale structures. Semiconductor devices are being made on nanoscale structures because smaller devices typically have faster switching times, which leads to better performance. Devices based upon nanoscale structures having ultra-small dimensions are thus a natural progression of semiconductor device scaling. For example, devices have been made on semiconductor nanoscale structures generally known as "nanowires." A nanowire is a semiconductor (e.g., silicon) structure having dimensions on the order of nanometers. Current methods of fabricating nanowires include photolithography and vapor liquid solid epitaxy deposition.

[0003] In photolithography, a thin layer of semiconductor material (e.g., silicon) is deposited on a substrate and then patterned to form nanowires on the substrate. In vapor liquid solid epitaxy deposition, metal colloids (e.g., gold or nickel) in nano-dimensions are exposed to a silicon source gas (e.g., silane) under high temperature. Silicon is then decomposed and grown on the colloids, forming silicon nanowires. The silicon nanowires are removed from the colloids and are deposited on a substrate. Under both methods, the dimensions of the nanowires are difficult to control, especially for dimensions less than 5nm. In addition, in devices made on nanowires, the device channels are extremely narrow.
[0004] Extremely narrow channels (<10nm) can exhibit 1-D device transport, which promises higher mobility and possibly ballistic transport to improve device performance. However, methods of making these ultra-small channels in a controllable way are not yet compatible with high-volume manufacturing processes.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] The disclosure is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements. The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings:

[0006] Figure 1 illustrates a nanowire formed on a substrate;

[0007] Figure 2 illustrates a sacrificial gate stack formed over the nanowire of Figure 1;

[0008] Figure 3 illustrates a sacrificial gate stack and two spacers formed adjacent to the sacrificial gate stack that is formed over the nanowire;

[0009] Figure 4 illustrates a sacrificial gate stack, at least one spacer adjacent each side of the sacrificial gate stack, and a dielectric layer formed over the nanowire;

[0010] Figure 5 illustrates the sacrificial gate stack of Figure 4 removed to expose a section of the nanowire;

[0011] Figure 6 illustrates thinning of the exposed section of the nanowire of Figure 5 down to a desired dimension;

[0012] Figure 7 illustrates a device gate stack formed over the thinned nanowire of Figure 6 to form a semiconductor device having an ultra-narrow channel region;

[0013] Figure 8 illustrates the semiconductor device of Figure 7 with the dielectric layer removed for clarity;

[0014] Figure 9 illustrates the semiconductor device of Figure 7 with the dielectric layer and the device gate stack removed for clarity;

[0015] Figure 10 illustrates the semiconductor device of Figure 7 with the dielectric layer and the device gate stack removed,
and only one spacer is shown for clarity;

[0016] Figure 11 illustrates the semiconductor device of Figure 7 with everything removed except for the nanowire having sections of different cross-sectional dimensions; and

[0017] Figure 12 shows that thermal oxidation of a nanoscale semiconductor structure is self-limiting.

DETAILED DESCRIPTION

[0018] Exemplary embodiments are described with reference to specific configurations and techniques. Those of ordinary skill in the art will appreciate the various changes and modifications that can be made while remaining within the scope of the appended claims. Additionally, well known elements, devices, components, circuits, process steps and the like are not set forth in detail.

[0019] As discussed above, nanoscale structures such as nanowires are extremely difficult to make with reliable and controllable dimensions. Current methods used to make nanowires include dimensional control of initial growth from nanometer-sized nucleation sites, or lithographic and patterning methods to print small structures whose dimensions are then reduced by over-etching techniques. These approaches can be difficult in practice, especially when trying to control the dimensions of billions of small regions across an entire 300mm wafer.

[0020] Exemplary embodiments of the present invention describe methods of making nanowires that allow for easy control of the dimensions of the nanowires. More particularly, the embodiments disclose methods of making nanowires that have at least one region (e.g., the middle region) that is extremely small or ultra-narrow (e.g., having dimensions of less than about 5nm). Further, as will be apparent from the discussion below, the embodiments demonstrate a reliable and controllable way to fabricate an ultra-small nanowire (e.g.
, having dimensions of less than about 5nm) and/or to fabricate a nanowire that has an ultra-small or ultra-narrow channel region useful for making other semiconductor devices.

[0021] In one embodiment, a method of reducing a dimension of a nanowire is disclosed. A nanowire is deposited on a first dielectric layer that is formed on a substrate. The nanowire has a first dimension. The nanowire provides a first region, a second region, and a third region. A sacrificial gate stack having a sacrificial dielectric layer and a sacrificial gate electrode layer is deposited over the first region of the nanowire, leaving exposed the second region and the third region of the nanowire. A first spacer is deposited adjacent each side of the sacrificial gate stack. A second dielectric layer is deposited over the first dielectric layer to cover the second region and third region. The sacrificial gate electrode and the sacrificial dielectric layer are removed after the first spacer is deposited. Removing the sacrificial gate electrode and the sacrificial dielectric layer exposes the first region of the nanowire. The first region of the nanowire is thinned by at least one thermal oxidation and oxide removal process. After thinning, the first region has a second dimension that is smaller than the first dimension. Thinning the first region of the nanowire provides the first region with a cross-sectional dimension that is substantially smaller (e.g., ten times or at least two times smaller) than that of the second region and the third region. The first region can be the middle region of the nanowire, and the second and third regions can be the side regions of the nanowire.

[0022] In another embodiment, a method of fabricating a nanowire is disclosed. A nanowire is deposited on a first dielectric layer that is formed on a substrate. The nanowire has a first dimension.
A sacrificial dielectric layer is deposited over a first region of the nanowire, and an etchable sacrificial layer is deposited over the sacrificial dielectric layer, leaving exposed a second region and a third region of the nanowire. A first spacer is deposited adjacent each side of the sacrificial dielectric layer and the etchable sacrificial layer. A second dielectric layer is deposited over the first dielectric layer to cover the second region and third region. The etchable sacrificial layer and the sacrificial dielectric layer are etched away. After the sacrificial dielectric layer and the etchable sacrificial layer are removed, the first region of the nanowire is exposed. The first region of the nanowire is thinned by at least one thermal oxidation and oxide removal process. After thinning, the first region has a second dimension that is smaller than the first dimension. In addition, thinning the first region of the nanowire provides the first region with a cross-sectional dimension that is substantially smaller (e.g., ten times or at least two times smaller) than that of the second region and third region of the nanowire.

[0023] In another embodiment, a method of fabricating a semiconductor device in a nanowire is disclosed. A nanowire is deposited on a first dielectric layer that is formed on a substrate. The nanowire has a first dimension. A sacrificial dielectric layer is deposited over a first region of the nanowire, and an etchable sacrificial layer is deposited over the sacrificial dielectric layer, leaving exposed a second region and a third region of the nanowire. The first region defines a channel region for the semiconductor device. The second and third regions define source/drain regions for the semiconductor device. A first spacer is deposited adjacent each side of the sacrificial dielectric layer and the etchable sacrificial layer.
A second dielectric layer is deposited over the first dielectric layer to cover the second region and third region. The etchable sacrificial layer and the sacrificial dielectric layer are etched away. Etching away the etchable sacrificial layer and the sacrificial dielectric layer exposes the first region of the nanowire. The first region of the nanowire is thinned by at least one thermal oxidation and oxide removal process to provide the first region with a second dimension that is smaller or substantially smaller (e.g., ten times or at least two times smaller) than the first dimension. A device gate stack comprising a third dielectric layer and a gate electrode is deposited over the first region. The semiconductor device formed in the nanowire thus has a channel region that is smaller or substantially smaller than the source/drain regions of the device. The following sections describe exemplary methods of making the nanowires and the semiconductor devices as mentioned above.

[0024] In Figure 1, a substrate 102 is provided. In one embodiment, the substrate 102 is made of a semiconductor material such as silicon. The substrate 102 can be a monocrystalline silicon, a polycrystalline silicon, an amorphous silicon, or a silicon alloy. In some embodiments, the substrate 102 is a silicon on insulator (SOI) substrate. The substrate 102 can also be any suitable semiconductor substrate typically used for fabricating semiconductor devices as is known in the art.

[0025] As shown in Figure 1, the substrate 102 is insulated with a thin dielectric layer 104, which may be comprised of an insulating material such as silicon dioxide (SiO2), silicon nitride (Si3N4), or other suitable semiconductor insulating material. The dielectric layer 104 can be formed on the substrate 102 using conventional methods such as chemical vapor deposition (CVD) or physical deposition.
The dielectric layer 104 functions to isolate one nanowire from another and/or to isolate one device formed in the nanowire from another.

[0026] As shown in Figure 1, at least one nanowire 106 is formed on the dielectric layer 104. For the purpose of the disclosure, a nanowire refers to a semiconductor strip (e.g., a silicon strip) that has a thickness ranging from a few nanometers (e.g., 10nm) to a few hundred nanometers (e.g., 100-200nm). A nanowire can also refer to a semiconductor strip that has cross-sectional dimensions (e.g., height and width) on the order of nanometers. The nanowire 106 can be grown, deposited, or patterned on the dielectric layer 104. In one embodiment, the nanowire 106 is formed using a conventional method that can reliably deposit a silicon strip on the order of 10-100nm thick. In one embodiment, the nanowire 106 is deposited using a process called Vapor Liquid Solid Epitaxy (VLSE). In the VLSE process, metal colloids (e.g., gold or nickel) are exposed to a silicon source gas (e.g., SiH4) and high temperature. The silicon source gas is dissolved into the colloidal particles, and silicon sections are grown on the colloids. The silicon sections are then removed and deposited on the dielectric layer 104. VLSE is known in the art. In another embodiment, the nanowire 106 is deposited using conventional lithography and etching processes in which a thin silicon film is deposited on the dielectric layer 104, using methods such as CVD or plasma enhanced CVD, and patterned (e.g., etched) to form the individual nanowire 106. It is to be noted that other methods can be used to form the nanowire 106 on the dielectric layer 104 as is known in the art.

[0027] In one embodiment, the nanowire 106 has first cross-sectional dimensions that are on the order of nanoscale. The nanowire 106 has a first length 130, which can be about 100nm to about a few microns depending on the application.
The nanowire 106 has a first height 132 and a first width 134. The first height 132 and the first width 134 define the first cross-sectional dimension or the first thickness of the nanowire 106. For a reliable performance of the semiconductor device that will be formed in the nanowire 106, the first width 134 and the first height 132 need to be reliably controlled. In one embodiment, the nanowire 106 has a first height 132 of about 10-100nm and a first width 134 of about 10-100nm. The first height 132, first width 134, and first length 130 can be varied depending on the methods used to form the nanowire 106 on the dielectric layer 104. A method that can reliably and controllably form the nanowire 106 on the order of about 10-100nm is used to form the nanowire 106 on the dielectric layer 104.

[0028] As will be apparent from below, a semiconductor device such as a transistor is formed in the nanowire 106. For a superior semiconductor device, the nanowire 106 needs to be as thin as possible. More specifically, the channel region for the transistor should be as thin as possible. The cross-sectional dimension of the nanowire 106, or optimally, the cross-sectional dimension of the device channel region, needs to be as thin as possible. In addition, the cross-sectional dimension of the nanowire 106 needs to be reliably controlled for an efficient and reliable performance of the device. The following sections describe a novel process of reliably fabricating an ultra-small or ultra-narrow nanowire 106. First, a conventional method is used to deposit the nanowire 106 on the dielectric layer 104 as previously discussed. Then, at least one region of the nanowire 106 is thinned, at least at the region of the nanowire 106 that will form the channel region for the device. The following sections also describe a novel process of reliably fabricating an ultra-small semiconductor device from the nanowire 106.
Even though the discussion focuses on fabricating a nanowire 106 for a transistor, it is to be appreciated that other semiconductor devices can be formed in the nanowire 106 without deviating from the scope of the embodiments.

[0029] In Figure 2, a sacrificial gate stack 108 is formed (via a planar deposition process) over a first region of the nanowire 106. In one embodiment, the first region is the middle region of the nanowire 106. In one embodiment, the sacrificial gate stack 108 forms a sacrificial tri-gate structure covering all three exposed sides of the middle region of the nanowire 106. In another embodiment, the sacrificial gate stack 108 is a non-planar structure because it is formed to wrap around all exposed sides of the middle region of the nanowire 106. After the sacrificial gate stack 108 is formed over the middle region, the remaining regions of the nanowire 106 are the second region 114 and the third region 116. The regions 114 and 116 are left exposed at this point. In one embodiment, the first region will form the device channel region, and the second region 114 and the third region 116 will form the source and drain regions for a semiconductor device formed in the nanowire 106.

[0030] Continuing with Figure 2, the sacrificial gate stack 108 includes a sacrificial gate electrode 119 on a sacrificial dielectric layer 121. In one embodiment, the sacrificial gate stack 108 is a conventional gate stack as known in the art. In one embodiment, the sacrificial gate electrode 119 is a polysilicon film and the sacrificial dielectric layer 121 is a silicon oxide film. The sacrificial dielectric layer 121 and the sacrificial gate electrode 119 are deposited over the middle region of the nanowire 106 using any semiconductor deposition method known in the art, such as CVD. In another embodiment, the sacrificial gate electrode 119 is replaced with an etchable sacrificial layer that can be easily and selectively etched off.
The sacrificial gate electrode 119 thus need not be polysilicon and/or need not be conductive. The sacrificial gate electrode 119 only needs to be removable and/or etchable.

[0031] Continuing with Figure 2, a first spacer 110 is formed adjacent each side of the sacrificial gate stack 108. The spacer 110 is similar to a conventional spacer wall found in a semiconductor transistor. In one embodiment, the spacer 110 comprises silicon nitride or any other material suitable for a spacer wall of a transistor. The spacer 110 can be formed using methods known in the art, such as CVD followed by patterning, to form the spacer 110 adjacent each side of the sacrificial gate stack 108.

[0032] In one embodiment, a semiconductor epitaxial film (e.g., a silicon or germanium epitaxial film) is further formed over the second region 114 and the third region 116 of the nanowire 106. Since the second region 114 and the third region 116 will form the source/drain regions of a semiconductor device, it is optimal to make these regions as large as possible for better contact landings to the source/drain regions. For nanoscale semiconductor devices, electrical contacts to the source/drain regions are often difficult to control due to the small surface areas of the nanowire. Forming an epitaxial film of a suitable thickness over the regions 114 and 116 allows the source/drain regions to be made larger than permitted by the dimensions of the nanowire 106. Electrical contacts to the source/drain regions thus can be obtained more easily. In addition, the epitaxial film may be used to decrease the series resistance of the source/drain regions formed in the second region 114 and third region 116. Better contact landings and lower series resistance for the source/drain regions lead to better device performance. The epitaxial film may be of any suitable thickness that will give the second region 114 and third region 116 sufficient contact areas.
In one embodiment, a semiconductor epitaxial film is deposited such that each of the second region 114 and third region 116 has a cross-sectional dimension that is about 3 times the first cross-sectional dimension of the nanowire 106. The epitaxial film is not shown in Figure 2. The epitaxial film can be formed over the second region 114 and the third region 116 using methods known in the art.

[0033] In one embodiment, the second region 114 and the third region 116 are implanted using conventional methods, such as ion implantation, to form the source/drain regions for a semiconductor device. A silicide layer (not shown) can be formed over each of the second region 114 and the third region 116 after the implantation to facilitate contacts to the source/drain regions. The silicide layer provides a low contact resistance to the source/drain regions formed in the second region 114 and the third region 116. The silicide layer can be formed of a metal such as cobalt or nickel. The silicide layer can be formed using conventional methods that deposit the metal over the second region 114 and the third region 116. After the metal is deposited, heat is applied to these regions to allow the silicon in these regions to react with the metal to form the silicide.

[0034] As illustrated in Figure 3, in one embodiment, a second spacer 112 is formed adjacent each side of the first spacer 110. The second spacer 112 is similar to the first spacer 110 and can be made of nitride, similar materials to those used to form the first spacers 110, or other suitable materials known in the art. The second spacer 112 is beneficial in that it adds stress to the device to improve device performance. Additionally, when there are two spacers 110 and 112, patterning to complete the device is easier.

[0035] In Figure 4, a dielectric layer 118 is formed over the dielectric layer 104, covering the second region 114 and the third region 116.
The dielectric layer 118 is similar to a conventional interlayer dielectric layer. In one embodiment, the dielectric layer 118 is similar to the dielectric layer 104 and may be made of an insulating material such as silicon dioxide (SiO2), silicon nitride (Si3N4), or other suitable insulating material. The dielectric layer 118 can be formed using conventional methods such as CVD. In one embodiment, the dielectric layer 118 is blanket deposited over everything, including the sacrificial gate stack 108. The dielectric layer 118 is then polished back to expose the top surface of the sacrificial gate electrode 119 of the sacrificial gate stack 108.

[0036] In Figure 5, the sacrificial gate stack 108 is removed. First, the sacrificial gate electrode 119 of the sacrificial gate stack 108 is removed. To remove the sacrificial gate electrode 119, an etching process that selectively etches away the sacrificial gate electrode 119 is used. In the embodiment where the sacrificial gate electrode 119 is made of polysilicon, a conventional etching process typically used to remove polysilicon can be used to remove the sacrificial gate electrode 119. In one embodiment, a tetramethylammonium hydroxide (TMAH) or potassium hydroxide (KOH) etching solution is used to remove the sacrificial gate electrode 119. These etching solutions etch away the polysilicon and are selective to silicon dioxide (SiO2) and silicon nitride (Si3N4). Second, the sacrificial dielectric layer 121 is removed. In the embodiment where the sacrificial dielectric layer 121 is made of SiO2, an etching process that selectively removes SiO2 is used to remove the sacrificial gate dielectric layer 121. For instance, a buffered etchant solution containing hydrofluoric acid and water can be used to remove the sacrificial dielectric layer 121.
The etching process is controlled so that only the sacrificial dielectric layer 121 is removed, leaving intact the first spacers 110, the second spacers 112, and the dielectric layer 104. In one embodiment, the dielectric layer 104, the first spacers 110, and the second spacers 112 can be made of different materials (e.g., SiO2 for the dielectric layer 104 and SiON or Si3N4 for the spacers 110 and 112) to ensure that only the sacrificial dielectric layer 121 is removed.

[0037] In Figure 6, after the sacrificial gate stack 108 is removed, the middle region of the nanowire 106 is now exposed. In Figure 6, the middle region is labeled as region 120. In one embodiment, the middle region 120 of the nanowire 106 is thinned to provide an ultra-narrow (e.g., having a dimension of less than 5nm) channel for the device. In another embodiment, the middle region 120 is thinned to provide the nanowire 106 with at least one region that is ultra-small (e.g., having a dimension of less than 5nm). As mentioned, the nanowire 106 is formed with a first cross-sectional dimension having a first height 132 of about 10-100nm and a first width 134 of about 10-100nm. The first cross-sectional dimension may also be referred to as the initial thickness of the nanowire 106. Before thinning, the middle region 120 has the same initial thickness or cross-sectional dimension as the rest of the nanowire 106 (e.g., about 10-100nm). After thinning, the middle region 120 will have a second cross-sectional dimension that is smaller or substantially smaller than the first cross-sectional dimension. In one embodiment, the second cross-sectional dimension is less than about 5nm, or less than about 2-3nm.

[0038] In one embodiment, at least one thermal oxidation process and at least one etching process are used to thin the middle region 120.
The initial thickness (the first cross-sectional dimension) of the nanowire 106 can be thinned or reduced to a second thickness by controlled thermal oxidation and etching processes. In one embodiment, an oxide layer is controllably and thermally grown on the exposed surfaces of the middle region 120. The silicon on the exposed surfaces of the middle region 120 is consumed during the thermal oxidation process. In one embodiment, the amount of the silicon consumed is about 44% of the total thickness of the middle region 120 of the nanowire 106. For example, the nanowire 106 may have an initial thickness of the middle region 120 of about 10nm. The thermal oxidation process would consume 4.4nm of the silicon (44% of the silicon). After the thermal oxidation process, the thickness of the middle region 120 is about 5.6nm. In one embodiment, in the thermal oxidation process, 0.44nm of silicon is consumed to produce 1nm of SiO2. Thus, when a 10nm thick nanowire 106 is oxidized, 4.4nm of silicon is consumed and 10nm of SiO2 is produced. After the SiO2 is removed, the nanowire 106 has a thickness of about 5.6nm. The middle region 120 can be successively and repeatedly thermally oxidized and etched to achieve a desired thickness or cross-sectional dimension (e.g., about or less than 5nm). For example, the nanowire 106 may have an initial thickness of the middle region 120 of about 100nm. Several successive thermal oxidation and etching processes may be necessary to thin the middle region 120 down to about or less than 5nm.

[0039] In another embodiment, a more aggressive thermal oxidation process can be used. The middle region 120 may be thermally oxidized at a temperature of about 800-900°C for about 2 hours, followed by a wet etch using a buffered oxide etchant such as hydrofluoric acid or an equivalent. In an embodiment where the nanowire 106 has a first cross-sectional dimension of about 50nm (e.g.
, a height 132 of about 100 nm and a width 134 of about 50 nm), after the thermal oxidation at about 800-900°C for about 2 hours followed by a wet etching using a buffered oxide etchant, the middle region 120 can be thinned down to a second cross-sectional dimension of about 5 nm (e.g., a height 132 of about 5 nm and a width 134 of about 5 nm). Similar thermal oxidation and etching can be performed to further thin the nanowire 106 down to a cross-sectional dimension of about 2-3 nm. A suitable dry etching process known in the art (e.g., reactive ion etching or plasma etching) can be used instead of the wet etching process to remove the oxide layer formed on the middle region 120 of the nanowire 106 following the thermal oxidation process. Optimally, a wet etching process is used for better selectivity. [0040] It is to be noted that self-limiting oxidation has been observed when small-dimension silicon regions are thermally oxidized. This is illustrated in Figure 12, which is a figure extracted from Fukuda, et al., "Fabrication of silicon nanopillars containing polycrystalline silicon/insulator multilayer structures," Appl. Phys. Lett. 70 (3), 333 (1997). In Fukuda, studies have indicated that thermal oxidation of a nanoscale silicon structure is self-limiting. Self-limiting oxidation is a stress effect. When the nanoscale silicon structure is thermally oxidized, irrespective of the process variations (e.g., time and temperature variation), the silicon structure is oxidized to a self-limited thickness. The oxidized portion of the silicon structure is removed and the remaining silicon structure can be similarly oxidized again to another self-limited thickness. This process can be repeated as necessary to achieve the desired thickness. As illustrated in Figure 12, Fukuda oxidized the silicon structure for various durations of time, from about 3 to about 20 hours.
The silicon structure is oxidized and the oxidized layer is removed to leave the silicon structure with a core thickness of about 10-15 nm irrespective of the oxidation time. [0041] Thus, for a particular nanowire 106, any region of the nanowire 106 can be thermally oxidized, relying on the self-limiting oxidation for some control of the thickness to be oxidized. The oxidized portion can be removed. The thermal oxidation and the removal processes can be repeated to oxidize the nanowire 106 to another self-limiting thickness until the desired thickness is achieved. In one embodiment, the thermal oxidation and removal processes are repeated until the nanowire 106 is thinned to about or less than 5 nm. The thinning of a region of the nanowire 106 can be easily controlled because the oxidation thickness for each oxidation process will be less sensitive to process variations such as time and temperature. [0042] In Figure 7, a device gate stack 122 is formed over the thinned middle region 120 using conventional methods. In one embodiment, the middle region 120 forms a narrow channel region for the device. The device gate stack 122 comprises a dielectric layer 123 and a gate electrode 125 formed over the dielectric layer 123. In one embodiment, the device gate stack 122 is a conventional gate stack as known in the art. In that embodiment, the gate electrode 125 is a polysilicon film formed on the dielectric layer 123, which can be a silicon oxide film. In another embodiment, the gate electrode 125 is a damascene gate that can be made of a semiconductor material such as silicon, polysilicon, silicon germanium, germanium, or a metal such as copper, aluminum, and titanium. In another embodiment, the gate electrode 125 is made of metal. Having the gate electrode 125 made of metal avoids the need to treat the gate electrode 125 so that it is conductive, as is needed when the gate electrode 125 is made of a semiconductor material such as polysilicon.
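The repeated oxidize-then-etch sequence above can be sketched as a loop. The fixed per-cycle consumption value is purely illustrative (in practice each cycle self-limits, as Fukuda reports), and all names are hypothetical.

```python
def thin_nanowire(thickness_nm, target_nm, consumed_per_cycle_nm=4.4):
    """Repeat thermal-oxidation/etch cycles until at or below target.

    A fixed per-cycle silicon consumption stands in for the
    self-limited oxidation thickness of each real cycle; returns
    the final thickness and the number of cycles required.
    """
    cycles = 0
    while thickness_nm > target_nm:
        thickness_nm -= consumed_per_cycle_nm  # grow oxide, then strip it
        cycles += 1
    return thickness_nm, cycles

# Thinning a 100 nm region to about or below 5 nm takes many cycles,
# consistent with the text's note that several successive
# oxidation/etch steps may be necessary:
final_nm, n = thin_nanowire(100.0, 5.0)
```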
Additionally, for smaller devices, a metal gate electrode is more beneficial since it allows for lower resistance than would a semiconductor (e.g., polysilicon gate) electrode. In one embodiment, the device gate stack 122 forms a tri-gate structure since it covers three sides of the middle region 120. In another embodiment, the device gate stack 122 is a non-planar structure since it covers all exposed sides of the middle region 120. An example of a semiconductor device formed according to the methods discussed above is illustrated in Figures 8-11. These figures show the device with various layers or structures removed for clarity purposes. The device includes a substrate 102, a first dielectric layer 104, and a nanowire 106. The nanowire 106 includes a middle region 120 that forms a channel region of the device and regions 114 and 116 that form source/drain regions of the device. After the thinning processes as previously described, the channel region of the device is smaller or substantially smaller than each of the source/drain regions. For instance, the channel region may be at least 10-20 times smaller than each of the source/drain regions. Alternatively, the channel region may be only two times smaller than each of the source/drain regions. In one embodiment, only the channel region of the device is thinned down from the original cross-sectional dimension using the methods previously described. Thus, the channel region of the device is an ultra-narrow channel region. The source/drain regions of the device can have the same cross-sectional dimension as the original cross-sectional dimension of the nanowire. More optimally, each of the source/drain regions has an epitaxial film formed thereover as previously discussed. Thus, each of the source/drain regions has a cross-sectional dimension that is larger than the original cross-sectional dimension of the nanowire.
[0044] The device further includes a device gate stack 122 formed over the channel region of the nanowire 106. The device also includes a first spacer 110 formed adjacent to each side of the device gate stack 122. Alternatively, the device may include a second spacer 112 formed adjacent to each side of the first spacers 110 as previously described. The device may also include a second dielectric layer 118 formed over the source/drain regions (regions 114 and 116) and the first dielectric layer 104. Contact vias (not shown) may be created into the second dielectric layer 118 using methods known in the art to allow for electrical contacts to the source/drain regions. [0045] Figure 8 shows the device with the second dielectric layer 118 removed to show only the device gate stack 122 formed over the middle region 120 of the nanowire 106, and the first spacers 110 and the second spacers 112 formed on each side of the device gate stack 122. Figure 9 shows the device with the device gate stack removed to show that the middle region has a smaller cross-sectional dimension than the regions 114 and 116. Figure 10 shows the device with the second spacers 112 removed to show only the first spacers 110. Figure 11 shows the device with only the nanowire 106 remaining on the first dielectric layer 104. This figure shows that the regions 114 and 116 of the nanowire 106 are substantially larger than the middle region 120. [0046] While the invention has been described in terms of several embodiments, those of ordinary skill in the art will recognize that the invention is not limited to the embodiments described. The method and apparatus of the invention can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
[0047] Having disclosed exemplary embodiments, modifications and variations may be made to the disclosed embodiments while remaining within the spirit and scope of the invention as defined by the appended claims. |
A method of preventing Hot Carrier Injection in input/output connections on low voltage integrated circuits. As integrated circuit voltages drop, generally so do the external voltages that those circuits can tolerate. By placing input/output devices in series, external voltages can be divided between the devices, thereby reducing junction voltages seen by internal devices. A circuit for preventing Hot Carrier Injection in these input/output devices comprises comparing an input voltage (Vpad) to a reference voltage (Vddo), and if conditions that would produce Hot Carrier Injection are present (e.g. when the input voltage is greater than the reference voltage), slowing the turn-on of one of the series connected input/output devices (305, 307), thereby reducing the drain-to-source voltage of another series connected input/output device.
WHAT IS CLAIMED IS: 1. A method of suppressing hot carrier injection in an integrated circuit having first and second transistors coupled in series and the first transistor connected to an input node, the method comprising: (a) accepting an input voltage (VPAD) at the input node; (b) accepting a reference voltage (VDDO); (c) comparing said input voltage to the reference voltage (VDDO); and (d) reducing a drain-to-source voltage of said first transistor by slowing turn-on of said second transistor coupled in series when said input voltage (VPAD) is greater than said reference voltage (VDDO). 2. The method of claim 1, wherein step (d) includes discharging a gate voltage of said second transistor, thereby reducing its conductivity and slowing its turn-on. 3. A circuit for suppressing hot carrier injection, comprising: a circuit input node having an input voltage (VPAD); a first transistor having a drain coupled to said input node; a second transistor having a drain coupled to a source of said first transistor, and a source coupled to a ground potential; a sense circuit having a first input of said input voltage (VPAD), and a second input of a first reference voltage (VDDO); and a pre-driver device having an input coupled to an output of said sense circuit, and having an output coupled to a gate of said second transistor; wherein said pre-driver slows the turn-on of said second transistor when said pre-driver is enabled. 4. The circuit of claim 3, wherein said first and second transistors are N-channel Metal Oxide Semiconductor (NMOS) devices. 5. The circuit of claim 3, wherein said sense circuit enables said pre-driver when said input voltage is greater than said first reference voltage. 6.
The circuit of claim 3, wherein said sense circuit comprises: a third transistor having a source coupled to said input voltage (VPAD), and a gate coupled to said first reference voltage (VDDO); and a fourth transistor having a drain coupled to a drain of said third transistor, a gate coupled to a gate of said first transistor, and a source coupled to an input node of said pre-driver circuit. 7. The circuit of claim 6, wherein said third transistor is a P-channel Metal Oxide Semiconductor (PMOS) device, and said fourth transistor is an NMOS device. 8. The circuit of claim 3, wherein said sense circuit comprises: a third transistor disposed between said input node and an intermediate reference voltage (Vpb), wherein a gate of said third transistor is connected to its own drain; a fourth transistor disposed between a first reference voltage (VDDO) and said intermediate reference voltage (Vpb), wherein a gate of said fourth transistor is connected to its own drain; and a fifth transistor disposed between sixth and seventh transistors, said fifth transistor having a gate connected to said intermediate reference voltage (Vpb), a source connected to a drain of said sixth transistor (MPO), and a drain connected to a drain of said seventh transistor (MN1), wherein the gates of said sixth and seventh transistors are connected to the gate of said first transistor. 9. The circuit of claim 8, wherein said third, fourth and seventh transistors are NMOS devices, and said fifth and sixth transistors are PMOS devices. 10. The circuit of claim 8, wherein a plurality of transistors are disposed in series between said second transistor and said intermediate reference voltage (Vpb), and wherein gates of said plurality of transistors are connected to their own drains. 11. The circuit of claim 10, wherein said plurality of transistors are NMOS devices. 12.
The circuit of claim 3, wherein said pre-driver, when enabled, reduces a gate voltage of said second transistor by providing a discharge path for a gate voltage of said second transistor. 13. The circuit of claim 3, wherein said pre-driver comprises: a third transistor having a source coupled to said first reference voltage (VDDO) and a gate coupled to said input node; a fourth transistor having a drain coupled to a drain of said third transistor, a gate coupled to said input node, and a source coupled to a ground potential; and an output node coupled to said gate of said second transistor, said output node coupling the drains of said third and fourth transistors; wherein when said pre-driver is enabled, a voltage at said gate of said second transistor is reduced, thereby slowing the turn-on of said second transistor. 14. A method of suppressing hot carrier injection in an integrated circuit having a plurality of transistors coupled in series between an input node and a reference voltage, comprising: (a) accepting an input voltage (VPAD); (b) accepting a first reference voltage (VDDO); (c) comparing said input voltage to said first reference voltage (VDDO); (d) providing a discharge path for said input voltage; and (e) reducing the drain-to-source voltage of a first transistor by slowing the turn-on of a second transistor coupled in series with said first transistor when said input voltage (VPAD) is greater than said first reference voltage (VDDO). 15. The method of claim 14, wherein step (e) includes discharging a gate voltage of said second transistor, thereby reducing its conductivity and slowing its turn-on. 16.
A circuit for suppressing hot carrier injection in an integrated circuit device, comprising: a circuit input node having an input voltage (VPAD); a first transistor having a drain coupled to said input node; a second transistor having a drain coupled to a source of said first transistor, and a source coupled to a ground potential; a sense circuit having a first input of said input voltage (VPAD), and a second input of a first reference voltage (VDDO); a pre-driver device having an input coupled to an output of said sense circuit, and having an output coupled to a gate of said second transistor; and a discharge path for the input voltage (VPAD) disposed between said input node and a second reference voltage; wherein said pre-driver slows the turn-on of said second transistor when said pre-driver is enabled. 17. The circuit of claim 16, wherein said sense circuit comprises: a third transistor having a source coupled to said input node, and a gate connected to said first reference voltage (VDDO); a fourth transistor having a drain coupled to the drain of said third transistor, and a gate connected to said drain of said third transistor; and a fifth transistor having a drain coupled to the source of said fourth transistor, and a gate connected to said first reference voltage (VDDO). 18. The circuit of claim 16, wherein said pre-driver comprises: a third transistor having a source coupled to said first reference voltage (VDDO) and a gate coupled to said input node; a fourth transistor having a drain coupled to a drain of said third transistor, a gate coupled to said input node, and a source coupled to a ground potential; and an output node coupled to said gate of said second transistor, said output node coupling the drains of said third and fourth transistors; wherein when said pre-driver is enabled, a voltage at said gate of said second transistor is reduced, thereby slowing the turn-on of said second transistor. 19.
The circuit of claim 16, wherein said discharge path comprises: a third transistor having a drain coupled to said circuit input node, and a gate coupled to said first reference voltage (VDDO); a fourth transistor having a drain coupled to a source of said third transistor, and a gate coupled to said first reference voltage (VDDO); and a fifth transistor having a source coupled to a source of said fourth transistor, a gate coupled to a control voltage (output enable/OE), and a drain coupled to a second reference voltage (Vddc).
HOT CARRIER INJECTION SUPPRESSION CIRCUIT BACKGROUND OF THE INVENTION [0001] Field of the Invention [0002] The present invention relates to integrated circuits (ICs), such as interface circuits, that are designed having reduced feature sizes, for example, 0.13 μm. More particularly, the invention relates to ICs that include interfaces (such as input/output (I/O) circuits) that are capable of interfacing with comparatively high-voltage signals from other sources, for example a 3.3 volt IC interfacing with signals from a 5 volt IC, or any other disparate ranges. Moreover, the invention relates to integrated circuits in which the semiconductor devices are biased such that the stress across the gate-oxides and junctions, as well as the leakage currents, are maintained at tolerable levels. Related Art [0003] The trend in CMOS-based processing technology is to produce integrated circuit (IC) cores having a higher density of semiconductor devices, such as transistors, and faster clock rates than their predecessors. I/O circuits, which electrically couple an IC core to external components, are accessed through I/O circuit pads that surround the IC core. The IC core and the I/O circuit pads are generally fabricated from the same processing technology. There is, however, no requirement that they comprise the same technology, and hybrid circuits are known in the art. The inventive concepts herein are applicable to a variety of fabrication technologies. [0004] The performance of the IC cores may generally be improved by shrinking the feature sizes of the semiconductor devices, for example field-effect transistors (FETs). Unfortunately, reducing the IC feature sizes may proportionally decrease the maximum operating voltage that the semiconductor devices within the IC can withstand.
For example, an I/O circuit pad, fabricated from a CMOS process having 0.30 micron features, typically withstands a maximum operating voltage of about 3.6 volts. In such a case the maximum operating voltage of the I/O circuit pad is insufficient to drive the external components, which have a higher voltage requirement, such as 5 volts. Furthermore, if the IC is interfaced with a voltage greater than the maximum operating voltage, the IC may fail. If high voltages appear across the drain-to-source of NMOS and PMOS devices when they are in a conducting state, there exists the possibility of Hot-Carrier-Injection (HCI). HCI occurs when, as a result of larger fields along the channel direction, a small fraction of the channel carriers have enough energy to enter the insulating layer near the drain. In N-channel MOSFETs, energetic electrons entering the oxide create interface traps and oxide wear-out, eventually leading to gate-to-drain shorts. Thus, over time, HCI degrades transistor characteristics. Devices in the IC and devices in the I/O circuit are equally susceptible to HCI. [0005] One way to attempt to resolve such requirements of circuits with mismatched voltage requirements is to increase the robustness of the fabrication process, for example by increasing the thickness of the gate-oxide layer of the semiconductor devices which comprise the IC circuitry. A thick gate-oxide layer may provide semiconductor devices, such as FETs, with the ability to support a higher voltage requirement. However, this voltage robustness is commonly accompanied by a decrease in the performance of the IC, because the thick gate-oxide layer reduces the overall gain of the devices which comprise the IC. Reducing the gain minimizes the benefit that occurs by reducing the feature size. [0006] Other attempts have included increasing the complexity of the CMOS fabrication process so there are multiple sets of devices where each set meets different voltage requirements.
Each set of devices requires a different gate-oxide. Each additional gate-oxide requires a separate mask. The resulting hybrid process may significantly increase the manufacturing costs of the IC. [0007] One way to avoid the drawbacks of the aforementioned processing-based solutions is to use a "level-shift" chip as an external component. The IC core and the I/O circuits are fabricated from the same process. The "level-shift" chip may be fabricated from a process that supports the discrete voltage requirement by stepping up the core output signals to support the discrete voltage range and stepping down the external drive signals to support the IC core voltage range. Such a level-shift chip can be a waste of much needed space on a crowded printed circuit board and may degrade performance. An I/O circuit that transforms voltages between different voltage levels without degrading the overall performance of the integrated circuit and maximizing use of space on the printed circuit board or multi-chip substrate may be beneficial. It would be a further benefit if such an I/O circuit could use voltages presented at the I/O circuit in order to provide such protective biasing. It would be yet another benefit to protect the devices comprising the I/O circuit itself from potentially damaging voltages that occur during transient conditions. Commonly an I/O power supply may vary +/-10% and may vary significantly more during transient conditions. When the I/O power supply varies, circuits may have higher stress on the gate-oxides of the devices in the I/O circuit; such stresses may not be desirable in many process technologies. It may be desirable to provide bias voltages to various devices in the I/O circuit such that the device gate-oxide is protected from high voltages under various conditions of operation, even when the power-supply voltage varies by a large amount.
[0010] Embodiments of the present invention may be optimized, for example, where 5 volt input tolerance is required, even when the power supplies are varying in steady state by +/-10%. Embodiments of the present invention are illustrated in an optimized form for I/O circuits where a 5 volt +/-10% input tolerance is required for the normal operating range. Additionally, the inventive concepts herein are described in terms of CMOS (Complementary Metal Oxide Semiconductor) integrated circuits. Those skilled in the art will readily appreciate the fact that techniques described with respect to CMOS ICs are readily applicable to any circuits having disparate power supply and/or drive signal requirements for different portions of the circuitry. The CMOS example chosen is one likely to be familiar to those skilled in the art. There is, however, no intent to limit the inventive concepts to CMOS ICs, as the techniques are equally applicable to a wide variety of integrated circuit fabrication techniques. SUMMARY OF THE INVENTION [0012] An exemplary embodiment of the invention includes an integrated circuit having a four-device input output circuit in a push-pull configuration. Two of the devices, termed upper devices, comprise PMOS (P-Channel Metal Oxide Semiconductor) devices and two of the devices, termed lower devices, comprise NMOS (N-Channel Metal Oxide Semiconductor) devices. The devices are biased to eliminate hazardous voltages across device junctions and to reduce the magnitude of the voltage being passed on to the core circuitry. The biases are derived from the input/output state of the circuit and the voltage presented to the I/O circuit connection (VPAD). Additionally, the PMOS device well bias voltage may be developed based on VPAD. [0013] During transient conditions, such as when the circuit changes state, individual devices within the I/O interface circuit itself can experience temporarily high drain-to-source voltages.
This condition may result in Hot-Carrier-Injection (HCI). Such transient conditions may be avoided by implementing a sense circuit that detects the transient condition, and using a pre-driver circuit to reduce the high drain-to-source voltage present in the affected device. More specifically, this is accomplished by accepting an input voltage (VPAD), accepting a reference voltage (VDDO), comparing the input voltage to the first reference voltage (VDDO), and reducing the drain-to-source voltage of a first transistor by slowing the turn-on of a second transistor coupled in series with the first transistor when the input voltage (VPAD) is greater than the first reference voltage (VDDO). BRIEF DESCRIPTION OF THE FIGURES [0014] Other features and advantages of the invention will become apparent from a description of the following figures, in which like numbers refer to similar items throughout. [0015] FIG. 1 is a graphic illustration of an exemplary environment in which embodiments of the invention may be utilized. [0016] FIG. 2 is a graphical illustration of a prior art input output circuit and connection. [0017] FIG. 3 is a schematic of a portion of a CMOS (Complementary Metal Oxide Semiconductor) input output circuit in which single push-pull output devices, as illustrated in FIG. 2, have been replaced by two devices each. [0018] FIG. 4 is an input output circuit, including a well biasing circuit, according to an embodiment of the invention. [0019] FIG. 5 is a graph illustrating the relationship between well voltage and pad voltage for the input (or a tristate) mode, according to an embodiment of the invention. [0020] FIG. 6 is a block diagram of I/O circuitry biasing according to an embodiment of the invention. [0021] FIG. 7 is a graphical representation of a bias voltage (VGP1) as a function of pad voltage (VPAD), according to an embodiment of the invention. [0022] FIG.
8 is a graphical illustration of a portion of a circuit configuration used to provide the pad voltage to the core circuitry, according to an embodiment of the invention. [0023] FIG. 9A is a schematic diagram of the generation of the Bias-Mid voltage, according to an embodiment of the invention. [0024] FIG. 9B is a schematic diagram of an alternative embodiment for the generation of the Bias-Mid voltage, according to an embodiment of the invention. [0025] FIG. 9C is a schematic diagram of yet another alternative embodiment for generation of the Bias-Mid voltage, according to an embodiment of the invention. [0026] FIG. 10 is a schematic diagram of an exemplary well biasing circuit, according to an embodiment of the invention. [0027] FIG. 11A is a schematic diagram of a circuit used to generate VGP1. [0028] FIG. 11B is a schematic diagram illustration of the generation of VDDO - VTp depicted in FIG. 11A. [0029] FIG. 11C is a graph illustrating the relationship between Bias-Mid and VPAD according to an embodiment of the invention. [0030] FIG. 11D is a schematic diagram depicting an exemplary illustration of a transistor implementation of block 901. [0031] FIG. 12 is a schematic diagram of a circuit that may be used to prevent power-on stress of devices, according to an embodiment of the invention. [0032] FIG. 13 is a circuit and block diagram of a portion of an over voltage protection circuit. [0033] FIG. 14 is a schematic diagram illustrating a modification of FIG. 9A. [0034] FIG. 15 is a schematic diagram illustrating a transistor implementation of block 1401. [0035] FIG. 16 is a schematic diagram illustrating a transistor implementation of FIG. 14. [0036] FIG. 17 is a schematic diagram of a circuit that may be used to prevent stress on devices when voltage spikes appear at an I/O pad. [0037] FIG. 18 is a schematic diagram of a circuit including several previously illustrated embodiments of the invention. [0038] FIG.
19 is a flow chart describing a method for preventing stress on a particular device during a transient condition. [0039] FIG. 20 is a functional diagram that implements the method described in FIG. 19. [0040] FIG. 21 is a schematic diagram of a circuit showing a first embodiment of the functional diagram described in FIG. 20. [0041] FIG. 22 is a schematic diagram of a circuit showing a second embodiment of the functional diagram of FIG. 20. [0042] FIG. 23 is a schematic diagram of a circuit showing a third embodiment of the functional diagram of FIG. 20. DETAILED DESCRIPTION OF THE INVENTION [0043] FIG. 1 is a graphic illustration of an exemplary environment in which embodiments of the invention may be utilized. In FIG. 1 a personal computer system is represented generally at 101. Within the computer system is circuit board 103, on which a CPU integrated circuit chip 105 is mounted. The CPU is a type that uses 3.3 volts as its supply voltage. A keyboard interface integrated circuit chip 107 is also mounted on circuit board 103. The keyboard interface integrated circuit uses a supply voltage of 5.0 volts. The CPU 105 is coupled to the keyboard chip 107. The CPU 105 may be of a type that contains integrated devices that may be damaged by interfacing with a device having a higher supply voltage. Because of the disparity in supply voltages that may exist in such situations, an output circuit which can compensate for the higher interface voltages may be useful. [0044] FIG. 2 is a graphical illustration of a prior art input output circuit and connection. A common input output circuit comprises a pull-up device, such as PMOS (P-channel Metal Oxide Semiconductor) device 215, and a pull-down device, such as NMOS (N-channel Metal Oxide Semiconductor) device 217, as illustrated in FIG. 2. Devices 215 and 217 are coupled together at an input/output (I/O) pad 219. The substrate for the NMOS device is commonly coupled to ground potential, e.g.
as shown at 221. The substrate for the NMOS device is typically a substrate that is common for the entire integrated circuit chip on which it resides. PMOS devices are commonly fabricated in their own isolated well. [0045] In deep submicron fabrication, the component integrated devices can tolerate only limited differential voltages across their junctions. Commonly the voltage that can be tolerated across the junctions is on the order of 2.5 volts. [0046] In the illustration of FIG. 2, pad 219 interfaces to a 5 volt circuit, and hence the pad may commonly see voltages in the neighborhood of 5.5 volts. A 5 volt signal applied to pad 219 may stress devices within the chip 105. For example, if gate 205 of device 217 is at a zero volt potential, then the voltage across the 205-203 gate-oxide can exceed 5 volts, thereby stressing device 217. For this reason more than one device may be used to divide the voltages in pull-up and pull-down I/O circuits. [0047] FIG. 3 is a schematic of a portion of a MOS (Metal Oxide Semiconductor) input output circuit in which each push-pull output device illustrated in FIG. 2 has been replaced by two devices. That is, output device 215 has been replaced by devices 301 and 303, and device 217 has been replaced by devices 305 and 307. By replacing devices 215 and 217 with two devices each, the output voltage appearing at pad 309 may be safely divided over the two upper (301 and 303) and the two lower (305 and 307) I/O devices. The middle PMOS device 303 and the middle NMOS device 305 have their gates biased to intermediate potentials to avoid excessive voltages under various I/O pad 309 voltages. The devices 305 and 307 are coupled in series and disposed between the I/O pad 309 and ground. More specifically, the source of device 305 is coupled to the drain of device 307. The devices 301 and 303 are coupled in series and disposed between VDDO and the I/O pad 309.
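The voltage-division idea behind the stacked devices can be illustrated numerically. The even split below is an idealization (real junction voltages depend on the intermediate gate biases), and the helper name is hypothetical.

```python
def ideal_junction_drops(v_pad, n_series=2):
    """Idealized drain-to-source drop seen by each of n stacked devices.

    With suitable intermediate gate biases, the pad voltage is divided
    across the series devices instead of appearing across one junction.
    """
    return [v_pad / n_series] * n_series

# A 5 V pad divided across two stacked devices stresses each with
# about 2.5 V, within the ~2.5 V junction tolerance noted in the text:
drops = ideal_junction_drops(5.0, 2)
```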
More specifically, the drain of device 301 is coupled to the source of device 303. [0048] FIG. 4 is an input output circuit 404, including a well biasing circuit, according to an embodiment of the invention. Devices 301 and 303 are fabricated in wells, illustrated schematically as 400 and 402, which are essentially at a floating potential. Because devices in wells at floating potential can have problems, such as device latch-up, wells may commonly be coupled to a known bias voltage. The wells of devices 301 and 303 are coupled to the highest circuit potential available using well biasing circuit 401. The inputs to the well biasing circuit are the pad voltage present on input output pad 309, VDDO, and voltage VGP1, which is illustrated in FIG. 7. During the operation of input output circuit 404 in an output mode (when pad 309 is in an output driving mode), wells 400 and 402 are coupled to VDDO. When the pad 309 is in an input mode, the well voltage depends upon the pad voltage. In the output enable mode Vwell = VDDO. [0050] When input output circuit 404 is in an input mode (when pad 309 is in an input mode), Vwell depends on both the input (pad) voltage VPAD and VDDO. If VPAD is less than VDDO when input output circuit 404 is in the input mode, then Vwell = VDDO. If VPAD is greater than VDDO, then Vwell = VPAD. A graph of this relationship is illustrated in FIG. 5. [0051] FIG. 5 is a graph illustrating the relationship between well voltage and pad voltage for the I/O circuit in an input (or a tristate) condition. As can be seen from the graph, if the pad voltage is less than VDDO then the well voltage is equal to VDDO. If the pad voltage is greater than VDDO then the well voltage is equal to the pad voltage. The well bias can thereby be changed according to changing circuit conditions. [0052] FIG. 6 is a block diagram of I/O circuitry 600 biasing according to an embodiment of the invention.
When I/O circuitry 600 is in the input mode, first bias circuit 407 ties gate 403 of device 301 to VDDO. In the output mode device 301 is controlled by an input from first bias circuit 407 according to whether a high or low value is being output on the pad 309. [0053] In the input mode second bias circuit 405 provides gate voltage VGP1 to the gate of output device 303. The gate voltage VGP1 provided to the gate of output device 303 varies between an intermediate power supply voltage, such as VDDC equal to 1.2 volts, and the pad voltage presented to the circuit at input output pad 309. Such biasing prevents device 303 from being damaged due to a voltage potential across its junctions. FIG. 7 is a graphical representation of the VGP1 bias voltage as a function of pad voltage (VPAD). If VPAD is less than VDDO, then VGP1 provided to the gate of output device 303 is equal to the intermediate supply voltage VDDC. If VPAD is greater than VDDO, then VGP1 provided to the gate of output device 303 is equal to VPAD. In such a manner the voltage between the gate of device 303 and pad 309 can be kept in a safe range to prevent damage to the junction. To summarize the operation of the circuit of FIG. 6, when the circuit 600 is in an output mode: The well biasing circuit 401 ties the wells of devices 301 and 303 to VDDO. The gate of the lower PMOS device 303 is tied to an intermediate voltage, such as VDDC = 1.2 volts. The gate of upper NMOS device 305 is tied to an intermediate voltage, such as VDDP = 2.5 volts. When the circuit 600 is not in output mode, that is, in the tri-state or input mode, then upper PMOS device 301 and lower NMOS device 307 are turned off and devices 303 and 305 are turned on to divide the voltages of the output circuit. The gate voltage of the upper NMOS device 305 is controlled by third bias circuit 409.
Third bias circuit 409, when in an input or tristate mode, will increase the bias voltage when the pad voltage increases beyond a certain threshold, for example VDDP equal to 2.5 volts. Fourth bias circuit 411 works in a similar fashion to first bias circuit 407. Both bias circuits 407 and 411 work in a digital mode, either providing a first or second voltage depending on the required I/O pad 309 output voltage. In a first mode of operation first bias circuit 407 switches between a first voltage VDDO and a second lower voltage VDDC. Gate bias circuit 411 switches between providing VDDP and ground potential to the gate of device 307. FIG. 8 is a graphical illustration of a circuit configuration used to provide the pad voltage to the core circuitry. The VPAD input is coupled to the core circuitry 803 through an NMOS device 801. The gate of NMOS device 801 accepts Bias-Mid as its control voltage. Such an arrangement protects the gate-source voltage of device 801 and also prevents large voltages from the input from being coupled into the core circuitry when it is in the input (tristate) or output conditions. [0060] One facet of the I/O system comprising devices 301, 303, 305 and 307 is that any number of such devices may be added in parallel, in order to provide any level of drive signals needed. [0061] FIG. 9A is a schematic diagram illustrating how the Bias-Mid voltage is generated. Block 901 is a switching circuit that switches its Bias-1 output between voltages VDDO (3.3 volts nominally in the present embodiment) and VDDC (1.2 volts nominally in the present embodiment). Device 905 is a PMOS device, as are devices 907 and 909. Device 907 turns on when the output is enabled or VPAD is low. When device 907 is turned on, Bias-Mid is coupled to VDDP. When output is not enabled, i.e.
the pad is in the tri-state (input only) mode and VPAD is high, then Bias-1 is equal to VDDO and device 905 charges point 911 to Bias-1 minus VTP, where VTP is the threshold of device 905, and accordingly is the voltage dropped across device 905. If Bias-Mid is greater than the sum of VDDP and VTP, then device 909 will drain current from node 911 such that the sum of VDDP plus VTP is the maximum value for Bias-Mid. Bias-Mid is always between (VDDP + VTP) and (VDDO - VTP), whether (VDDP + VTP) or (VDDO - VTP) is larger. A typical value of the threshold voltage VTP is 0.5 volts. The actual value of Bias-Mid will be determined by the relative sizes of devices 907 and 909. [0062] FIG. 9B is a schematic diagram illustrating how the Bias-Mid voltage is generated in an alternate embodiment. Block 901 is a switching circuit that switches its Bias-1 output between voltages VDDO (3.3 volts nominally in the present embodiment) and VDDC (1.2 volts nominally in the present embodiment). Device 905 is a PMOS device, as is device 907. Device 909B is an NMOS device. Device 907 turns on when the output is enabled or VPAD is low. When device 907 is turned on, Bias-Mid is coupled to VDDP. When output is not enabled, i.e. the pad is in the tri-state (input only) mode, and during this time when VPAD is high, then Bias-1 is equal to VDDO and device 905 charges point 911 to Bias-1 minus VTP, where VTP is the threshold of device 905, and accordingly is the voltage dropped across device 905. If Bias-Mid is greater than the sum of (VDDP + VTN), then device 909B will drain current from node 911 such that (VDDP + VTN) is the maximum value for Bias-Mid. Bias-Mid is always between (VDDP + VTN) and (VDDO - VTP), whether (VDDP + VTN) or (VDDO - VTP) is larger. A typical voltage value for the threshold voltage VTP is 0.5 volts. The actual value of Bias-Mid will be determined by the relative sizes of devices 907 and 909B. [0063] FIG.
9C is a schematic diagram of yet another alternate embodiment for generation of the Bias-Mid voltage. In this circuit Bias-Mid is always less than (VDDP + VTP) and greater than (VDDO - VTN). [0064] FIG. 10 is a schematic diagram of an exemplary well biasing circuit, according to an embodiment of the invention. Device 1001, when turned on, couples the I/O pad 309 to the well 1005. Device 1003, when turned on, couples VDDO to the well 1005. When VPAD is less than VDDO, the gate-source voltage of device 1001 is less than the threshold voltage of device 1001, and device 1001 is turned off. When VGP1 is low (e.g. 1.2 volts) then device 1003 conducts, thereby tying the well 1005 to VDDO. When VPAD is equal to VDDO or greater, device 1001 will begin to turn on, thereby coupling the well 1005 to VPAD. FIG. 11A is a schematic diagram of a circuit used to generate VGP1. Bias-1 switches between VDDO (3.3 volts) and VDDC (1.2 volts). Device 1101 couples Bias-1 to VGP1. When Bias-1 is 3.3 volts device 1101 is off, and when Bias-1 is 1.2 volts then VGP1 is tied to 1.2 volts. When the VPAD at 309 is greater than VDDO, device 1103 begins to conduct, because the gate of device 1103 is tied to (VDDO - VTP), and VGP1 is thereby coupled to VPAD. [0066] FIG. 11B shows a circuit which may be used to generate (VDDO - VTP). The strong upper PMOS device charges the node 1150 to (VDDO - VTP). In addition to the problems that may be caused when a lower supply voltage chip is interfaced with a higher voltage chip, "power on stress" problems may exist, which may be caused when circuitry is turned on and the supplies that provide protective biases are not yet up to their full voltage. In such a case a voltage present at an I/O pad may stress devices which are coupled to that I/O pad. FIG. 11C is a graph illustrating the relationship between Bias-Mid and VPAD. Bias-Mid is set at 2.5 volts, and remains at 2.5 volts until VPAD increases beyond 2.5 volts.
Thereafter Bias-Mid tracks VPAD and becomes equal to a higher voltage when VPAD increases beyond a certain value. [0068] FIG. 11D is a schematic diagram depicting an exemplary illustration of a transistor implementation of block 901. FIG. 12 is a schematic diagram of a circuit that may be used to prevent power on stress of devices, according to an embodiment of the invention. The circuit illustrated in FIG. 12 may be used to generate the Bias-Mid voltage when VDDO is not up to its nominal value. If Bias-Mid is present then devices 305 and 307, shown in FIG. 8, will be protected from junction over voltage problems even though the voltages, which ordinarily would be used to generate Bias-Mid as explained in FIG. 9, are not present. [0070] In FIG. 12 devices 1201, 1203, and 1205 are arranged as a series of diode coupled transistors such that a threshold voltage VTP (in the present example equal to approximately 0.5 volts) is dropped across each device when it is conducting. When device 1207 is conducting, the pad voltage, minus the threshold voltages of devices 1201, 1203, 1205 and 1207, is coupled to Bias-Mid. Device 1207, in essence, acts as a switch. [0071] As an example, assume that VDDO is initially zero volts. Zero volts at the gate of device 1209 turns it on. In such case point 1211 charges to a potential close to the pad voltage, since device 1213 is off. Point 1211 is coupled to the gate of device 1214, thereby turning device 1214 off. Since VDDO is zero volts, PMOS device 1219 turns on, which leads to the gate of device 1207 being coupled to Bias-Mid. This leads to coupling the pad voltage, minus the threshold voltages of devices 1201, 1203, 1205 and 1207, to Bias-Mid. When VDDO is low, device 1215 provides a current leakage path from Bias-Mid to VDDC or VDDP. When VDDO is low, string 1217 turns on and the pad voltage is coupled to Bias-Mid.
Devices 1220, 1221, 1223 and 1225 act as protection for device 1209 in the instance where VPAD is high and VDDO is low. When VDDO is high, point 1211 is tied to Bias-Mid because device 1213 turns on. When VDDO is high, device 1219 is turned off and device 1213 is turned on, thus raising the potential at the gate of device 1207 to VPAD, thereby turning device 1207 off. Also device 1215 turns off when VDDO is high. [0073] FIG. 13 is a circuit and block diagram of a portion of an over voltage protection circuit. Device 1001 provides a protection mechanism for the well bias. If VDDO is lower than the pad voltage by VTP or more, then device 1001 will turn on. If device 1001 turns on then the well is coupled, via device 1001, to the pad, and hence the well will be biased to VPAD. Similarly, device 1301 is coupled between the pad and PGate, the gate of PMOS device 303 shown in FIG. 6. The gate of device 1301 is biased so that when VDDO is lower than the pad voltage by VTP or more, device 1301 will turn on and couple PGate to the pad voltage. Therefore, if VDDO is low, PGate will not depend on VDDO for its voltage level and instead will take its voltage level from the voltage on the pad. [0075] FIG. 14 is a schematic diagram illustrating a modification of FIG. 9. In FIG. 14 block 901 is decoupled from the Bias-Mid signal when VDDO is lower than its nominal value. The decoupling is done by using block 1401. When VDDO is not up to its nominal value, the node V-pwr is decoupled from VDDP by using block 1401 as a switch. When VDDO is up to its nominal value, the node V-pwr is coupled to VDDP by using block 1401. [0076] FIG. 15 is a schematic diagram illustrating a transistor implementation of block 1401. When VDDO is greater than a certain value, NMOS 1507 is turned on, thereby connecting the gate of PMOS 1505 to VDDC. Connecting the gate of PMOS 1505 to VDDC turns on 1505, thereby connecting V-pwr to VDDP.
When VDDO is less than a certain value, NMOS 1507 is turned off and PMOS 1506 is turned on, thereby connecting the gate of PMOS 1505 to Bias-Mid, thereby turning off PMOS 1505 and disconnecting V-pwr from VDDP. [0077] FIG. 16 is a schematic diagram illustrating a transistor implementation of the circuitry illustrated in FIG. 14. [0078] FIG. 17 is a schematic diagram of a circuit that may be used to prevent stress on devices when voltage spikes appear at an I/O pad. When transient voltages appear, the Bias-Mid voltage changes momentarily due to the gate to drain overlap capacitance (Cgd) of the driver NMOS. A capacitance (Cbm) is placed at the Bias-Mid node such that the transient voltage at the pad (ΔVpad,transient) gets divided between Cgd and Cbm depending on the ratio of the capacitances, which gives the additional transient voltage on Bias-Mid (ΔVbm,transient): ΔVbm,transient = (Cgd/(Cgd+Cbm)) * ΔVpad,transient. [0079] Also, when transient voltages appear, the voltage VGP1 on the driver PMOS gate changes momentarily due to the gate to drain overlap capacitance (Cgdp) of the driver PMOS. A capacitance (Cgp) is placed at the driver PMOS gate node such that the transient voltage at the pad (ΔVpad,transient) gets divided between Cgdp and Cgp depending on the ratio of the capacitances, which gives the additional transient voltage on the driver PMOS gate (ΔVGP1,transient): ΔVGP1,transient = (Cgdp/(Cgdp+Cgp)) * ΔVpad,transient. FIG. 18 is a schematic diagram of a circuit including several previously illustrated embodiments of the invention. The transistors illustrated in FIG. 18 are all 2.5 volt devices. The maximum output pad voltage is 3.6 volts and the maximum input voltage is 5.5 volts. The typical values of the power supplies are VDDO = 3.3 volts, VDDP = 2.5 volts, VDDC = 1.2 volts, Vssc = 0 volts and Vsso = 0 volts. The operation of the circuit of FIG. 18 under various operating conditions is summarized below.
[0081] When the I/O pad 309 is in an output enabled mode (i.e. OE is high) the maximum pad voltage is VDDO. VGP1 at the gate of PMOS device 303 is coupled to VDDC through NMOS transistors 1101 and 1801, and accordingly PMOS device 303 is turned on. Block 901 generates an output Bias-1 voltage of VDDC and accordingly PMOS device 907 is turned on, the steady state voltage of Bias-Mid is VDDP, and PMOS device 905 is turned off. [0082] When the I/O pad 309 is output disabled (i.e. OE is low) and the pad voltage is below a predetermined value, then VGP1 at the gate of PMOS 303 is floating if the pad voltage is below VDDO. Block 901 generates an output Bias-1 voltage of VDDC and accordingly PMOS device 907 is turned on, the steady value of the Bias-Mid voltage is VDDP, and PMOS device 905 is turned off in this condition. [0083] When the I/O pad 309 is output disabled (i.e. OE is low) and the pad voltage is above a predetermined value, then block 901 generates an output Bias-1 voltage of VDDO and accordingly PMOS device 907 is turned off, PMOS device 905 is turned on, and the steady state value of Bias-Mid is between (VDDO - VTP) as a minimum value and (VDDP + VTN) as a maximum value, where VTP and VTN are offset voltages due to the turn-on threshold voltages of transistors 905 and 909B, respectively. VGP1 at the gate of PMOS device 303 is coupled to the pad voltage if the pad voltage is greater than VDDO. [0084] Capacitors Cbm and Cgp in FIG. 18 are used to ensure that the Bias-Mid voltage and the VGP1 voltage, respectively, are kept at desirable levels when transient voltages appear at the pad, as was described relative to FIG. 17. [0085] FIG. 19 is a flow chart describing one embodiment of a method for preventing stress on a particular device in the I/O circuit during a transient condition. For example, when the pad 309 (FIG.
3) is switched from the input mode, where VPAD = 5 volts (5.5 in the worst case scenario), to the output-enable mode with output low, the device 305 could see a high transient drain-to-source voltage that could lead to Hot-Carrier-Injection. More specifically, when this state change occurs, the gate of device 307 is pulled high causing it to turn on. This makes the potential at the source of device 305 nearly equal to Vsso (nominally 0 volts), while the potential at the drain of device 305 is at VPAD = 5 volts. This high drain-to-source voltage may result in Hot-Carrier-Injection (HCI) in device 305 when it is in a conducting state, which can lead to device degradation. [0086] According to steps 1905 and 1910 of FIG. 19, an input voltage at the pad 309 (VPAD) and a reference voltage (VDDO, nominally 3.3 V) are sensed. Next, according to step 1915, these two voltages are compared. When the input voltage exceeds the reference voltage (VPAD > VDDO), the potential for a transient condition that could lead to HCI exists across device 305. When this condition is present, the pre-driver circuit 2010 is enabled in step 1920, and the gate voltage of device 307 is reduced according to step 1925, thus reducing the conductivity from drain to source and slowing the turn-on of device 307. This prevents device 305 from conducting by blocking its path to ground, thus reducing the possibility of HCI. When the input voltage is less than the reference voltage (VPAD < VDDO), HCI conditions are not present, and, according to step 1918, no action is taken. FIG. 20 is a functional circuit diagram implementing the method described in FIG. 19. As described above, devices 301 and 303 comprise the pull-up section of the I/O circuit, while devices 305 and 307 comprise the pull-down section of the I/O circuit. The present embodiment of the invention is designed to protect device 305 from high transient drain-to-source voltages that could lead to HCI.
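The sense-compare-act sequence of FIG. 19 can be sketched behaviorally. This is an illustrative model only, not the patented circuit: the function name, the returned keys, and the dictionary representation of the protective actions are all assumptions made for the sketch.

```python
# Behavioral sketch of the FIG. 19 method: sense VPAD and the reference
# voltage VDDO, compare them, and enable the pre-driver when a transient
# condition exists. Names are illustrative, not from the source.

def hci_guard(v_pad: float, v_ddo: float) -> dict:
    """Return the protective actions taken for one sensing cycle."""
    if v_pad > v_ddo:
        # Transient condition (steps 1920, 1925): enable the pre-driver
        # and reduce the gate voltage of the pull-down device so the
        # cascode device above it is kept from conducting.
        return {"pre_driver_enabled": True, "reduce_gate_voltage": True}
    # No HCI risk (step 1918): take no action.
    return {"pre_driver_enabled": False, "reduce_gate_voltage": False}
```

For example, a 5 volt input against a 3.3 volt reference enables the pre-driver, while a 2 volt input leaves the circuit untouched.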
One skilled in the art could use the same method, and the circuits described below, to protect any other similarly situated device. The sense circuit 2005 accepts as its two inputs the pad voltage (VPAD) 309 and the reference voltage (VDDO). Its output is coupled to the pre-driver circuit 2010. Pre-driver circuit 2010 has a path to Vssc (nominally ground) and is coupled to the control gate of device 307. As described above, when VPAD > VDDO, pre-driver 2010 is enabled by sense circuit 2005. When enabled, pre-driver 2010 essentially provides a path from the gate of device 307 to reference voltage Vssc, which reduces the gate voltage of device 307. This reduces the conductivity of device 307, thus preventing device 305 from conducting while in this transient condition. This has the effect of preventing HCI in device 305. [0089] FIG. 21 is a schematic diagram of a first embodiment of the circuit described in FIG. 20. The sense circuit 2005 consists of a PMOS device 2102 and an NMOS device 2104. The pre-driver circuit 2010 consists of PMOS device 2108 and NMOS device 2110. [0090] According to this first embodiment, when a voltage greater than VDDO by a PMOS threshold voltage (VTP) appears at the pad, PMOS device 2102 turns on, and the drain of NMOS device 2104 goes to the pad voltage. The gate of 2104 is tied to the Bias-Mid voltage, which causes the source of 2104 to be pulled to the Bias-Mid voltage minus the NMOS threshold voltage (VTN). This turns on device 2110 in pre-driver 2010. Device 2108 remains off in this embodiment. Device 2110, when conducting, provides a path for the dissipation of the gate voltage of device 307, which reduces the conductivity and slows the turn-on of device 307 while in this transient condition. The reduced conductivity of device 307 reduces the voltage from drain to source of device 305 and thus suppresses HCI.
In the normal output enable mode, when the pad voltage is switching between VDDO and Vsso, pre-driver circuit 2010 is not enabled because the gate-to-source voltage of device 2102, which is tied to VDDO, is always less than its threshold. [0091] FIG. 22 is a schematic diagram of a second embodiment of the circuit described in FIG. 20. The sense circuit 2005 consists of NMOS devices 2202, 2204, 2206, 2208, and 2104, and PMOS devices 2210 and 2102. Devices 2202-2206 are coupled in series between VPAD and node Vpb. As shown, any number of similar devices may be coupled in similar fashion. Pre-driver circuit 2010 again consists of PMOS device 2108 and NMOS device 2110. [0092] According to this second embodiment, the gate of device 2102 is coupled to an intermediate voltage (Vpb). This voltage (Vpb) is determined by the greater of (VDDO - VTN) or (VPAD - n*VTN), where VTN is the NMOS threshold voltage and n is the number of NMOS devices in series between VPAD and Vpb (i.e. devices 2202, 2204 and 2206). These transistors are selected such that when a voltage greater than VDDO appears at the pad, device 2102 turns on. Bias-Mid is such that device 2210 is also on, thus pulling the drain of device 2104 to the pad voltage. The gate of 2104 is also tied to the Bias-Mid voltage. Thus, in the above described transient condition, the source of 2104 is pulled to Bias-Mid minus the NMOS threshold voltage (VTN). This turns on device 2110 in pre-driver 2010, which provides a path for the dissipation of the gate voltage of device 307, which, in turn, reduces the conductivity and slows the turn-on of device 307 while in this transient condition. The reduced conductivity of device 307 reduces the voltage from drain to source of device 305 and thus suppresses HCI. Note that varying the number of NMOS transistors (n) between VPAD and Vpb allows the turn-on of device 2102 to be controlled. [0093] FIG.
23 is a schematic diagram of a third embodiment of the circuit described in FIG. 20. The sense circuit 2005 consists of NMOS devices 2104 and 2304, and PMOS device 2102. The pre-driver circuit 2010 consists of PMOS device 2108 and NMOS device 2110. This third embodiment also contains an alternate discharge path for dissipation of the pad voltage. This discharge path consists of PMOS device 2308 and NMOS devices 2310 and 2312. [0094] According to this third embodiment, when a voltage greater than VDDO appears at the pad, device 2102 turns on, and the drain of device 2104 goes to the pad voltage. The gate of device 2104 is tied to its drain, and this is also at the pad voltage. This causes the source of device 2104 to be pulled to VPAD minus the NMOS threshold voltage (VTN). As long as bias voltage VDDO is present, then device 2304 is on as well. Device 2304, when conducting, pulls the gate input of pre-driver 2010 to a voltage that is the lower of VDDO minus the NMOS threshold voltage (VTN) or VPAD minus VTN, which turns on device 2110 in pre-driver 2010. Device 2110, when conducting, provides a path for the dissipation of the gate voltage of device 307, which reduces the conductivity and slows the turn-on of device 307 while in this transient condition. This reduces the voltage from drain to source of device 305 and thus suppresses HCI. [0095] Additionally, because device 307 turns on slowly in this transient condition, part of the charge at the device input is discharged through PMOS device 2308, and then through NMOS devices 2310 and 2312. In the transient condition, VDDO is present (e.g. at its nominal value of 3.3 V), and the output enable (OE) is high. Therefore, PMOS device 2308 and NMOS devices 2310 and 2312 are conducting during the transient condition, and provide a discharge path for the pad 309. The discharge path for pad 309 also reduces the maximum drain-to-source voltage seen across device 305.
These embodiments are provided by way of example and not limitation. They describe three different ways to implement the method described in FIG. 19. One skilled in the art would recognize other circuit designs that could implement this method. |
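The steady-state bias relationships developed above (the well bias of FIG. 5, the VGP1 bias of FIG. 7, and the Bias-Mid bounds of FIG. 9B) can be summarized as a small behavioral model. This is a hedged sketch, not the patented circuits: the function names are illustrative, the thresholds assume the typical 0.5 volt value given in the text, and the midpoint used for Bias-Mid merely stands in for the value actually set by the relative sizes of devices 907 and 909B.

```python
# Simplified steady-state rules for the bias voltages described above.
# Illustrative only; not a circuit simulation.

VTP = 0.5  # PMOS threshold voltage, typical value from the text
VTN = 0.5  # NMOS threshold voltage, assumed comparable

def v_well(v_pad, v_ddo):
    # FIG. 5: the well tracks the larger of VDDO and the pad voltage.
    return max(v_ddo, v_pad)

def v_gp1(v_pad, v_ddo, v_ddc):
    # FIG. 7: the gate of device 303 sits at VDDC until VPAD exceeds VDDO.
    return v_ddc if v_pad < v_ddo else v_pad

def bias_mid(v_pad, v_ddo, v_ddp, output_enabled):
    # FIG. 9B: with device 907 on, Bias-Mid is tied to VDDP; otherwise it
    # settles between (VDDO - VTP) and (VDDP + VTN). The midpoint is an
    # assumption standing in for the device-size ratio.
    if output_enabled or v_pad < v_ddo:
        return v_ddp
    lo, hi = sorted((v_ddo - VTP, v_ddp + VTN))
    return (lo + hi) / 2
```

With the nominal supplies (VDDO = 3.3 V, VDDP = 2.5 V, VDDC = 1.2 V), a 5 volt pad in input mode places the well at 5 V, VGP1 at 5 V, and Bias-Mid inside the 2.8 V to 3.0 V band.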
A method and an apparatus are provided for adjusting a sampling protocol in an adaptive control process. The method comprises determining a performance value based on a measurement associated with at least one or more previously processed workpieces, adjusting a sampling protocol for one or more processed workpieces based on the determined performance value, and measuring the one or more processed workpieces according to the sampling protocol to provide one or more measurements. The method further comprises adjusting at least one of a process model and a control parameter based on at least a portion of the one or more measurements. |
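The adjustment loop summarized in the abstract can be sketched as follows. This is a minimal illustrative model, not the patented implementation: it assumes the performance value is the absolute deviation of a measured result from its target, and that the per-run sample count is raised or lowered only after several consecutive runs above or below a threshold. All function names, parameters, and limits are assumptions made for the sketch.

```python
# Illustrative sketch of the claimed sampling-protocol adjustment.

def performance_value(target, measured):
    # Deviation of the measured process result from the target result.
    return abs(target - measured)

def adjust_sample_count(count, recent_values, threshold,
                        runs=3, lo=1, hi=25):
    """Adjust how many workpieces are measured per process run.

    recent_values holds the performance values of previous runs;
    runs, lo and hi are assumed tuning limits.
    """
    window = recent_values[-runs:]
    if len(window) < runs:
        return count  # not enough history yet; leave protocol unchanged
    if all(v > threshold for v in window):
        return min(count + 1, hi)   # sustained drift: sample more wafers
    if all(v < threshold for v in window):
        return max(count - 1, lo)   # sustained stability: sample fewer
    return count
```

The measurements gathered under the adjusted protocol would then feed back into the process model or control parameters, closing the loop described in the abstract.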
What is claimed is:1. A method, comprising:determining a performance value based on a measurement associated with at least one or more previously processed workpieces;adjusting a sampling protocol for one or more processed workpieces based on the determined performance value;measuring the one or more processed workpieces according to the sampling protocol to provide one or more measurements; andadjusting at least one of a process model and a control parameter based on at least a portion of the one or more measurements.2. The method of claim 1, wherein measuring the one or more processed workpieces comprises measuring one or more processed semiconductor wafers.3. The method of claim 1, wherein determining the performance value comprises determining a difference between a target result and a measured process result, wherein the measured process result is based on at least a portion of the measurements associated with the previously processed workpieces.4. The method of claim 1, wherein adjusting the sampling protocol comprises adjusting at least one of a number of the processed workpieces to be measured, a number of features formed on the processed workpieces that are to be measured, and a type of features to be measured.5. The method of claim 4, wherein adjusting the number of processed workpieces comprises increasing the number of processed workpieces that are to be measured if the performance value is greater than a preselected threshold value.6. The method of claim 5, further comprising determining the performance value for each of a plurality of process runs, and wherein adjusting the number of processed workpieces comprises increasing the number of processed workpieces that are to be measured if the performance value for each of the plurality of process runs is greater than the preselected threshold value.7.
The method of claim 4, wherein adjusting the number of processed workpieces comprises decreasing the number of processed workpieces that are to be measured if the performance value is less than a preselected threshold value.8. The method of claim 7, further comprising determining the performance value for each of a plurality of process runs, and wherein adjusting the number of processed workpieces comprises decreasing the number of processed workpieces that are to be measured if the performance value for each of the plurality of process runs is less than the preselected threshold value.9. The method of claim 1, wherein adjusting the number of the processed workpieces that are to be measured comprises adjusting the number of processed workpieces that are to be measured for each process run, and wherein measuring the one or more processed workpieces according to the sampling protocol comprises measuring at least one feature formed on the one or more of the processed workpieces.10. An apparatus comprising:an interface adapted to receive a measurement associated with one or more previously processed workpieces; anda control unit communicatively coupled to the interface, the control unit adapted to:determine a performance value based on the measurement associated with at least one or more of the previously processed workpieces;adjust a sampling protocol for one or more processed workpieces based on the determined performance value;receive metrology data comprising measurements of the one or more processed workpieces according to the sampling protocol; andadjust at least one of a process model and a control parameter based on at least a portion of the metrology data.11.
The apparatus of claim 10, wherein the one or more processed workpieces comprise one or more semiconductor wafers, and wherein the control unit is adapted to process the one or more semiconductor wafers and receive metrology data associated with the measurements of the one or more semiconductor wafers.12. The apparatus of claim 10, wherein the control unit is adapted to determine a difference between a target result and a measured process result, wherein the measured process result is based on at least a portion of the measurements associated with the previously processed workpieces.13. The apparatus of claim 12, wherein the control unit is adapted to adjust at least one of a number of the processed workpieces to be measured, a number of features formed on the processed workpieces that are to be measured, and a type of features to be measured.14. The apparatus of claim 13, wherein the control unit is adapted to increase the number of processed workpieces that are to be measured if the performance value is greater than a preselected threshold value.15. The apparatus of claim 14, wherein the control unit is further adapted to determine the performance value for each of a plurality of process runs, and wherein the control unit is adapted to increase the number of processed workpieces that are to be measured if the performance value for each of the plurality of process runs is greater than the preselected threshold value.16. The apparatus of claim 13, wherein the control unit is adapted to decrease the number of processed workpieces that are to be measured if the performance value is less than a preselected threshold value.17.
The apparatus of claim 16, wherein the control unit is further adapted to determine the performance value for each of a plurality of process runs, and wherein the control unit is adapted to decrease the number of processed workpieces that are to be measured if the performance value for each of the plurality of process runs is less than the preselected threshold value.18. The apparatus of claim 10, wherein the control unit is adapted to adjust the number of the processed workpieces that are to be measured, comprising adjusting the number of processed workpieces that are to be measured for each process run, and wherein the control unit is adapted to measure the one or more processed workpieces according to the sampling protocol, comprising measuring at least one feature formed on the one or more of the processed workpieces.19. An apparatus comprising:means for determining a performance value based on a measurement associated with at least one or more previously processed workpieces;means for adjusting a sampling protocol for one or more processed workpieces based on the determined performance value;means for measuring the one or more processed workpieces according to the adjusted sampling protocol to provide one or more measurements; andmeans for adjusting at least one of a process model and a control parameter based on at least a portion of the one or more measurements.20.
An article comprising one or more machine-readable storage media containing instructions that when executed enable a processor to:determine a performance value based on a measurement associated with at least one or more previously processed workpieces;adjust a sampling protocol for one or more processed workpieces based on the determined performance value;receive one or more measurements associated with the one or more processed workpieces, wherein the one or more workpieces are measured according to the adjusted sampling protocol; andadjust at least one of a process model and a control parameter based on at least a portion of the one or more measurements.21. The article of claim 20, wherein the instructions when executed enable the processor to determine a difference between a target result and a measured process result, wherein the measured process result is based on at least a portion of the measurements associated with the previously processed workpieces.22. The article of claim 21, wherein the instructions when executed enable the processor to adjust at least one of a number of the processed workpieces to be measured, a number of features formed on the processed workpieces that are to be measured, and a type of features to be measured.23. The article of claim 22, wherein the instructions when executed enable the processor to increase the number of processed workpieces that are to be measured if the performance value is greater than a preselected threshold value.24. The article of claim 22, wherein the instructions when executed enable the processor to decrease the number of processed workpieces that are to be measured if the performance value is less than a preselected threshold value.25. The article of claim 23, wherein the instructions when executed enable the processor to adjust the number of processed workpieces that are to be measured for each process run.26. 
A system, comprising:a dispatch module;a controller adapted to:determine a performance value based on a measurement associated with at least one or more previously processed workpieces;adjust a sampling protocol of the dispatch module for one or more processed workpieces based on the determined performance value;measure the one or more processed workpieces according to the adjusted sampling protocol to provide one or more measurements; andadjust at least one of a process model and a control parameter based on at least a portion of the one or more measurements.27. The system of claim 26, wherein the controller is implemented within an advanced control process framework. |
BACKGROUND OF THE INVENTION1. Field of the InventionThis invention relates generally to an industrial process, and, more particularly, to adjusting a sampling protocol of processed workpieces in an adaptive semiconductor process.2. Description of the Related ArtThere is a constant drive within the semiconductor industry to increase the quality, reliability and throughput of integrated circuit devices, e.g., microprocessors, memory devices, and the like. This drive is fueled by consumer demands for higher quality computers and electronic devices that operate more reliably. These demands have resulted in a continual improvement in the manufacture of semiconductor devices, e.g., transistors, as well as in the manufacture of integrated circuit devices incorporating such transistors. Additionally, reducing the defects in the manufacture of the components of a typical transistor also lowers the overall cost per transistor as well as the cost of integrated circuit devices incorporating such transistors.Generally, a set of processing steps is performed on a group of wafers, sometimes referred to as a "lot," using a variety of processing tools, including photolithography steppers, etch tools, deposition tools, polishing tools, rapid thermal processing tools, implantation tools, etc. The technologies underlying semiconductor processing tools have attracted increased attention over the last several years, resulting in substantial improvements.One technique for improving the operation of a semiconductor processing line includes using a factory wide control system to automatically control the operation of the various processing tools. The manufacturing tools communicate with a manufacturing framework or a network of processing modules. Each manufacturing tool is generally connected to an equipment interface. The equipment interface is connected to a machine interface that facilitates communications between the manufacturing tool and the manufacturing framework. 
The machine interface can generally be part of an Advanced Process Control (APC) system. The APC system initiates a control script based upon a manufacturing model, which can be a software program that automatically retrieves the data needed to execute a manufacturing process. Often, semiconductor devices are staged through multiple manufacturing tools for multiple processes, generating data relating to the quality of the processed semiconductor devices.During the fabrication process, various events may take place that affect the performance of the devices being fabricated. That is, variations in the fabrication process steps result in device performance variations. Factors, such as feature critical dimensions, doping levels, particle contamination, film optical properties, film thickness, film uniformity, etc., all may potentially affect the end performance of the device. Various tools in the processing line are controlled in accordance with performance models to reduce processing variation. Commonly controlled tools include photolithography steppers, polishing tools, etching tools, and deposition tools, etc. Pre-processing and/or post-processing metrology data is supplied to process controllers for the tools. Operating recipe parameters, such as processing time, are calculated by the process controllers based on the performance model and the metrology data to attempt to achieve post-processing results as close to a target value as possible. Reducing variation in this manner leads to increased throughput, reduced cost, higher device performance, etc., all of which equate to increased profitability.Run-to-run control in semiconductor manufacturing is a type of batch control, where a batch may be as small as one wafer or as large as several lots of wafers. The standard output of a run-to-run controller is a process recipe. This recipe defines the set points for "low-level" controllers built into the processing tool. 
The process recipe is generally calculated based on an estimated "process" state (e.g., the processing tool state, wafer state, etc.) and a process model that is substantially representative of the operation of the process. The "process" state is typically not measured directly but rather estimated based on the measurements from previously processed wafers. Based on at least the process model and the estimated process state, the run-to-run controller supervises the processing tool by specifying required values for process variables such as temperature, pressure, flow, and process time. The processing tool initiates the activities necessary to maintain these variables at the requested values.In an adaptive process, the process model or parameters used to determine the next recipe may be adjusted, as desired, based on metrology data associated with previously processed workpieces to bring the actual process results closer to the target results. It may be desirable to adjust the process model or parameters, for example, if the controller is unable to achieve the desired results because of disturbance or process changes. Because the process model or parameters are adjusted based on the metrology data, the amount of metrology data that is available may affect how reliably the process model/parameters may be adjusted. Thus, if a system employing a fixed sampling frequency plan measures a fixed number of wafers, then the amount of metrology data that is available also remains fixed. In a fixed sampling frequency plan, for example, only one out of every five processed wafers may be measured because of time and cost concerns. 
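The estimate-then-control loop described above can be sketched as follows. This is an illustrative assumption, not the patent's prescribed estimator: an exponentially weighted moving average (EWMA) is a common run-to-run choice for inferring the unmeasured process state from prior-wafer metrology, and the names `ewma_estimate`, `next_recipe`, and `weight` are hypothetical.

```python
# Hypothetical run-to-run sketch: the process state (e.g., an etch rate)
# is not measured directly, so it is estimated from previously processed
# wafers and used to compute the next recipe set point.

def ewma_estimate(prev_state, implied_state, weight=0.3):
    """Blend the previous state estimate with the state implied by the
    most recent wafer measurement (EWMA filter)."""
    return weight * implied_state + (1.0 - weight) * prev_state

def next_recipe(target_output, estimated_state):
    """Set point (e.g., etch time) that a simple linear model predicts
    will achieve the target output (e.g., etch depth)."""
    return target_output / estimated_state

rate = 100.0                      # prior etch-rate estimate, nm/min
rate = ewma_estimate(rate, 90.0)  # last wafer implied 90 nm/min
time = next_recipe(500.0, rate)   # minutes needed to etch 500 nm
print(round(rate, 1), round(time, 2))
```

The filter weight trades responsiveness against noise rejection; the patent leaves the estimator open, so any state-update rule could be substituted here.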
A fixed sampling frequency plan thus may not offer an efficient or flexible plan for adjusting the process model or parameters to achieve the desired process results.The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.SUMMARY OF THE INVENTIONIn one embodiment of the present invention, a method is provided for adjusting a sampling protocol in an adaptive control process. The method comprises determining a performance value based on a measurement associated with at least one or more previously processed workpieces, adjusting a sampling protocol for one or more processed workpieces based on the determined performance value, and measuring the one or more processed workpieces according to the sampling protocol to provide one or more measurements. The method further comprises adjusting at least one of a process model and a control parameter based on at least a portion of the one or more measurements.In another embodiment of the present invention, an apparatus is provided for adjusting a sampling protocol in an adaptive control process. The apparatus comprises an interface communicatively coupled to a control unit. The interface is adapted to receive a measurement associated with at least one or more previously processed workpieces. The control unit is adapted to determine a performance value based on the measurement associated with at least one or more of the previously processed workpieces, adjust a sampling protocol for one or more processed workpieces based on the determined performance value, and receive metrology data comprising measurements of the one or more processed workpieces according to the sampling protocol. 
The control unit is further adapted to adjust at least one of a process model and a control parameter based on at least a portion of the metrology data.In a further embodiment of the present invention, an article comprising one or more machine-readable storage media containing instructions is provided for adjusting a sampling protocol in an adaptive control process. The one or more instructions, when executed, enable the processor to determine a performance value based on a measurement associated with at least one or more previously processed workpieces, adjust a sampling protocol for one or more processed workpieces based on the determined performance value, and measure the one or more processed workpieces according to the adjusted sampling protocol to provide one or more measurements. The processor is further enabled to adjust at least one of a process model and a control parameter based on at least a portion of the one or more measurements.In a further embodiment of the present invention, a system is provided for adjusting a sampling protocol in an adaptive control process. The system comprises a dispatch module and a controller. The controller is adapted to determine a performance value based on a measurement associated with at least one or more previously processed workpieces, adjust a sampling protocol of the dispatch module for one or more processed workpieces based on the determined performance value, and measure the one or more processed workpieces according to the adjusted sampling protocol to provide one or more measurements. The controller is further adapted to adjust at least one of a process model and a control parameter based on at least a portion of the one or more measurements.BRIEF DESCRIPTION OF THE DRAWINGSThe invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:FIG. 
1 illustrates a block diagram of an industrial system, in accordance with one embodiment of the present invention;FIG. 2 illustrates a flow diagram of a method that may be implemented in the industrial system of FIG. 1, in accordance with one embodiment of the present invention; andFIG. 3 illustrates a flow diagram of a method of adjusting the sampling protocol of processed workpieces in accordance with one embodiment of the present invention.While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTSIllustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.Turning now to the drawings, and specifically referring to FIG. 1, a block diagram of a system 100 is illustrated, in accordance with one embodiment of the present invention. 
The system 100, in the illustrated embodiment, may perform at least one process operation 102, which may be an industrial process, such as a semiconductor fabrication process, a photographic process, a chemical process, or any other process in which the process state(s) or process output may drift with time.In the system 100, the process operation 102 may be performed using one or more processing tools 105. Generally, the particular type of process operation 102 that is performed, and the type of processing tool(s) 105 employed in that process operation 102, depends on the particular implementation. For example, in the context of a chemical industrial process, the process operation 102 may include processing a polymer. In the context of a photographic process, the process operation 102 may, for example, include processing a film.For illustrative purposes, the process operation 102 depicted in FIG. 1 is at least a portion of a semiconductor fabrication process, which, for example, may be part of an overall semiconductor process flow. Examples of the process operation 102 may be an etch process, deposition process, chemical mechanical planarization (CMP), and the like. The processing tool 105, in the illustrated embodiment, may take the form of any semiconductor fabrication equipment used to produce a processed workpiece, such as a silicon wafer. The semiconductor process may be utilized to produce a variety of integrated circuit products including, but not limited to, microprocessors, memory devices, digital signal processors, application specific integrated circuits (ASICs), or other similar devices. An exemplary processing tool 105 may include an exposure tool, an etch tool, a deposition tool, a polishing tool, a rapid thermal anneal processing tool, a test-equipment tool, an ion implant tool, a packaging tool and the like.In the system 100 of FIG. 1, the process operation 102 may be performed using one or more processing tools 105. 
The system 100 may include one or more metrology tools 112 for measuring one or more of a variety of aspects of the workpieces (e.g., wafers) that are processed in the process operation 102. The metrology tool 112, in one embodiment, may be capable of measuring aspects of the workpieces off-line, in-line, in situ or a combination thereof. In the illustrated embodiment, a dispatch module 114 indicates and/or identifies the number of workpieces that are provided to the metrology tool 112 for measurements.In accordance with one or more embodiments of the present invention, and as is described in greater detail below, the dispatch module 114 adjusts the measurement frequency of the processed workpieces based on a value of a control performance index. The control performance index, in one embodiment, represents the amount of deviation or difference between the results of the processed workpiece(s) and the target value(s) (i.e., the difference between the actual process result(s) and the expected result(s)). Depending on the value of the control performance index, the dispatch module 114 may increase the sampling frequency, decrease the sampling frequency, or leave it unchanged. As utilized herein, adjusting the "sampling frequency" may include increasing or decreasing the number of workpieces (e.g., wafers) whose output characteristics are measured, increasing or decreasing the number of measurements taken from a given workpiece or workpieces, altering the types of measurements taken from a given workpiece or workpieces, or any combination thereof.The manufacturing system 100 may include a manufacturing execution system (MES) 115 that is coupled to the APC framework 120. The manufacturing execution system 115 may, for example, determine the processes that are to be performed by the processing tool 105, when these processes are to be performed, how these processes are to be performed, etc. 
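The three dimensions of "sampling frequency" named above (workpieces sampled per run, measurements per workpiece, and the types of measurements) might be represented as a small structure. This is a hypothetical sketch; `SamplingProtocol` and its field names are invented for illustration and are not the patent's terminology.

```python
# Hypothetical representation of a sampling protocol that the dispatch
# module could scale up or down based on the control performance index.
from dataclasses import dataclass

@dataclass
class SamplingProtocol:
    wafers_per_run: int = 5
    sites_per_wafer: int = 9
    measurement_types: tuple = ("critical_dimension",)

    def scaled(self, factor):
        """Return a copy with wafer and site counts scaled by `factor`;
        at least one of each is always measured."""
        return SamplingProtocol(
            wafers_per_run=max(1, round(self.wafers_per_run * factor)),
            sites_per_wafer=max(1, round(self.sites_per_wafer * factor)),
            measurement_types=self.measurement_types,
        )

denser = SamplingProtocol().scaled(2.0)
print(denser.wafers_per_run, denser.sites_per_wafer)  # 10 18
```

Altering the *types* of measurements (the third adjustment the text mentions) would simply replace the `measurement_types` tuple rather than scale a count.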
In the illustrated embodiment, the manufacturing execution system 115 manages and controls the overall system through the APC framework 120.An exemplary APC framework 120 that may be suitable for use in the manufacturing system 100 may be implemented using the Catalyst system offered by KLA-Tencor, Inc. The Catalyst system uses Semiconductor Equipment and Materials International (SEMI) Computer Integrated Manufacturing (CIM) Framework compliant system technologies and is based on the Advanced Process Control (APC) Framework. CIM (SEMI E81-0699-Provisional Specification for CIM Framework Domain Architecture) and APC (SEMI E93-0999-Provisional Specification for CIM Framework Advanced Process Control Component) specifications are publicly available from SEMI, which is headquartered in Mountain View, Calif.The APC framework 120 includes at least one process controller 155 that, through a feedback or feedforward process, aids the processing tool 105 towards performing a desired process to thereby achieve a desired result. The process controller 155 in the illustrated embodiment includes a control unit 156, a storage unit 157, and a process model that is storable in the storage unit 157. The process controller 155, based at least on an input from an estimator module 180 and an input target value from line 182, uses a process model to determine the next control move for the processing tool 105. The particular control actions taken by the process controller 155 depend on the particular processes performed by the processing tool 105, and the output from the estimator module 180.In an adaptive process, the process model (or parameter(s)) employed by the process controller 155 may be adjusted if the process results are not within an acceptable range of the target value. The process results may deviate from the target value for a variety of reasons, including, but not limited to, a presence of a disturbance or a change in the process. 
In the illustrated embodiment, the adjustment to the process model or parameter(s) is performed by a parameter/model adjustment (PMA) module 185. As shown, in the illustrated embodiment, the PMA module 185 receives three input values and provides two output values. The three inputs include the target value, a control move value, and a measurement value received from lines 182, 186, 187, respectively. The PMA module 185 delivers the process model or parameters (which may have been adjusted) to the process controller 155 and also delivers a sample signal to the dispatch module 114 to adjust the sampling protocol. The PMA module 185, in one embodiment, may include one or more interface units 190 to communicate with various components of the system 100.The process model 158 employed by the process controller 155 may be a relatively simple equation-based model (e.g., linear, exponential, weighted average, etc.) or a more complex model, such as a neural network model, principal component analysis (PCA) model, partial least squares/projection to latent structures (PLS) model, or the like. The specific implementation of the process model 158 may vary depending on the modeling techniques selected and the process being controlled.The process controller 155, in one embodiment, maintains incoming "state" information associated with the process operation 102, where the "state" information may be based at least in part on the characteristics (i.e., wafer state data) of the wafer selected for gathering metrology data and/or state information known about the controlled processing tool 105 (i.e., tool state data). The phrase "process state" is used herein to denote the "workpiece state" and/or the "processing tool state."The estimator module 180 estimates the next tool state of the processing tool 105 (or the next processing state) based on metrology data associated with a previously processed workpiece and a previously estimated state. 
The phrase "next tool state," as utilized herein, refers to the state of the processing tool 105 before the next batch of workpieces is processed. Based on the estimated next tool state, the process controller 155 generates the next recipe or control move for the processing tool 105. For example, in the context of an etching process, the estimator module 180 estimates an etch rate of the processing tool 105 based on the received metrology data (e.g., etch depth), and the process controller 155 then uses the estimated etch rate to determine an etch time (i.e., recipe) that the processing tool 105 should use to etch the next workpiece (e.g., wafer).In the illustrated embodiment, the process controller 155 is computer programmed with software to implement the functions described. However, as will be appreciated by those of ordinary skill in the art, a hardware controller designed to implement the particular functions may also be used. Moreover, the functions performed by the process controller 155, as described herein, may be performed by multiple controller devices distributed throughout a system. Additionally, the process controller 155 may be a stand-alone controller, resident in the processing tool 105, or part of a system controlling operations in an integrated circuit manufacturing facility. 
The term "module," as utilized herein, may be implemented in software, hardware, or any combination thereof.Unless specifically stated otherwise, or as is apparent from the discussion, terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage, transmission or display devices.It should be understood that the illustrated components shown in the block diagram of the system 100 in FIG. 1 are illustrative only, and that, in alternative embodiments, additional or fewer components may be utilized without deviating from the spirit or scope of the invention. As an example, in one embodiment, the various components of the system 100 may communicate with each other without the APC framework 120. As an additional example, in one embodiment, the processing tool 105, metrology tool 112, and/or MES 115 may each interface with the APC framework 120 through an associated equipment interface (not shown). Additionally, it should be noted that although various components, such as the dispatch module 114 of the system 100 of FIG. 1 are shown as stand-alone components, in alternative embodiments, such components may be integrated with other components of the system 100.Referring now to FIG. 2, a flow diagram of a method that may be implemented in the manufacturing system 100 of FIG. 1 is illustrated, in accordance with one embodiment of the present invention. The method of FIG. 2 illustrates the exemplary steps performed in association with a given process run. 
These steps may be repeated as desired for each process run.In the manufacturing system 100, after (or as) a first batch of workpieces is processed by the processing tool 105, the metrology tool 112 (or an in-situ metrology tool) measures (at 212) one or more output characteristics of the processed workpiece. In the context of an etch process, the metrology data may, for example, include the critical dimension, profile and/or etch depth of the features formed on the processed wafer. In the illustrated embodiment of FIG. 1, the metrology data is provided to and received (at 215) by the estimator module 180 of the process controller 155 and the PMA module 185.The PMA module 185 determines (at 225) a performance index, which, in one embodiment, represents the amount of deviation or difference between the results of the processed workpiece(s) and the target value(s) (i.e., the difference between the actual process result(s) and the expected result(s)). Thus, the performance index, in one embodiment, may be indicative of how close the process result(s) is/are to the target value(s). For example, a relatively large performance index may indicate an occurrence of a larger-than-expected deviation in the process result, which may require the PMA module 185 to adjust the process model (or parameters) to bring the process result closer in line with the target result.In one embodiment, the performance index may be calculated by determining the difference between a measurement represented by the metrology data (received in block 215) and the target value provided on the line 182 (see FIG. 1). If more than one performance index is calculated (one for each measured workpiece, for example), then, in one embodiment, the plurality of calculated index values may be combined (e.g., averaged) to arrive at a composite value. The difference may then be determined between the composite value and the target value to ascertain the performance index (at 225). 
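The composite-then-difference computation described above can be sketched as follows. Averaging is one of the combination choices the text mentions; the function name and the use of absolute deviation are assumptions, since the patent leaves the exact combination method open.

```python
# Sketch of a composite performance index: combine per-wafer
# measurements (here, by averaging) and take the deviation from target.

def performance_index(measurements, target):
    """Absolute deviation of the composite (mean) measurement from the
    target value."""
    composite = sum(measurements) / len(measurements)
    return abs(composite - target)

# Three measured wafers against a 90 nm target critical dimension:
print(round(performance_index([92.0, 88.0, 95.0], 90.0), 2))  # 1.67
```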
In an alternative embodiment, each of the plurality of performance indices may be considered individually rather than collectively.The PMA module 185 determines (at 230) if it is desirable to adjust a process model and/or process parameters. In one embodiment, this may be determined by comparing the performance index value to a preselected threshold value. If, for example, the performance index is greater than the preselected threshold value (i.e., an indication of a larger-than-desired deviation in the actual results from the expected results), then it may be desirable to adjust the process model or parameter(s). If the performance index is less than or equal to the preselected threshold value, then a process model (or parameters) adjustment may not be desired because the process may be operating within an acceptable range.Assuming that a process model (or parameter) adjustment is not desired (at 230), then, in one embodiment, the PMA module 185 adjusts a sampling protocol (at 240) of the dispatch module 114 based on the performance index. Adjusting the sampling protocol (at 240) may include adjusting the sampling frequency of the processed workpieces that are to be measured, a number of features formed on the processed workpieces that are to be measured, and/or a type of features that are to be measured.In one embodiment, the sampling protocol may be adjusted (at 240) by indicating to the dispatch module 114 to increase or decrease the number (or type) of sample measurements that are desired from the processed workpieces. In one embodiment, based on the performance index value, it may be determined that no adjustment to the sampling frequency is desired (at 240). If no changes to the sampling frequency are desired, then the PMA module 185 may indicate as such to the dispatch module 114 or, alternatively, provide no indication to the dispatch module 114, thus signifying that no change is desired to the sampling frequency. 
One embodiment of the act of adjusting the sampling protocol (at 240) is illustrated in FIG. 3, which is described later.In FIG. 2, the processing tool 105 processes (at 250) a next batch of workpieces, and the metrology tool 112 measures (at 255) a selected number of processed workpieces based on the adjusted sampling protocol (at 240). Thus, for example, if the PMA module 185 increases the sampling frequency (at 240), the metrology tool 112 may take more measurements than were taken during earlier process run(s). The metrology tool 112 may take more measurements, for example, by increasing the number of processed workpieces that are sampled, increasing the number of features of the processed workpieces that are measured, or a combination thereof. In other instances, as explained above, the metrology tool 112 may measure fewer processed workpieces than, for example, the previous process run(s). It should be understood that measuring the processed workpieces may comprise measuring one or more features or output characteristics (e.g., deposition thickness, etch depth, critical dimensions) of the processed workpieces.Assuming that a process model (or parameter) adjustment is desired (at 230), then, in one embodiment, the PMA module 185 increases a sampling protocol (at 262) of the dispatch module 114 based on the performance index. The processing tool 105 processes (at 263) a next batch of workpieces, and the metrology tool 112 measures (at 266) a selected number of processed workpieces based on the increased sampling protocol (at 262).The PMA module 185 adjusts (at 270) the process model or parameter(s) utilized by the process controller 155 based on the measurements (at 266) of the selected number of processed workpieces, which, as indicated above, are taken based on the increased sampling protocol (see block 262). 
Increasing the number of measurements of the selected number of processed workpieces, for example, means that more sample measurements (relative to the previous run(s)) are available to the PMA module 185 to make the desired adjustments (at 270) to the process model or parameter(s).Having more sample measurements enables the PMA module 185 to more accurately adjust the process model or parameter(s) to control the process operation 102 to achieve the desired results. The nature of the adjustment made to the process model or parameter(s) may vary depending on implementation. For example, assume the process controller 155 is a PID (proportional-integral-derivative) controller, i.e., the transfer function of the controller is u = K1·ỹ + K2·∫ỹ dt + K3·(dỹ/dt), where u is the control move and ỹ = y − ŷ is the difference between the process output (y) and the process model predicted output (ŷ). K1, K2, and K3 are controller parameters. If an adjustment to the parameters (e.g., K1, K2, and K3) of the controller model is desired, which can be done by solving an optimization problem, then it may be useful to have additional measurements to get a better solution for the parameters. An example of adjusting the process model may include changing the process model currently employed by the process controller 155 (e.g., y = ax + b) to another process model (e.g., y = cx² + dx + e), if the current process model is unable to control the process operation 102 as desired. In the above exemplary process models, y is the process output, and x is the process state.In FIG. 3, the PMA module 185 determines (at 310) if the performance index (determined at block 225) is greater than a first preselected threshold value, and, if so, the PMA module 185 indicates to the dispatch module 114 to increase (at 315) the sampling frequency. 
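A minimal sketch of the model-change example above: fit the current linear model y = ax + b by ordinary least squares, and flag a switch to a richer (e.g., quadratic) model when the residual error stays large. The 0.1 cutoff and the function name are illustrative assumptions, not values from the patent.

```python
# Fit y = a*x + b by least squares; a large residual suggests the
# linear model cannot track the process and a richer model is needed.

def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    b = my - a * mx
    sse = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]  # data with curvature
a, b, sse = linear_fit(xs, ys)
needs_quadratic = sse > 0.1  # e.g., switch to y = c*x**2 + d*x + e
print(round(a, 3), round(b, 3), needs_quadratic)  # 2.0 -0.333 True
```

More sample measurements, as the text notes, make such a fit (or the equivalent optimization over K1, K2, K3) better conditioned.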
An increase in the sampling frequency may be desired because a performance index higher than the first preselected threshold may indicate that the process result is not within an acceptable range of the target value, and thus more measurements are needed to adjust the process model or parameters. In one embodiment, the PMA module 185, depending on the magnitude of the performance index, may indicate to the dispatch module 114 the new sampling frequency that is desired. It should be understood that the particular value assigned to the first preselected threshold value will depend on the particular implementation. In an alternative embodiment, the PMA module 185 may not provide an indication to increase the sampling frequency until a plurality of performance indices are determined to be greater than the first preselected threshold value. That is, the sampling frequency is increased only after several consecutive performance indices (associated with several process runs) are higher than the first threshold value. If the performance index is not greater than the first preselected threshold value (at 310), then the PMA module 185 determines (at 320) if the performance index is less than a second preselected threshold value. If the performance index is less than the second preselected threshold value, then the PMA module 185, in one embodiment, indicates (at 330) to the dispatch module 114 to decrease the sampling frequency. A decrease in the sampling frequency may be desired because a performance index lower than the second preselected threshold can indicate that the process result is within an acceptable range of the target result, and thus the number of measurements needed may be reduced because the current process model is controlling the process as desired.
In an alternative embodiment, the PMA module 185 may not provide an indication to lower the sampling frequency until a plurality of performance indices are determined to be less than the second preselected threshold value. That is, the sampling frequency is lowered only after several consecutive performance indices (associated with several process runs) are lower than the second threshold value. In one embodiment, the PMA module 185, depending on the magnitude of the performance index, may indicate to the dispatch module 114 the new sampling frequency that is desired. The particular value chosen for the second preselected threshold value will depend on the particular implementation. If the performance index is not greater than the first preselected threshold value and is not less than the second preselected threshold value, then, in the illustrated embodiment, the PMA module 185 may indicate to the dispatch module 114 that no change in the sampling frequency is desired (at 335). In an alternative embodiment, if it is determined that no change is desired in the sampling frequency, the PMA module 185 may provide no indication to the dispatch module 114, thereby indicating that the previous sampling frequency (or some predefined default sampling frequency) should be applied. One or more embodiments of the present invention adjust the sampling protocol as needed based on the performance index to adjust a process model or parameters in an adaptive industrial process. For example, the sampling rate may be increased if a relatively large performance index is determined, or it may be lowered if a relatively small performance index is determined. In other instances, the sampling protocol may not be altered if the determined performance index is neither relatively large nor small.
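The two-threshold decision of FIG. 3 (blocks 310/315, 320/330, and 335) can be sketched as a small function. The function name and threshold values are hypothetical; only the branching structure follows the description above.

```python
# Sketch of the FIG. 3 decision: compare the performance index against a high
# threshold (block 310) and a low threshold (block 320) and pick an action.

def adjust_sampling(performance_index, t_high, t_low):
    if performance_index > t_high:   # block 310 -> increase (block 315)
        return "increase"
    if performance_index < t_low:    # block 320 -> decrease (block 330)
        return "decrease"
    return "no_change"               # neither large nor small (block 335)
```

The alternative embodiments that wait for several consecutive out-of-range indices would simply wrap this function with a counter over recent process runs before acting.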
By adjusting the sampling protocol as desired, an efficient and effective way of controlling the process to achieve the desired objectives is provided. The various system layers, routines, or modules may be executable by the control unit 156 (see FIG. 1). As utilized herein, the term "control unit" may include a microprocessor, a microcontroller, a digital signal processor, a processor card (including one or more microprocessors or controllers), or other control or computing devices. The storage unit 157 (see FIG. 1) referred to in this discussion may include one or more machine-readable storage media for storing data and instructions. The storage media may include different forms of memory, including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs), and flash memories; magnetic disks such as fixed, floppy, and removable disks; other magnetic media, including tape; and optical media such as compact disks (CDs) or digital video disks (DVDs). Instructions that make up the various software layers, routines, or modules in the various systems may be stored in respective storage devices. The instructions, when executed by a respective control unit, cause the corresponding system to perform programmed acts. The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified, and all such variations are considered within the scope and spirit of the invention.
Accordingly, the protection sought herein is as set forth in the claims below. |
PROBLEM TO BE SOLVED: To provide apparatuses, systems, and methods for identifying instructions to be speculatively performed in parallel.
SOLUTION: In a system 100, a deep learning compiler 102 uses a representation 104 of a computer program to generate a modified representation 106 of the computer program that indicates at least one operation that can be speculatively performed. The DL compiler 102 is a computer program that runs on a CPU and is accessible via an API. The representation 104 of the computer program is a graph representation including instructions to be launched on a parallel processing unit, such as a GPU, by a host. The modified representation 106 of the computer program is a modified graph representation that includes instructions to be launched on a device by a host and indicates instructions that can be speculatively launched on the device by the host. The deep learning compiler 102 further generates a memory allocation plan 108.
SELECTED DRAWING: Figure 1
1. A processor comprising one or more circuits to implement one or more instructions identified by a compiler to be performed speculatively in parallel.
2. The processor of claim 1, wherein the one or more instructions are identified to be performed speculatively in parallel by the compiler based at least in part on identifying a copy operation, and the one or more circuits are to implement the one or more instructions based at least in part on receiving a command from another processor.
3. The processor of claim 1, wherein the one or more instructions are identified to be performed speculatively in parallel by the compiler based at least in part on identifying copy operations between a parallel processing unit and a host computer system and labeling safe operations following one or more identified copy operations.
4. The processor of claim 1, wherein the one or more instructions include extended live ranges of variables used by operations associated with instructions identified to be performed speculatively in parallel.
5. The processor of claim 1, wherein the processor is part of a parallel processing unit and the one or more circuits are to implement the one or more instructions after receiving a kernel launch command from a host computer system.
6. The processor of claim 1, wherein the one or more instructions are part of a while loop.
7. The processor of claim 1, wherein the one or more instructions implement part of an inference operation using a recurrent neural network.
8. A system comprising: one or more processors to implement one or more instructions identified by a compiler to be performed speculatively in parallel; and one or more memories to store the one or more instructions.
9. The system of claim 8, wherein the one or more instructions are identified to be performed speculatively in parallel by the compiler based at least in part on identifying a copy operation from a parallel processing unit to a host computer system.
10. The system of claim 8, wherein the one or more instructions are identified to be performed speculatively in parallel by the compiler based at least in part on finding one or more conditional branches in a representation of a computer program that uses a neural network.
11. The system of claim 8, wherein the one or more processors are a first one or more processors, and the system further comprises a second one or more processors to launch the one or more instructions for execution by the first one or more processors.
12. The system of claim 8, wherein the one or more processors are a first one or more processors, the system further comprises a second one or more processors to launch the one or more instructions for execution by the first one or more processors, and the second one or more processors are to cease speculatively launching instructions in response to receiving, via a copy operation, a value that satisfies a condition preceding the one or more instructions in a representation of a computer program.
13. The system of claim 8, wherein the one or more instructions are identified to be performed speculatively in parallel by the compiler based at least in part on labeling operations that are safe to be speculatively performed.
14. The system of claim 8, wherein the one or more instructions are identified to be performed speculatively in parallel by the compiler based at least in part on searching a representation of a computer program for copy operations and identifying operations following the copy operations that are safe to perform speculatively.
15. The system of claim 8, wherein the one or more instructions are part of a while loop implementing part of an inference operation using a neural network.
16. A method comprising implementing one or more instructions identified by a compiler to be performed speculatively in parallel.
17. The method of claim 16, wherein the one or more instructions are identified to be performed speculatively in parallel by the compiler based at least in part on identifying operations that do not change random state, overwrite outputs, use signal instructions, or use wait instructions.
18. The method of claim 16, wherein the one or more instructions are identified to be performed speculatively in parallel by the compiler based at least in part on identifying a conditional branch and selecting a path from a plurality of paths following the conditional branch.
19. The method of claim 16, wherein the one or more instructions are identified to be performed speculatively in parallel by the compiler based at least in part on identifying copy operations.
20. The method of claim 16, wherein the one or more instructions include extended live ranges for variables used in speculatively implemented operations.
21. The method of claim 16, wherein the one or more instructions are identified to be performed speculatively in parallel by the compiler based at least in part on identifying a copy operation, and the one or more instructions implement part of an inference operation using a neural network.
22. A machine-readable medium storing a set of instructions that, when performed by one or more processors, cause the one or more processors to at least identify one or more instructions to be performed speculatively in parallel.
23. The machine-readable medium of claim 22, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to identify the one or more instructions to be performed speculatively in parallel based at least in part on identifying copy operations between a parallel processing unit and a host computer system in a representation of a computer program.
24. The machine-readable medium of claim 22, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least identify operations, following a copy operation, that are safe to perform.
25. The machine-readable medium of claim 22, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least label operations that are safe to be speculatively performed.
26. The machine-readable medium of claim 22, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least label operations that are speculatively safe and extend the live ranges of variables associated with operations labeled as speculatively safe.
27. The machine-readable medium of claim 22, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least search a representation of a computer program for copy operations between a graphics processing unit and a host computer system and identify operations following the copy operations that are safe to be performed speculatively.
28. The machine-readable medium of claim 22, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least extend the live ranges of variables associated with operations identified to be speculatively executed in parallel.
29. The machine-readable medium of claim 22, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least, based at least in part on identifying a copy operation in a representation of a computer program, find a conditional branch in the representation of the computer program, select a path from a plurality of paths following the conditional branch, and identify instructions in the selected path that are speculatively safe.
30. A vehicle comprising: a computer vision system including one or more processors to identify one or more trajectories of a corresponding one or more objects based at least in part on performing one or more speculative operations using a representation of a computer program containing one or more instructions identified by a compiler to be performed speculatively in parallel; and one or more of a propulsion system, a directional control system, and a vehicle operator notification system to perform one or more actions based at least in part on the identified one or more trajectories.
31. The vehicle of claim 30, wherein the one or more processors comprise one or more first processors in a host computer system and one or more second processors in a parallel processing unit, and the one or more second processors are to speculatively execute instructions based at least in part on receiving commands from the host computer system that launch kernels containing the instructions on the parallel processing unit.
32. The vehicle of claim 30, wherein the one or more instructions are identified to be speculatively implemented by the compiler based at least in part on identifying copy operations.
33. The vehicle of claim 30, wherein the one or more instructions are identified to be speculatively performed by the compiler based at least in part on labeling safe operations.
34. The vehicle of claim 30, wherein the one or more instructions include extended live ranges for variables used by operations associated with instructions identified to be performed speculatively in parallel.
35. The vehicle of claim 30, wherein the one or more instructions implement part of an inference operation using a recurrent neural network.
At least one embodiment relates to processing resources used to implement and facilitate artificial intelligence. For example, at least one embodiment relates to a processor or computing system used to perform training and/or inference using neural networks according to various novel techniques described herein. Training a neural network and/or inferring using a neural network can use significant memory, time, or computing resources. The amount of memory, time, or computing resources used for training a neural network and/or inferring using a neural network can be improved.
FIG. 1 is a block diagram illustrating a system for identifying speculatively executable instructions, according to at least one embodiment; FIG. 2 is a block diagram illustrating a system for speculatively executing instructions by invoking a device from a host, according to at least one embodiment; FIG. 4 is a flowchart of a technique for generating instructions, according to at least one embodiment; FIG. 4 is a flowchart of a technique for identifying possible speculative instructions, according to at least one embodiment; FIG. 4 is a flowchart of a technique for speculatively invoking instructions, according to at least one embodiment; FIG. 4 is a comparison of inference operations performed over time, in accordance with at least one embodiment; FIG. 4 illustrates inference and/or training logic, according to at least one embodiment; FIG. 4 illustrates inference and/or training logic, according to at least one embodiment; FIG. 4 illustrates training and deployment of a neural network, according to at least one embodiment; FIG. 1 illustrates an exemplary data center system, in accordance with at least one embodiment; FIG. 1 is a diagram illustrating an example of an autonomous vehicle, according to at least one embodiment; FIG. 10B is a diagram illustrating an example location and field of view of the camera of the autonomous vehicle of FIG. 10A, according to at least one embodiment; FIG. 10B is a block diagram illustrating an exemplary system architecture for the autonomous vehicle of FIG. 10A, according to at least one embodiment; FIG. 10B illustrates a system for communication between a cloud-based server and the autonomous vehicle of FIG. 10A, according to at least one embodiment; FIG. 1 is a block diagram of a computer system, in accordance with at least one embodiment; FIG. 1 is a block diagram of a computer system, in accordance with at least one embodiment; FIG. 1 illustrates a computer system, in accordance with at least one embodiment; FIG. 1 illustrates a computer system, in accordance with at least one embodiment; FIG. 1 illustrates a computer system, in accordance with at least one embodiment; FIG. 1 illustrates a computer system, in accordance with at least one embodiment; FIG. 1 illustrates a computer system, in accordance with at least one embodiment; FIG. 1 illustrates a computer system, in accordance with at least one embodiment; FIG. 4 illustrates a shared programming model, according to at least one embodiment; FIG. 4 illustrates a shared programming model, according to at least one embodiment; FIG. 1 illustrates an exemplary integrated circuit and associated graphics processor, in accordance with at least one embodiment; FIG. 1 illustrates an exemplary integrated circuit and associated graphics processor, in accordance with at least one embodiment; FIG. 1 illustrates an exemplary integrated circuit and associated graphics processor, in accordance with at least one embodiment; FIG. 4 illustrates additional exemplary graphics processor logic, in accordance with at least one embodiment; FIG. 4 illustrates additional exemplary graphics processor logic, in accordance with at least one embodiment; FIG. 1 illustrates a computer system, in accordance with at least one embodiment; FIG. 4 illustrates a parallel processor, according to at least one embodiment; FIG. 4 illustrates a partition unit, according to at least one embodiment; FIG. 4 is a diagram illustrating processing clusters, in accordance with at least one embodiment; FIG. 1 illustrates a graphics multiprocessor, according to at least one embodiment; FIG. 1 illustrates a multi-graphics processing unit (GPU) system, according to at least one embodiment; FIG. 2 illustrates a graphics processor, according to at least one embodiment; FIG. 1 is a block diagram illustrating a processor micro-architecture for a processor, according to at least one embodiment; FIG. 4 illustrates a deep learning application processor, according to at least one embodiment; FIG. 1 is a block diagram illustrating an exemplary neuromorphic processor, in accordance with at least one embodiment; FIG. 1 illustrates at least a portion of a graphics processor, in accordance with one or more embodiments; FIG. 1 illustrates at least a portion of a graphics processor, in accordance with one or more embodiments; FIG. 1 illustrates at least a portion of a graphics processor, in accordance with one or more embodiments; FIG. 1 is a block diagram of a graphics processing engine of a graphics processor, in accordance with at least one embodiment; FIG. 1 is a block diagram of at least a portion of a graphics processor core, according to at least one embodiment; FIG. 4 illustrates thread execution logic including an array of processing elements of a graphics processor core, in accordance with at least one embodiment; FIG. 4 illustrates thread execution logic including an array of processing elements of a graphics processor core, in accordance with at least one embodiment; FIG. 1 illustrates a parallel processing unit ("PPU"), according to at least one embodiment; FIG. 1 illustrates a general purpose processing cluster ("GPC"), according to at least one embodiment; FIG. 4 illustrates a memory partition unit of a parallel processing unit ("PPU"), according to at least one embodiment; FIG. 4 illustrates a streaming multiprocessor, according to at least one embodiment; FIG. 4 is an example data flow diagram for an advanced computing pipeline, according to at least one embodiment; FIG. 1 is a system diagram of an example system for training, adapting, instantiating, and deploying machine learning models in advanced computing pipelines, according to at least one embodiment; FIG. 3710A includes an illustration of an advanced computing pipeline 3710A for processing imaging data, in accordance with at least one embodiment; FIG. 11 is a diagram containing an example data flow for a virtual instrument supporting an ultrasound device, in accordance with at least one embodiment; FIG. 4 is a diagram containing an example data flow for a virtual instrument supporting a CT scanner, in accordance with at least one embodiment; FIG. 4 is a data flow diagram of a process for training a machine learning model, according to at least one embodiment; FIG. 4 is an illustration of a client-server architecture for extending an annotation tool with pre-trained annotation models, according to at least one embodiment.
FIG. 1 is a block diagram illustrating a system 100 for identifying instructions that can be speculatively executed in parallel, according to at least one embodiment. In at least one embodiment, a deep learning (DL) compiler 102 uses a computer program representation 104 to generate a modified representation 106 of the computer program that indicates at least one speculatively performable operation. In at least one embodiment, DL compiler 102 is a computer program that runs on a processor (e.g., a CPU) and is accessible via an application programming interface (API). In at least one embodiment, computer program representation 104 contains instructions to be launched by a host (e.g., a computer system having a CPU) on a device (e.g., a parallel processing unit (PPU) such as a graphics processing unit (GPU)).
In at least one embodiment, computer program representation 104 includes operations using a neural network, such as a recurrent neural network (RNN). In at least one embodiment, computer program representation 104 is a graph representation. In at least one embodiment, the modified representation 106 of the computer program includes instructions to be launched on a device (e.g., a PPU, GPU, or other suitable acceleration device) by a host (e.g., a computer system having a CPU) and indicates instructions that can be speculatively launched on the device by the host. In at least one embodiment, modified representation 106 of the computer program is a modified graph representation. In at least one embodiment, the modified computer program representation 106 includes a list and/or other data structures that indicate speculatively executable instructions. In at least one embodiment, the modified representation 106 of the computer program is a labeled and/or annotated version of the representation 104 of the computer program. In at least one embodiment, deep learning compiler 102 also generates memory allocation plan 108 based at least in part on modified representation 106 of the computer program. In at least one embodiment, memory allocation plan 108 uses an extended live range of variables and/or values (e.g., tensors) used in instructions (e.g., operations) that carry an indication that the instruction is speculatively executable. In at least one embodiment, the deep learning compiler 102, instead of or in addition to generating the memory allocation plan 108, stores indications of the extended live ranges in the modified representation 106 of the computer program or some other data structure. In at least one embodiment, stream scheduler 110 of deep learning compiler 102 produces modified representation 106 of the computer program. In at least one embodiment, stream scheduler 110 is a computer program running on a processor (e.g., a CPU) and accessible via an API.
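One way to picture the extended live ranges recorded in a memory allocation plan is the sketch below. The data layout (live ranges as `(first_use, last_use)` pairs indexed by operation position) and all names are assumptions made for illustration, not the patented implementation.

```python
# Sketch: extend a tensor's live range so its buffer stays allocated while a
# speculatively launched operation may still read it.

def extend_live_ranges(live_ranges, speculative_ops, op_reads, op_index):
    """live_ranges: {tensor: (first_use, last_use)} keyed by operation index."""
    extended = dict(live_ranges)
    for op in speculative_ops:
        for tensor in op_reads[op]:
            first, last = extended[tensor]
            # Keep the tensor alive at least until the speculative op runs.
            extended[tensor] = (first, max(last, op_index[op]))
    return extended

ranges = {"t0": (0, 2), "t1": (1, 3)}
out = extend_live_ranges(ranges, ["spec_op"], {"spec_op": ["t0"]}, {"spec_op": 5})
```

A memory allocator working from `out` would then avoid reusing `t0`'s buffer before index 5, which is the point of extending live ranges for speculatively executable instructions.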
In at least one embodiment, the stream scheduler 110 identifies speculatively executable operations in the representation 104 of the computer program, and the modified representation 106 of the computer program contains at least one instruction indicating a speculatively executable operation. In at least one embodiment, stream scheduler 110 performs stream scheduling only within basic blocks (e.g., tasks from different basic blocks do not overlap each other). In at least one embodiment, stream scheduler 110 performs stream scheduling within and across basic blocks. In at least one embodiment, some other portion of deep learning compiler 102, instead of or in addition to stream scheduler 110, generates at least a portion of modified representation 106 of the computer program. In at least one embodiment, memory allocator 112 of deep learning compiler 102 generates memory allocation plan 108 and/or other indications of extended live ranges. In at least one embodiment, memory allocator 112 is a computer program running on a processor (e.g., a CPU) and accessible via an API. In at least one embodiment, some other portion of deep learning compiler 102, instead of or in addition to memory allocator 112, generates memory allocation plan 108 and/or other indications of extended live ranges. In at least one embodiment, the DL compiler 102 finds advanced execution opportunities (e.g., in the stream scheduler 110) based at least in part on the representation 104 (e.g., graph) of the computer program, finds operations that are safe to perform during these opportunities, and allocates memory addresses accordingly (e.g., in the memory allocator 112). In at least one embodiment, speculative execution and/or speculative launch of an instruction or operation refers to launching an instruction (e.g., one or more operations in a kernel) from the host to the device before receiving an indication required by the launched instruction (e.g., the value of a branch condition received during a device-to-host copy operation). In at least one embodiment, at least one action of DL compiler 102 can be expressed in terms of the following pseudocode. In at least one embodiment, the DL compiler 102 generates a list of all speculation opportunities by finding device-to-host copy operations in the representation 104 of the computer program. In at least one embodiment, instead of or in addition to finding device-to-host copy operations, DL compiler 102 generates a list of speculation opportunities based on at least one other type of operation. In at least one embodiment, the list of speculation opportunities is assigned to the variable spec_ops, as shown in the pseudocode above. In at least one embodiment, DL compiler 102 traverses the nodes of computer program representation 104 based at least in part on the identified speculation opportunities. In at least one embodiment, DL compiler 102 selects a branch from multiple branches following the conditional branch associated with the speculation opportunity (e.g., a branch that follows from the condition's value being true or from the condition's value being false). In at least one embodiment, DL compiler 102 uses heuristics to select a branch from multiple branches (e.g., selecting the branch that returns to the beginning of a loop, because a loop typically iterates many times). In at least one embodiment, DL compiler 102 traverses nodes in the selected branch (e.g., nodes in subsequent_operations(op)) to identify whether each operation can be safely performed. In at least one embodiment, DL compiler 102 initially assumes that operations can be safely performed, but performing certain types of operations can cause undesirable side effects if performed speculatively. So certain operations (e.g.
changing random state, overwriting outputs, using scanned inputs, using signal instructions, using wait instructions, using different streams, and/or operations that have parent nodes not labeled as safe) are labeled as unsafe. In at least one embodiment, DL compiler 102 extends the live range of variables for operations labeled as speculatively safe. In at least one embodiment, the deep learning compiler 102 is referred to as a compiler even though it generates the modified representation 106 of the computer program (e.g., with stream scheduler 110) and the memory allocation plan 108 (e.g., with memory allocator 112) but does not generate runtime code sufficient to execute the computer program corresponding to the computer program representation 104. In at least one embodiment, computer program representation 104 is generated by a deep learning framework (e.g., TensorFlow or PyTorch). In at least one embodiment, the computer program representation 104 is a graph. In at least one embodiment, stream scheduler 110 and memory allocator 112 operate via a common API. In at least one embodiment, compiler and/or interpreter 114 generates runtime code 116 based at least in part on modified representation 106 of the computer program and memory allocation plan 108. In at least one embodiment, memory allocation plan 108 is included as part of modified representation 106 of the computer program. In at least one embodiment, the compiler/interpreter 114 generates runtime code 116 based at least in part on other inputs 118 (e.g., a portion of the computer program) in addition to the modified representation 106 of the computer program and the memory allocation plan 108. In at least one embodiment, DL compiler 102 generates runtime code 116 (e.g., by integrating compiler/interpreter 114 in DL compiler 102). In at least one embodiment, runtime code 116 is stored (e.g., in memory and/or persistent storage devices) for later use.
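The safety-labeling pass described above (start from a device-to-host copy, walk successor operations, and mark each one safe unless it has a side effect or an unsafe parent) can be sketched in Python. The graph layout, operation "kinds", and side-effect names here are illustrative assumptions, not the pseudocode referenced by the patent.

```python
# Sketch: label operations following a device-to-host copy as speculatively
# safe, skipping ops with side effects or with parents not labeled as safe.

UNSAFE_KINDS = {"rng_update", "output_write", "signal", "wait"}

def label_safe_ops(graph, copy_op):
    """graph: {op: {"kind": str, "parents": [...], "children": [...]}}"""
    safe = set()
    frontier = list(graph[copy_op]["children"])
    while frontier:
        op = frontier.pop()
        node = graph[op]
        parents_ok = all(p == copy_op or p in safe for p in node["parents"])
        if node["kind"] not in UNSAFE_KINDS and parents_ok:
            safe.add(op)
            frontier.extend(node["children"])
    return safe

g = {
    "copy": {"kind": "d2h_copy", "parents": [], "children": ["a", "b"]},
    "a": {"kind": "matmul", "parents": ["copy"], "children": ["c"]},
    "b": {"kind": "rng_update", "parents": ["copy"], "children": []},
    "c": {"kind": "add", "parents": ["a"], "children": []},
}
safe = label_safe_ops(g, "copy")
```

Here the matmul and the add that depends only on it are labeled safe, while the random-state update is excluded, mirroring the "unsafe operation" categories listed above.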
In at least one embodiment, runtime code 116 is used immediately after generation (e.g., compiled just in time for execution). In at least one embodiment, stream scheduler 110, memory allocator 112, and compiler/interpreter 114 are combined into a single compiler that performs the operations described with respect to stream scheduler 110, memory allocator 112, and compiler/interpreter 114 to generate runtime code 116 at compile time. In at least one embodiment, the combined compiler is accessible via an API. In at least one embodiment, computer program representation 104 is structured data (e.g., data in a predetermined format and/or syntax) that represents the entire computer program. In at least one embodiment, the computer program representation 104 is structured data representing a portion of the computer program, rather than the entire computer program, where the representation can define a directed acyclic graph (DAG) to show the use of tensor data in a deep learning neural network. In at least one embodiment, each node of the DAG represents an operation that produces some tensor output, and each edge represents a tensor producer-consumer relationship. In at least one embodiment, a client using system 100 (e.g., an application that uses system 100 to compile and/or run deep learning neural network training and/or inference techniques) invokes instructions to be speculatively executed based at least in part on the runtime code 116, the modified representation 106 of the computer program, and/or the memory allocation plan 108. In at least one embodiment, DL compiler 102 generates modified representation 106 of the computer program based at least in part on adding one or more indicators to representation 104 of the computer program. In at least one embodiment, the indicators are called annotations.
In at least one embodiment, instead of or in addition to adding an indicator to computer program representation 104, DL compiler 102 generates a data structure containing a list of instructions in computer program representation 104 that can safely be speculatively executed. In at least one embodiment, computer program representation 104 includes a graph that performs inference using a neural network (e.g., a recurrent neural network (RNN)). In at least one embodiment, computer program representation 104 includes a graph that performs training using a neural network (e.g., an RNN). In at least one embodiment, computer program representation 104 is for an image processing application. FIG. 2 is a block diagram illustrating a system 200 for speculatively executing instructions by invoking a device 204 from a host 202, according to at least one embodiment. In at least one embodiment, host 202 is a computer system that includes processor 206 (e.g., a CPU) and memory 208. In at least one embodiment, device 204 is an accelerator that includes processor 210 (e.g., one or more parallel processors) and memory 212. In at least one embodiment, device 204 is a PPU or GPU. In at least one embodiment, DL compiler 102 of FIG. 1 runs on host 202. In at least one embodiment, host 202 invokes operations and/or instructions to be performed on device 204 (e.g., by invoking parallel processing framework instructions such as a Compute Unified Device Architecture (CUDA) kernel). In at least one embodiment, host 202 invokes one or more instructions to be speculatively performed by device 204 based at least in part on one or more indicators identifying instructions and/or operations that can be performed speculatively (e.g., annotated instructions in modified representation 106 of the computer program and/or runtime code 116 of FIG. 1).
In at least one embodiment, host 202 and/or device 204 extend a variable's live range based at least in part on the extended live range identified by DL compiler 102 of FIG. 1. In at least one embodiment, an executor, not shown for clarity, runs on host 202 and speculatively invokes instructions based at least in part on a modified representation of a computer program (e.g., modified representation 106 of the computer program of FIG. 1) and/or runtime code 116. In at least one embodiment, the executor runs on a CPU (e.g., processor 206) and launches instructions (e.g., as kernels) on a parallel processing unit (e.g., a GPU). In at least one embodiment, the executor is a virtual machine running on processor 206 (e.g., a CPU). In at least one embodiment, during execution of a compiled graph (e.g., modified representation 106 or runtime code 116 of FIG. 1), an instruction with an annotation indicating that its operation can be performed speculatively is encountered. At least one aspect of the execution process is represented by the following pseudocode. In at least one embodiment, host 202 initiates operations to be speculatively performed on device 204 using a kernel launched by host 202 and performed by device 204. In at least one embodiment, speculative launching and/or speculative execution of an instruction or operation refers to launching a kernel before an indication that the launched kernel is needed (e.g., the value of a branch condition received during a device-to-host copy operation) has been received. In at least one embodiment, launching kernels in this manner provides a performance advantage, with the possible consequence of running kernels that may not be needed. In at least one embodiment, processor 210 of device 204 includes one or more circuits to perform, in parallel, one or more instructions identified by a compiler (e.g., DL compiler 102 of FIG. 1) to be speculatively performed.
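A minimal host-side sketch of this behavior (names are invented for illustration; this is not the pseudocode referred to in the text): after launching an asynchronous device-to-host copy of a branch condition, the executor keeps launching instructions annotated as speculatively safe instead of blocking until the copied value arrives.

```python
def launch_iteration(ops, launch):
    """Launch ops in order; after a device-to-host copy, continue launching
    only ops annotated as speculatively safe, and stop at the first unsafe
    op (which must wait for the branch-condition value)."""
    copy_pending = False
    for op in ops:
        if op["kind"] == "d2h_copy":
            launch(op["name"])           # asynchronous copy, returns at once
            copy_pending = True
        elif copy_pending and not op.get("speculatively_safe", False):
            break                        # a legacy executor would block here
        else:
            launch(op["name"])

launched = []
launch_iteration(
    [{"kind": "kernel", "name": "k1"},
     {"kind": "d2h_copy", "name": "copy_cond"},
     {"kind": "kernel", "name": "k2", "speculatively_safe": True},
     {"kind": "kernel", "name": "k3"}],
    launched.append)
```

Here k2 is launched before the copied branch value is available, while k3, which is not annotated as safe, is held back.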
In at least one embodiment, system 200 includes one or more memories to store instructions identified to be speculatively executed in parallel (e.g., memory 208 before kernel launch and memory 212 after kernel launch, while device 204 is executing the instructions). In at least one embodiment, identification of an instruction to be speculatively executed in parallel (e.g., on a PPU or GPU) means that the instruction includes an associated label, annotation, or other suitable identifier indicating that the host (e.g., host 202) can initiate the instruction on the device (e.g., using a kernel launch operation) before it is certain that the instruction will be needed (e.g., before a value indicating a branch condition is received by the host in a device-to-host copy operation). In at least one embodiment, the instructions were identified to be speculatively performed in parallel by the compiler based at least in part on identifying a copy operation, and one or more circuits of processor 210 are to perform the one or more instructions based at least in part on receiving a command from another processor (e.g., processor 206 of host 202). In at least one embodiment, the instructions were identified to be speculatively performed in parallel by the compiler based at least in part on identifying a copy operation between a parallel processing unit (e.g., device 204) and a host computer system (e.g., host 202) and labeling one or more safe operations following the identified copy operation. In at least one embodiment, the copy operation is identified as a generic device-to-host copy operation (e.g., in computer program representation 104), but, when performed, the copy operation is between a particular device (e.g., device 204) and a particular host (e.g., host 202).
In at least one embodiment, the instructions include extended live ranges of variables used by the operations associated with the instructions identified to be speculatively performed in parallel. In at least one embodiment, instructions identified to be speculatively executed in parallel refers to instructions that can be speculatively executed but are not necessarily speculatively executed. In at least one embodiment, after receiving a value indicating a branch condition, host 202 stops speculatively invoking instructions following this branch condition, even if host 202 has not invoked all possible instructions following the branch condition that were identified by the compiler to be speculatively invoked. In at least one embodiment, processor 210 is part of a PPU, and one or more circuits of processor 210 are to perform the previously identified instruction or instructions after receiving a kernel launch command from a host computer system (e.g., host 202). In at least one embodiment, the instruction identified for speculative execution is part of a while loop. In at least one embodiment, the instructions identified for speculative execution are part of some other type of loop (e.g., a counting loop) or a different type of code section following a branch condition. In at least one embodiment, the instructions perform part of an inference operation using a recurrent neural network. In at least one embodiment, the instructions are identified by the compiler based at least in part on finding one or more conditional branches in a representation of a computer program that uses a neural network (e.g., representation 104 of the computer program of FIG. 1). In at least one embodiment, one or more processors of host 202 (e.g., processor 206) speculatively launch one or more instructions for execution by one or more processors of device 204 (e.g., processor 210), and the one or more processors of host 202 stop speculatively invoking instructions in response to receiving a value (e.g., from device 204 via a copy operation) that satisfies a condition preceding the one or more speculatively launched instructions in the representation of the computer program. FIG. 3 shows a flowchart of a technique 300 for generating instructions, according to at least one embodiment. In at least one embodiment, technique 300 is performed by at least one circuit, at least one system, at least one processor, at least one graphics processing unit, at least one parallel processor, and/or at least some other processor or component thereof described and/or shown herein. In at least one embodiment, at least one aspect of technique 300 is performed by DL compiler 102 of FIG. 1. In at least one embodiment, at block 302, technique 300 includes identifying a speculation opportunity (e.g., in computer program representation 104 of FIG. 1). In at least one embodiment, identifying a speculation opportunity includes finding a device-to-host copy operation. In at least one embodiment, an operation that allows speculative execution of other operations is an asynchronous device-to-host copy. In at least one embodiment, when pageable memory, which behaves synchronously even for asynchronous copies, is used for asynchronous device-to-host copies, another form of memory (e.g., pinned memory) is used while performing speculative operations so that operations can be performed speculatively. In at least one embodiment, identifying the speculation opportunity includes finding operations corresponding to device-to-host copy operations that may be safe to initiate prior to receiving the copied data. In at least one embodiment, identifying a speculation opportunity includes searching a computer program representation (e.g., a graph) for device-to-host copies (e.g., during stream scheduling).
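The scan of block 302 can be sketched as a pass over the scheduled operation list (operation format assumed for illustration):

```python
def find_speculation_points(program):
    """Return indices of device-to-host copy operations, which are treated
    as speculation points during stream scheduling."""
    return [i for i, op in enumerate(program) if op["kind"] == "d2h_copy"]

program = [{"kind": "kernel"}, {"kind": "d2h_copy"},
           {"kind": "kernel"}, {"kind": "d2h_copy"}]
```

Each returned index marks a point after which safe operations may be launched speculatively.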
In at least one embodiment, device-to-host copies found through a graph search are treated as speculation points. In at least one embodiment, at block 304, technique 300 includes labeling safe operations. In at least one embodiment, labeling safe operations includes identifying safe operations. In at least one embodiment, an operation is considered safe if the operation does not produce side effects (e.g., altering random state), does not break any data dependencies, and does not overwrite the memory of any tensor that is live during the copy operation. In at least one embodiment, successive operations are iterated over in order of execution for each speculation point (e.g., identified by the device-to-host copy operation found at block 302). In at least one embodiment, the iteration is broken upon reaching the end of a basic block, a wait for the default stream, or a speculation point, which can be the same speculation point. In at least one embodiment, operations are treated as unsafe if they produce side effects (e.g., can change something in addition to inputs and outputs, such as a random or some custom operation), use scanned inputs/outputs, use a non-default stream, are a signal or a wait instruction, or depend on another unsafe operation. In at least one embodiment, scanned inputs/outputs are treated as not dangerous, and safe, if further checks for memory boundaries are satisfied. In at least one embodiment, all other operations are treated as safe.
In at least one embodiment, operations are labeled as safe or unsafe (e.g., with annotations or labels, or in a separate data structure that associates operations with corresponding safety indications). In at least one embodiment, identifying speculation opportunities at block 302 and labeling safe operations at block 304 are further illustrated with respect to the following pseudocode. In at least one embodiment, with respect to the above pseudocode, the basic block is first searched (e.g., by deep learning compiler 102 of FIG. 1) for a device-to-host (d2h) copy, which in this case is operation (op) 5. In at least one embodiment, the technique then iterates through operations starting at op5 (e.g., where the d2h copy is found), with the break conditions for the iteration being the end of the basic block, a wait for stream 0, and other speculation points, and with special treatment for cjmp, which is to loop back to the start of the basic block. In at least one embodiment, during the iteration, at op6, the iterator moves to op1. In at least one embodiment, op1 is marked as safe. In at least one embodiment, op2 is marked as dangerous because it is a random op with side effects. In at least one embodiment, op3 is marked as safe. In at least one embodiment, op4 is marked as dangerous because it depends on d from op2. In at least one embodiment, the iteration stops at op5, with op1 and op3 having been marked as safe. In at least one embodiment, at block 306, technique 300 includes extending live intervals of variables. In at least one embodiment, extending a variable's live interval includes scanning the graph (e.g., modified representation 106 of the computer program of FIG. 1) for speculation points and then, for each operation marked as speculatively safe, extending the live range of each tensor used by the operation to ensure that the tensor is live during the speculation point.
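The labeling walk over the op1..op4 loop body can be sketched as follows (the op structure is assumed for illustration and stands in for, rather than reproduces, the pseudocode referred to above): an op is marked unsafe if it has side effects or consumes the output of an op already marked unsafe.

```python
def label_loop_body(ops):
    """Label each op in a loop body as 'safe' or 'unsafe' for speculation."""
    labels, unsafe_outputs = {}, set()
    for op in ops:
        unsafe = op.get("side_effects", False) or \
                 any(t in unsafe_outputs for t in op.get("inputs", []))
        labels[op["name"]] = "unsafe" if unsafe else "safe"
        if unsafe and "output" in op:
            unsafe_outputs.add(op["output"])
    return labels

# op2 is a random op (side effects); op4 consumes d, produced by op2.
labels = label_loop_body([
    {"name": "op1", "output": "a"},
    {"name": "op2", "output": "d", "side_effects": True},
    {"name": "op3", "inputs": ["a"], "output": "c"},
    {"name": "op4", "inputs": ["d"], "output": "e"},
])
```

This reproduces the outcome described in the text: op1 and op3 are safe, while op2 and op4 are unsafe.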
In at least one embodiment, extending the live range ensures that anti-dependencies and/or output dependencies are not violated after resource allocation. In at least one embodiment, the live intervals of variables extended at block 306 are used in memory allocation, under the assumption that operations can be executed out of order. In at least one embodiment, at block 308, technique 300 includes generating instructions (e.g., modified representation 106 of the computer program of FIG. 1, memory allocation plan 108, and/or runtime code 116). In at least one embodiment, at block 310, technique 300 includes performing another action. In at least one embodiment, performing other actions at block 310 includes moving copy operations in the generated instructions and/or in the modified representation of the computer program, such as moving a device-to-host copy operation so that it is performed immediately after the device variable is defined, or immediately before the host variable is used, if a copy operation is not already located at such a position. In at least one embodiment, technique 300 is performed, at least in part, by one or more processors (e.g., processors of host 202 of FIG. 2, or any other suitable processor such as those shown or described herein) executing a set of instructions (e.g., from a non-transitory machine-readable medium). In at least one embodiment, technique 300 includes identifying one or more instructions to be speculatively performed in parallel (e.g., using DL compiler 102 of FIG. 1). In at least one embodiment, technique 300 includes identifying the instructions to be speculatively performed based, at least in part, on identifying a copy operation (e.g., at block 302) between a parallel processing unit and a host computer system in a representation of a computer program. In at least one embodiment, technique 300 identifies operations following the copy operation that are safe to perform (e.g., at block 304).
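Block 306 can be sketched like this (the interval representation is assumed for illustration): each tensor used by a speculatively safe op has its live interval stretched to cover the speculation point, so later memory allocation cannot reuse its storage too early.

```python
def extend_live_ranges(live, safe_ops, spec_point):
    """live maps tensor name -> (first_use, last_use) in schedule order.
    Extend each tensor used by a speculatively safe op so that it stays
    live through the speculation point."""
    for op in safe_ops:
        for tensor in op["tensors"]:
            start, end = live[tensor]
            live[tensor] = (start, max(end, spec_point))
    return live

live = extend_live_ranges({"a": (0, 2), "b": (1, 7)},
                          [{"tensors": ["a", "b"]}], spec_point=5)
```

Tensor "a" is extended to the speculation point, while "b", already live past it, is unchanged.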
In at least one embodiment, technique 300 includes labeling operations that are speculatively safe (e.g., at block 304). In at least one embodiment, technique 300 includes extending the live range of variables associated with operations labeled as safe to be speculatively performed (e.g., at block 306). In at least one embodiment, technique 300 includes searching a representation of a computer program for a copy operation between a GPU and a host computer system, and identifying operations following the copy operation that are safe even if speculatively performed. In at least one embodiment, technique 300 includes extending the live range of variables associated with operations identified to be performed speculatively in parallel. In at least one embodiment, technique 300 includes finding a conditional branch in a representation of a computer program based at least in part on identifying a copy operation in the representation of the computer program, selecting a path from multiple paths following the conditional branch, and identifying instructions in the selected path that are safe to be speculatively executed. FIG. 4 depicts a flowchart of a technique 400 for identifying possible speculative instructions, according to at least one embodiment. In at least one embodiment, technique 400 is performed by at least one circuit, at least one system, at least one processor, at least one graphics processing unit, at least one parallel processor, and/or at least some other processor or component thereof described and/or shown herein. In at least one embodiment, one or more aspects of technique 400 are performed by DL compiler 102 of FIG. 1. In at least one embodiment, at block 402, technique 400 includes identifying a representation of a set of instructions (e.g., computer program representation 104 of FIG. 1).
In at least one embodiment, identifying the representation of the set of instructions includes receiving an API function call that contains the representation of the set of instructions (e.g., a graph), a pointer to the representation of the set of instructions, a link to the representation of the set of instructions, or any other suitable manner of identifying the representation of the set of instructions. In at least one embodiment, at block 404, technique 400 includes finding a device-to-host copy operation (e.g., in computer program representation 104 of FIG. 1). In at least one embodiment, finding a device-to-host copy operation includes traversing the representation of the set of instructions identified at block 402 to search for and find a device-to-host copy operation. In at least one embodiment, technique 400 includes finding conditional branches at block 404 as an alternative or in addition to finding device-to-host copy operations. In at least one embodiment, finding a device-to-host copy operation includes searching for an asynchronous device-to-host copy operation. In at least one embodiment, at block 406, technique 400 includes selecting a branch path. In at least one embodiment, selecting a branch path at block 406 includes selecting one or more branch paths from a plurality of branch paths. In at least one embodiment, selecting a branch path at block 406 includes using a heuristic to select the branch path (e.g., choosing the path that performs a further iteration of a loop instead of the instructions following the end of the loop, since a loop is typically performed many times). In at least one embodiment, at block 408, technique 400 includes performing other actions. In at least one embodiment, performing other actions at block 408 includes returning to block 404 to identify further copy operations. FIG. 5 depicts a flowchart of a technique 500 for speculatively invoking instructions, according to at least one embodiment. In at least one embodiment, technique 500 is performed by at least one circuit, at least one system, at least one processor, at least one graphics processing unit, at least one parallel processor, and/or at least some other processor or component thereof described and/or shown herein. In at least one embodiment, at block 502, technique 500 includes identifying a speculation opportunity. In at least one embodiment, identifying a speculation opportunity at block 502 includes identifying a point in a deep learning network at which the host must wait for the device (e.g., an operation that copies a branch condition) before being able to select a branch to take in the control flow. In at least one embodiment, identifying a speculation opportunity at block 502 includes identifying a device-to-host copy operation. In at least one embodiment, identifying a speculation opportunity at block 502 includes identifying an operation previously determined to be a speculation opportunity (e.g., by an annotation, label, or any other suitable identifier of the previously identified operation). In at least one embodiment, at block 504, technique 500 includes speculatively invoking an instruction. In at least one embodiment, speculatively invoking the instruction at block 504 is performed by host 202 of FIG. 2. In at least one embodiment, speculatively launching an instruction at block 504 is performed by an executor (e.g., a virtual machine running on a host). In at least one embodiment, speculatively invoking the instruction at block 504 includes launching a kernel following a branch condition before it is certain that the kernel will be needed. In at least one embodiment, at block 506, technique 500 includes performing another action. In at least one embodiment, performing other actions at block 506 includes returning to block 502 to identify a next speculation opportunity.
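The path-selection heuristic mentioned for block 406 can be sketched as follows (path format assumed for illustration): prefer the path that performs another loop iteration, since loops typically run many times.

```python
def select_branch_path(paths):
    """Pick the speculation target among paths following a conditional
    branch: the loop back-edge path if one exists, else the first path."""
    for path in paths:
        if path.get("is_loop_back_edge", False):
            return path
    return paths[0]

chosen = select_branch_path([
    {"name": "after_loop"},
    {"name": "next_iteration", "is_loop_back_edge": True},
])
```

Speculating down the back-edge path means the kernels for the next iteration are launched early, which is usually the winning bet for long-running loops.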
In at least one embodiment, performing other actions at block 506 includes overriding a device-to-host copy command to include copying to pinned memory, creating an event, recording the event, invoking safe operations (e.g., at block 504) while querying the event, and destroying the event. In at least one embodiment, the override inside the copy implementation is represented in the pseudocode below. In at least one embodiment, as shown in the pseudocode above, overriding includes the ability for one instruction to invoke another instruction and ways to skip instructions during the main execution context. In at least one embodiment, performing other actions at block 506 includes performing (e.g., by device 204 of FIG. 2) one or more instructions identified by a compiler (e.g., DL compiler 102 of FIG. 1) to be speculatively performed in parallel. In at least one embodiment, the instructions are identified by the compiler to be speculatively performed in parallel based at least in part on identifying operations that do not change random state, do not overwrite outputs, do not use signal instructions, and do not use wait instructions. In at least one embodiment, the instructions are identified by the compiler to be speculatively performed in parallel based at least in part on identifying a conditional branch and selecting a path from multiple paths following the conditional branch. In at least one embodiment, the instructions were identified to be speculatively performed in parallel by the compiler based at least in part on identifying a copy operation. In at least one embodiment, the instructions include extended live ranges for variables used in speculatively performed operations. In at least one embodiment, the instructions perform part of an inference operation using a neural network. FIG. 6 illustrates a comparison 600 of inference operations performed over time, according to at least one embodiment.
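The copy override described above can be sketched as follows (the event here is a simulated stand-in, not a real CUDA event API): the override starts the copy into pinned memory, records an event, and launches safe operations while polling the event until the copy completes.

```python
class SimulatedEvent:
    """Stand-in for a device event; query() reports completion after a
    fixed number of polls."""
    def __init__(self, polls_until_done):
        self.polls = polls_until_done
    def record(self):
        pass
    def query(self):
        self.polls -= 1
        return self.polls < 0
    def destroy(self):
        pass

def overridden_d2h_copy(start_async_copy, event, safe_ops, launch):
    """Overridden device-to-host copy: launch safe ops while the copy is
    still in flight, stopping as soon as the event reports completion."""
    start_async_copy()           # begin async copy into pinned memory
    event.record()
    for op in safe_ops:
        if event.query():        # copy finished: stop speculating
            break
        launch(op)
    event.destroy()

launched = []
overridden_d2h_copy(lambda: None, SimulatedEvent(2),
                    ["k1", "k2", "k3"], launched.append)
```

With a copy that completes after two polls, only the first two safe operations are launched speculatively; the third is skipped once the event signals completion.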
In at least one embodiment, a first simplified representation 602 depicts an inference operation based at least in part on a neural network performed over time without the use of speculative execution. In at least one embodiment, a second simplified representation 604 depicts operations performed over time using speculative execution (e.g., as described with respect to one or more of FIGS. 1-5) with the same neural network as used for first simplified representation 602. In at least one embodiment, first simplified representation 602 includes an upper portion 606 indicating operations performed on a host computer system and a lower portion 608 indicating operations performed on a device (e.g., a PPU or GPU). In at least one embodiment, second simplified representation 604 includes an upper portion 610 indicating operations performed on the host computer system and a lower portion 612 indicating operations performed on the device (e.g., a PPU or GPU). In at least one embodiment, for inference that does not perform speculative operations, the end of a first iteration (e.g., of a while loop for an RNN) is marked with line 614 and the end of a second iteration is marked with line 616. In at least one embodiment, for inference that includes performing speculative operations (e.g., as described with respect to one or more of FIGS. 1-5), the end of a first iteration is marked with line 618 and the end of a second iteration is marked with line 620. In at least one embodiment, a gap 622 exists in the second iteration of operations performed on the device shown at lower portion 608 before a first operation of the second iteration is performed on the device. In at least one embodiment, gap 622 arises from a requirement that the host system wait for a condition (e.g., the value of a variable indicating whether a loop has terminated) to be returned from the device before the host system launches further operations on the device. In at least one embodiment, when performing speculative operations is not supported, the host system waits for a synchronization operation 624 (e.g., a stream synchronization operation between the host system and the device) to finish before launching further instructions. In at least one embodiment, for inference that includes performing speculative operations, a gap similar to gap 622 does not exist before a first operation 626 is performed on the device. In at least one embodiment, when performing speculative operations is supported, the host system does not perform the synchronization operation and/or launches kernels for the next iteration (e.g., kernels containing instructions marked as safe for speculative execution) before the host system receives an indication that the kernels for the next iteration will be needed. In at least one embodiment, launching kernels for the next iteration in this manner provides the device with work that it can perform more quickly (e.g., without a gap similar to gap 622 before first operation 626). In at least one embodiment, this provides performance advantages, such as greater device utilization and/or less time to complete iterations, over systems that do not support performing speculative operations. In at least one embodiment, comparison 600 shows a simplified representation of the behavior of a decoder (e.g., a natural language model such as a Tacotron2 decoder, or any other suitable decoder) over time (e.g., one iteration between lines 614 and 616 for a system that does not support speculative operations, and one iteration between lines 618 and 620 for a system that supports speculative operations).
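The effect of removing gap 622 can be illustrated with a toy timing model (all numbers here are invented for illustration, not measurements from the source): without speculation every iteration pays the synchronization gap before the next iteration's first kernel; with speculation the gap is hidden.

```python
def total_time_ms(iterations, kernel_ms, gap_ms, speculative):
    """Toy model: per-iteration device work plus, if not speculating,
    a host-device synchronization gap before the next iteration."""
    per_iteration = kernel_ms if speculative else kernel_ms + gap_ms
    return iterations * per_iteration

legacy = total_time_ms(10, kernel_ms=2.0, gap_ms=0.5, speculative=False)
spec = total_time_ms(10, kernel_ms=2.0, gap_ms=0.5, speculative=True)
```

Under these assumed figures the speculative schedule finishes the ten iterations sooner, which is the qualitative behavior comparison 600 depicts.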
In at least one embodiment, comparison 600 depicts a simplified representation of a system trace (e.g., as generated by a system performance analysis tool such as NVIDIA Nsight Systems, or any other suitable performance analysis tool). In at least one embodiment, comparison 600 shows that performing speculative operations (e.g., as described with respect to one or more of FIGS. 1-5) can result in greater parallelism and/or utilization than a legacy branch-point implementation approach that requires synchronization at every branch point. In at least one embodiment, performing speculative operations provides a performance advantage over legacy approaches in which the host system can only invoke functions up to a branch and must wait for the device to finish before choosing a path to take. In at least one embodiment, performing speculative operations (e.g., as described with respect to one or more of FIGS. 1-5) provides performance advantages for inference and/or training with RNNs, where the outputs of nodes are fed back as inputs and the RNN behaves like a loop over nodes throughout a time sequence. In at least one embodiment, performing speculative operations improves utilization of a device (e.g., device 204 of FIG. 2) and yields approximately a 15% performance improvement over legacy techniques that do not perform operations speculatively (e.g., that do not launch kernels from the host to the device before it is certain that the kernels will be needed). In at least one embodiment, performing speculative operations (e.g., as described with respect to one or more of FIGS. 1-5) applies to a host system speculatively launching kernels on a GPU that does not itself support performing out-of-order execution or branch prediction. In at least one embodiment, performing speculative operations (e.g., as described with respect to one or more of FIGS. 1-5) can be discussed with respect to the following pseudocode. In at least one embodiment, the instructions in the while loop body of the above pseudocode are launched by the host and computed on the device. In at least one embodiment, without speculative execution, after all instructions have been launched, the host must wait for the final memory copy to complete before the host can launch more instructions. In at least one embodiment, with speculative execution (e.g., as described with respect to one or more of FIGS. 1-5), the host can continue to launch instructions for the next iteration before the loop condition is evaluated.

Inference and Training Logic

FIG. 7A illustrates inference and/or training logic 715 used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided below in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may include, without limitation, code and/or data storage 701 to store forward and/or output weights and/or input/output data and/or other parameters to configure neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, training logic 715 may include, or be coupled to, code and/or data storage 701 to store graph code or other software to control timing and/or order, in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). In at least one embodiment, code, such as graph code, loads weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds.
In at least one embodiment, code and/or data storage 701 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of code and/or data storage 701 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of code and/or data storage 701 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage 701 may be cache memory, dynamic randomly addressable memory ("DRAM"), static randomly addressable memory ("SRAM"), non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or data storage 701 is internal or external to a processor, for example, or comprises DRAM, SRAM, flash memory, or some other storage type, may depend on available storage on-chip versus off-chip, latency requirements of training and/or inference functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. In at least one embodiment, inference and/or training logic 715 may include, without limitation, code and/or data storage 705 to store backward and/or output weights and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, code and/or data storage 705 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during backward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, training logic 715 may include, or be coupled to, code and/or data storage 705 to store graph code or other software to control timing and/or order, in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). In at least one embodiment, code, such as graph code, causes loading of weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, any portion of code and/or data storage 705 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of code and/or data storage 705 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage 705 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or data storage 705 is internal or external to a processor, for example, or comprises DRAM, SRAM, flash memory, or some other storage type, may depend on available storage on-chip versus off-chip, latency requirements of training and/or inference functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. In at least one embodiment, code and/or data storage 701 and code and/or data storage 705 may be separate storage structures.
In at least one embodiment, code and/or data storage 701 and code and/or data storage 705 may be a combined storage structure. In at least one embodiment, code and/or data storage 701 and code and/or data storage 705 may be partially combined and partially separate. In at least one embodiment, any portion of code and/or data storage 701 and code and/or data storage 705 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.

In at least one embodiment, inference and/or training logic 715 may include, without limitation, one or more arithmetic logic units ("ALUs") 710, including integer and/or floating point units, to perform logical and/or mathematical operations based, at least in part, on, or indicated by, training and/or inference code (e.g., graph code), a result of which may produce activations (e.g., output values from layers or neurons within a neural network) stored in activation storage 720 that are functions of input/output and/or weight parameter data stored in code and/or data storage 701 and/or code and/or data storage 705. In at least one embodiment, activations stored in activation storage 720 are generated according to linear algebraic and/or matrix-based mathematics performed by ALUs 710 in response to executing instructions or other code, wherein weight values stored in code and/or data storage 705 and/or data storage 701 are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in code and/or data storage 705 or code and/or data storage 701 or another storage on-chip or off-chip.

In at least one embodiment, ALUs 710 are included within one or more processors or other hardware logic devices or circuits, whereas in other embodiments, ALUs 710 may be external to a processor or other hardware logic device or circuit that uses them (e.g., a co-processor). In at least one embodiment, ALUs 710 may be included within a processor's execution units or otherwise within a bank of ALUs accessible by a processor's execution units, either within the same processor or distributed between different processors of different types (e.g., central processing units, graphics processing units, fixed function units, etc.). In at least one embodiment, code and/or data storage 701, code and/or data storage 705, and activation storage 720 may share a processor or other hardware logic device or circuit, whereas in other embodiments, they may be in different processors or other hardware logic devices or circuits, or in some combination of the same and different processors or other hardware logic devices or circuits. In at least one embodiment, any portion of activation storage 720 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. Furthermore, inferencing and/or training code may be stored with other code accessible to a processor or other hardware logic or circuit and fetched and/or processed using a processor's fetch, decode, scheduling, execution, retirement, and/or other logical circuits.

In at least one embodiment, activation storage 720 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, activation storage 720 may be wholly or partially internal or external to one or more processors or other logic circuits.
In at least one embodiment, the choice of whether activation storage 720 is internal or external to a processor, for example, or comprises DRAM, SRAM, flash memory, or some other storage type, may depend on available storage on-chip versus off-chip, latency requirements of training and/or inference functions being performed, batch size of data used in inference and/or training of a neural network, or some combination of these factors.

In at least one embodiment, inference and/or training logic 715 illustrated in FIG. 7A may be used in conjunction with an application-specific integrated circuit ("ASIC"), such as a TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., "Lake Crest") processor from Intel Corp. In at least one embodiment, inference and/or training logic 715 illustrated in FIG. 7A may be used in conjunction with central processing unit ("CPU") hardware, graphics processing unit ("GPU") hardware, or other hardware, such as field programmable gate arrays ("FPGAs").

FIG. 7B illustrates inference and/or training logic 715, according to at least one embodiment. In at least one embodiment, inference and/or training logic 715 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network. In at least one embodiment, inference and/or training logic 715 illustrated in FIG. 7B may be used in conjunction with an application-specific integrated circuit (ASIC), such as a TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., "Lake Crest") processor from Intel Corp. In at least one embodiment, inference and/or training logic 715 illustrated in FIG. 7B may be used in conjunction with central processing unit (CPU) hardware, graphics processing unit ("GPU") hardware, or other hardware, such as field programmable gate arrays (FPGAs).
In at least one embodiment, inference and/or training logic 715 includes, without limitation, code and/or data storage 701 and code and/or data storage 705, which may be used to store code (e.g., graph code), weight values, and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information. In at least one embodiment illustrated in FIG. 7B, each of code and/or data storage 701 and code and/or data storage 705 is associated with a dedicated computational resource, such as computational hardware 702 and computational hardware 706, respectively. In at least one embodiment, each of computational hardware 702 and computational hardware 706 performs mathematical functions, such as linear algebraic functions, only on information stored in code and/or data storage 701 and code and/or data storage 705, respectively, a result of which is stored in activation storage 720.

In at least one embodiment, each of code and/or data storage 701 and 705 and corresponding computational hardware 702 and 706, respectively, corresponds to a different layer of a neural network, such that a resulting activation from one storage/computation pair 701/702, comprising code and/or data storage 701 and computational hardware 702, is provided as an input to a next storage/computation pair 705/706, comprising code and/or data storage 705 and computational hardware 706, in order to mirror a conceptual organization of a neural network. In at least one embodiment, each of storage/computation pairs 701/702 and 705/706 may correspond to more than one neural network layer. In at least one embodiment, additional storage/computation pairs (not shown) subsequent to or in parallel with storage/computation pairs 701/702 and 705/706 may be included in inference and/or training logic 715.

Training and Deployment of Neural Networks

FIG. 8 illustrates training and deployment of a deep neural network, according to at least one embodiment.
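The per-layer pairing of dedicated storage with dedicated computation described above for pairs 701/702 and 705/706 can be sketched as follows. This is a minimal conceptual illustration only; the class and variable names (`LayerPair`, `pairs`) are hypothetical and do not come from this disclosure.

```python
import numpy as np


class LayerPair:
    """One storage/computation pair: dedicated weight storage plus compute that
    operates only on that storage (cf. pairs 701/702 and 705/706)."""

    def __init__(self, weight):
        self.storage = weight  # dedicated code/data storage for this layer

    def compute(self, activation):
        # Linear-algebra computation using only this pair's stored weights.
        return np.maximum(activation @ self.storage, 0.0)


rng = np.random.default_rng(1)
pairs = [
    LayerPair(rng.standard_normal((3, 4))),  # cf. storage/computation pair 701/702
    LayerPair(rng.standard_normal((4, 2))),  # cf. storage/computation pair 705/706
]

activation = rng.standard_normal((1, 3))
for pair in pairs:                          # the activation produced by one pair
    activation = pair.compute(activation)   # feeds the next, mirroring the network
```

Each pair's compute touches only its own storage, and the chain of pairs mirrors the conceptual layer-by-layer organization of the network.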
In at least one embodiment, untrained neural network 806 is trained using a training data set 802. In at least one embodiment, training framework 804 is a PyTorch framework, whereas in other embodiments, training framework 804 is TensorFlow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or another training framework. In at least one embodiment, training framework 804 trains untrained neural network 806 and enables it to be trained using processing resources described herein to generate a trained neural network 808. In at least one embodiment, weights may be chosen randomly or by pre-training using a deep belief network. In at least one embodiment, training may be performed in either a supervised, partially supervised, or unsupervised manner.

In at least one embodiment, untrained neural network 806 is trained using supervised learning, wherein training data set 802 includes an input paired with a desired output for that input, or wherein training data set 802 includes inputs having known outputs and outputs of neural network 806 are manually graded. In at least one embodiment, untrained neural network 806 is trained in a supervised manner and processes inputs from training data set 802, comparing resulting outputs against a set of expected or desired outputs. In at least one embodiment, errors are then propagated back through untrained neural network 806. In at least one embodiment, training framework 804 adjusts weights that control untrained neural network 806. In at least one embodiment, training framework 804 includes tools to monitor how well untrained neural network 806 is converging towards a model, such as trained neural network 808, suitable for generating correct answers, such as result 814, based on input data, such as a new data set 812.
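The supervised loop described above, process inputs, compare outputs against desired outputs, propagate the error back, and adjust the weights, can be sketched as a toy gradient-descent example. This is a generic illustration in plain NumPy, not the training framework of this disclosure, and the toy target relation y = 2x is an assumption made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled training set: inputs paired with desired outputs (here y = 2x).
x = rng.standard_normal((64, 1))
y = 2.0 * x

w = np.zeros((1, 1))              # untrained weight
for _ in range(200):              # iterate until the model converges
    pred = x @ w                  # process inputs from the training set
    err = pred - y                # compare output against the desired output
    grad = x.T @ err / len(x)     # error propagated back as a weight gradient
    w -= 0.1 * grad               # the framework adjusts the weights
```

After enough iterations the weight converges toward the relation embedded in the labeled data, which is the sense in which the framework "monitors convergence" toward a usable model.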
In at least one embodiment, training framework 804 iteratively trains untrained neural network 806 while adjusting weights to refine an output of untrained neural network 806 using a loss function and an adjustment algorithm, such as stochastic gradient descent. In at least one embodiment, training framework 804 trains untrained neural network 806 until untrained neural network 806 achieves a desired accuracy. In at least one embodiment, trained neural network 808 can then be deployed to implement any number of machine learning operations.

In at least one embodiment, untrained neural network 806 is trained using unsupervised learning, wherein untrained neural network 806 attempts to train itself using unlabeled data. In at least one embodiment, unsupervised learning training data set 802 includes input data without any associated output data or "ground truth" data. In at least one embodiment, untrained neural network 806 can learn groupings within training data set 802 and can determine how individual inputs relate to untrained data set 802. In at least one embodiment, unsupervised training can be used to generate a self-organizing map in trained neural network 808 capable of performing operations useful in reducing dimensionality of new data set 812. In at least one embodiment, unsupervised training can also be used to perform anomaly detection, which allows identification of data points in new data set 812 that deviate from normal patterns of new data set 812.

In at least one embodiment, semi-supervised learning may be used, which is a technique in which training data set 802 includes a mix of labeled and unlabeled data. In at least one embodiment, training framework 804 may be used to perform incremental learning, such as through transfer learning techniques.
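The unsupervised anomaly detection described above, flagging points that deviate from the normal pattern of the data, can be illustrated with a simple statistical stand-in. This sketch uses a z-score threshold rather than a neural network, and all names and the threshold value are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled training data: no associated output data or "ground truth".
train = rng.normal(loc=10.0, scale=1.0, size=500)

# "Training" here simply learns the normal pattern of the data.
mean, std = train.mean(), train.std()


def is_anomaly(point, threshold=4.0):
    """Flag points that deviate strongly from the learned normal pattern."""
    return abs(point - mean) / std > threshold


flags = [is_anomaly(p) for p in [10.3, 9.8, 25.0]]
```

The same idea, characterize the unlabeled data's structure, then score new points by how far they fall outside it, underlies the network-based anomaly detection the passage describes.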
In at least one embodiment, incremental learning enables trained neural network 808 to adapt to new data set 812 without forgetting knowledge instilled within trained neural network 808 during initial training.

Data Center

FIG. 9 illustrates an example data center 900, in which at least one embodiment may be used. In at least one embodiment, data center 900 includes a data center infrastructure layer 910, a framework layer 920, a software layer 930, and an application layer 940.

In at least one embodiment, as shown in FIG. 9, data center infrastructure layer 910 may include a resource orchestrator 912, grouped computing resources 914, and node computing resources ("node C.R.s") 916(1)-916(N), where "N" represents a positive integer (which may be a different integer "N" than used in other figures). In at least one embodiment, node C.R.s 916(1)-916(N) may include, but are not limited to, any number of central processing units ("CPUs") or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors, etc.), memory storage devices 918(1)-918(N) (e.g., dynamic read-only memory, solid state storage drives or disk drives), network input/output ("NW I/O") devices, network switches, virtual machines ("VMs"), power modules, cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s 916(1)-916(N) may be a server having one or more of the above-mentioned computing resources.

In at least one embodiment, grouped computing resources 914 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). In at least one embodiment, separate groupings of node C.R.s within grouped computing resources 914 may include grouped compute, network, memory, or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s may be grouped within one or more racks to provide compute resources to support one or more workloads.
In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination.

In at least one embodiment, resource orchestrator 912 may configure or otherwise control one or more node C.R.s 916(1)-916(N) and/or grouped computing resources 914. In at least one embodiment, resource orchestrator 912 may include a software design infrastructure ("SDI") management entity for data center 900. In at least one embodiment, resource orchestrator 912 may include hardware, software, or some combination thereof.

In at least one embodiment, as shown in FIG. 9, framework layer 920 includes a job scheduler 922, a configuration manager 924, a resource manager 926, and a distributed file system 928. In at least one embodiment, framework layer 920 may include a framework to support software 932 of software layer 930 and/or one or more applications 942 of application layer 940. In at least one embodiment, software 932 or applications 942 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud, and Microsoft Azure. In at least one embodiment, framework layer 920 may be, but is not limited to, a type of free and open-source software web application framework, such as Apache Spark™ (hereinafter "Spark"), that may utilize distributed file system 928 for large-scale data processing (e.g., "big data"). In at least one embodiment, job scheduler 922 may include a Spark driver to facilitate scheduling of workloads supported by various layers of data center 900. In at least one embodiment, configuration manager 924 may be capable of configuring different layers, such as software layer 930 and framework layer 920 including Spark and distributed file system 928, to support large-scale data processing.
In at least one embodiment, resource manager 926 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 928 and job scheduler 922. In at least one embodiment, clustered or grouped computing resources may include grouped computing resources 914 at data center infrastructure layer 910. In at least one embodiment, resource manager 926 may coordinate with resource orchestrator 912 to manage these mapped or allocated computing resources.

In at least one embodiment, software 932 included in software layer 930 may include software used by at least portions of node C.R.s 916(1)-916(N), grouped computing resources 914, and/or distributed file system 928 of framework layer 920. In at least one embodiment, one or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software.

In at least one embodiment, applications 942 included in application layer 940 may include one or more types of applications used by at least portions of node C.R.s 916(1)-916(N), grouped computing resources 914, and/or distributed file system 928 of framework layer 920. In at least one embodiment, one or more types of applications may include, but are not limited to, any number of genomics applications, cognitive compute applications, and machine learning applications, including training or inferencing software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.), or other machine learning applications used in conjunction with one or more embodiments.

In at least one embodiment, any of configuration manager 924, resource manager 926, and resource orchestrator 912 may implement any number and type of self-correcting actions based on any amount and type of data acquired in any technically feasible fashion.
In at least one embodiment, self-correcting actions may relieve a data center operator of data center 900 from making possibly bad configuration decisions and may help avoid underutilized and/or poorly performing portions of a data center.

In at least one embodiment, data center 900 may include tools, services, software, or other resources to train one or more machine learning models, or to predict or infer information using one or more machine learning models, according to one or more embodiments described herein. For example, in at least one embodiment, a machine learning model may be trained by calculating weight parameters according to a neural network architecture using software and computing resources described above with respect to data center 900. In at least one embodiment, trained machine learning models corresponding to one or more neural networks may be used to infer or predict information using resources described above with respect to data center 900, by using weight parameters calculated through one or more training techniques described herein.

In at least one embodiment, data center 900 may use CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, or other hardware to perform training and/or inferencing using the above-described resources. Moreover, one or more of the software and/or hardware resources described above may be configured as a service to allow users to train or perform inferencing of information, such as image recognition, speech recognition, or other artificial intelligence services.

Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 9 for inferencing or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 9 is used to implement techniques and/or functions described in connection with FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a representation of a computer program comprising speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inferencing operation using a representation of a computer program comprising speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6.

Autonomous Vehicle

FIG. 10A illustrates an example of an autonomous vehicle 1000, according to at least one embodiment. In at least one embodiment, autonomous vehicle 1000 (alternatively referred to herein as "vehicle 1000") may be, without limitation, a passenger vehicle, such as a car, a truck, a bus, and/or another type of vehicle that accommodates one or more passengers. In at least one embodiment, vehicle 1000 may be a semi-tractor-trailer truck used for hauling cargo. In at least one embodiment, vehicle 1000 may be an airplane, robotic vehicle, or other kind of vehicle.

Autonomous vehicles may be described in terms of automation levels defined by the National Highway Traffic Safety Administration ("NHTSA"), a division of the US Department of Transportation, and the Society of Automotive Engineers ("SAE") "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles" (e.g., Standard No. J3016-201806, published on June 15, 2018, Standard No. J3016-201609, published on June 30, 2016, and previous and future versions of this standard). In at least one embodiment, vehicle 1000 may be capable of functionality in accordance with one or more of Level 1 through Level 5 of the autonomous driving levels. For example, in at least one embodiment, vehicle 1000 may be capable of conditional automation (Level 3), high automation (Level 4), and/or full automation (Level 5), depending on embodiment.

In at least one embodiment, vehicle 1000 may include, without limitation, components such as a chassis, a vehicle body, wheels (e.g., 2, 4, 6, 8, 18, etc.), tires, axles, and other components of a vehicle. In at least one embodiment, vehicle 1000 may include, without limitation, a propulsion system 1050, such as an internal combustion engine, a hybrid power plant, an all-electric engine, and/or another propulsion system type. In at least one embodiment, propulsion system 1050 may be connected to a drive train of vehicle 1000, which may include, without limitation, a transmission, to enable propulsion of vehicle 1000. In at least one embodiment, propulsion system 1050 may be controlled in response to receiving signals from a throttle/accelerator 1052.

In at least one embodiment, a steering system 1054, which may include, without limitation, a steering wheel, is used to steer vehicle 1000 (e.g., along a desired path or route) when propulsion system 1050 is operating (e.g., when vehicle 1000 is in motion). In at least one embodiment, steering system 1054 may receive signals from steering actuators 1056. In at least one embodiment, a steering wheel may be optional for full automation (Level 5) functionality.
In at least one embodiment, a brake sensor system 1046 may be used to operate vehicle brakes in response to receiving signals from brake actuators 1048 and/or brake sensors.

In at least one embodiment, controllers 1036, which may include, without limitation, one or more systems on chips ("SoCs") (not shown in FIG. 10A) and/or graphics processing units ("GPUs"), provide signals (e.g., representative of commands) to one or more components and/or systems of vehicle 1000. For example, in at least one embodiment, controllers 1036 may send signals to operate vehicle brakes via brake actuator 1048, to operate steering system 1054 via steering actuator 1056, and to operate propulsion system 1050 via throttle/accelerator 1052. In at least one embodiment, controllers 1036 may include one or more onboard (e.g., integrated) computing devices (e.g., supercomputers) that process sensor signals and output operation commands (e.g., signals representing commands) to enable autonomous driving and/or to assist a human driver in driving vehicle 1000. In at least one embodiment, controllers 1036 may include a first controller for autonomous driving functions, a second controller for functional safety functions, a third controller for artificial intelligence functionality (e.g., computer vision), a fourth controller for infotainment functionality, a fifth controller for redundancy in emergency conditions, and/or other controllers. In at least one embodiment, a single controller may handle two or more of the above functionalities, two or more controllers may handle a single functionality, and/or any combination thereof.

In at least one embodiment, controllers 1036 provide signals for controlling one or more components and/or systems of vehicle 1000 in response to sensor data (e.g., sensor inputs) received from one or more sensors. In at least one embodiment, sensor data may be received from, for example and without limitation, global navigation satellite system ("GNSS") sensors 1058 (e.g., Global Positioning System sensors), RADAR sensors 1060, ultrasonic sensors 1062, LIDAR sensors 1064, inertial measurement unit ("IMU") sensors 1066 (e.g., accelerometers, gyroscopes, magnetic compasses, magnetometers, etc.), microphones 1096, stereo cameras 1068, wide-view cameras 1070 (e.g., fisheye cameras), infrared cameras 1072, surround cameras 1074 (e.g., 360 degree cameras), long-range cameras (not shown in FIG. 10A), mid-range cameras (not shown in FIG. 10A), speed sensors 1044 (e.g., for measuring speed of vehicle 1000), vibration sensors 1042, steering sensors 1040, brake sensors (e.g., as part of brake sensor system 1046), and/or other sensor types.

In at least one embodiment, one or more of controllers 1036 may receive inputs (e.g., represented by input data) from an instrument cluster 1032 of vehicle 1000 and provide outputs (e.g., represented by output data, display data, etc.) via a human-machine interface ("HMI") display 1034, an audible annunciator, a loudspeaker, and/or via other components of vehicle 1000. In at least one embodiment, outputs may include information such as vehicle velocity, speed, time, map data (e.g., a high definition map (not shown in FIG. 10A)), location data (e.g., vehicle 1000's location, such as on a map), direction, location of other vehicles (e.g., an occupancy grid), information about objects and status of objects as perceived by controllers 1036, etc. For example, in at least one embodiment, HMI display 1034 may display information about the presence of one or more objects (e.g., a street sign, caution sign, traffic light changing, etc.) and/or information about upcoming driving maneuvers (e.g., changing lanes now, taking exit 34B in two miles, etc.).

In at least one embodiment, vehicle 1000 further includes a network interface 1024, which may use wireless antennas 1026 and/or modems to communicate over one or more networks. For example, in at least one embodiment, network interface 1024 may be capable of communication over Long-Term Evolution ("LTE"), Wideband Code Division Multiple Access ("WCDMA"), Universal Mobile Telecommunications System ("UMTS"), Global System for Mobile communication ("GSM"), IMT-CDMA Multi-Carrier ("CDMA2000") networks, etc. In at least one embodiment, wireless antennas 1026 may also enable communication between objects in the environment (e.g., vehicles, mobile devices, etc.) using local area networks, such as Bluetooth, Bluetooth Low Energy ("LE"), Z-Wave, ZigBee, etc., and/or low power wide-area networks ("LPWANs"), such as LoRaWAN, SigFox, etc.

Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 10A for inferencing or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 10A is used to implement techniques and/or functions described in connection with FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 of vehicle 1000 (shown with respect to FIG. 10C as part of CPU 1006 and GPU 1008) includes and/or runs at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a representation of a computer program comprising speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inferencing operation using a representation of a computer program comprising speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, vehicle 1000 includes a computer vision system that performs one or more inference operations on a representation of a computer program comprising one or more instructions identified by a compiler (e.g., DL compiler 102 of FIG. 1) as capable of being speculatively performed in parallel, and that identifies one or more trajectories of one or more corresponding objects based, at least in part, on performing the one or more inference operations. In at least one embodiment, vehicle 1000 includes a propulsion system, a directional control system, and a vehicle operator notification system that perform one or more actions (e.g., acceleration, brake control, steering, alert signaling) based, at least in part, on the one or more identified trajectories.

FIG. 10B illustrates an example of camera locations and fields of view for the autonomous vehicle 1000 of FIG. 10A, according to at least one embodiment. In at least one embodiment, the cameras and their respective fields of view are exemplary and not intended to be limiting.
For example, in at least one embodiment, additional and/or alternative cameras may be included and/or cameras may be located at different locations on vehicle 1000.

In at least one embodiment, camera types for the cameras may include, but are not limited to, digital cameras that may be adapted for use with components and/or systems of vehicle 1000. In at least one embodiment, cameras may operate at automotive safety integrity level ("ASIL") B and/or at another ASIL. In at least one embodiment, camera types may be capable of any image capture rate, such as 60 frames per second (fps), 120 fps, 240 fps, etc., depending on embodiment. In at least one embodiment, cameras may be capable of using rolling shutters, global shutters, another type of shutter, or a combination thereof. In at least one embodiment, a color filter array may include a red clear clear clear ("RCCC") color filter array, a red clear clear blue ("RCCB") color filter array, a red blue green clear ("RBGC") color filter array, a Foveon X3 color filter array, a Bayer sensor ("RGGB") color filter array, a monochrome sensor color filter array, and/or another type of color filter array. In at least one embodiment, clear pixel cameras, such as cameras with an RCCC, an RCCB, and/or an RBGC color filter array, may be used in an effort to increase light sensitivity.

In at least one embodiment, one or more of the cameras may be used to perform advanced driver assistance systems ("ADAS") functions (e.g., as part of a redundant or fail-safe design). For example, in at least one embodiment, a multi-function mono camera may be installed to provide functions including lane departure warning, traffic sign assist, and intelligent headlamp control.
In at least one embodiment, one or more of the cameras (eg, all cameras) may simultaneously record and provide image data (eg, video).In at least one embodiment, one or more cameras are configured to eliminate stray light and reflections from inside the vehicle 1000 (e.g., reflections off the dashboard onto the windshield) that can interfere with the camera's image data capture performance. Alternatively, it may be attached to a mounting assembly, such as a custom-designed (three-dimensional (“3D”) printed) assembly. Referring to the door mirror mounting assembly, in at least one embodiment, the door mirror assembly may be custom 3D printed such that the camera mounting plate fits the shape of the door mirror. In at least one embodiment, the camera may be integral with the door mirror. In at least one embodiment, for side view cameras, the cameras may again be integrated into four pillars in each corner of the cabin.In at least one embodiment, a camera with a field of view that includes a portion of the environment in front of vehicle 1000 (e.g., a front camera) is used for surrounding views to help identify paths and obstacles in front of controller 1036 . and/or may be used in conjunction with one or more of the control SoCs to help provide information essential to generating an occupancy grid and/or determining preferred vehicle routes. In at least one embodiment, the front-facing camera may be used to perform many of the ADAS functions similar to LIDAR, including but not limited to emergency braking, pedestrian detection, and collision avoidance. In at least one embodiment, the front-facing camera also provides other functions such as Lane Departure Warnings (“LDW”), Autonomous Cruise Control (“ACC”), and/or traffic sign recognition. 
In at least one embodiment, a variety of cameras may be used in a front-facing configuration, including, for example, a monocular camera platform that includes a complementary metal oxide semiconductor ("CMOS") color imager. In at least one embodiment, wide-angle camera 1070 may be used to perceive objects coming into view from the periphery (e.g., pedestrians, crossing traffic, or bicycles). Although only one wide-angle camera 1070 is illustrated in FIG. 10B, in other embodiments there may be any number of wide-angle cameras (including zero) on vehicle 1000. In at least one embodiment, any number of long-range cameras 1098 (e.g., a long-view stereo camera pair) may be used for depth-based object detection, especially for objects for which a neural network has not yet been trained. In at least one embodiment, long-range cameras 1098 may also be used for object detection and classification, as well as basic object tracking.

In at least one embodiment, any number of stereo cameras 1068 may also be included in a front-facing configuration. In at least one embodiment, one or more of stereo cameras 1068 may include an integrated control unit comprising a scalable processing unit, which may provide programmable logic ("FPGA") and a multi-core microprocessor with an integrated Controller Area Network ("CAN") or Ethernet interface on a single chip. In at least one embodiment, such a unit may be used to generate a 3D map of the environment of vehicle 1000, including a distance estimate for all points in the image. In at least one embodiment, one or more of stereo cameras 1068 may include, without limitation, compact stereo vision sensors that may include, without limitation, two camera lenses (one each on the left and right) and an image processing chip that may measure the distance from vehicle 1000 to the target object and use the generated information (e.g., metadata) to activate autonomous emergency braking and lane departure warning functions. In at least one embodiment, other types of stereo cameras 1068 may be used in addition to, or alternatively from, those described herein.

In at least one embodiment, cameras with a field of view that includes portions of the environment to the sides of vehicle 1000 (e.g., side-view cameras) may be used for surround view, providing information used to create and update the occupancy grid, as well as to generate side impact collision warnings. For example, in at least one embodiment, ambient cameras 1074 (e.g., four ambient cameras as illustrated in FIG. 10B) could be positioned on vehicle 1000. In at least one embodiment, ambient cameras 1074 may include, without limitation, any number and combination of wide-angle cameras, fisheye cameras, 360 degree cameras, and/or similar cameras. For instance, in at least one embodiment, four fisheye cameras may be positioned on the front, rear, and sides of vehicle 1000. In at least one embodiment, vehicle 1000 may use three ambient cameras 1074 (e.g., left, right, and rear), and may leverage one or more other cameras (e.g., a forward-facing camera) as a fourth ambient camera.

In at least one embodiment, cameras with a field of view that includes portions of the environment behind vehicle 1000 (e.g., rear-view cameras) may be used for parking assistance, surround view, rear collision warnings, and creating and updating the occupancy grid.
In at least one embodiment, a wide variety of cameras may be used, including, without limitation, cameras that are also suitable as front-facing cameras as described herein (e.g., long-range camera 1098 and/or medium-range camera 1076, stereo camera 1068, infrared camera 1072, etc.).

Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 10B for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 10B is used to implement techniques and/or functions described in connection with FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 of vehicle 1000 (shown in FIG. 10C as part of CPU 1006 and GPU 1008) implements at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to train at least one untrained or partially trained neural network. In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one inference operation.

FIG. 10C is a block diagram illustrating an example system architecture for autonomous vehicle 1000 of FIG. 10A, according to at least one embodiment. In at least one embodiment, each of the components, features, and systems of vehicle 1000 in FIG. 10C is illustrated as being connected via a bus 1002. In at least one embodiment, bus 1002 may include, without limitation, a CAN data interface (alternatively referred to herein as a "CAN bus"). In at least one embodiment, a CAN may be a network inside vehicle 1000 used to aid in control of various features and functions of vehicle 1000, such as actuation of brakes, acceleration, braking, steering, windshield wipers, etc. In at least one embodiment, bus 1002 may be configured to have dozens or even hundreds of nodes, each with its own unique identifier (e.g., a CAN ID). In at least one embodiment, bus 1002 may be read to find steering wheel angle, ground speed, engine revolutions per minute ("RPM"), button positions, and/or other vehicle status indicators. In at least one embodiment, bus 1002 may be an ASIL B compliant CAN bus.

In at least one embodiment, FlexRay and/or Ethernet protocols may be used in addition to, or alternatively from, a CAN. In at least one embodiment, there may be any number of buses forming bus 1002, which may include, without limitation, zero or more CAN buses, zero or more FlexRay buses, zero or more Ethernet buses, and/or zero or more other types of buses using different protocols. In at least one embodiment, two or more buses may be used to perform different functions, and/or may be used for redundancy. For example, a first bus may be used for collision avoidance functionality and a second bus may be used for actuation control. In at least one embodiment, each bus of bus 1002 may communicate with any of the components of vehicle 1000, and two or more buses of bus 1002 may communicate with corresponding components.
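Reading vehicle status indicators off a CAN bus like bus 1002 amounts to matching a frame's CAN ID and unpacking its payload bytes according to a known signal layout. The following sketch is purely illustrative: the IDs, byte layouts, and scale factors below are invented (real layouts come from the vehicle's signal database), and the decoder is not part of any described embodiment.

```python
import struct

# Hypothetical signal definitions -- invented for illustration only.
SPEED_ID = 0x123   # ground speed: big-endian uint16, 0.01 km/h per bit
RPM_ID   = 0x124   # engine RPM:   big-endian uint16, 0.25 rpm per bit

def decode_frame(can_id: int, data: bytes) -> dict:
    """Decode one CAN frame into named vehicle-status signals (sketch)."""
    if can_id == SPEED_ID:
        (raw,) = struct.unpack_from(">H", data, 0)  # unsigned 16-bit, big-endian
        return {"speed_kmh": raw * 0.01}
    if can_id == RPM_ID:
        (raw,) = struct.unpack_from(">H", data, 0)
        return {"engine_rpm": raw * 0.25}
    return {}  # frames for other nodes are ignored by this decoder
```

A frame carrying raw value 10000 under the hypothetical speed layout would decode to 100 km/h; separating ID matching from payload unpacking mirrors how per-node signal databases are typically consumed.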
In at least one embodiment, each of any number of systems on chip ("SoCs") 1004 (such as SoC 1004(A) and SoC 1004(B)), each of controllers 1036, and/or each computer within the vehicle may have access to the same input data (e.g., inputs from sensors of vehicle 1000), and may be connected to a common bus, such as a CAN bus.

In at least one embodiment, vehicle 1000 may include one or more controllers 1036, such as those described herein with respect to FIG. 10A. In at least one embodiment, controllers 1036 may be used for a variety of functions. In at least one embodiment, controllers 1036 may be coupled to any of the various other components and systems of vehicle 1000, and may be used for control of vehicle 1000, artificial intelligence of vehicle 1000, infotainment for vehicle 1000, and/or other functions.

In at least one embodiment, vehicle 1000 may include any number of SoCs 1004. In at least one embodiment, each of SoCs 1004 may include, without limitation, central processing units ("CPUs") 1006, graphics processing units ("GPUs") 1008, processors 1010, caches 1012, accelerators 1014, data stores 1016, and/or other components and features not illustrated. In at least one embodiment, SoCs 1004 may be used to control vehicle 1000 in a variety of platforms and systems. For example, in at least one embodiment, SoCs 1004 may be combined in a system (e.g., a system of vehicle 1000) with a High Definition ("HD") map 1022, which may obtain map refreshes and/or updates via network interface 1024 from one or more servers (not shown in FIG. 10C).

In at least one embodiment, CPUs 1006 may include a CPU cluster or CPU complex (alternatively referred to herein as a "CCPLEX"). In at least one embodiment, CPUs 1006 may include multiple cores and/or level two ("L2") caches. For instance, in at least one embodiment, CPUs 1006 may include eight cores in a coherent multiprocessor configuration.
In at least one embodiment, CPUs 1006 may include four dual-core clusters, where each cluster has a dedicated L2 cache (e.g., a 2 megabyte (MB) L2 cache). In at least one embodiment, CPUs 1006 (e.g., a CCPLEX) may be configured to support simultaneous cluster operations, enabling any combination of clusters of CPUs 1006 to be active at any given time.

In at least one embodiment, one or more of CPUs 1006 may implement power management capabilities that include, without limitation, one or more of the following features: individual hardware blocks may be clock-gated automatically when idle to save dynamic power; each core clock may be gated when such core is not actively executing instructions due to execution of Wait for Interrupt ("WFI")/Wait for Event ("WFE") instructions; each core may be independently power-gated; each core cluster may be independently clock-gated when all cores are clock-gated or power-gated; and/or each core cluster may be independently power-gated when all cores are power-gated. In at least one embodiment, CPUs 1006 may further implement an enhanced algorithm for managing power states, where allowed power states and expected wakeup times are specified, and hardware/microcode determines the best power state to enter for the core, cluster, and CCPLEX. In at least one embodiment, the processing cores may support simplified power state entry sequences in software, with the work offloaded to microcode.

In at least one embodiment, GPUs 1008 may include an integrated GPU (alternatively referred to herein as an "iGPU"). In at least one embodiment, GPUs 1008 may be programmable and may be efficient for parallel workloads. In at least one embodiment, GPUs 1008 may use an enhanced tensor instruction set. In at least one embodiment, GPUs 1008 may include one or more streaming microprocessors, where each streaming microprocessor may include a level one ("L1") cache (e.g., an L1 cache with at least 96 KB of storage capacity), and two or more streaming microprocessors may share an L2 cache (e.g., an L2 cache with a 512 KB storage capacity). In at least one embodiment, GPUs 1008 may include at least eight streaming microprocessors. In at least one embodiment, GPUs 1008 may use compute application programming interface(s) ("API(s)"). In at least one embodiment, GPUs 1008 may use one or more parallel computing platforms and/or programming models (e.g., NVIDIA's CUDA model).

In at least one embodiment, one or more of GPUs 1008 may be power-optimized for best performance in automotive and embedded use cases. For example, in at least one embodiment, GPUs 1008 could be fabricated on Fin field-effect transistor ("FinFET") circuitry. In at least one embodiment, each streaming microprocessor may incorporate a number of mixed-precision processing cores partitioned into multiple blocks. For example, and without limitation, 64 FP32 cores and 32 FP64 cores could be partitioned into four processing blocks. In at least one embodiment, each processing block could be allocated 16 FP32 cores, 8 FP64 cores, 16 INT32 cores, two mixed-precision NVIDIA Tensor cores for deep learning matrix arithmetic, a level zero ("L0") instruction cache, a warp scheduler, a dispatch unit, and/or a 64 KB register file. In at least one embodiment, streaming microprocessors may include independent parallel integer and floating-point data paths to provide for efficient execution of workloads with a mix of computation and addressing calculations. In at least one embodiment, streaming microprocessors may include independent thread scheduling capability to enable finer-grain synchronization and cooperation between parallel threads.
In at least one embodiment, streaming microprocessors may include a combined L1 data cache and shared memory unit in order to improve performance while simplifying programming.

In at least one embodiment, one or more of GPUs 1008 may include high bandwidth memory ("HBM") and/or a 16 GB HBM2 memory subsystem to provide, in some examples, about 900 GB/second peak memory bandwidth. In at least one embodiment, in addition to, or alternatively from, HBM memory, a synchronous graphics random-access memory ("SGRAM") may be used, such as a graphics double data rate type five synchronous random-access memory ("GDDR5").

In at least one embodiment, GPUs 1008 may include unified memory technology. In at least one embodiment, address translation services ("ATS") support may be used to allow GPUs 1008 to access the page tables of CPUs 1006 directly. In at least one embodiment, when the memory management unit ("MMU") of a GPU of GPUs 1008 experiences a miss, an address translation request may be transmitted to CPUs 1006. In at least one embodiment, in response, CPUs 1006 may look in their page tables for the virtual-to-physical mapping for the address and transmit the translation back to GPUs 1008. In at least one embodiment, unified memory technology may allow a single unified virtual address space for the memory of both CPUs 1006 and GPUs 1008, thereby simplifying GPU 1008 programming and the porting of applications to GPUs 1008.

In at least one embodiment, GPUs 1008 may include any number of access counters that may keep track of the frequency of access by GPUs 1008 to the memory of other processors. In at least one embodiment, access counters may help ensure that memory pages are moved to the physical memory of the processor that is accessing those pages most frequently, thereby improving efficiency for memory ranges shared between processors.

In at least one embodiment, one or more of SoCs 1004 may include any number of caches 1012, including those described herein.
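The access-counter idea described above can be reduced to a simple policy: count which processor touches a page most often and make that processor's physical memory the page's home. The following Python sketch is illustrative only (class and method names are invented; it is not the hardware mechanism itself):

```python
from collections import Counter, defaultdict

# Illustrative sketch of counter-driven page placement: per-page counters
# record which processor accesses a page, and the page is migrated to the
# physical memory of its most frequent accessor.

class AccessCounters:
    def __init__(self) -> None:
        self.counts: dict[int, Counter] = defaultdict(Counter)  # page -> {processor: hits}

    def record(self, page: int, processor: str) -> None:
        """Note one access to `page` by `processor`."""
        self.counts[page][processor] += 1

    def preferred_home(self, page: int) -> str:
        """Processor whose physical memory should hold the page."""
        return self.counts[page].most_common(1)[0][0]
```

A page touched mostly by the GPU would be homed in GPU memory, keeping the frequent accesses local while the unified virtual address space stays unchanged.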
For example, in at least one embodiment, caches 1012 may include a level three ("L3") cache that is available to both CPUs 1006 and GPUs 1008 (e.g., that is connected to CPUs 1006 and GPUs 1008). In at least one embodiment, caches 1012 may include a write-back cache that may keep track of the states of lines, such as by using a cache coherence protocol (e.g., MEI, MESI, MSI, etc.). In at least one embodiment, an L3 cache may include 4 MB of memory or more, depending on the embodiment, although smaller cache sizes may be used.

In at least one embodiment, one or more of SoCs 1004 may include one or more accelerators 1014 (e.g., hardware accelerators, software accelerators, or a combination thereof). In at least one embodiment, SoCs 1004 may include a hardware acceleration cluster that may include optimized hardware accelerators and/or large on-chip memory. In at least one embodiment, large on-chip memory (e.g., 4 MB of SRAM) may enable a hardware acceleration cluster to accelerate neural networks and other calculations. In at least one embodiment, a hardware acceleration cluster may be used to complement GPUs 1008 and to off-load some of the tasks of GPUs 1008 (e.g., to free up more cycles of GPUs 1008 for performing other tasks). In at least one embodiment, accelerators 1014 could be used for targeted workloads (e.g., perception, convolutional neural networks ("CNNs"), recurrent neural networks ("RNNs"), etc.) that are stable enough to be amenable to acceleration. In at least one embodiment, a CNN may include a region-based or regional convolutional neural network ("RCNN") and a Fast RCNN (e.g., as used for object detection), or another type of CNN.

In at least one embodiment, accelerators 1014 (e.g., a hardware acceleration cluster) may include one or more deep learning accelerators ("DLAs"). In at least one embodiment, DLAs may include, without limitation, one or more Tensor processing units ("TPUs") that may be used for deep learning applications and inferencing.
In at least one embodiment, the TPUs may be configured to provide, for example, 10 trillion operations per second for deep learning applications and inferencing. In at least one embodiment, TPUs may be accelerators configured to, and optimized for, performing image processing functions (e.g., for CNNs, RCNNs, etc.). In at least one embodiment, DLAs may further be optimized for a specific set of neural network types and floating-point operations, as well as for inferencing. In at least one embodiment, the design of DLAs may provide more performance per millimeter than a typical general-purpose GPU, and typically vastly exceeds the performance of a CPU. In at least one embodiment, TPUs may perform several functions, including a single-instance convolution function supporting, for example, INT8, INT16, and FP16 data types for both features and weights, as well as post-processor functions. In at least one embodiment, DLAs may quickly and efficiently execute neural networks, especially CNNs, on processed or unprocessed data for any of a variety of functions, including, for example and without limitation: a CNN for object identification and detection using data from camera sensors; a CNN for distance estimation using data from camera sensors; a CNN for emergency vehicle detection, identification, and detection using data from microphones; a CNN for facial recognition and vehicle owner identification using data from camera sensors; and/or a CNN for security- and/or safety-related events.

In at least one embodiment, DLAs may perform any function of GPUs 1008, and by using an inference accelerator, for example, a designer may target either DLAs or GPUs 1008 for any function. For example, in at least one embodiment, a designer may focus processing of CNNs and floating-point operations on DLAs and leave other functions to GPUs 1008 and/or accelerators 1014.

In at least one embodiment, accelerators 1014 may include a programmable vision accelerator ("PVA"), which may alternatively be referred to herein as a computer vision accelerator.
In at least one embodiment, PVAs may be designed and configured to accelerate computer vision algorithms for advanced driver assistance system ("ADAS") 1038, autonomous driving, augmented reality ("AR") applications, and/or virtual reality ("VR") applications. In at least one embodiment, PVAs may provide a balance between performance and flexibility. For example, in at least one embodiment, each PVA may include, for example and without limitation, any number of reduced instruction set computer ("RISC") cores, direct memory access ("DMA"), and/or any number of vector processors.

In at least one embodiment, RISC cores may interact with image sensors (e.g., the image sensors of any of the cameras described herein), image signal processors, etc. In at least one embodiment, each RISC core may include any amount of memory. In at least one embodiment, RISC cores may use any of a number of protocols, depending on the embodiment. In at least one embodiment, RISC cores may execute a real-time operating system ("RTOS"). In at least one embodiment, RISC cores may be implemented using one or more integrated circuit devices, application specific integrated circuits ("ASICs"), and/or memory devices. For example, in at least one embodiment, RISC cores could include an instruction cache and/or tightly coupled RAM.

In at least one embodiment, DMA may enable components of a PVA to access system memory independently of CPUs 1006. In at least one embodiment, DMA may support any number of features used to provide optimization to a PVA, including, but not limited to, supporting multi-dimensional addressing and/or circular addressing. In at least one embodiment, DMA may support up to six or more dimensions of addressing, which may include, without limitation, block width, block height, block depth, horizontal block stepping, vertical block stepping, and/or depth stepping.
In at least one embodiment, vector processors may be programmable processors that may be designed to efficiently and flexibly execute programming for computer vision algorithms and to provide signal processing capabilities. In at least one embodiment, a PVA may include a PVA core and two vector processing subsystem partitions. In at least one embodiment, a PVA core may include a processor subsystem, DMA engines (e.g., two DMA engines), and/or other peripherals. In at least one embodiment, a vector processing subsystem may operate as the primary processing engine of a PVA, and may include a vector processing unit ("VPU"), an instruction cache, and/or vector memory (e.g., "VMEM"). In at least one embodiment, a VPU may include a digital signal processor such as, for example, a single instruction, multiple data ("SIMD"), very long instruction word ("VLIW") digital signal processor. In at least one embodiment, a combination of SIMD and VLIW may enhance throughput and speed.

In at least one embodiment, each of the vector processors may include an instruction cache and may be coupled to dedicated memory. As a result, in at least one embodiment, each of the vector processors may be configured to execute independently of the other vector processors. In at least one embodiment, vector processors that are included in a particular PVA may be configured to employ data parallelism. For instance, in at least one embodiment, the plural vector processors included in a single PVA may execute a common computer vision algorithm, but on different regions of an image. In at least one embodiment, vector processors included in a particular PVA may simultaneously execute different computer vision algorithms on one image, or even execute different algorithms on sequential images or on portions of an image.
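The data-parallel pattern just described (the same algorithm run on different regions of one image) can be sketched as follows. This is an illustrative Python model, not PVA code: the split/merge shape is the point, and the per-pixel kernel is a stand-in for any common vision algorithm.

```python
# Illustrative sketch: several processors run one common per-pixel kernel on
# different row bands of the same image, then the bands are reassembled.

def split_rows(image: list, n_workers: int) -> list:
    """Split an image (a list of rows) into up to n_workers contiguous row bands."""
    h = len(image)
    band = (h + n_workers - 1) // n_workers  # ceil division
    return [image[i : i + band] for i in range(0, h, band)]

def run_data_parallel(image: list, n_workers: int, kernel) -> list:
    """Apply the same kernel to every pixel of every band, then reassemble."""
    out = []
    for band_img in split_rows(image, n_workers):  # each band could run on its own processor
        out.extend([[kernel(p) for p in row] for row in band_img])
    return out
```

Because each band is processed independently, the bands map naturally onto vector processors that execute without coordinating with one another until the results are merged.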
In at least one embodiment, among other things, any number of PVAs may be included in a hardware acceleration cluster, and any number of vector processors may be included in each PVA. In at least one embodiment, PVAs may include additional error correcting code ("ECC") memory to enhance overall system safety.

In at least one embodiment, accelerators 1014 may include an on-chip computer vision network and static random-access memory ("SRAM") for providing high-bandwidth, low-latency SRAM for accelerators 1014. In at least one embodiment, on-chip memory may include at least 4 MB of SRAM comprising, for example and without limitation, eight field-configurable memory blocks, which may be accessible by both a PVA and a DLA. In at least one embodiment, each pair of memory blocks may include an advanced peripheral bus ("APB") interface, configuration circuitry, a controller, and a multiplexer. In at least one embodiment, any type of memory may be used. In at least one embodiment, a PVA and a DLA may access memory via a backbone that provides the PVA and the DLA with high-speed access to memory. In at least one embodiment, the backbone may include an on-chip computer vision network that interconnects the PVA and the DLA to memory (e.g., using the APB).

In at least one embodiment, the on-chip computer vision network may include an interface that determines, before transmission of any control signal/address/data, that both the PVA and the DLA provide ready and valid signals. In at least one embodiment, the interface may provide for separate phases and separate channels for transmitting control signals/addresses/data, as well as burst-type communications for continuous data transfer. In at least one embodiment, the interface may comply with International Organization for Standardization ("ISO") 26262 or International Electrotechnical Commission ("IEC") 61508 standards, although other standards and protocols may be used.
In at least one embodiment, one or more of SoCs 1004 may include a real-time ray-tracing hardware accelerator. In at least one embodiment, a real-time ray-tracing hardware accelerator may be used to quickly and efficiently determine positions and extents of objects (e.g., within a world model), to generate real-time visualization simulations for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of SONAR systems, for general wave propagation simulation, for comparison to LIDAR data for purposes of localization and/or other functions, and/or for other uses.

In at least one embodiment, accelerators 1014 can have a wide array of uses for autonomous driving. In at least one embodiment, a PVA may be used for key processing stages in ADAS and autonomous vehicles. In at least one embodiment, a PVA's capabilities may be a good match for algorithmic domains needing predictable processing, at low power and low latency. In other words, a PVA performs well on semi-dense or dense regular computation, even on small data sets, which may need predictable run times with low latency and low power. In at least one embodiment, such as in vehicle 1000, PVAs may be designed to run classic computer vision algorithms, as they can be efficient at object detection and operating on integer math.

For example, according to at least one embodiment of the technology, a PVA may be used to perform computer stereo vision. In at least one embodiment, a semi-global matching-based algorithm may be used in some examples, although this is not intended to be limiting. In at least one embodiment, applications for Level 3-5 autonomous driving use motion estimation/stereo matching on-the-fly (e.g., structure from motion, pedestrian recognition, lane detection, etc.).
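Once stereo matching has produced a per-pixel disparity, range follows from the standard rectified-stereo relation Z = f·B/d, with focal length f in pixels, baseline B in meters, and disparity d in pixels. The sketch below is illustrative only (the function name and numeric values are examples, not taken from the embodiments):

```python
# Illustrative sketch: converting a stereo disparity to a depth estimate for a
# rectified camera pair, using Z = f * B / d.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters of the point producing the given disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Since depth varies inversely with disparity, distant objects produce small disparities, which is why long-range stereo pairs benefit from wider baselines and longer focal lengths.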
In at least one embodiment, a PVA may perform computer stereo vision functions on inputs from two monocular cameras.

In at least one embodiment, a PVA may be used to perform dense optical flow. For example, in at least one embodiment, a PVA could process raw RADAR data (e.g., using a 4D Fast Fourier Transform) to provide processed RADAR data. In at least one embodiment, a PVA may be used for time-of-flight depth processing, for example, by processing raw time-of-flight data to provide processed time-of-flight data.

In at least one embodiment, a DLA may be used to run any type of network to enhance control and driving safety, including, for example and without limitation, a neural network that outputs a measure of confidence for each object detection. In at least one embodiment, confidence may be represented or interpreted as a probability, or as providing a relative "weight" of each detection compared to other detections. In at least one embodiment, a confidence measure enables the system to make further decisions regarding which detections should be considered true positive detections rather than false positive detections. In at least one embodiment, the system may set a threshold value for confidence and consider only detections exceeding the threshold value as true positive detections. In an embodiment in which automatic emergency braking ("AEB") is used, false positive detections would cause the vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, highly confident detections may be considered triggers for AEB. In at least one embodiment, a DLA may run a neural network for regressing confidence values.
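The confidence gating just described can be sketched as two thresholds: an ordinary one separating true positives from false positives, and a stricter one gating a safety-critical action such as AEB. This is an illustrative Python model (the threshold values and function names are invented for illustration):

```python
# Illustrative sketch: confidence-thresholded detections, with a stricter
# threshold gating automatic emergency braking (AEB). Values are examples.

DETECT_THRESHOLD = 0.5   # below this, treat the detection as a false positive
AEB_THRESHOLD = 0.9      # only very confident detections may trigger AEB

def positive_detections(detections: list) -> list:
    """Keep only detections confident enough to count as true positives."""
    return [d for d in detections if d["confidence"] >= DETECT_THRESHOLD]

def should_trigger_aeb(detections: list) -> bool:
    """True only when at least one detection clears the stricter AEB threshold."""
    return any(d["confidence"] >= AEB_THRESHOLD for d in detections)
```

Keeping the AEB threshold well above the ordinary detection threshold is what keeps marginal detections out of the braking decision while still reporting them to downstream consumers.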
In at least one embodiment, the neural network may take as its input at least some subset of parameters, such as bounding box dimensions, a ground plane estimate obtained (e.g., from another subsystem), output from IMU sensors 1066 that correlates with the orientation of vehicle 1000, distance, and 3D location estimates of objects obtained from the neural network and/or other sensors (e.g., LIDAR sensors 1064 or RADAR sensors 1060), among others.

In at least one embodiment, one or more of SoCs 1004 may include data stores 1016 (e.g., memory). In at least one embodiment, data stores 1016 may be on-chip memory of SoCs 1004, which may store neural networks to be executed on GPUs 1008 and/or a DLA. In at least one embodiment, data stores 1016 may be large enough in capacity to store multiple instances of neural networks for redundancy and safety. In at least one embodiment, data stores 1016 may comprise an L2 or L3 cache.

In at least one embodiment, one or more of SoCs 1004 may include any number of processors 1010 (e.g., embedded processors). In at least one embodiment, processors 1010 may include a boot and power management processor, which may be a dedicated processor and subsystem to handle boot power and management functions and related security enforcement. In at least one embodiment, the boot and power management processor may be a part of the boot sequence of SoCs 1004 and may provide runtime power management services. In at least one embodiment, the boot power and management processor may provide clock and voltage programming, assistance in system low power state transitions, management of SoC 1004 thermals and temperature sensors, and/or management of SoC 1004 power states.
In at least one embodiment, each temperature sensor may be implemented as a ring oscillator whose output frequency is proportional to temperature, and SoCs 1004 may use the ring oscillators to detect temperatures of CPUs 1006, GPUs 1008, and/or accelerators 1014. In at least one embodiment, if temperatures are determined to exceed a threshold, then the boot and power management processor may enter a temperature fault routine and put SoCs 1004 into a lower power state and/or put vehicle 1000 into a safe-stop mode (e.g., bring vehicle 1000 to a safe stop).

In at least one embodiment, processors 1010 may further include a set of embedded processors that may serve as an audio processing engine, which may be an audio subsystem that enables full hardware support for multi-channel audio over multiple interfaces, along with a broad and flexible range of audio I/O interfaces. In at least one embodiment, the audio processing engine is a dedicated processor core with a digital signal processor with dedicated RAM.

In at least one embodiment, processors 1010 may further include an always-on processor engine that may provide the necessary hardware features to support low power sensor management and boot-up use cases. In at least one embodiment, the always-on processor engine may include, without limitation, a processor core, tightly coupled RAM, supporting peripherals (e.g., timers and interrupt controllers), various I/O controller peripherals, and routing logic.

In at least one embodiment, processors 1010 may further include a safety cluster engine that includes, without limitation, a dedicated processor subsystem to handle safety management for automotive applications. In at least one embodiment, the safety cluster engine may include, without limitation, two or more processor cores, tightly coupled RAM, support peripherals (e.g., timers, an interrupt controller, etc.), and/or routing logic.
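The ring-oscillator thermal sensing described above can be modeled as inverting a calibrated linear frequency-versus-temperature relation and comparing the result to a fault threshold. The sketch below is illustrative only: the calibration constants and threshold are invented, and a real sensor would use per-part calibration data.

```python
# Illustrative sketch: recovering die temperature from a ring oscillator whose
# output frequency rises linearly with temperature, then comparing it against
# a fault threshold. All constants are hypothetical.

F0_MHZ = 100.0      # hypothetical oscillator frequency at calibration temperature
T0_C = 25.0         # calibration temperature, degrees Celsius
K_MHZ_PER_C = 0.2   # hypothetical slope: frequency rise per degree Celsius
T_LIMIT_C = 95.0    # hypothetical threshold for entering a temperature fault routine

def temperature_c(freq_mhz: float) -> float:
    """Invert the linear model f = F0 + K * (T - T0) to recover T."""
    return T0_C + (freq_mhz - F0_MHZ) / K_MHZ_PER_C

def over_limit(freq_mhz: float) -> bool:
    """True when the measured frequency implies the fault routine should run."""
    return temperature_c(freq_mhz) > T_LIMIT_C
```

Measuring temperature via oscillator frequency needs no analog sense path: counting oscillator edges against a reference clock yields the frequency, and the linear calibration does the rest.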
In at least one embodiment, in a safety mode, two or more cores may operate in lockstep mode and function as a single core with comparison logic to detect any differences between their operations. In at least one embodiment, processors 1010 may further include a real-time camera engine that may include, without limitation, a dedicated processor subsystem for handling real-time camera management. In at least one embodiment, processors 1010 may further include a high dynamic range signal processor that may include, without limitation, an image signal processor that is a hardware engine that is part of the camera processing pipeline.

In at least one embodiment, processors 1010 may include a video image compositor, which may be a processing block (e.g., implemented on a microprocessor) that implements the video post-processing functions needed by a video playback application to produce the final image for the player window. In at least one embodiment, the video image compositor may perform lens distortion correction on wide-angle camera 1070, ambient cameras 1074, and/or on in-cabin monitoring camera sensors. In at least one embodiment, the in-cabin monitoring camera sensors are preferably monitored by a neural network running on another instance of SoC 1004, configured to identify in-cabin events and respond accordingly. In at least one embodiment, an in-cabin system may perform, without limitation, lip reading to activate cellular service and place a phone call, dictate emails, change the vehicle's destination, activate or change the vehicle's infotainment system and settings, or provide voice-activated web surfing. In at least one embodiment, certain functions are available to the driver when the vehicle is operating in an autonomous mode and are otherwise disabled.

In at least one embodiment, the video image compositor may include enhanced temporal noise reduction for both spatial and temporal noise reduction.
For example, in at least one embodiment, when motion occurs in a video, the noise reduction weights spatial information appropriately, decreasing the weight of the information provided by adjacent frames. In at least one embodiment, when an image or portion of an image does not include motion, the temporal noise reduction performed by the video image compositor may use information from a previous image to reduce noise in the current image.

In at least one embodiment, the video image compositor may also be configured to perform stereo rectification on input stereo lens frames. In at least one embodiment, the video image compositor may further be used for user interface composition when the operating system desktop is in use, so that GPU 1008 is not required to continuously render new surfaces. In at least one embodiment, when GPU 1008 is powered on and actively performing 3D rendering, the video image compositor may be used to offload GPU 1008 to improve performance and responsiveness.

In at least one embodiment, one or more of SoCs 1004 may further include a mobile industry processor interface ("MIPI") camera serial interface for receiving video and input from cameras, a high-speed interface, and/or a video input block that may be used for camera and related pixel input functions. In at least one embodiment, one or more of SoCs 1004 may further include an input/output controller, which may be controlled by software and may be used to receive I/O signals that are not committed to a specific role.

In at least one embodiment, one or more of SoCs 1004 may further include a broad range of peripheral interfaces to enable communication with peripherals, audio encoders/decoders ("codecs"), power management, and/or other devices.
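The motion-adaptive temporal noise reduction described earlier in this section can be sketched as a per-pixel blend: where motion is high, the filter trusts spatial information from the current frame; where the scene is static, it blends in the previous frame to suppress temporal noise. This is a minimal illustrative sketch, not the compositor's actual algorithm; the blend weights are assumptions.

```python
def denoise_pixel(current: float, previous: float, motion: float) -> float:
    """Blend current- and previous-frame pixel values.

    motion is a per-pixel motion estimate in [0, 1]; 1.0 means full
    motion (use only the current frame), 0.0 means static (blend in
    history). The 0.5 cap on the temporal weight is an assumption.
    """
    alpha = 0.5 * (1.0 - motion)   # temporal weight: static pixels trust history more
    return (1.0 - alpha) * current + alpha * previous
```

With full motion the output equals the current pixel; with no motion the output is an even blend of the two frames, attenuating frame-to-frame noise.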
In at least one embodiment, SoC 1004 may be used to process data from cameras (e.g., connected via Gigabit Multimedia Serial Link and Ethernet channels), data from sensors (e.g., LIDAR sensor 1064, RADAR sensor 1060, etc., which may be connected via Ethernet channels), data from bus 1002 (e.g., the speed of vehicle 1000, steering wheel position, etc.), and data from GNSS sensors 1058 (e.g., connected via an Ethernet bus or CAN bus). In at least one embodiment, one or more of SoCs 1004 may further include a dedicated high-performance mass-storage controller, which may include its own DMA engine and may be used to free CPU 1006 from routine data-management tasks.

In at least one embodiment, SoC 1004 may be an end-to-end platform with a flexible architecture spanning automation levels 3-5, providing a comprehensive functional-safety architecture that leverages and makes efficient use of computer vision and ADAS techniques for diversity and redundancy, along with deep learning tools, to provide a flexible, reliable driving software stack. In at least one embodiment, SoC 1004 may be faster, more reliable, and even more energy- and space-efficient than conventional systems. For example, in at least one embodiment, accelerator 1014, when combined with CPU 1006, GPU 1008, and data store 1016, may provide a fast, efficient platform for level 3-5 autonomous vehicles.

In at least one embodiment, computer vision algorithms may be executed on CPUs, which may be configured using a high-level programming language, such as C, to execute a wide variety of processing algorithms across a wide variety of visual data. However, in at least one embodiment, CPUs are often unable to meet the performance requirements of many computer vision applications, such as those related to execution time and power consumption.
For example, in at least one embodiment, many CPUs are unable to execute, in real time, the complex object detection algorithms that are used in in-vehicle ADAS applications and in practical level 3-5 autonomous vehicles.

Embodiments described herein may allow multiple neural networks to be performed simultaneously and/or sequentially, and their results to be combined to enable level 3-5 autonomous driving functionality. For example, in at least one embodiment, a CNN executing on a DLA or a discrete GPU (e.g., GPU 1020) may include text and word recognition, allowing it to read and understand traffic signs, including signs for which the neural network has not been specifically trained. In at least one embodiment, the DLA may further include a neural network capable of identifying and interpreting signs, providing a semantic understanding of the signs, and passing that semantic understanding to path-planning modules running on the CPU complex.

In at least one embodiment, multiple neural networks may be run simultaneously for level 3, 4, or 5 driving. For example, in at least one embodiment, a warning sign that reads "Caution: icy when flashing," together with an electric light, may be interpreted independently or collectively by several neural networks. In at least one embodiment, such a warning sign itself may be identified as a traffic sign by a first deployed neural network (e.g., a trained neural network), and the text "icy when flashing" may be interpreted by a second deployed neural network, which informs the vehicle's path-planning software (preferably running on the CPU complex) that icy conditions exist when flashing lights are detected. In at least one embodiment, the flashing light may be identified by operating a third deployed neural network over multiple frames, notifying the vehicle's path-planning software of the presence (or absence) of flashing lights.
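The three-network interpretation of the warning sign described above can be sketched as a simple fusion step: a sign detector, a text interpreter, and a flashing-light detector each produce an output, and their combination drives a notification to the path-planning software. All names, the sign text, and the combination policy here are illustrative assumptions, not any production pipeline.

```python
def combine_sign_networks(is_warning_sign: bool,
                          sign_text: str,
                          light_flashing: bool) -> str:
    """Fuse outputs of three deployed networks into a path-planner notification.

    is_warning_sign: output of the first network (sign detection)
    sign_text:       output of the second network (text interpretation)
    light_flashing:  output of the third network (multi-frame flash detection)
    """
    if not is_warning_sign:
        return "no_notification"
    if "icy when flashing" in sign_text and light_flashing:
        # condition on the sign is active: warn path planning of icy conditions
        return "notify_icy_conditions"
    # sign recognized but its flashing-light condition is not active
    return "sign_noted"
```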
In at least one embodiment, all three neural networks may run simultaneously, such as within the DLA and/or on GPU 1008.

In at least one embodiment, a CNN for facial recognition and vehicle owner identification may use data from camera sensors to identify the presence of an authorized driver and/or owner of vehicle 1000. In at least one embodiment, an always-on sensor processing engine may be used to unlock the vehicle when the owner approaches the driver's door and turn on the lights, and, in security mode, to disable the vehicle when the owner leaves the vehicle. In this way, SoC 1004 provides security against theft and/or carjacking.

In at least one embodiment, a CNN for emergency vehicle detection and identification may use data from microphones 1096 to detect and identify emergency vehicle sirens. In at least one embodiment, SoC 1004 may use a CNN to classify environmental and urban sounds, as well as to classify visual data. In at least one embodiment, a CNN running on the DLA is trained to identify the relative closing speed of an emergency vehicle (e.g., by using the Doppler effect). In at least one embodiment, a CNN may also be trained to identify emergency vehicles specific to the local area in which the vehicle is operating, as identified by GNSS sensors 1058. In at least one embodiment, when operating in Europe, a CNN will seek to detect European sirens, and when in North America, a CNN will seek to identify only North American sirens. In at least one embodiment, once an emergency vehicle is detected, a control program may be used to execute an emergency-vehicle safety routine, slowing the vehicle, pulling over to the side of the road, parking the vehicle, and/or idling the vehicle, with the assistance of ultrasonic sensors 1062, until the emergency vehicle passes.

In at least one embodiment, vehicle 1000 may include CPU 1018 (e.g., a discrete CPU or dCPU), which may be coupled to SoC 1004 via a high-speed interconnect (e.g., PCIe).
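The Doppler relation mentioned above for estimating an emergency vehicle's closing speed from its siren can be written down directly. This is a physics sketch, not the CNN's learned behavior: for a stationary observer and an approaching source, f_obs = f_src * c / (c - v), which rearranges to v = c * (1 - f_src / f_obs). The assumed siren base frequency in the usage below is hypothetical.

```python
SPEED_OF_SOUND_MS = 343.0  # m/s in air at roughly 20 degrees C

def approach_speed_from_doppler(f_source_hz: float, f_observed_hz: float) -> float:
    """Relative approach speed (m/s) of a siren from its Doppler shift.

    Uses the stationary-observer relation f_obs = f_src * c / (c - v),
    solved for v. Positive result means the source is closing.
    """
    return SPEED_OF_SOUND_MS * (1.0 - f_source_hz / f_observed_hz)
```

For example, a 700 Hz siren heard at about 777.8 Hz corresponds to a closing speed of roughly 34.3 m/s.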
In at least one embodiment, CPU 1018 may include an X86 processor, for example. In at least one embodiment, CPU 1018 may be used to perform any of a variety of functions, including, for example, arbitrating potentially inconsistent results between ADAS sensors and SoC 1004, and/or monitoring status and health.

In at least one embodiment, vehicle 1000 may include GPU 1020 (e.g., a discrete GPU or dGPU), which may be coupled to SoC 1004 via a high-speed interconnect (e.g., NVIDIA's NVLINK channel). In at least one embodiment, GPU 1020 may provide additional artificial intelligence functionality, such as by executing redundant and/or different neural networks, and may be used to train and/or update neural networks based at least in part on inputs (e.g., sensor data) from sensors of vehicle 1000.

In at least one embodiment, vehicle 1000 may further include network interface 1024, which may include, without limitation, wireless antenna 1026 (e.g., one or more wireless antennas for different communication protocols, such as a cellular antenna, a Bluetooth antenna, etc.). In at least one embodiment, network interface 1024 may be used to enable wireless connectivity to Internet cloud services (e.g., with servers and/or other network devices), with other vehicles, and/or with computing devices (e.g., passengers' client devices). In at least one embodiment, to communicate with other vehicles, a direct link may be established between vehicle 1000 and another vehicle, and/or an indirect link may be established (e.g., across networks and over the Internet). In at least one embodiment, direct links may be provided using a vehicle-to-vehicle communication link. In at least one embodiment, the vehicle-to-vehicle communication link may provide vehicle 1000 with information about vehicles in the vicinity of vehicle 1000 (e.g., vehicles in front of, to the side of, and/or behind vehicle 1000).
In at least one embodiment, such aforementioned functionality may be part of a cooperative adaptive cruise control functionality of vehicle 1000.

In at least one embodiment, network interface 1024 may include an SoC that provides modulation and demodulation functionality and enables controller 1036 to communicate over wireless networks. In at least one embodiment, network interface 1024 may include a radio frequency front end for up-conversion from baseband to radio frequency and down-conversion from radio frequency to baseband. In at least one embodiment, frequency conversions may be performed in any technically feasible fashion. For example, frequency conversions may be performed through well-known processes and/or using super-heterodyne processes. In at least one embodiment, radio frequency front-end functionality may be provided by a separate chip. In at least one embodiment, the network interface may include wireless functionality for communicating over LTE, WCDMA, UMTS, GSM, CDMA2000, Bluetooth, Bluetooth LE, Wi-Fi, Z-Wave, ZigBee, LoRaWAN, and/or other wireless protocols.

In at least one embodiment, vehicle 1000 may further include data store 1028, which may include, without limitation, off-chip (e.g., off SoC 1004) storage. In at least one embodiment, data store 1028 may include one or more storage elements including, without limitation, RAM, SRAM, dynamic random-access memory ("DRAM"), video random-access memory ("VRAM"), flash memory, hard disks, and/or other components and/or devices capable of storing at least one bit of data.

In at least one embodiment, vehicle 1000 may further include GNSS sensors 1058 (e.g., GPS and/or assisted GPS sensors) to assist in mapping, perception, occupancy grid generation, and/or path planning functions. In at least one embodiment, any number of GNSS sensors 1058 may be used, including, for example and without limitation, a GPS using a USB connector with an Ethernet-to-serial (e.g., RS-232) bridge.
In at least one embodiment, vehicle 1000 may further include RADAR sensor 1060. In at least one embodiment, RADAR sensor 1060 may be used by vehicle 1000 for long-range vehicle detection, even in darkness and/or severe weather conditions. In at least one embodiment, RADAR functional safety levels may be ASIL B. In at least one embodiment, RADAR sensor 1060 may use the CAN bus and/or bus 1002 for control and to access object-tracking data (e.g., to transmit data generated by RADAR sensor 1060), and in some examples may access an Ethernet channel to access raw data. In at least one embodiment, a wide variety of RADAR sensor types may be used. For example, and without limitation, RADAR sensor 1060 may be suitable for front, rear, and side RADAR use. In at least one embodiment, one or more of RADAR sensors 1060 are pulsed Doppler RADAR sensors.

In at least one embodiment, RADAR sensor 1060 may include different configurations, such as long range with a narrow field of view, short range with a wide field of view, and short-range side coverage. In at least one embodiment, long-range RADAR may be used for adaptive cruise control functionality. In at least one embodiment, long-range RADAR systems may provide a broad field of view achieved by two or more independent scans, such as within a 250 m range. In at least one embodiment, RADAR sensor 1060 may help distinguish between static and moving objects, and may be used by ADAS system 1038 for emergency brake assist and forward collision warning. In at least one embodiment, sensors 1060 included in a long-range RADAR system may include, without limitation, a monostatic multimodal RADAR with multiple (e.g., six or more) fixed RADAR antennas and high-speed CAN and FlexRay interfaces.
In at least one embodiment, with six antennas, the central four antennas may create a focused beam pattern designed to record the surroundings of vehicle 1000 at higher speeds with minimal interference from traffic in adjacent lanes. In at least one embodiment, the other two antennas may expand the field of view, making it possible to quickly detect vehicles entering or leaving the lane of vehicle 1000.

In at least one embodiment, mid-range RADAR systems may include, as an example, a range of up to 160 m (front) or 80 m (rear), and a field of view of up to 42 degrees (front) or 150 degrees (rear). In at least one embodiment, short-range RADAR systems may include, without limitation, any number of RADAR sensors 1060 designed to be installed at both ends of a rear bumper. When installed at both ends of a rear bumper, in at least one embodiment, a RADAR sensor system may create two beams that constantly monitor blind spots to the rear and next to the vehicle. In at least one embodiment, short-range RADAR systems may be used in ADAS system 1038 for blind spot detection and/or lane change assist.

In at least one embodiment, vehicle 1000 may further include ultrasonic sensor 1062. In at least one embodiment, ultrasonic sensors 1062, which may be positioned at front, rear, and/or side locations of vehicle 1000, may be used for parking assistance and/or to create and update an occupancy grid. In at least one embodiment, a wide variety of ultrasonic sensors 1062 may be used, and different ultrasonic sensors 1062 may be used for different detection ranges (e.g., 2.5 m, 4 m). In at least one embodiment, ultrasonic sensor 1062 may operate at functional safety level ASIL B.

In at least one embodiment, vehicle 1000 may include LIDAR sensor 1064. In at least one embodiment, LIDAR sensor 1064 may be used for object and pedestrian detection, emergency braking, collision avoidance, and/or other functions.
In at least one embodiment, LIDAR sensor 1064 may operate at functional safety level ASIL B. In at least one embodiment, vehicle 1000 may include multiple LIDAR sensors 1064 (e.g., two, four, six, etc.), which may use an Ethernet channel to provide data (e.g., to a Gigabit Ethernet switch).

In at least one embodiment, LIDAR sensor 1064 may be capable of providing a list of objects and their distances for a 360-degree field of view. In at least one embodiment, a commercially available LIDAR sensor 1064 may, for example, have an advertised range of approximately 100 m, with an accuracy of 2 cm to 3 cm, and support a 100 Mbps Ethernet connection. In at least one embodiment, one or more non-protruding LIDAR sensors may be used. In such examples, LIDAR sensors 1064 may include small devices that may be embedded into front, rear, side, and/or corner locations of vehicle 1000. In at least one embodiment, LIDAR sensor 1064, in such an embodiment, may provide up to a 120-degree horizontal and 35-degree vertical field of view, with a 200 m range even for low-reflectivity objects. In at least one embodiment, front-mounted LIDAR sensor 1064 may be configured for a horizontal field of view between 45 degrees and 135 degrees.

In at least one embodiment, LIDAR technologies, such as 3D flash LIDAR, may also be used. In at least one embodiment, 3D flash LIDAR uses a flash of a laser as a transmission source to illuminate the surroundings of vehicle 1000 up to approximately 200 m. In at least one embodiment, a flash LIDAR unit includes, without limitation, a receptor, which records the laser pulse transit time and the reflected light on each pixel, which in turn corresponds to the range from vehicle 1000 to an object. In at least one embodiment, flash LIDAR may allow highly accurate and distortion-free images of the surroundings to be generated with every flash of the laser. In at least one embodiment, four flash LIDAR sensors may be deployed, one on each side of vehicle 1000.
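The transit-time-to-range relationship described above for flash LIDAR is a straightforward time-of-flight computation: the laser pulse travels to the object and back, so the one-way range is half the round trip. A minimal sketch (an illustration of the physics, not any sensor's firmware):

```python
SPEED_OF_LIGHT_MS = 299_792_458.0  # m/s

def range_from_transit_time(transit_time_s: float) -> float:
    """Distance (m) to a reflecting object from a laser pulse's
    round-trip transit time; the factor of 2 accounts for the
    out-and-back path."""
    return SPEED_OF_LIGHT_MS * transit_time_s / 2.0
```

At the roughly 200 m maximum illumination range mentioned above, the round-trip transit time is only about 1.33 microseconds, which is why the per-pixel timing must be so fast.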
In at least one embodiment, 3D flash LIDAR systems include, without limitation, a solid-state 3D staring-array LIDAR camera with no moving parts other than a fan (e.g., a non-scanning LIDAR device). In at least one embodiment, flash LIDAR devices may use a 5-nanosecond class I (eye-safe) laser pulse per frame and may capture reflected laser light as a 3D range point cloud and co-registered intensity data.

In at least one embodiment, vehicle 1000 may further include IMU sensor 1066. In at least one embodiment, IMU sensor 1066 may be located at the center of a rear axle of vehicle 1000. In at least one embodiment, IMU sensor 1066 may include, for example and without limitation, accelerometers, magnetometers, gyroscopes, a magnetic compass, magnetic compasses, and/or other sensor types. In at least one embodiment, such as in six-axis applications, IMU sensor 1066 may include, without limitation, accelerometers and gyroscopes. In at least one embodiment, such as in nine-axis applications, IMU sensor 1066 may include, without limitation, accelerometers, gyroscopes, and magnetometers.

In at least one embodiment, IMU sensor 1066 may be implemented as a miniature, high-performance GPS-Aided Inertial Navigation System ("GPS/INS") that combines micro-electro-mechanical systems ("MEMS") inertial sensors, a high-sensitivity GPS receiver, and advanced Kalman filtering algorithms to provide estimates of position, velocity, and attitude. In at least one embodiment, IMU sensor 1066 may enable vehicle 1000 to estimate its heading without requiring input from a magnetic sensor, by directly observing and correlating changes in velocity from GPS to IMU sensor 1066. In at least one embodiment, IMU sensor 1066 and GNSS sensor 1058 may be combined in a single integrated unit.

In at least one embodiment, vehicle 1000 may include microphones 1096 placed in and/or around vehicle 1000.
In at least one embodiment, microphones 1096 may be used for emergency vehicle detection and identification, among other things.

In at least one embodiment, vehicle 1000 may further include any number of camera types, including stereo cameras 1068, wide-angle cameras 1070, infrared cameras 1072, ambient cameras 1074, long-range cameras 1098, mid-range cameras 1076, and/or other camera types. In at least one embodiment, cameras may be used to capture image data around an entire periphery of vehicle 1000. In at least one embodiment, the types of cameras used may depend on vehicle 1000. In at least one embodiment, any combination of camera types may be used to provide necessary coverage around vehicle 1000. In at least one embodiment, the number of cameras deployed may differ depending on the embodiment. For example, in at least one embodiment, vehicle 1000 may include six cameras, seven cameras, ten cameras, twelve cameras, or another number of cameras. In at least one embodiment, the cameras may support, by way of example and not limitation, Gigabit Multimedia Serial Link ("GMSL") and/or Gigabit Ethernet communications. In at least one embodiment, each camera may be as described in further detail herein above with respect to FIGS. 10A and 10B.

In at least one embodiment, vehicle 1000 may further include vibration sensor 1042. In at least one embodiment, vibration sensor 1042 may measure vibrations of components of vehicle 1000, such as an axle. For example, in at least one embodiment, changes in vibrations may indicate a change in road surfaces. In at least one embodiment, when two or more vibration sensors 1042 are used, the differences between the vibrations may be used to determine friction or slippage of the road surface (e.g., when the difference in vibration is between a power-driven axle and a freely rotating axle).

In at least one embodiment, vehicle 1000 may include ADAS system 1038. In at least one embodiment, ADAS system 1038 may include, without limitation, an SoC in some examples.
In at least one embodiment, ADAS system 1038 may include, without limitation, any number and combination of autonomous/adaptive/automatic cruise control ("ACC") systems, cooperative adaptive cruise control ("CACC") systems, forward crash warning ("FCW") systems, automatic emergency braking ("AEB") systems, lane departure warning ("LDW") systems, lane keep assist ("LKA") systems, blind spot warning ("BSW") systems, rear cross-traffic warning ("RCTW") systems, collision warning ("CW") systems, lane centering ("LC") systems, and/or other systems, features, and/or functionality.

In at least one embodiment, the ACC system may use RADAR sensors 1060, LIDAR sensors 1064, and/or any number of cameras. In at least one embodiment, the ACC system may include a longitudinal ACC system and/or a lateral ACC system. In at least one embodiment, a longitudinal ACC system monitors and controls the distance to another vehicle immediately ahead of vehicle 1000 and automatically adjusts the speed of vehicle 1000 to maintain a safe distance from vehicles ahead. In at least one embodiment, a lateral ACC system performs distance keeping and advises vehicle 1000 to change lanes when necessary. In at least one embodiment, lateral ACC is related to other ADAS applications, such as LC and CW.

In at least one embodiment, a CACC system uses information from other vehicles, which may be received via network interface 1024 and/or wireless antenna 1026 from other vehicles over a wireless link, or indirectly over a network connection (e.g., over the Internet). In at least one embodiment, direct links may be provided by a vehicle-to-vehicle ("V2V") communication link, while indirect links may be provided by an infrastructure-to-vehicle ("I2V") communication link.
In general, V2V communication provides information about immediately preceding vehicles (e.g., vehicles immediately ahead of and in the same lane as vehicle 1000), while I2V communication provides information about traffic further ahead. In at least one embodiment, a CACC system may include either or both I2V and V2V information sources. In at least one embodiment, given information about vehicles ahead of vehicle 1000, a CACC system may be more reliable, with the potential to improve the smoothness of traffic flow and reduce congestion on the road.

In at least one embodiment, an FCW system is designed to alert a driver to a hazard so that the driver may take corrective action. In at least one embodiment, an FCW system uses a front-facing camera and/or RADAR sensor 1060 coupled to a dedicated processor, DSP, FPGA, and/or ASIC that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component. In at least one embodiment, an FCW system may provide a warning, such as in the form of a sound, a visual warning, a vibration, and/or a quick brake pulse.

In at least one embodiment, an AEB system detects an impending forward collision with another vehicle or other object and may automatically apply the brakes if the driver does not take corrective action within specified time or distance parameters. In at least one embodiment, an AEB system may use a front-facing camera and/or RADAR sensor 1060 coupled to a dedicated processor, DSP, FPGA, and/or ASIC. In at least one embodiment, when an AEB system detects a hazard, it typically first alerts the driver to take corrective action to avoid the collision and, if that driver does not take corrective action, the AEB system may automatically apply the brakes in an effort to prevent, or at least mitigate, the impact of the predicted collision.
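The escalation logic just described for an AEB system, warn first, then brake automatically if the driver does not react within the time parameter, is commonly framed in terms of time-to-collision (distance divided by closing speed). The sketch below is a hedged illustration of that decision structure; the threshold values and names are assumptions, not parameters of any production system.

```python
def aeb_decision(distance_m: float, closing_speed_ms: float,
                 warn_ttc_s: float = 2.5, brake_ttc_s: float = 1.2) -> str:
    """Pick an AEB action from time-to-collision (TTC).

    distance_m:        range to the object ahead
    closing_speed_ms:  rate at which that range is shrinking (m/s)
    Thresholds are illustrative: warn when TTC drops below warn_ttc_s,
    brake automatically when it drops below brake_ttc_s.
    """
    if closing_speed_ms <= 0:
        return "no_action"          # not closing on the object
    ttc = distance_m / closing_speed_ms
    if ttc <= brake_ttc_s:
        return "automatic_braking"
    if ttc <= warn_ttc_s:
        return "warn_driver"
    return "no_action"
```

At 10 m/s of closing speed, an object 50 m ahead requires no action, one 20 m ahead triggers the warning stage, and one 10 m ahead triggers automatic braking.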
In at least one embodiment, an AEB system may include techniques such as dynamic brake support and/or pre-collision braking.

In at least one embodiment, an LDW system provides visual, audible, and/or tactile warnings, such as steering wheel or seat vibrations, to alert the driver when vehicle 1000 crosses lane markings. In at least one embodiment, an LDW system does not activate when the driver indicates an intentional lane departure, such as by activating a turn signal. In at least one embodiment, an LDW system may use a front-facing camera coupled to a dedicated processor, DSP, FPGA, and/or ASIC that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component. In at least one embodiment, an LKA system is a variant of an LDW system. In at least one embodiment, an LKA system provides steering input or braking to correct vehicle 1000 if vehicle 1000 begins to leave its lane.

In at least one embodiment, a BSW system detects and warns a driver of vehicles in a blind spot of the vehicle. In at least one embodiment, a BSW system may provide a visual, audible, and/or tactile alert to indicate that merging or changing lanes is unsafe. In at least one embodiment, a BSW system may provide an additional warning when the driver uses a turn signal. In at least one embodiment, a BSW system may use a rear-facing camera and/or RADAR sensor 1060 coupled to a dedicated processor, DSP, FPGA, and/or ASIC that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component.

In at least one embodiment, an RCTW system may provide visual, audible, and/or tactile notification when an object is detected outside the rear-camera range while vehicle 1000 is backing up. In at least one embodiment, an RCTW system may include an AEB system to ensure that the vehicle brakes are applied to avoid a crash.
In at least one embodiment, an RCTW system may use one or more rear-facing RADAR sensors 1060 coupled to a dedicated processor, DSP, FPGA, and/or ASIC that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component.

In at least one embodiment, conventional ADAS systems may be prone to false-positive results, which may be annoying and distracting to a driver but are typically not catastrophic, because conventional ADAS systems alert the driver and allow the driver to decide whether a safety condition truly exists and act accordingly. In at least one embodiment, in the case of conflicting results, vehicle 1000 itself decides whether to heed the result from a primary computer (e.g., a first controller of controllers 1036) or a secondary computer (e.g., a second controller of controllers 1036). For example, in at least one embodiment, ADAS system 1038 may be a backup and/or secondary computer that provides perception information to a backup-computer rationality module. In at least one embodiment, a backup-computer rationality monitor may run redundant, diverse software on hardware components to detect faults in perception and dynamic driving tasks. In at least one embodiment, outputs from ADAS system 1038 may be provided to a supervisory MCU. In at least one embodiment, if outputs from the primary computer and outputs from the secondary computer conflict, the supervisory MCU determines how to reconcile the conflict to ensure safe operation.

In at least one embodiment, the primary computer may be configured to provide the supervisory MCU with a confidence score indicating the primary computer's confidence in the chosen result.
In at least one embodiment, if the confidence score exceeds a threshold, the supervisory MCU may follow the primary computer's direction, regardless of whether the secondary computer provides a conflicting or inconsistent result. In at least one embodiment, when the confidence score does not meet the threshold, and when the primary and secondary computers indicate different results (e.g., a conflict), the supervisory MCU may arbitrate between the computers to determine the appropriate outcome.

In at least one embodiment, the supervisory MCU may be configured to run a neural network that is trained and configured to determine, based at least in part on outputs from the primary computer and outputs from the secondary computer, the conditions under which the secondary computer provides false alarms. In at least one embodiment, the neural network in the supervisory MCU may learn when the secondary computer's output may be trusted and when it cannot. For example, in at least one embodiment, when the secondary computer is a RADAR-based FCW system, the neural network in the supervisory MCU may learn when the FCW system is identifying metal objects, such as a drainage grate or manhole cover, that trigger an alarm but are not in fact hazards. In at least one embodiment, when the secondary computer is a camera-based LDW system, the neural network in the supervisory MCU may learn when to override the LDW system. In at least one embodiment, the supervisory MCU may include at least one of a DLA or a GPU suitable for running neural networks with associated memory. In at least one embodiment, the supervisory MCU may comprise and/or be included as a component of SoC 1004.

In at least one embodiment, ADAS system 1038 may include a secondary computer that performs ADAS functionality using traditional rules of computer vision.
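The confidence-score arbitration described above, in which the supervisory MCU follows the primary computer when its confidence clears a threshold and otherwise arbitrates between conflicting results, can be sketched as below. The threshold value, the result labels, and the fallback policy (preferring the more conservative braking action on a low-confidence conflict) are all illustrative assumptions, not the supervisory MCU's actual logic.

```python
def supervisory_arbitrate(primary_result: str, secondary_result: str,
                          primary_confidence: float,
                          threshold: float = 0.9) -> str:
    """Sketch of the supervisory MCU's reconciliation step.

    Follow the primary computer when its confidence score meets the
    threshold (or when both computers agree); otherwise arbitrate.
    """
    if primary_confidence >= threshold or primary_result == secondary_result:
        return primary_result
    # low-confidence conflict: placeholder arbitration policy that
    # prefers the more conservative (braking) action if offered
    return secondary_result if secondary_result == "brake" else primary_result
```

With a high confidence score the primary computer's result stands even against a conflicting secondary result; below the threshold, a conflicting "brake" from the secondary computer wins under this assumed policy.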
In at least one embodiment, the secondary computer may use classic computer vision rules (if-then), and the presence of a neural network in the supervisory MCU may improve reliability, safety, and performance. For example, in at least one embodiment, diverse implementation and intentional non-identity make an overall system more fault-tolerant, particularly with respect to faults caused by software (or software-hardware interface) functionality. For example, in at least one embodiment, if there is a software bug or error in the software running on the primary computer, and non-identical software code running on the secondary computer provides a consistent overall result, the supervisory MCU may have greater confidence that the overall result is correct and that the bug in the software or hardware on the primary computer is not causing a material error.

In at least one embodiment, outputs of ADAS system 1038 may be fed into the primary computer's perception block and/or the primary computer's dynamic driving task block. For example, in at least one embodiment, if ADAS system 1038 indicates a forward crash warning due to an object immediately ahead, the perception block may use this information when identifying objects. In at least one embodiment, the secondary computer may have its own neural network that is trained as described herein, thus reducing a risk of false positives.

In at least one embodiment, vehicle 1000 may further include infotainment SoC 1030 (e.g., an in-vehicle infotainment system (IVI)). Although infotainment SoC 1030 is illustrated and described as an SoC, in at least one embodiment it may not be an SoC and may include, without limitation, two or more discrete components.
In at least one embodiment, infotainment SoC 1030 may include, without limitation, a combination of hardware and software that may be used to provide audio (e.g., music, a personal digital assistant, navigational instructions, news, radio, etc.), video (e.g., TV, movies, streaming, etc.), telephony (e.g., hands-free calling), network connectivity (e.g., LTE, Wi-Fi, etc.), and/or information services (e.g., navigation systems, rear-parking assistance, wireless data systems, vehicle-related information such as fuel level, total mileage, brake fluid level, oil level, door open/close, air filter information, etc.) to vehicle 1000. For example, infotainment SoC 1030 may include, without limitation, radios, disk players, navigation systems, video players, USB and Bluetooth connectivity, carputers, in-car entertainment, Wi-Fi, steering-wheel audio controls, hands-free voice control, a heads-up display ("HUD"), HMI display 1034, a telematics device, a control panel (e.g., for controlling and/or interacting with various components, features, and/or systems), and/or other components. In at least one embodiment, infotainment SoC 1030 may further be used to provide information (e.g., visual and/or audible) to the user(s) of vehicle 1000, such as information from ADAS system 1038, autonomous driving information such as planned vehicle maneuvers and trajectories, surrounding environment information (e.g., intersection information, vehicle information, road information, etc.), and/or other information.

In at least one embodiment, infotainment SoC 1030 may include any amount and type of GPU functionality. In at least one embodiment, infotainment SoC 1030 may communicate over bus 1002 with other devices, systems, and/or components of vehicle 1000.
In at least one embodiment, infotainment SoC 1030 may be coupled to the supervisory MCU such that a GPU of the infotainment system may perform some self-driving functions in the event that the primary controller 1036 (e.g., the primary and/or backup computers of vehicle 1000) fails. In at least one embodiment, infotainment SoC 1030 may place vehicle 1000 into a driver-safe stop mode, as described herein.

In at least one embodiment, vehicle 1000 may further include instrument cluster 1032 (e.g., a digital dash, an electronic instrument cluster, a digital instrument panel, etc.). In at least one embodiment, instrument cluster 1032 may include, without limitation, a controller and/or supercomputer (e.g., a discrete controller or supercomputer). In at least one embodiment, instrument cluster 1032 may include, without limitation, any number and combination of a set of instrumentation such as a speedometer, fuel level, oil pressure, tachometer, odometer, turn signals, gearshift position indicator, seat belt warning light(s), parking-brake warning light(s), engine-malfunction light(s), supplemental restraint system (e.g., airbag) information, lighting controls, safety system controls, navigation information, etc. In some examples, information may be displayed and/or shared between infotainment SoC 1030 and instrument cluster 1032. In at least one embodiment, instrument cluster 1032 may be included as part of infotainment SoC 1030, or vice versa.

Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used, based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein, in the system of FIG.
10C for inferencing or prediction operations.

In at least one embodiment, at least one component shown or described with respect to FIG. 10C is used to implement the techniques and/or functions described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a computer program that is a representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inferencing operation using a computer program that is a representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6.

FIG. 10D is a diagram of a system for communication between cloud-based server(s) and autonomous vehicle 1000 of FIG. 10A, according to at least one embodiment. In at least one embodiment, the system may include, without limitation, server 1078, network 1090, and any number and type of vehicles, including vehicle 1000. In at least one embodiment, server 1078 may include, without limitation, a plurality of GPUs 1084(A)-1084(H) (collectively referred to herein as GPUs 1084), PCIe switches 1082(A)-1082(D) (collectively referred to herein as PCIe switches 1082), and/or CPUs 1080(A)-1080(B) (collectively referred to herein as CPUs 1080). In at least one embodiment, GPUs 1084, CPUs 1080, and PCIe switches 1082 may be interconnected with high-speed interconnects such as, for example and without limitation, NVLink interface 1088 developed by NVIDIA, and/or PCIe connections 1086.
In at least one embodiment, GPUs 1084 are connected via NVLink and/or an NVSwitch SoC, and GPUs 1084 and PCIe switches 1082 are connected via PCIe interconnects. Although eight GPUs 1084, two CPUs 1080, and four PCIe switches 1082 are illustrated, this is not intended to be limiting. In at least one embodiment, each of server(s) 1078 may include, without limitation, any number of GPUs 1084, CPUs 1080, and/or PCIe switches 1082, in any combination. For example, in at least one embodiment, server(s) 1078 may each include eight, sixteen, thirty-two, and/or more GPUs 1084.

In at least one embodiment, server 1078 may receive, over network 1090 and from vehicles, image data representative of images showing unexpected or changed road conditions, such as recently commenced roadwork. In at least one embodiment, server 1078 may transmit, over network 1090 and to the vehicles, neural networks 1092, updated or otherwise, and/or map information 1094, including, without limitation, information regarding traffic and road conditions. In at least one embodiment, updates to map information 1094 may include, without limitation, updates to HD map 1022, such as information regarding construction sites, potholes, detours, flooding, and/or other obstructions. In at least one embodiment, neural networks 1092 and/or map information 1094 may have resulted from new training and/or experiences represented in data received from any number of vehicles in the environment, and/or may be based, at least in part, on training performed at a data center (e.g., using server 1078 and/or other servers).

In at least one embodiment, server 1078 may be used to train machine learning models (e.g., neural networks) based, at least in part, on training data. In at least one embodiment, training data may be generated by vehicles and/or may be generated in a simulation (e.g., using a game engine).
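A toy sketch of the server-side map-update flow just described follows. All field and function names are hypothetical; a real system would involve trained neural networks and HD-map tooling rather than this simple filter.

```python
def collect_map_updates(vehicle_reports):
    """Given vehicle reports of unexpected or changed road conditions,
    accumulate map-update entries to broadcast back over the network."""
    updates = []
    for report in vehicle_reports:
        condition = report.get("changed_condition")  # e.g., roadwork, pothole, detour
        if condition is not None:
            updates.append({"location": report["location"], "condition": condition})
    return updates


# Hypothetical reports received from vehicles over the network
reports = [
    {"location": (52.1, 11.6), "changed_condition": "roadwork"},
    {"location": (52.2, 11.7), "changed_condition": None},       # nothing changed
    {"location": (52.3, 11.8), "changed_condition": "pothole"},
]
print(collect_map_updates(reports))  # two update entries to broadcast
```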
In at least one embodiment, any amount of training data is tagged (e.g., where an associated neural network benefits from supervised learning) and/or undergoes other pre-processing. In at least one embodiment, any amount of training data is not tagged and/or pre-processed (e.g., where an associated neural network does not require supervised learning). In at least one embodiment, once machine learning models are trained, the machine learning models may be used by vehicles (e.g., transmitted to the vehicles over network 1090), and/or the machine learning models may be used by server 1078 to remotely monitor the vehicles.

In at least one embodiment, server 1078 may receive data from vehicles and apply the data to up-to-date real-time neural networks for real-time intelligent inferencing. In at least one embodiment, server 1078 may include deep-learning supercomputers and/or dedicated AI computers powered by GPUs 1084, such as DGX and DGX Station machines developed by NVIDIA. However, in at least one embodiment, server 1078 may include a deep-learning infrastructure that uses CPU-powered data centers.

In at least one embodiment, the deep-learning infrastructure of server 1078 may be capable of fast, real-time inferencing, and may use that capability to evaluate and verify the health of processors, software, and/or associated hardware in vehicle 1000. For example, in at least one embodiment, the deep-learning infrastructure may receive periodic updates from vehicle 1000, such as a sequence of images and/or objects that vehicle 1000 has located in that sequence of images (e.g., via computer vision and/or other machine-learning object classification techniques).
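The tagged/untagged split described above can be loosely illustrated as follows. The sample structure and the "label" field name are hypothetical, chosen only to show labeled data routed to supervised training and unlabeled data to unsupervised or self-supervised training.

```python
def partition_training_data(samples):
    """Split raw samples into a labeled pool (for supervised learning) and
    an unlabeled pool (for training that does not require supervision)."""
    labeled = [s for s in samples if s.get("label") is not None]
    unlabeled = [s for s in samples if s.get("label") is None]
    return labeled, unlabeled


data = [
    {"image": "frame_001", "label": "pedestrian"},
    {"image": "frame_002", "label": None},        # not tagged or pre-processed
    {"image": "frame_003", "label": "vehicle"},
]
labeled, unlabeled = partition_training_data(data)
print(len(labeled), len(unlabeled))  # 2 1
```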
In at least one embodiment, the deep-learning infrastructure may run its own neural network to identify objects and compare them with the objects identified by vehicle 1000; if the results do not match and the deep-learning infrastructure concludes that the AI in vehicle 1000 is malfunctioning, server 1078 may transmit a signal to vehicle 1000 instructing a fail-safe computer of vehicle 1000 to assume control, notify the occupants, and complete a safe parking maneuver.

In at least one embodiment, server 1078 may include GPUs 1084 and one or more programmable inference accelerators (e.g., NVIDIA's TensorRT3 devices). In at least one embodiment, a combination of GPU-powered servers and inference acceleration may make real-time responsiveness possible. In at least one embodiment, such as where performance is less critical, servers powered by CPUs, FPGAs, and other processors may be used for inferencing. In at least one embodiment, hardware structure(s) 715 are used to perform one or more embodiments. Details regarding hardware structure(s) 715 are provided herein in conjunction with FIGS. 7A and/or 7B.

Computer System

FIG. 11 is a block diagram illustrating an exemplary computer system, which may be a system with interconnected devices and components, a system-on-a-chip (SoC), or some combination thereof, formed with a processor that may include execution units to execute an instruction, according to at least one embodiment. In at least one embodiment, computer system 1100 may include, without limitation, a component, such as processor 1102, to employ execution units including logic to perform algorithms for processing data, in accordance with the present disclosure, such as in the embodiments described herein. In at least one embodiment, computer system 1100 may include processors available from Intel Corporation of Santa Clara, Calif., such as the PENTIUM® processor family, Xeon™, Itanium®, XScale™ and/or StrongARM™, Intel®
Core™, or Intel® Nervana™ microprocessors, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes, and the like) may also be used. In at least one embodiment, computer system 1100 may execute a version of the WINDOWS® operating system available from Microsoft Corporation of Redmond, Wash., although other operating systems (e.g., UNIX and Linux), embedded software, and/or graphical user interfaces may also be used.

Embodiments may be used in other devices, such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants ("PDAs"), and handheld PCs. In at least one embodiment, embedded applications may include a microcontroller, a digital signal processor ("DSP"), a system on a chip, a network computer ("NetPC"), a set-top box, a network hub, a wide area network ("WAN") switch, or any other system that may perform one or more instructions in accordance with at least one embodiment.

In at least one embodiment, computer system 1100 may include, without limitation, processor 1102, which may include, without limitation, one or more execution units 1108 to perform machine learning model training and/or inferencing according to techniques described herein. In at least one embodiment, computer system 1100 is a single-processor desktop or server system, but in another embodiment, computer system 1100 may be a multiprocessor system. In at least one embodiment, processor 1102 may include, without limitation, a complex instruction set computer ("CISC") microprocessor, a reduced instruction set computing ("RISC") microprocessor, a very long instruction word ("VLIW") microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor.
In at least one embodiment, processor 1102 may be coupled to a processor bus 1110 that may transmit data signals between processor 1102 and other components in computer system 1100.

In at least one embodiment, processor 1102 may include, without limitation, a Level 1 ("L1") internal cache memory ("cache") 1104. In at least one embodiment, processor 1102 may have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory may reside external to processor 1102. Other embodiments may also include a combination of both internal and external caches, depending on the particular implementation and needs. In at least one embodiment, a register file 1106 may store different types of data in various registers including, without limitation, integer registers, floating point registers, status registers, and an instruction pointer register.

In at least one embodiment, execution unit 1108, including, without limitation, logic to perform integer and floating point operations, also resides in processor 1102. In at least one embodiment, processor 1102 may also include microcode ("ucode") read-only memory ("ROM") that stores microcode for certain macro-instructions. In at least one embodiment, execution unit 1108 may include logic to handle a packed instruction set 1109. In at least one embodiment, by including packed instruction set 1109 in the instruction set of a general-purpose processor, along with associated circuitry to execute the instructions, operations used by many multimedia applications may be performed using packed data in processor 1102.
In at least one embodiment, many multimedia applications may be accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data, which may eliminate the need to transfer smaller units of data across that processor's data bus to perform one or more operations one data element at a time.

In at least one embodiment, execution unit 1108 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. In at least one embodiment, computer system 1100 may include, without limitation, a memory 1120. In at least one embodiment, memory 1120 may be a dynamic random access memory ("DRAM") device, a static random access memory ("SRAM") device, a flash memory device, or another memory device. In at least one embodiment, memory 1120 may store instruction(s) 1119 and/or data 1121 represented by data signals that may be executed by processor 1102.

In at least one embodiment, a system logic chip may be coupled to processor bus 1110 and memory 1120. In at least one embodiment, the system logic chip may include, without limitation, a memory controller hub ("MCH") 1116, and processor 1102 may communicate with MCH 1116 via processor bus 1110. In at least one embodiment, MCH 1116 may provide a high-bandwidth memory path 1118 to memory 1120 for instruction and data storage and for storage of graphics commands, data, and textures. In at least one embodiment, MCH 1116 may direct data signals between processor 1102, memory 1120, and other components in computer system 1100 and may bridge data signals between processor bus 1110, memory 1120, and system I/O interface 1122. In at least one embodiment, the system logic chip may provide a graphics port for coupling to a graphics controller.
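The packed-data idea discussed above can be sketched roughly as follows. This is only an analogy in software: real packed instructions (e.g., SIMD extensions) perform the lane-wise operation in hardware in a single instruction, and the lane width and helper names here are arbitrary.

```python
LANES, WIDTH = 4, 16          # four 16-bit lanes in one 64-bit word (illustrative)
MASK = (1 << WIDTH) - 1


def pack(values):
    """Pack four 16-bit values into a single 64-bit word."""
    word = 0
    for i, v in enumerate(values):
        word |= (v & MASK) << (i * WIDTH)
    return word


def unpack(word):
    return [(word >> (i * WIDTH)) & MASK for i in range(LANES)]


def packed_add(a, b):
    """Lane-wise add of two packed words (wrap-around within each lane),
    mimicking one SIMD instruction operating on packed data."""
    return pack([(x + y) & MASK for x, y in zip(unpack(a), unpack(b))])


a = pack([1, 2, 3, 4])
b = pack([10, 20, 30, 40])
print(unpack(packed_add(a, b)))  # all four lane sums from one "instruction"
```

Note that the per-lane mask prevents a carry in one lane from spilling into the next, which is exactly the property that lets one full-width operation stand in for four narrow ones.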
In at least one embodiment, MCH 1116 may be coupled to memory 1120 through high-bandwidth memory path 1118, and graphics/video card 1112 may be coupled to MCH 1116 through an Accelerated Graphics Port ("AGP") interconnect 1114.

In at least one embodiment, computer system 1100 may use system I/O interface 1122 as a proprietary hub interface bus to couple MCH 1116 to an I/O controller hub ("ICH") 1130. In at least one embodiment, ICH 1130 may provide direct connections to some I/O devices via a local I/O bus. In at least one embodiment, a local I/O bus may include, without limitation, a high-speed I/O bus for connecting peripherals to memory 1120, the chipset, and processor 1102. Examples may include, without limitation, an audio controller 1129, a firmware hub ("flash BIOS") 1128, a wireless transceiver 1126, a data storage 1124, a legacy I/O controller 1123 containing user input and keyboard interfaces 1125, a serial expansion port 1127, such as a Universal Serial Bus ("USB") port, and a network controller 1134. In at least one embodiment, data storage 1124 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or another mass storage device.

In at least one embodiment, FIG. 11 illustrates a system that includes interconnected hardware devices or "chips," whereas in other embodiments, FIG. 11 may illustrate an exemplary SoC. In at least one embodiment, the devices illustrated in FIG. 11 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe), or some combination thereof. In at least one embodiment, one or more components of computer system 1100 are interconnected using compute express link (CXL) interconnects.

Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 11 for inferencing or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 11 is used to implement the techniques and/or functions described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a computer program that is a representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inferencing operation using a computer program that is a representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, processor 1102 and/or other components of computer system 1100 of FIG. 11 are utilized to implement the techniques and/or functions described with respect to FIGS. 1-6.

FIG. 12 is a block diagram illustrating an electronic device 1200 for utilizing processor 1210, according to at least one embodiment.
In at least one embodiment, electronic device 1200 may be, for example and without limitation, a notebook, a tower server, a rack server, a blade server, a laptop, a desktop, a tablet, a mobile device, a phone, an embedded computer, or any other suitable electronic device.

In at least one embodiment, electronic device 1200 may include, without limitation, processor 1210 communicatively coupled to any suitable number or kind of components, peripherals, modules, or devices. In at least one embodiment, processor 1210 is coupled using a bus or interface, such as an I2C bus, a System Management Bus ("SMBus"), a Low Pin Count ("LPC") bus, a Serial Peripheral Interface ("SPI"), a High Definition Audio ("HDA") bus, a Serial Advance Technology Attachment ("SATA") bus, a Universal Serial Bus ("USB") (versions 1, 2, 3, etc.), or a Universal Asynchronous Receiver/Transmitter ("UART") bus. In at least one embodiment, FIG. 12 illustrates a system that includes interconnected hardware devices or "chips," whereas in other embodiments, FIG. 12 may illustrate an exemplary SoC. In at least one embodiment, the devices illustrated in FIG. 12 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe), or some combination thereof. In at least one embodiment, one or more components of FIG. 12 are interconnected using compute express link (CXL) interconnects.

In at least one embodiment, FIG.
12 may include a display 1224, a touch screen 1225, a touch pad 1230, a Near Field Communications unit ("NFC") 1245, a sensor hub 1240, a thermal sensor 1246, an Express Chipset ("EC") 1235, a Trusted Platform Module ("TPM") 1238, BIOS/firmware/flash memory ("BIOS, FW Flash") 1222, a DSP 1260, a drive 1220 such as a Solid State Disk ("SSD") or a Hard Disk Drive ("HDD"), a wireless local area network unit ("WLAN") 1250, a Bluetooth unit 1252, a Wireless Wide Area Network unit ("WWAN") 1256, a Global Positioning System ("GPS") unit 1255, a camera ("USB 3.0 camera") 1254 such as a USB 3.0 camera, and/or a Low Power Double Data Rate ("LPDDR") memory unit ("LPDDR3") 1215 implemented in, for example, the LPDDR3 standard. These components may each be implemented in any suitable manner.

In at least one embodiment, other components may be communicatively coupled to processor 1210 through the components described above. In at least one embodiment, an accelerometer 1241, an ambient light sensor ("ALS") 1242, a compass 1243, and a gyroscope 1244 may be communicatively coupled to sensor hub 1240. In at least one embodiment, a thermal sensor 1239, a fan 1237, a keyboard 1236, and touch pad 1230 may be communicatively coupled to EC 1235. In at least one embodiment, speakers 1263, headphones 1264, and a microphone ("mic") 1265 may be communicatively coupled to an audio unit ("audio codec and class D amp") 1262, which may in turn be communicatively coupled to DSP 1260. In at least one embodiment, audio unit 1262 may include, for example and without limitation, an audio coder/decoder ("codec") and a class D amplifier. In at least one embodiment, a SIM card ("SIM") 1257 may be communicatively coupled to WWAN unit 1256.
In at least one embodiment, components such as WLAN unit 1250 and Bluetooth unit 1252, as well as WWAN unit 1256, may be implemented in a Next Generation Form Factor ("NGFF").

Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 12 for inferencing or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 12 is used to implement the techniques and/or functions described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a computer program that is a representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inferencing operation using a computer program that is a representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, system 1200 and/or processor 1210 of FIG. 12 are utilized to implement the techniques and/or functions described with respect to FIGS. 1-6.

FIG. 13 illustrates a computer system 1300, according to at least one embodiment.
In at least one embodiment, computer system 1300 is configured to implement the various processes and methods described throughout this disclosure.

In at least one embodiment, computer system 1300 includes, without limitation, at least one central processing unit ("CPU") 1302 connected to a communication bus 1310 implemented using any suitable protocol, such as PCI ("Peripheral Component Interconnect"), peripheral component interconnect express ("PCI-Express"), AGP ("Accelerated Graphics Port"), HyperTransport, or any other bus or point-to-point communication protocol(s). In at least one embodiment, computer system 1300 includes, without limitation, a main memory 1304 and control logic (e.g., implemented as hardware, software, or a combination thereof), and data are stored in main memory 1304, which may take the form of random access memory ("RAM"). In at least one embodiment, a network interface subsystem ("network interface") 1322 provides an interface to other computing devices and networks for receiving data from, and transmitting data to, other systems with computer system 1300.

In at least one embodiment, computer system 1300 includes, without limitation, input devices 1308, a parallel processing system 1312, and display devices 1306, which may be implemented using a conventional cathode ray tube ("CRT"), a liquid crystal display ("LCD"), a light-emitting diode ("LED") display, a plasma display, or other suitable display technologies. In at least one embodiment, user input is received from input devices 1308 such as a keyboard, mouse, touch pad, or microphone. In at least one embodiment, each module described herein may be situated on a single semiconductor platform to form a processing system.

Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments.
Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 13 for inferencing or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 13 is used to implement the techniques and/or functions described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a computer program that is a representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inferencing operation using a computer program that is a representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, computer system 1300 and/or at least one PPU 1314 of FIG. 13 are utilized to implement the techniques and/or functions described with respect to FIGS. 1-6.

FIG. 14 illustrates a computer system 1400, according to at least one embodiment. In at least one embodiment, computer system 1400 includes, without limitation, a computer 1410 and a USB stick 1420. In at least one embodiment, computer 1410 may include, without limitation, any number and type of processor(s) (not shown) and a memory (not shown).
In at least one embodiment, computer 1410 includes, without limitation, a server, a cloud instance, a laptop, and a desktop computer.

In at least one embodiment, USB stick 1420 includes, without limitation, a processing unit 1430, a USB interface 1440, and USB interface logic 1450. In at least one embodiment, processing unit 1430 may be any instruction execution system, apparatus, or device capable of executing instructions. In at least one embodiment, processing unit 1430 may include, without limitation, any number and type of processing cores (not shown). In at least one embodiment, processing unit 1430 comprises an application-specific integrated circuit ("ASIC") that is optimized to perform any amount and type of operations associated with machine learning. For instance, in at least one embodiment, processing unit 1430 is a tensor processing unit ("TPC") that is optimized to perform machine learning inferencing operations. In at least one embodiment, processing unit 1430 is a vision processing unit ("VPU") that is optimized to perform machine vision and machine learning inferencing operations.

In at least one embodiment, USB interface 1440 may be any type of USB connector or USB socket. For instance, in at least one embodiment, USB interface 1440 is a USB 3.0 Type-C socket for data and power. In at least one embodiment, USB interface 1440 is a USB 3.0 Type-A connector. In at least one embodiment, USB interface logic 1450 may include any amount and type of logic that enables processing unit 1430 to interface with devices (e.g., computer 1410) via USB connector 1440.

Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 14 for inferencing or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 14 is used to implement the techniques and/or functions described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a computer program that is a representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inferencing operation using a computer program that is a representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, processing unit 1430 of FIG. 14 is utilized to implement the techniques and/or functions described with respect to FIGS. 1-6.

FIG. 15A illustrates an exemplary architecture in which a plurality of GPUs 1510(1)-1510(N) is communicatively coupled to a plurality of multi-core processors 1505(1)-1505(M) over high-speed links 1540(1)-1540(N) (e.g., buses, point-to-point interconnects, etc.). In at least one embodiment, high-speed links 1540(1)-1540(N) support a communication throughput of 4 GB/s, 30 GB/s, 80 GB/s, or higher. In at least one embodiment, various interconnect protocols may be used, including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0.
In the various figures, "N" and "M" represent positive integers, the values of which may vary from figure to figure.

Further, in at least one embodiment, two or more of GPUs 1510 are interconnected via high-speed links 1529(1)-1529(2), which may be implemented using protocols/links similar to or different from those used for high-speed links 1540(1)-1540(N). Similarly, two or more of multi-core processors 1505 may be connected via a high-speed link 1528, which may be a symmetric multiprocessor (SMP) bus operating at 20 GB/s, 30 GB/s, 120 GB/s, or higher. Alternatively, all communication between the various system components shown in FIG. 15A may be accomplished using similar protocols/links (e.g., over a common interconnection fabric).

In at least one embodiment, each multi-core processor 1505 is communicatively coupled to processor memories 1501(1)-1501(M) via memory interconnects 1526(1)-1526(M), respectively, and each GPU 1510(1)-1510(N) is communicatively coupled to GPU memories 1520(1)-1520(N) via GPU memory interconnects 1550(1)-1550(N), respectively. In at least one embodiment, memory interconnects 1526 and 1550 may utilize similar or different memory access technologies. By way of example, and not limitation, processor memories 1501(1)-1501(M) and GPU memories 1520 may be volatile memories such as dynamic random access memory (DRAM) (including stacked DRAM), graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM), and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. In at least one embodiment, some portion of processor memories 1501 may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy).
As described herein, although various multi-core processors 1505 and GPUs 1510 may be physically coupled to particular memories 1501, 1520, respectively, a unified memory architecture may be implemented in which a virtual system address space (also referred to as an "effective address space") is distributed among the various physical memories. For example, processor memories 1501(1)-1501(M) may each comprise 64 GB of system memory address space, and GPU memories 1520(1)-1520(N) may each comprise 32 GB of system memory address space, resulting in a total of 256 GB of addressable memory when M=2 and N=4. Other values for N and M are possible.

FIG. 15B shows further details of an interconnection between a multi-core processor 1507 and a graphics acceleration module 1546 in accordance with one exemplary embodiment. In at least one embodiment, graphics acceleration module 1546 may include one or more GPU chips integrated on a line card coupled to processor 1507 via high-speed link 1540 (e.g., a PCIe bus, NVLink, etc.). Alternatively, in at least one embodiment, graphics acceleration module 1546 may be integrated on a package or chip with processor 1507.

In at least one embodiment, processor 1507 includes a plurality of cores 1560A-1560D, each with a translation lookaside buffer ("TLB") 1561A-1561D and one or more caches 1562A-1562D. In at least one embodiment, cores 1560A-1560D may include various other components, not shown, for executing instructions and processing data. In at least one embodiment, caches 1562A-1562D may comprise level 1 (L1) and level 2 (L2) caches. In addition, one or more shared caches 1556 may be included in caches 1562A-1562D and shared by sets of cores 1560A-1560D. For example, one embodiment of processor 1507 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one or more L2 and L3 caches are shared by two adjacent cores.
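The example address-space arithmetic above (M=2 processor memories of 64 GB, N=4 GPU memories of 32 GB) can be checked with a small illustrative sketch. This is not part of the patent; the region names are hypothetical, and the model simply lays each physical memory out as a contiguous slice of one flat effective address space.

```python
GB = 1 << 30

def build_address_map(memories):
    """Lay out each physical memory as a contiguous slice of a single
    flat virtual/effective address space; return regions and total size."""
    regions, base = [], 0
    for name, size in memories:
        regions.append((name, base, base + size))  # half-open [start, end)
        base += size
    return regions, base

# M=2 processor memories of 64 GB each, N=4 GPU memories of 32 GB each
memories = [("proc_mem_1", 64 * GB), ("proc_mem_2", 64 * GB)] + [
    (f"gpu_mem_{i}", 32 * GB) for i in range(1, 5)
]
regions, total = build_address_map(memories)
assert total == 256 * GB  # matches the 256 GB total stated in the text
```

The half-open regions make it easy to verify that the six physical memories tile the effective address space without gaps or overlap.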
In at least one embodiment, processor 1507 and graphics acceleration module 1546 are connected to system memory 1514, which may include processor memories 1501(1)-1501(M) of FIG. 15A.

In at least one embodiment, coherence is maintained for data and instructions stored in the various caches 1562A-1562D, 1556 and system memory 1514 via inter-core communication over a coherence bus 1564. In at least one embodiment, for example, each cache may have cache coherence logic/circuitry associated with it to communicate over coherence bus 1564 in response to detected reads or writes to particular cache lines. In at least one embodiment, a cache snooping protocol is implemented over coherence bus 1564 to snoop cache accesses.

In at least one embodiment, a proxy circuit 1525 communicatively couples graphics acceleration module 1546 to coherence bus 1564, allowing graphics acceleration module 1546 to participate in a cache coherence protocol as a peer of cores 1560A-1560D. In particular, in at least one embodiment, an interface 1535 provides connectivity to proxy circuit 1525 over high-speed link 1540, and an interface 1537 connects graphics acceleration module 1546 to high-speed link 1540.

In at least one embodiment, an accelerator integration circuit 1536 provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines 1531(1)-1531(N) of graphics acceleration module 1546. In at least one embodiment, graphics processing engines 1531(1)-1531(N) may each comprise a separate graphics processing unit (GPU). In at least one embodiment, graphics processing engines 1531(1)-1531(N) alternatively may comprise different types of graphics processing engines within a GPU, such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines.
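The snooping behavior described above can be illustrated with a deliberately simplified toy model (not from the patent; class and variable names are hypothetical): a write to a cache line is broadcast on a shared bus, and every other agent, including one joined via a proxy, drops its stale copy.

```python
class Agent:
    """Toy coherence participant: caches lines and snoops peer writes."""
    def __init__(self, name, bus):
        self.name, self.cache, self.bus = name, {}, bus
        bus.append(self)  # join the shared coherence bus

    def read(self, line, memory):
        self.cache[line] = memory[line]
        return self.cache[line]

    def write(self, line, value, memory):
        memory[line] = value
        self.cache[line] = value
        for peer in self.bus:          # snoop: invalidate peer copies
            if peer is not self:
                peer.cache.pop(line, None)

bus, memory = [], {0x40: 1}
core = Agent("core_1560A", bus)
gfx = Agent("gfx_engine_1531_1", bus)  # joins as a peer, as via proxy 1525
core.read(0x40, memory)
gfx.write(0x40, 2, memory)
assert 0x40 not in core.cache          # stale copy was snooped out
assert core.read(0x40, memory) == 2    # re-read observes the new value
```

Real protocols (e.g., MESI variants) track per-line states and ownership; this sketch only captures the invalidate-on-write idea.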
In at least one embodiment, graphics acceleration module 1546 may be a GPU with a plurality of graphics processing engines 1531(1)-1531(N), or graphics processing engines 1531(1)-1531(N) may be individual GPUs integrated on a common package, line card, or chip.

In at least one embodiment, accelerator integration circuit 1536 includes a memory management unit (MMU) 1539 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 1514. In at least one embodiment, MMU 1539 may also include a translation lookaside buffer (TLB) (not shown) for caching virtual/effective to physical/real address translations. In at least one embodiment, a cache 1538 can store commands and data for efficient access by graphics processing engines 1531(1)-1531(N). In at least one embodiment, data stored in cache 1538 and graphics memories 1533(1)-1533(M) is kept coherent with core caches 1562A-1562D, 1556 and system memory 1514, possibly using a fetch unit 1544. As mentioned, this may be accomplished via proxy circuit 1525 on behalf of cache 1538 and memories 1533(1)-1533(M) (e.g., sending updates to cache 1538 related to modifications/accesses of cache lines on processor caches 1562A-1562D, 1556 and receiving updates from cache 1538).

In at least one embodiment, a set of registers 1545 stores context data for threads executed by graphics processing engines 1531(1)-1531(N), and a context management circuit 1548 manages thread contexts. For example, context management circuit 1548 may perform save and restore operations to save and restore contexts of various threads during context switches (e.g., where a first thread is saved and a second thread is stored so that the second thread can be executed by a graphics processing engine).
For example, on a context switch, context management circuit 1548 may store current register values to a designated region in memory (e.g., identified by a context pointer). Context management circuit 1548 may then restore the register values when returning to the context. In at least one embodiment, an interrupt management circuit 1547 receives and processes interrupts received from system devices.

In at least one embodiment, virtual/effective addresses from a graphics processing engine 1531 are translated to real/physical addresses in system memory 1514 by MMU 1539. In at least one embodiment, accelerator integration circuit 1536 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 1546 and/or other accelerator devices. In at least one embodiment, graphics accelerator module 1546 may be dedicated to a single application executed on processor 1507 or may be shared between multiple applications. In at least one embodiment, a virtualized graphics execution environment is presented in which resources of graphics processing engines 1531(1)-1531(N) are shared with multiple applications or virtual machines (VMs). In at least one embodiment, resources may be subdivided into "slices" which are allocated to different VMs and/or applications based on processing requirements and priorities associated with VMs and/or applications.

In at least one embodiment, accelerator integration circuit 1536 performs as a bridge to a system for graphics acceleration module 1546 and provides address translation and system memory cache services. In addition, in at least one embodiment, accelerator integration circuit 1536 may provide virtualization facilities for a host processor to manage virtualization of graphics processing engines 1531(1)-1531(N), interrupts, and memory management.
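The save/restore flow just described (register values written to a region identified by a context pointer, then restored on return to the context) can be sketched as follows. This is an illustrative model only; the register names and the context-pointer value are hypothetical.

```python
def context_save(registers, memory, ctx_ptr):
    """Snapshot current register values to the save area at ctx_ptr."""
    memory[ctx_ptr] = dict(registers)

def context_restore(registers, memory, ctx_ptr):
    """Reload register values from the save area at ctx_ptr."""
    registers.clear()
    registers.update(memory[ctx_ptr])

memory = {}
regs = {"r0": 11, "pc": 0x400}          # first thread's state
context_save(regs, memory, ctx_ptr=0x9000)
regs.update({"r0": 99, "pc": 0x800})    # second thread runs and clobbers regs
context_restore(regs, memory, ctx_ptr=0x9000)
assert regs == {"r0": 11, "pc": 0x400}  # first thread's context is back
```

The key design point mirrored here is that the save area lives in memory addressed by a per-context pointer, so many thread contexts can be parked and resumed independently.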
In at least one embodiment, because hardware resources of graphics processing engines 1531(1)-1531(N) are mapped explicitly to a real address space seen by host processor 1507, any host processor can address these resources directly using an effective address value. In at least one embodiment, one function of accelerator integration circuit 1536 is physical separation of graphics processing engines 1531(1)-1531(N) so that they appear to a system as independent units.

In at least one embodiment, one or more graphics memories 1533(1)-1533(M) are coupled to each of graphics processing engines 1531(1)-1531(N), respectively, and N=M. In at least one embodiment, graphics memories 1533(1)-1533(M) store instructions and data being processed by each of graphics processing engines 1531(1)-1531(N). In at least one embodiment, graphics memories 1533(1)-1533(M) may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint or Nano-Ram.

In at least one embodiment, to reduce data traffic over high-speed link 1540, biasing techniques may be used to ensure that data stored in graphics memories 1533(1)-1533(M) is data that will be used most frequently by graphics processing engines 1531(1)-1531(N) and preferably not used (at least not frequently) by cores 1560A-1560D. Similarly, in at least one embodiment, a biasing mechanism attempts to keep data needed by cores (and preferably not needed by graphics processing engines 1531(1)-1531(N)) within caches 1562A-1562D, 1556 of cores and within system memory 1514.

FIG. 15C shows another exemplary embodiment in which accelerator integration circuit 1536 is integrated within processor 1507. In at least this embodiment, graphics processing engines 1531(1)-1531(N) communicate directly over high-speed link 1540 with accelerator integration circuit 1536 via interface 1537 and interface 1535 (which, again, may utilize any form of bus or interface protocol).
In at least one embodiment, accelerator integration circuit 1536 may perform operations similar to those described with respect to FIG. 15B, but potentially at a higher throughput given its close proximity to coherence bus 1564 and caches 1562A-1562D, 1556. In at least one embodiment, an accelerator integration circuit supports different programming models, including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization), which may include programming models controlled by accelerator integration circuit 1536 and programming models controlled by graphics acceleration module 1546.

In at least one embodiment, graphics processing engines 1531(1)-1531(N) are dedicated to a single application or process under a single operating system. In at least one embodiment, a single application can funnel other application requests to graphics processing engines 1531(1)-1531(N), providing virtualization within a VM/partition.

In at least one embodiment, graphics processing engines 1531(1)-1531(N) may be shared by multiple VM/application partitions. In at least one embodiment, shared models may use a system hypervisor to virtualize graphics processing engines 1531(1)-1531(N) to allow access by each operating system. In at least one embodiment, for single-partition systems without a hypervisor, graphics processing engines 1531(1)-1531(N) are owned by an operating system. In at least one embodiment, an operating system can virtualize graphics processing engines 1531(1)-1531(N) to provide access to each process or application.

In at least one embodiment, graphics acceleration module 1546 or an individual graphics processing engine 1531(1)-1531(N) selects a process element using a process handle. In at least one embodiment, process elements are stored in system memory 1514 and are addressable using an effective-to-real address translation technique described herein.
In at least one embodiment, a process handle may be an implementation-specific value provided to a host process when registering its context with graphics processing engines 1531(1)-1531(N) (that is, when calling system software to add a process element to a process element linked list). In at least one embodiment, the lower 16 bits of a process handle may be an offset of a process element within a process element linked list.

FIG. 15D illustrates an exemplary accelerator integration slice 1590. In at least one embodiment, a "slice" comprises a specified portion of processing resources of accelerator integration circuit 1536. In at least one embodiment, an application effective address space 1582 within system memory 1514 stores process elements 1583. In at least one embodiment, process elements 1583 are stored in response to GPU invocations 1581 from applications 1580 executed on processor 1507. In at least one embodiment, a process element 1583 contains process state for a corresponding application 1580. In at least one embodiment, a work descriptor (WD) 1584 contained in process element 1583 can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 1584 is a pointer to a job request queue in an application's effective address space 1582.

In at least one embodiment, graphics acceleration module 1546 and/or individual graphics processing engines 1531(1)-1531(N) can be shared by all or a subset of processes in a system. In at least one embodiment, an infrastructure may be included for setting up process states and sending a WD 1584 to graphics acceleration module 1546 to start a job in a virtualized environment.

In at least one embodiment, a dedicated-process programming model is implementation-specific. In at least one embodiment, in this model, a single process owns graphics acceleration module 1546 or an individual graphics processing engine 1531.
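The handle layout described above (an implementation-specific value whose lower 16 bits give the process element's offset in the linked list) can be sketched in a few lines. The specific handle value used here is hypothetical.

```python
def element_offset(process_handle):
    """Extract the process-element offset from the low 16 bits of a handle."""
    return process_handle & 0xFFFF

# Hypothetical implementation-specific handle: upper bits are opaque,
# lower 16 bits locate the process element in the linked list.
handle = 0x12340007
assert element_offset(handle) == 0x0007
```

Keeping the offset in the low bits lets system software hand back one opaque value while still being able to find the element without walking the whole list.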
In at least one embodiment, when graphics acceleration module 1546 is owned by a single process, a hypervisor initializes accelerator integration circuit 1536 for an owning partition, and an operating system initializes accelerator integration circuit 1536 for an owning process.

In at least one embodiment, in operation, a WD fetch unit 1591 in accelerator integration slice 1590 fetches a next WD 1584, which includes an indication of work to be done by one or more graphics processing engines of graphics acceleration module 1546. In at least one embodiment, data from WD 1584 may be stored in registers 1545 and used by MMU 1539, interrupt management circuit 1547, and/or context management circuit 1548 as illustrated. For example, one embodiment of MMU 1539 includes segment/page walk circuitry for accessing segment/page tables 1586 within an OS virtual address space 1585. In at least one embodiment, interrupt management circuit 1547 may process interrupt events 1592 received from graphics acceleration module 1546. In at least one embodiment, when performing graphics operations, an effective address 1593 generated by graphics processing engines 1531(1)-1531(N) is translated to a real address by MMU 1539.

In at least one embodiment, registers 1545 may be duplicated for each graphics processing engine 1531(1)-1531(N) and/or graphics acceleration module 1546 and may be initialized by a hypervisor or an operating system. In at least one embodiment, each of these duplicated registers may be included in an accelerator integration slice 1590. Exemplary registers that may be initialized by a hypervisor are shown in Table 1. Exemplary registers that may be initialized by an operating system are shown in Table 2.

In at least one embodiment, each WD 1584 is specific to a particular graphics acceleration module 1546 and/or graphics processing engines 1531(1)-1531(N).
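The effective-to-real translation performed by MMU 1539, with a TLB caching recent translations ahead of the segment/page walk, can be modeled with a minimal sketch (illustrative only; the page size and table contents are hypothetical).

```python
PAGE = 4096  # hypothetical page size

class ToyMMU:
    """Translate effective addresses to real addresses, caching in a TLB."""
    def __init__(self, page_table):
        self.page_table = page_table  # effective page -> real page (the walk)
        self.tlb = {}                 # cache of recent translations

    def translate(self, eaddr):
        epage, offset = divmod(eaddr, PAGE)
        if epage not in self.tlb:            # TLB miss: walk the page table
            self.tlb[epage] = self.page_table[epage]
        return self.tlb[epage] * PAGE + offset

mmu = ToyMMU({0: 7, 1: 3})
assert mmu.translate(0x10) == 7 * PAGE + 0x10
assert mmu.translate(PAGE + 4) == 3 * PAGE + 4
assert set(mmu.tlb) == {0, 1}  # both translations are now cached
```

A repeat access to either page now hits the TLB and skips the (much slower, in hardware) page walk; real MMUs also handle permissions and faults, which this sketch omits.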
In at least one embodiment, WD 1584 contains all information required by graphics processing engines 1531(1)-1531(N) to do work, or it can be a pointer to a memory location where an application has set up a command queue of work to be completed.

FIG. 15E illustrates additional details for one exemplary embodiment of a shared model. This embodiment includes a hypervisor real address space 1598 in which a process element list 1599 is stored. In at least one embodiment, hypervisor real address space 1598 is accessible via a hypervisor 1596 that virtualizes graphics acceleration module engines for operating system 1595.

In at least one embodiment, shared programming models allow all or a subset of processes from all or a subset of partitions in a system to use graphics acceleration module 1546. In at least one embodiment, there are two programming models where graphics acceleration module 1546 is shared by multiple processes and partitions, namely time-sliced sharing and graphics-directed sharing.

In at least one embodiment, in this model, system hypervisor 1596 owns graphics acceleration module 1546 and makes its function available to all operating systems 1595.
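Time-sliced sharing, one of the two shared models named above, can be illustrated with a hedged round-robin sketch: a hypervisor-style loop hands the single acceleration module to each owner for one slice in turn. Names are hypothetical and the model ignores preemption and priorities.

```python
from collections import deque

def time_slice(owners, slices):
    """Round-robin the module among owners for a given number of slices."""
    order, queue = [], deque(owners)
    for _ in range(slices):
        owner = queue.popleft()
        order.append(owner)   # owner runs on the module for one slice
        queue.append(owner)   # then rejoins the back of the queue
    return order

assert time_slice(["os_A", "os_B"], 4) == ["os_A", "os_B", "os_A", "os_B"]
```

Graphics-directed sharing, by contrast, lets the module itself pick the next process element from the linked list, which this scheduler-side sketch does not attempt to model.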
In at least one embodiment, for graphics acceleration module 1546 to support virtualization by system hypervisor 1596, graphics acceleration module 1546 may adhere to certain requirements, such as: (1) an application's job request must be autonomous (that is, state need not be maintained between jobs), or graphics acceleration module 1546 must provide a context save and restore mechanism; (2) an application's job request is guaranteed by graphics acceleration module 1546 to complete in a specified amount of time, or graphics acceleration module 1546 provides an ability to preempt processing of a job; and (3) graphics acceleration module 1546 must be guaranteed fairness between processes when operating in a directed shared programming model.

In at least one embodiment, application 1580 is required to make an operating system 1595 system call with a graphics acceleration module type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). In at least one embodiment, the graphics acceleration module type describes a targeted acceleration function for the system call. In at least one embodiment, the graphics acceleration module type may be a system-specific value. In at least one embodiment, the WD is formatted specifically for graphics acceleration module 1546 and can be in a form of a graphics acceleration module 1546 command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure to describe work to be done by graphics acceleration module 1546.

In at least one embodiment, an AMR value is an AMR state to use for a current process. In at least one embodiment, a value passed to an operating system is similar to an application setting an AMR.
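The authority-mask chain around the AMR (an operating system applying a UAMOR value before the hypervisor call, and the hypervisor applying an AMOR value before storing the AMR into the process element, as described herein) can be sketched in a hedged form. The exact bit semantics of these registers are architecture-specific; this model simply treats each override register as a bitmask limiting which AMR bits survive each stage.

```python
def apply_masks(amr, uamor, amor):
    """Successively restrict an AMR: OS applies UAMOR, hypervisor applies AMOR."""
    amr &= uamor  # operating system stage, before the hypervisor call
    amr &= amor   # hypervisor stage, before storing into the process element
    return amr

# An application asks for all authorities; each stage narrows the set.
assert apply_masks(amr=0b1111, uamor=0b0110, amor=0b0011) == 0b0010
```

The layering means neither an application nor an operating system can grant itself authority bits that a more privileged level has masked off.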
In at least one embodiment, if implementations of accelerator integration circuit 1536 (not shown) and graphics acceleration module 1546 do not support a User Authority Mask Override Register (UAMOR), an operating system may apply a current UAMOR value to an AMR value before passing an AMR in a hypervisor call. In at least one embodiment, hypervisor 1596 may optionally apply a current Authority Mask Override Register (AMOR) value before placing an AMR into process element 1583. In at least one embodiment, CSRP is one of registers 1545 containing an effective address of an area in an application's effective address space 1582 for graphics acceleration module 1546 to save and restore context state. In at least one embodiment, this pointer is optional if no state is required to be saved between jobs or when a job is preempted. In at least one embodiment, the context save/restore area may be pinned system memory.

Upon receiving a system call, operating system 1595 may verify that application 1580 has registered and been given authority to use graphics acceleration module 1546. In at least one embodiment, operating system 1595 then calls hypervisor 1596 with the information shown in Table 3.

In at least one embodiment, upon receiving a hypervisor call, hypervisor 1596 verifies that operating system 1595 has registered and been given authority to use graphics acceleration module 1546. In at least one embodiment, hypervisor 1596 then puts process element 1583 into a process element linked list for a corresponding graphics acceleration module 1546 type. In at least one embodiment, a process element may include the information shown in Table 4.

In at least one embodiment, a hypervisor initializes registers 1545 of a plurality of accelerator integration slices 1590.

As shown in FIG.
15F, in at least one embodiment, a unified memory is used, addressable via a common virtual memory address space used to access physical processor memories 1501(1)-1501(M) and GPU memories 1520(1)-1520(N). In this implementation, operations executed on GPUs 1510(1)-1510(N) utilize a same virtual/effective memory address space to access processor memories 1501(1)-1501(M), and vice versa, thereby simplifying programmability. In at least one embodiment, a first portion of a virtual/effective address space is allocated to processor memory 1501(1), a second portion to second processor memory 1501(N), a third portion to GPU memory 1520(1), and so on. In at least one embodiment, an entire virtual/effective memory space (sometimes referred to as an effective address space) is thereby distributed across each of processor memories 1501 and GPU memories 1520, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory.

In at least one embodiment, bias/coherence management circuitry 1594A-1594E within one or more of MMUs 1539A-1539E ensures cache coherence between caches of one or more host processors (e.g., 1505) and GPUs 1510 and implements biasing techniques indicating physical memories in which certain types of data should be stored. In at least one embodiment, while multiple instances of bias/coherence management circuitry 1594A-1594E are illustrated in FIG. 15F, bias/coherence circuitry may be implemented within an MMU of one or more host processors 1505 and/or within accelerator integration circuit 1536.

One embodiment allows GPU memories 1520 to be mapped as part of system memory and accessed using shared virtual memory (SVM) technology, but without suffering performance drawbacks associated with full system cache coherence.
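The statement that any processor or GPU can reach any physical memory through a virtual address mapped to it can be illustrated by a lookup over the distributed address map. This is an illustrative sketch only; the region names and sizes are hypothetical.

```python
GB = 1 << 30

def backing_memory(regions, vaddr):
    """Return which physical memory backs a virtual/effective address,
    plus the memory-local offset within it."""
    for name, start, end in regions:
        if start <= vaddr < end:
            return name, vaddr - start
    raise ValueError("unmapped virtual address")

# Hypothetical distribution of one effective address space across
# two processor memories and one GPU memory.
regions = [
    ("proc_mem_1", 0, 64 * GB),
    ("proc_mem_2", 64 * GB, 128 * GB),
    ("gpu_mem_1", 128 * GB, 160 * GB),
]
assert backing_memory(regions, 65 * GB) == ("proc_mem_2", 1 * GB)
assert backing_memory(regions, 130 * GB)[0] == "gpu_mem_1"
```

Because every agent shares this single map, the same pointer value names the same bytes whether the access originates on a host core or a GPU.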
In at least one embodiment, the ability for GPU memories 1520 to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. In at least one embodiment, this arrangement allows software of host processor 1505 to set up operands and access computation results without the overhead of traditional I/O DMA data copies. In at least one embodiment, such traditional copies involve driver calls, interrupts, and memory-mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. In at least one embodiment, the ability to access GPU memories 1520 without cache coherence overheads can be critical to the execution time of an offloaded computation. In at least one embodiment, in cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce an effective write bandwidth seen by a GPU 1510. In at least one embodiment, efficiency of operand setup, efficiency of results access, and efficiency of GPU computation may play a role in determining the effectiveness of a GPU offload.

In at least one embodiment, selection of GPU bias and host processor bias is driven by a bias tracker data structure. In at least one embodiment, a bias table may be used, for example, which may be a page-granular structure (i.e., controlled at a granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. In at least one embodiment, a bias table may be implemented in a stolen memory range of one or more GPU memories 1520, with or without a bias cache in GPU 1510 (e.g., to cache frequently/recently used entries of the bias table). Alternatively, in at least one embodiment, an entire bias table may be maintained within a GPU.

In at least one embodiment, a bias table entry associated with each access to GPU-attached memory 1520 is accessed prior to the actual access to a GPU memory, causing the following operations.
In at least one embodiment, local requests from a GPU 1510 that find their page in GPU bias are forwarded directly to a corresponding GPU memory 1520. In at least one embodiment, local requests from a GPU that find their page in host bias are forwarded to processor 1505 (e.g., over a high-speed link as described above). In at least one embodiment, requests from processor 1505 that find a requested page in host processor bias complete like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to a GPU 1510. In at least one embodiment, a GPU may then transition the page to a host processor bias if it is not currently using the page. In at least one embodiment, the bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism.

In at least one embodiment, one mechanism for changing bias state employs an API call (e.g., OpenCL), which, in turn, calls a GPU's device driver, which, in turn, sends a message (or enqueues a command descriptor) to the GPU directing it to change a bias state and, for some transitions, perform a cache flushing operation in a host. In at least one embodiment, a cache flushing operation is used for a transition from host processor 1505 bias to GPU bias, but is not used for the opposite transition.

In at least one embodiment, cache coherence is maintained by temporarily rendering GPU-biased pages uncacheable by host processor 1505. In at least one embodiment, to access these pages, processor 1505 may request access from GPU 1510, which may or may not grant access right away. In at least one embodiment, thus, to reduce communication between processor 1505 and GPU 1510, it is beneficial to ensure that GPU-biased pages are those which are required by a GPU but not host processor 1505, and vice versa.

Hardware structure 715 is used to implement one or more embodiments.
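The bias table described above (1 bit per GPU-attached page in this sketch) and the routing rules for requests that consult it can be combined into one illustrative model. This is not the patent's implementation; the bit packing and routing labels are hypothetical.

```python
PAGE = 4096
HOST_BIAS, GPU_BIAS = 0, 1

class BiasTable:
    """One bias bit per GPU-attached memory page, packed in a bytearray."""
    def __init__(self, num_pages):
        self.bits = bytearray((num_pages + 7) // 8)

    def get(self, addr):
        page = addr // PAGE
        return (self.bits[page // 8] >> (page % 8)) & 1

    def set(self, addr, bias):
        page = addr // PAGE
        if bias == GPU_BIAS:
            self.bits[page // 8] |= 1 << (page % 8)
        else:
            self.bits[page // 8] &= 0xFF ^ (1 << (page % 8))

def route(requester, bias):
    """Where a request is serviced, per the routing rules in the text."""
    if requester == "gpu":
        return "gpu_memory" if bias == GPU_BIAS else "host_processor"
    # host processor request:
    return "normal_memory_read" if bias == HOST_BIAS else "forward_to_gpu"

table = BiasTable(num_pages=16)
table.set(3 * PAGE, GPU_BIAS)  # page 3 becomes GPU-biased
assert route("gpu", table.get(3 * PAGE)) == "gpu_memory"
assert route("host", table.get(3 * PAGE)) == "forward_to_gpu"
assert route("gpu", table.get(0)) == "host_processor"  # default host bias
```

The table is consulted before the actual memory access, so the cost of the lookup is why the text mentions caching hot entries in a bias cache on the GPU.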
Details regarding hardware structure 715 are provided herein in conjunction with FIGS. 7A and/or 7B.

FIG. 16 illustrates an exemplary integrated circuit and associated graphics processor that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.

FIG. 16 is a block diagram illustrating an exemplary system-on-chip integrated circuit 1600 that may be fabricated using one or more IP cores, according to at least one embodiment. In at least one embodiment, integrated circuit 1600 includes one or more application processors 1605 (e.g., CPUs), at least one graphics processor 1610, and additionally may include an image processor 1615 and/or a video processor 1620, any of which may be a modular IP core. In at least one embodiment, integrated circuit 1600 includes peripheral or bus logic including a USB controller 1625, a UART controller 1630, an SPI/SDIO controller 1635, and an I2S/I2C controller 1640. In at least one embodiment, integrated circuit 1600 can include a display device 1645 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1650 and a mobile industry processor interface (MIPI) display interface 1655. In at least one embodiment, storage may be provided by a flash memory subsystem 1660, including flash memory and a flash memory controller. In at least one embodiment, a memory interface may be provided via a memory controller 1665 for access to SDRAM or SRAM memory devices. In at least one embodiment, some integrated circuits additionally include an embedded security engine 1670.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments.
Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in integrated circuit 1600 for inference or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 16 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). In at least one embodiment, inference and/or training logic 715 is used to train at least one untrained or partially trained neural network using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 is used to perform at least one inference operation using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, integrated circuit 1600 of FIG. 16 is utilized to implement the techniques and/or functionality described with respect to FIGS. 1-6.

FIGS. 17A-17B illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein.
In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.

FIGS. 17A-17B are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein. FIG. 17A illustrates an exemplary graphics processor 1710 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, according to at least one embodiment. FIG. 17B illustrates an additional exemplary graphics processor 1740 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, according to at least one embodiment. In at least one embodiment, graphics processor 1710 of FIG. 17A is a low-power graphics processor core. In at least one embodiment, graphics processor 1740 of FIG. 17B is a higher-performance graphics processor core. In at least one embodiment, each of graphics processors 1710, 1740 can be a variant of graphics processor 1610 of FIG. 16.

In at least one embodiment, graphics processor 1710 includes a vertex processor 1705 and one or more fragment processors 1715A-1715N (e.g., 1715A, 1715B, 1715C, 1715D, through 1715N-1, and 1715N). In at least one embodiment, graphics processor 1710 can execute different shader programs via separate logic, such that vertex processor 1705 is optimized to execute operations for vertex shader programs, while one or more fragment processors 1715A-1715N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. In at least one embodiment, vertex processor 1705 performs a vertex processing stage of a 3D graphics pipeline and generates primitives and vertex data. In at least one embodiment, fragment processors 1715A-1715N use primitive and vertex data generated by vertex processor 1705 to produce a framebuffer that is displayed on a display device.
In at least one embodiment, fragment processors 1715A-1715N are optimized to execute fragment shader programs as provided in the OpenGL API, which may be used to perform similar operations as pixel shader programs as provided in the Direct 3D API.

In at least one embodiment, graphics processor 1710 further includes one or more memory management units (MMUs) 1720A-1720B, caches 1725A-1725B, and circuit interconnects 1730A-1730B. In at least one embodiment, one or more MMUs 1720A-1720B provide virtual-to-physical address mapping for graphics processor 1710, including for vertex processor 1705 and/or fragment processors 1715A-1715N, which may reference vertex or image/texture data stored in memory in addition to vertex or image/texture data stored in one or more caches 1725A-1725B. In at least one embodiment, one or more MMUs 1720A-1720B may be synchronized with other MMUs within the system, including one or more MMUs associated with one or more application processors 1605, image processor 1615, and/or video processor 1620 of FIG. 16, such that each processor 1605-1620 can participate in a shared or unified virtual memory system. In at least one embodiment, one or more circuit interconnects 1730A-1730B enable graphics processor 1710 to interface with other IP cores within the SoC, either via the SoC's internal bus or via direct connections.

In at least one embodiment, as shown in FIG. 17B, graphics processor 1740 includes one or more shader cores 1755A-1755N (e.g., 1755A, 1755B, 1755C, 1755D, 1755E, 1755F, through 1755N-1 and 1755N), which provide a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. In at least one embodiment, the number of shader cores can vary.
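As an illustration of the virtual-to-physical address mapping that an MMU such as 1720A-1720B provides, the following is a minimal single-level page-table sketch in Python. The function names, the 4 KiB page size, and the table contents are illustrative assumptions only, not details of the embodiment; a real GPU MMU uses multi-level tables, TLBs, and hardware page walks.

```python
PAGE_SIZE = 4096  # bytes per page; 4 KiB is a common but assumed choice


def translate(page_table, virtual_addr):
    """Translate a virtual address to a physical one via a page-table lookup."""
    page_number = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    frame = page_table[page_number]  # raises KeyError on an unmapped page (a fault)
    return frame * PAGE_SIZE + offset


# Two consecutive virtual pages backed by non-contiguous physical frames.
page_table = {0: 7, 1: 3}
```

Note how consecutive virtual pages (0 and 1) map to scattered physical frames (7 and 3), which is exactly what lets the processors above see one flat address space over fragmented memory.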
In at least one embodiment, graphics processor 1740 includes an inter-core task manager 1745, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 1755A-1755N, and a tiling unit 1758 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example, to exploit local spatial coherence within a scene or to optimize use of internal caches.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the integrated circuits of FIGS. 17A and/or 17B for inferencing or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIGS. 17A and/or 17B is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network based, at least in part, on a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inference operation based, at least in part, on a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6.
In at least one embodiment, graphics processor 1710 of FIG. 17A and/or graphics processor 1740 of FIG. 17B are utilized to implement the techniques and/or functionality described with respect to FIGS. 1-6.

FIGS. 18A-18B illustrate additional exemplary graphics processor logic according to embodiments described herein. FIG. 18A illustrates graphics core 1800, which, in at least one embodiment, may be included within graphics processor 1610 of FIG. 16 and which, in at least one embodiment, may be a unified shader core 1755A-1755N as in FIG. 17B. FIG. 18B illustrates a highly parallel general-purpose graphics processing unit ("GPGPU") 1830 suitable for deployment on a multi-chip module in at least one embodiment.

In at least one embodiment, graphics core 1800 includes shared instruction cache 1802, texture unit 1818, and cache/shared memory 1820, which are common to execution resources within graphics core 1800. In at least one embodiment, graphics core 1800 may include multiple slices 1801A-1801N, or a partition for each core, and a graphics processor may include multiple instances of graphics core 1800. In at least one embodiment, slices 1801A-1801N may include support logic including local instruction caches 1804A-1804N, thread schedulers 1806A-1806N, thread dispatchers 1808A-1808N, and a set of registers 1810A-1810N. In at least one embodiment, slices 1801A-1801N may include a set of additional function units (AFUs 1812A-1812N), floating point units (FPUs 1814A-1814N), integer arithmetic logic units (ALUs 1816A-1816N), address calculation units (ACUs 1813A-1813N), double-precision floating point units (DPFPUs 1815A-1815N), and matrix processing units (MPUs 1817A-1817N).

In at least one embodiment, FPUs 1814A-1814N can perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while DPFPUs 1815A-1815N perform double-precision (64-bit) floating point operations.
In at least one embodiment, ALUs 1816A-1816N can perform variable-precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed-precision operations. In at least one embodiment, MPUs 1817A-1817N can also be configured for mixed-precision matrix operations, including half-precision floating point and 8-bit integer operations. In at least one embodiment, MPUs 1817A-1817N can perform a variety of matrix operations to accelerate machine learning application frameworks, including enabling support for accelerated general matrix-to-matrix multiplication (GEMM). In at least one embodiment, AFUs 1812A-1812N can perform additional logic operations not supported by the floating point or integer units, including trigonometric operations (e.g., sine, cosine, etc.).

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in graphics core 1800 for inferencing or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 18A is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1.
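The mixed-precision matrix support described above (e.g., half-precision inputs with higher-precision accumulation for GEMM) can be pictured with the following Python sketch. Python's `struct` IEEE half-precision format character `'e'` is used only to emulate FP16 rounding; all names are illustrative, and this is a toy model of the numerics, not the MPU implementation.

```python
import struct


def to_fp16(x):
    """Round a Python float to IEEE half precision, simulating FP16 storage."""
    return struct.unpack('e', struct.pack('e', x))[0]


def gemm_mixed(a, b):
    """C = A @ B with FP16-rounded inputs and a full-precision accumulator."""
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0  # accumulator kept at higher precision than the inputs
            for p in range(k):
                acc += to_fp16(a[i][p]) * to_fp16(b[p][j])
            c[i][j] = acc
    return c
```

Keeping the accumulator wide while the inputs are narrow is the essential point: products of rounded FP16 values are summed without further rounding at each step.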
In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network based, at least in part, on a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inference operation based, at least in part, on a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, graphics core 1800 of FIG. 18A is utilized to implement the techniques and/or functionality described with respect to FIGS. 1-6.

FIG. 18B illustrates a general-purpose graphics processing unit (GPGPU) 1830 that can be configured to enable highly parallel compute operations to be performed by an array of graphics processing units, in at least one embodiment. In at least one embodiment, GPGPU 1830 may be linked directly to other instances of GPGPU 1830 to create a multi-GPU cluster to improve training speed for deep neural networks. In at least one embodiment, GPGPU 1830 includes host interface 1832 to enable a connection with a host processor. In at least one embodiment, host interface 1832 is a PCI Express interface. In at least one embodiment, host interface 1832 may be a vendor-specific communication interface or communication fabric. In at least one embodiment, GPGPU 1830 receives commands from a host processor and uses global scheduler 1834 to distribute execution threads associated with those commands to a set of compute clusters 1836A-1836H. In at least one embodiment, compute clusters 1836A-1836H share cache memory 1838.
In at least one embodiment, cache memory 1838 can serve as a higher-level cache for cache memories within compute clusters 1836A-1836H.

In at least one embodiment, GPGPU 1830 includes memories 1844A-1844B coupled with compute clusters 1836A-1836H via a set of memory controllers 1842A-1842B. In at least one embodiment, memories 1844A-1844B can include various types of memory devices, including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory.

In at least one embodiment, compute clusters 1836A-1836H each include a set of graphics cores, such as graphics core 1800 of FIG. 18A, which can include multiple types of integer and floating point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations. For example, in at least one embodiment, at least a subset of the floating point units in each of compute clusters 1836A-1836H can be configured to perform 16-bit or 32-bit floating point operations, while another subset of the floating point units can be configured to perform 64-bit floating point operations.

In at least one embodiment, multiple instances of GPGPU 1830 can be configured to operate as a compute cluster. In at least one embodiment, the communication used by compute clusters 1836A-1836H for synchronization and data exchange varies across embodiments. In at least one embodiment, multiple instances of GPGPU 1830 communicate over host interface 1832. In at least one embodiment, GPGPU 1830 includes I/O hub 1839 that couples GPGPU 1830 with GPU link 1840, which enables a direct connection to other instances of GPGPU 1830. In at least one embodiment, GPU link 1840 is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of GPGPU 1830.
In at least one embodiment, GPU link 1840 couples with a high-speed interconnect to transmit and receive data to and from other GPGPUs or parallel processors. In at least one embodiment, multiple instances of GPGPU 1830 are located in separate data processing systems and communicate via a network device that is accessible via host interface 1832. In at least one embodiment, GPU link 1840 can be configured to enable a connection to a host processor in addition to, or as an alternative to, host interface 1832.

In at least one embodiment, GPGPU 1830 can be configured to train neural networks. In at least one embodiment, GPGPU 1830 may be used within an inferencing platform. In at least one embodiment in which GPGPU 1830 is used for inferencing, GPGPU 1830 may include fewer compute clusters 1836A-1836H than when GPGPU 1830 is used for training a neural network. In at least one embodiment, the memory technology associated with memories 1844A-1844B may differ between inferencing and training configurations, with higher-bandwidth memory technologies devoted to training configurations. In at least one embodiment, an inferencing configuration of GPGPU 1830 can support inferencing-specific instructions. For example, in at least one embodiment, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which may be used during inferencing operations for deployed neural networks.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in GPGPU 1830 for inferencing or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
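For illustration of the kind of 8-bit integer dot product instruction an inferencing configuration can support, the following toy function is loosely modeled on a 4-way int8 dot product with 32-bit accumulation (the style of CUDA's `__dp4a` intrinsic). The name and behavior here are assumptions for illustration, not the instruction actually implemented by GPGPU 1830.

```python
def dp4a(a4, b4, acc=0):
    """4-way int8 dot product accumulated into a wider integer (illustrative)."""
    assert len(a4) == len(b4) == 4, "operates on packed groups of four int8 values"
    for x, y in zip(a4, b4):
        assert -128 <= x <= 127 and -128 <= y <= 127, "inputs must fit in int8"
        acc += x * y  # products are summed at full (32-bit-style) precision
    return acc
```

Fusing four narrow multiplies and an accumulate into one step is what makes such instructions attractive for quantized neural-network inference.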
In at least one embodiment, at least one component shown or described with respect to FIG. 18B is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network based, at least in part, on a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inference operation based, at least in part, on a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, GPGPU 1830 of FIG. 18B is utilized to implement the techniques and/or functionality described with respect to FIGS. 1-6.

FIG. 19 is a block diagram of a computing system 1900 in accordance with at least one embodiment. In at least one embodiment, computing system 1900 includes processing subsystem 1901 having one or more processors 1902 and system memory 1904 communicating via an interconnection path that may include memory hub 1905. In at least one embodiment, memory hub 1905 may be a separate component within a chipset component or may be integrated within one or more processors 1902. In at least one embodiment, memory hub 1905 couples with I/O subsystem 1911 via communication link 1906. In at least one embodiment, I/O subsystem 1911 includes I/O hub 1907, which can enable computing system 1900 to receive input from one or more input devices 1908.
In at least one embodiment, I/O hub 1907 can enable a display controller, which may be included in one or more processors 1902, to provide outputs to one or more display devices 1910A. In at least one embodiment, the one or more display devices 1910A coupled with I/O hub 1907 can include local, internal, or embedded display devices.

In at least one embodiment, processing subsystem 1901 includes one or more parallel processors 1912 coupled to memory hub 1905 via bus or other communication link 1913. In at least one embodiment, communication link 1913 may use one of any number of standards-based communication link technologies or protocols, such as, but not limited to, PCI Express, or may be a vendor-specific communication interface or communication fabric. In at least one embodiment, the one or more parallel processors 1912 form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many-integrated-core (MIC) processor. In at least one embodiment, some or all of parallel processors 1912 form a graphics processing subsystem that can output pixels to one of the one or more display devices 1910A coupled via I/O hub 1907. In at least one embodiment, parallel processors 1912 can also include a display controller and display interface (not shown) to enable a direct connection to one or more display devices 1910B.

In at least one embodiment, system storage unit 1914 can connect to I/O hub 1907 to provide a storage mechanism for computing system 1900. In at least one embodiment, I/O switch 1916 can be used to provide an interface mechanism to enable connections between I/O hub 1907 and other components, such as network adapter 1918 and/or wireless network adapter 1919 that may be integrated into the platform, and various other devices that can be added via one or more add-in devices 1920.
In at least one embodiment, network adapter 1918 can be an Ethernet adapter or another wired network adapter. In at least one embodiment, wireless network adapter 1919 can include one or more of Wi-Fi, Bluetooth, near field communication (NFC), or other network devices that include one or more wireless radios.

In at least one embodiment, computing system 1900 can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, which may also be connected to I/O hub 1907. In at least one embodiment, communication paths interconnecting the various components in FIG. 19 may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or other bus or point-to-point communication interfaces and/or protocols, such as NV-Link high-speed interconnect, or other interconnect protocols.

In at least one embodiment, parallel processors 1912 incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a graphics processing unit (GPU). In at least one embodiment, parallel processors 1912 incorporate circuitry optimized for general-purpose processing. In at least one embodiment, components of computing system 1900 may be integrated with one or more other system elements on a single integrated circuit. For example, in at least one embodiment, parallel processors 1912, memory hub 1905, processors 1902, and I/O hub 1907 can be integrated into a system-on-chip (SoC) integrated circuit. In at least one embodiment, components of computing system 1900 can be integrated into a single package to form a system-in-package (SIP) configuration. In at least one embodiment, at least a portion of the components of computing system 1900 can be integrated into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system.
Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in system 1900 of FIG. 19 for inferencing or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 19 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network based, at least in part, on a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inference operation based, at least in part, on a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, system 1900 of FIG. 19 is utilized to implement the techniques and/or functionality described with respect to FIGS. 1-6.

Processor

FIG. 20A illustrates a parallel processor 2000 according to at least one embodiment.
In at least one embodiment, various components of parallel processor 2000 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGAs). In at least one embodiment, illustrated parallel processor 2000 is a variant of the one or more parallel processors 1912 shown in FIG. 19, according to an exemplary embodiment.

In at least one embodiment, parallel processor 2000 includes parallel processing unit 2002. In at least one embodiment, parallel processing unit 2002 includes I/O unit 2004 that enables communication with other devices, including other instances of parallel processing unit 2002. In at least one embodiment, I/O unit 2004 may connect directly to other devices. In at least one embodiment, I/O unit 2004 connects with other devices via use of a hub or switch interface, such as memory hub 2005. In at least one embodiment, connections between memory hub 2005 and I/O unit 2004 form communication link 2013. In at least one embodiment, I/O unit 2004 connects with host interface 2006 and memory crossbar 2016, where host interface 2006 receives commands directed to performing processing operations and memory crossbar 2016 receives commands directed to performing memory operations.

In at least one embodiment, when host interface 2006 receives a command buffer via I/O unit 2004, host interface 2006 can direct work operations to perform those commands to front end 2008. In at least one embodiment, front end 2008 couples with scheduler 2010, which is configured to distribute commands or other work items to processing cluster array 2012. In at least one embodiment, scheduler 2010 ensures that processing cluster array 2012 is properly configured and in a valid state before tasks are distributed to the clusters of processing cluster array 2012.
In at least one embodiment, scheduler 2010 is implemented via firmware logic executing on a microcontroller. In at least one embodiment, microcontroller-implemented scheduler 2010 is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on processing array 2012. In at least one embodiment, host software can submit workloads for scheduling on processing cluster array 2012 via one of multiple graphics processing paths. In at least one embodiment, workloads can then be automatically distributed across processing cluster array 2012 by scheduler 2010 logic within the microcontroller that includes scheduler 2010.

In at least one embodiment, processing cluster array 2012 can include up to "N" processing clusters (e.g., cluster 2014A, cluster 2014B, through cluster 2014N), where "N" represents a positive integer (which may be a different integer "N" than used in other figures). In at least one embodiment, each cluster 2014A-2014N of processing cluster array 2012 can execute a large number of concurrent threads. In at least one embodiment, scheduler 2010 can allocate work to clusters 2014A-2014N of processing cluster array 2012 using various scheduling and/or work distribution algorithms, which may vary depending on the workload arising for each type of program or computation. In at least one embodiment, scheduling can be handled dynamically by scheduler 2010, or can be assisted in part by compiler logic during compilation of program logic configured for execution by processing cluster array 2012.
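One simple work-distribution policy of the kind a scheduler such as 2010 might apply is greedy least-loaded assignment. The sketch below is a toy model only (the names and the single scalar cost metric are assumptions); a real scheduler also weighs preemption, context switching, and workload type.

```python
def distribute(tasks, num_clusters):
    """Greedily assign (task, cost) pairs to the currently least-loaded cluster."""
    loads = [0] * num_clusters
    assignment = {}
    for task, cost in sorted(tasks, key=lambda t: -t[1]):  # place big tasks first
        target = loads.index(min(loads))                    # least-loaded cluster
        assignment[task] = target
        loads[target] += cost
    return assignment, loads
```

Sorting large tasks first is a standard trick for balancing loads with a greedy pass; with tasks of cost 4, 3, 2, 1 on two clusters, both clusters end up with a load of 5.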
In at least one embodiment, different clusters 2014A-2014N of processing cluster array 2012 can be allocated for processing different types of programs or for performing different types of computations.

In at least one embodiment, processing cluster array 2012 can be configured to perform various types of parallel processing operations. In at least one embodiment, processing cluster array 2012 is configured to perform general-purpose parallel compute operations. For example, in at least one embodiment, processing cluster array 2012 can include logic to perform processing tasks, including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations.

In at least one embodiment, processing cluster array 2012 is configured to perform parallel graphics processing operations. In at least one embodiment, processing cluster array 2012 can include additional logic to support execution of such graphics processing operations, including, but not limited to, texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. In at least one embodiment, processing cluster array 2012 can be configured to execute graphics processing related shader programs such as, but not limited to, vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. In at least one embodiment, parallel processing unit 2002 can transfer data from system memory via I/O unit 2004 for processing. In at least one embodiment, during processing, transferred data can be stored to on-chip memory (e.g., parallel processor memory 2022), then written back to system memory.

In at least one embodiment, graphics processing performed using parallel processing unit 2002 enables better distribution of graphics processing operations among the multiple clusters 2014A-2014N of processing cluster array 2012.
To this end, scheduler 2010 can be configured to divide the processing workload into approximately equally sized tasks. In at least one embodiment, portions of processing cluster array 2012 can be configured to perform different types of processing. For example, in at least one embodiment, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen-space operations, to produce a rendered image for display. In at least one embodiment, intermediate data produced by one or more of clusters 2014A-2014N may be buffered to allow that intermediate data to be transmitted between clusters 2014A-2014N for further processing.

In at least one embodiment, processing cluster array 2012 can receive processing tasks to be executed via scheduler 2010, which receives commands defining processing tasks from front end 2008. In at least one embodiment, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). In at least one embodiment, scheduler 2010 may be configured to fetch indices corresponding to tasks or may receive indices from front end 2008. In at least one embodiment, front end 2008 can be configured to ensure processing cluster array 2012 is configured to a valid state before a workload specified by incoming command buffers (e.g., batch buffers, push buffers, etc.) is initiated.

In at least one embodiment, each of the one or more instances of parallel processing unit 2002 can couple with parallel processor memory 2022.
In at least one embodiment, parallel processor memory 2022 can be accessed via memory crossbar 2016, which can receive memory requests from processing cluster array 2012 as well as from I/O unit 2004. In at least one embodiment, memory crossbar 2016 can access parallel processor memory 2022 via memory interface 2018. In at least one embodiment, memory interface 2018 can include multiple partition units (e.g., partition unit 2020A, partition unit 2020B, through partition unit 2020N), which can each couple to a portion (e.g., memory unit) of parallel processor memory 2022. In at least one embodiment, the number of partition units 2020A-2020N is configured to be equal to the number of memory units, such that a first partition unit 2020A has a corresponding first memory unit 2024A, a second partition unit 2020B has a corresponding memory unit 2024B, and an N-th partition unit 2020N has a corresponding N-th memory unit 2024N. In at least one embodiment, the number of partition units 2020A-2020N may not be equal to the number of memory devices.

In at least one embodiment, memory units 2024A-2024N can include various types of memory devices, including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In at least one embodiment, memory units 2024A-2024N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). In at least one embodiment, render targets, such as frame buffers or texture maps, may be stored across memory units 2024A-2024N, allowing partition units 2020A-2020N to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory 2022.
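The parallel writing of render targets across partition units can be pictured as fixed-granularity address interleaving, so that consecutive chunks of a surface land on different memory units. The chunk size and partition count below are illustrative assumptions only, not parameters of the embodiment.

```python
def partition_for(address, granularity=256, num_partitions=4):
    """Map a byte address to (partition unit index, local offset) by interleaving
    fixed-size chunks round-robin across the partitions."""
    chunk = address // granularity
    partition = chunk % num_partitions              # round-robin across partitions
    local = (chunk // num_partitions) * granularity + address % granularity
    return partition, local
```

Under this mapping a linear sweep through a frame buffer touches partitions 0, 1, 2, 3, 0, 1, ... in turn, which is what lets all partition units write portions of one render target simultaneously.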
In at least one embodiment, a local instance of parallel processor memory 2022 may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory.

In at least one embodiment, any one of clusters 2014A-2014N of processing cluster array 2012 can process data to be written to any of memory units 2024A-2024N within parallel processor memory 2022. In at least one embodiment, memory crossbar 2016 can be configured to route an output of each cluster 2014A-2014N to any partition unit 2020A-2020N or to another cluster 2014A-2014N, which can perform additional processing operations on that output. In at least one embodiment, each cluster 2014A-2014N can communicate with memory interface 2018 through memory crossbar 2016 to read from or write to various external memory devices. In at least one embodiment, memory crossbar 2016 has a connection to memory interface 2018 to communicate with I/O unit 2004, as well as a connection to a local instance of parallel processor memory 2022, enabling processing units within different processing clusters 2014A-2014N to communicate with system memory or other memory that is not local to parallel processing unit 2002. In at least one embodiment, memory crossbar 2016 can use virtual channels to separate traffic streams between clusters 2014A-2014N and partition units 2020A-2020N.

In at least one embodiment, multiple instances of parallel processing unit 2002 can be provided on a single add-in card, or multiple add-in cards can be interconnected. In at least one embodiment, different instances of parallel processing unit 2002 can be configured to interoperate even if the different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. For example, in at least one embodiment, some instances of parallel processing unit 2002 can include higher-precision floating point units relative to other instances.
In at least one embodiment, systems incorporating one or more instances of parallel processing unit 2002 or parallel processor 2000 can be implemented in a variety of configurations and form factors, including, but not limited to, desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems.

FIG. 20B is a block diagram of a partition unit 2020, in accordance with at least one embodiment. In at least one embodiment, partition unit 2020 is an instance of one of partition units 2020A-2020N of FIG. 20A. In at least one embodiment, partition unit 2020 includes an L2 cache 2021, a frame buffer interface 2025, and a raster operations unit 2026 (ROP). In at least one embodiment, L2 cache 2021 is a read/write cache configured to perform load and store operations received from memory crossbar 2016 and ROP 2026. In at least one embodiment, read misses and urgent writeback requests are output by L2 cache 2021 to frame buffer interface 2025 for processing. In at least one embodiment, updates can also be sent to a frame buffer via frame buffer interface 2025 for processing. In at least one embodiment, frame buffer interface 2025 interfaces with one of the memory units of parallel processor memory, such as memory units 2024A-2024N of FIG. 20A (e.g., within parallel processor memory 2022).

In at least one embodiment, ROP 2026 is a processing unit that performs raster operations such as stencil, z-test, blending, and the like. In at least one embodiment, ROP 2026 then outputs processed graphics data that is stored in graphics memory. In at least one embodiment, ROP 2026 includes compression logic to compress depth or color data that is written to memory and decompress depth or color data that is read from memory. In at least one embodiment, the compression logic can be lossless compression logic that makes use of one or more of multiple compression algorithms.
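As one concrete illustration of lossless compression of the kind described above, the sketch below delta-encodes a tile of color values against a base value; neighboring pixels are often similar, so the deltas are small and need fewer bits to store. The tile representation and encoding are illustrative assumptions only, not the hardware's actual format.

```python
def delta_compress(tile):
    """Losslessly encode a tile of color values as (base, per-value deltas)."""
    base = tile[0]
    return base, [v - base for v in tile]

def delta_decompress(base, deltas):
    """Exactly reverse delta_compress."""
    return [base + d for d in deltas]

def delta_bits(deltas):
    """Bits needed per delta: enough for the largest magnitude, plus a sign bit."""
    m = max((abs(d) for d in deltas), default=0)
    return m.bit_length() + 1 if m else 1
```

For a tile of 8-bit colors clustered within a few values of each other, each delta fits in 2-3 bits instead of 8, which is the bandwidth saving such compression targets.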
In at least one embodiment, the type of compression that is performed by ROP 2026 can vary based on statistical characteristics of the data to be compressed. For example, in at least one embodiment, delta color compression is performed on depth and color data on a per-tile basis.

In at least one embodiment, ROP 2026 is included within each processing cluster (e.g., clusters 2014A-2014N of FIG. 20A) instead of within partition unit 2020. In at least one embodiment, read and write requests for pixel data are transmitted over memory crossbar 2016 instead of pixel fragment data. In at least one embodiment, processed graphics data may be displayed on a display device, such as one of the one or more display devices 1910 of FIG. 19, routed for further processing by processor 1902, or routed for further processing by one of the processing entities within parallel processor 2000 of FIG. 20A.

FIG. 20C is a block diagram of a processing cluster 2014 within a parallel processing unit, in accordance with at least one embodiment. In at least one embodiment, the processing cluster is an instance of one of processing clusters 2014A-2014N of FIG. 20A. In at least one embodiment, processing cluster 2014 can be configured to execute many threads in parallel, where a "thread" refers to an instance of a particular program executing on a particular set of input data. In at least one embodiment, single instruction multiple data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In at least one embodiment, parallel execution of a large number of generally synchronized threads is supported, using a common instruction unit configured to issue instructions to a set of processing engines within each processing cluster.
To this end, single instruction multiple thread (SIMT) techniques are used.

In at least one embodiment, operation of processing cluster 2014 can be controlled via a pipeline manager 2032 that distributes processing tasks to SIMT parallel processors. In at least one embodiment, pipeline manager 2032 receives instructions from scheduler 2010 of FIG. 20A. In at least one embodiment, graphics multiprocessor 2034 is an exemplary instance of a SIMT parallel processor. However, in at least one embodiment, various types of SIMT parallel processors having different architectures may be included within processing cluster 2014. In at least one embodiment, one or more instances of graphics multiprocessor 2034 can be included within processing cluster 2014. In at least one embodiment, graphics multiprocessor 2034 can process data, and a data crossbar 2040 can be used to distribute the processed data to one of multiple possible destinations, including other shader units. In at least one embodiment, pipeline manager 2032 can facilitate distribution of the processed data by specifying destinations for processed data to be distributed via data crossbar 2040.

In at least one embodiment, each graphics multiprocessor 2034 within processing cluster 2014 can include an identical set of function execution logic (e.g., arithmetic logic units, load-store units, etc.). In at least one embodiment, the function execution logic can be pipelined, allowing new instructions to be issued before previous instructions are complete. In at least one embodiment, the function execution logic supports a variety of operations, including integer and floating point arithmetic, comparison operations, Boolean operations, bit shifting, and computation of various algebraic functions.
In at least one embodiment, the same functional unit hardware can be utilized to perform different operations, and any combination of functional units may be present.

In at least one embodiment, instructions transmitted to processing cluster 2014 constitute a thread. In at least one embodiment, a set of threads executing across a set of parallel processing engines is a thread group. In at least one embodiment, a thread group executes a common program on different input data. In at least one embodiment, each thread within a thread group can be assigned to a different processing engine within graphics multiprocessor 2034. In at least one embodiment, a thread group may include fewer threads than the number of processing engines within graphics multiprocessor 2034. In at least one embodiment, if a thread group includes fewer threads than the number of processing engines, one or more of the processing engines may be idle during cycles in which that thread group is being processed. In at least one embodiment, a thread group may also include more threads than the number of processing engines within graphics multiprocessor 2034. In at least one embodiment, if a thread group includes more threads than the number of processing engines within graphics multiprocessor 2034, processing can be performed over consecutive clock cycles. In at least one embodiment, multiple thread groups can execute concurrently on graphics multiprocessor 2034.

In at least one embodiment, graphics multiprocessor 2034 includes an internal cache memory for performing load and store operations. In at least one embodiment, graphics multiprocessor 2034 can forgo an internal cache and use a cache memory within processing cluster 2014 (e.g., L1 cache 2048). In at least one embodiment, each graphics multiprocessor 2034 also has access to L2 caches within partition units (e.g., partition units 2020A-2020N of FIG. 20A) that are shared among all processing clusters 2014 and may be used to transfer data between threads.
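The thread-group sizing trade-off described above can be sketched numerically: a group smaller than the engine count leaves engines idle, while a larger group takes consecutive clock cycles. A minimal model, with all counts hypothetical:

```python
import math

def schedule_thread_group(num_threads: int, num_engines: int):
    """Toy model of mapping one thread group onto a set of processing engines.
    Returns (cycles_needed, idle_engines_in_last_cycle)."""
    cycles = math.ceil(num_threads / num_engines)
    idle_last = cycles * num_engines - num_threads
    return cycles, idle_last
```

A group of 48 threads on 32 engines takes two cycles with 16 engines idle in the second, matching the "consecutive clock cycles" behavior described in the text.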
In at least one embodiment, graphics multiprocessor 2034 may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. In at least one embodiment, any memory external to parallel processing unit 2002 may be used as global memory. In at least one embodiment, processing cluster 2014 may include multiple instances of graphics multiprocessor 2034 that can share common instructions and data, which may be stored in L1 cache 2048.

In at least one embodiment, each processing cluster 2014 may include an MMU 2045 (memory management unit) configured to map virtual addresses into physical addresses. In at least one embodiment, one or more instances of MMU 2045 may reside within memory interface 2018 of FIG. 20A. In at least one embodiment, MMU 2045 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and, optionally, a cache line index. In at least one embodiment, MMU 2045 may include address translation lookaside buffers (TLBs) or caches, which may reside within graphics multiprocessor 2034 or L1 cache 2048, or within processing cluster 2014. In at least one embodiment, physical addresses are processed to distribute surface data access locality, allowing efficient request interleaving among partition units. In at least one embodiment, the cache line index may be used to determine whether a request for a cache line is a hit or a miss.

In at least one embodiment, a processing cluster 2014 may be configured such that each graphics multiprocessor 2034 is coupled to a texture unit 2036 for performing texture mapping operations, such as determining texture sample positions, reading texture data, and filtering texture data. In at least one embodiment, texture data is read from an internal texture L1 cache (not shown) or from an L1 cache within graphics multiprocessor 2034, and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed.
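The virtual-to-physical translation performed by an MMU with a TLB, as described above, can be sketched as follows. The page size, flat page-table layout, and FIFO eviction policy are simplifying assumptions for illustration, not details of MMU 2045.

```python
PAGE_SIZE = 4096  # assumed page size for illustration

class ToyMMU:
    """Toy MMU: a page table maps virtual page numbers (VPNs) to physical
    frame numbers (PFNs); a small TLB caches recent translations."""

    def __init__(self, page_table, tlb_capacity=4):
        self.page_table = page_table   # {vpn: pfn}
        self.tlb = {}                  # cached subset of page_table
        self.tlb_capacity = tlb_capacity
        self.hits = 0
        self.misses = 0

    def translate(self, vaddr: int) -> int:
        """Translate a virtual address, consulting the TLB first."""
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn in self.tlb:
            self.hits += 1
            pfn = self.tlb[vpn]
        else:
            self.misses += 1
            pfn = self.page_table[vpn]          # page-table walk
            if len(self.tlb) >= self.tlb_capacity:
                self.tlb.pop(next(iter(self.tlb)))  # FIFO eviction
            self.tlb[vpn] = pfn
        return pfn * PAGE_SIZE + offset
```

Repeated accesses to the same page hit in the TLB and skip the page-table walk, which is the latency saving a TLB provides.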
In at least one embodiment, each graphics multiprocessor 2034 outputs a processed task to data crossbar 2040 to provide the processed task to another processing cluster 2014 for further processing, or to store the processed task in an L2 cache, local parallel processor memory, or system memory via memory crossbar 2016. In at least one embodiment, a preROP 2042 (pre-raster operations unit) is configured to receive data from graphics multiprocessor 2034 and direct the data to ROP units, which may be located with partition units as described herein (e.g., partition units 2020A-2020N of FIG. 20A). In at least one embodiment, the preROP 2042 unit can perform optimizations for color blending, organize pixel color data, and perform address translations.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in graphics processing cluster 2014 for inference or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIGS. 20A, 20B, and/or 20C is used to implement the techniques and/or functions described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6,
to train at least one untrained or partially trained neural network. In at least one embodiment, the inference and/or training logic uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one inference operation. In at least one embodiment, parallel processor 2000 of FIG. 20A is utilized to implement the techniques and/or functions described with respect to FIGS. 1-6.

FIG. 20D illustrates a graphics multiprocessor 2034, in accordance with at least one embodiment. In at least one embodiment, graphics multiprocessor 2034 is coupled with pipeline manager 2032 of processing cluster 2014. In at least one embodiment, graphics multiprocessor 2034 includes an instruction cache 2052, an instruction unit 2054, an address mapping unit 2056, a register file 2058, one or more general purpose graphics processing unit (GPGPU) cores 2062, and one or more load/store units 2066. In at least one embodiment, GPGPU cores 2062 and load/store units 2066 are coupled with cache memory 2072 and shared memory 2070 via a memory and cache interconnect 2068.

In at least one embodiment, instruction cache 2052 receives a stream of instructions to execute from pipeline manager 2032. In at least one embodiment, instructions are cached in instruction cache 2052 and dispatched for execution by instruction unit 2054. In at least one embodiment, instruction unit 2054 can dispatch instructions as thread groups (e.g., warps), with each thread of a thread group assigned to a different execution unit within GPGPU cores 2062. In at least one embodiment, an instruction can access any of a local, shared, or global address space by specifying an address within a unified address space.
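Addressing local, shared, or global space through a single unified address space, as described above, can be illustrated with a toy address decoder. The window bases and sizes below are invented purely for the example; the actual partitioning of the unified address space is not specified here.

```python
# Hypothetical unified address windows (bases and sizes are illustrative only).
LOCAL_BASE,  LOCAL_SIZE  = 0x0000_0000, 0x0001_0000
SHARED_BASE, SHARED_SIZE = 0x0001_0000, 0x0001_0000
GLOBAL_BASE              = 0x0002_0000

def decode_unified(addr: int):
    """Classify a unified address and return (space, offset_within_space)."""
    if LOCAL_BASE <= addr < LOCAL_BASE + LOCAL_SIZE:
        return "local", addr - LOCAL_BASE
    if SHARED_BASE <= addr < SHARED_BASE + SHARED_SIZE:
        return "shared", addr - SHARED_BASE
    return "global", addr - GLOBAL_BASE
```

A single load/store instruction can thus name any of the three spaces with one address, and a unit like address mapping unit 2056 resolves which memory it actually touches.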
In at least one embodiment, address mapping unit 2056 can be used to translate addresses in the unified address space into distinct memory addresses that can be accessed by load/store units 2066.

In at least one embodiment, register file 2058 provides a set of registers for functional units of graphics multiprocessor 2034. In at least one embodiment, register file 2058 provides temporary storage for operands connected to datapaths of the functional units (e.g., GPGPU cores 2062, load/store units 2066) of graphics multiprocessor 2034. In at least one embodiment, register file 2058 is divided between each of the functional units such that each functional unit is allocated a dedicated portion of register file 2058. In one embodiment, register file 2058 is divided between different warps being executed by graphics multiprocessor 2034.

In at least one embodiment, GPGPU cores 2062 can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of graphics multiprocessor 2034. In at least one embodiment, GPGPU cores 2062 can be similar in architecture or can differ in architecture. In at least one embodiment, a first portion of GPGPU cores 2062 includes a single precision FPU and an integer ALU, while a second portion of the GPGPU cores includes a double precision FPU. In at least one embodiment, the FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. In at least one embodiment, graphics multiprocessor 2034 can additionally include one or more fixed function or special function units to perform specific functions, such as copy rectangle or pixel blending operations. In at least one embodiment, one or more of GPGPU cores 2062 can also include fixed or special function logic.

In at least one embodiment, GPGPU cores 2062 include SIMD logic capable of executing a single instruction on multiple sets of data.
In at least one embodiment, GPGPU cores 2062 can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. In at least one embodiment, SIMD instructions for GPGPU cores can be generated at compile time by a shader compiler, or can be automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. In at least one embodiment, multiple threads of a program configured for the SIMT execution model can be executed via a single SIMD instruction. For example, in at least one embodiment, eight SIMT threads performing the same or similar operations can be executed in parallel via a single SIMD8 logic unit.

In at least one embodiment, memory and cache interconnect 2068 is an interconnect network that connects each functional unit of graphics multiprocessor 2034 to register file 2058 and to shared memory 2070. In at least one embodiment, memory and cache interconnect 2068 is a crossbar interconnect that allows load/store unit 2066 to implement load and store operations between shared memory 2070 and register file 2058. In at least one embodiment, register file 2058 can operate at a same frequency as GPGPU cores 2062, such that data transfer between GPGPU cores 2062 and register file 2058 has very low latency. In at least one embodiment, shared memory 2070 can be used to enable communication between threads executing on functional units within graphics multiprocessor 2034. In at least one embodiment, cache memory 2072 can be used as a data cache, for example, to cache texture data communicated between functional units and texture unit 2036. In at least one embodiment, shared memory 2070 can also be used as a program managed cache.
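Logically wide SIMD execution over narrower physical lanes, as described above (e.g., SIMD32 executed over SIMD8 hardware), can be modeled as repeated passes over the physical width. A simplified sketch:

```python
def simd_execute(op, operands, physical_width=8):
    """Execute a logically wide SIMD operation over narrower physical lanes,
    processing `physical_width` elements per pass.
    Returns (results, number_of_passes)."""
    results, passes = [], 0
    for i in range(0, len(operands), physical_width):
        # One pass: all physical lanes apply the same operation in lockstep.
        results.extend(op(x) for x in operands[i:i + physical_width])
        passes += 1
    return results, passes
```

A logical SIMD32 instruction over SIMD8 hardware takes four passes, while eight SIMT threads performing the same operation complete in a single SIMD8 pass.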
In at least one embodiment, threads executing on GPGPU cores 2062 can programmatically store data within shared memory, in addition to automatically cached data stored within cache memory 2072.

In at least one embodiment, a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. In at least one embodiment, a GPU can be communicatively coupled to host processors/cores over a bus or other interconnect (e.g., a high speed interconnect such as PCIe or NVLink). In at least one embodiment, a GPU can be integrated on a same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect inside the package or chip. In at least one embodiment, regardless of a manner in which a GPU is connected, processor cores can allocate work to such a GPU in the form of sequences of commands/instructions contained in a work descriptor. In at least one embodiment, that GPU then uses dedicated circuitry/logic to efficiently process these commands/instructions.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in graphics multiprocessor 2034 for inference or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 20D is used to implement the techniques and/or functions described with respect to FIGS. 1-6.
In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to train at least one untrained or partially trained neural network. In at least one embodiment, the inference and/or training logic uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one inference operation. In at least one embodiment, graphics multiprocessor 2034 of FIG. 20D is utilized to implement the techniques and/or functions described with respect to FIGS. 1-6.

FIG. 21 illustrates a multi-GPU computing system 2100, in accordance with at least one embodiment. In at least one embodiment, multi-GPU computing system 2100 can include a processor 2102 coupled to multiple general purpose graphics processing units (GPGPUs) 2106A-D via a host interface switch 2104. In at least one embodiment, host interface switch 2104 is a PCI Express switch device that couples processor 2102 to a PCI Express bus over which processor 2102 can communicate with GPGPUs 2106A-D. In at least one embodiment, GPGPUs 2106A-D can interconnect via a set of high speed point-to-point GPU-to-GPU links 2116. In at least one embodiment, GPU-to-GPU links 2116 connect to each of GPGPUs 2106A-D via a dedicated GPU link. In at least one embodiment, P2P GPU links 2116 enable direct communication between each of GPGPUs 2106A-D without requiring communication over host interface bus 2104 to which processor 2102 is connected.
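The routing preference described above, taking a direct P2P GPU link when one exists and falling back to the host interface bus otherwise, can be sketched as follows; the link topology and names are hypothetical.

```python
def route(src, dst, p2p_links, fallback="host_bus"):
    """Pick the path for GPU-to-GPU traffic: a direct point-to-point link
    when one exists in either direction, otherwise the shared host bus."""
    if (src, dst) in p2p_links or (dst, src) in p2p_links:
        return "p2p"
    return fallback
```

Keeping GPU-to-GPU traffic on dedicated links leaves the host interface bus free for system memory access and communication beyond the local system, as the text notes next.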
In at least one embodiment, with GPU-to-GPU traffic directed to P2P GPU links 2116, host interface bus 2104 remains available for system memory access or for communication with other instances of multi-GPU computing system 2100, for example, via one or more network devices. In at least one embodiment, GPGPUs 2106A-D connect to processor 2102 via host interface switch 2104; in at least one embodiment, processor 2102 includes direct support for P2P GPU links 2116 and can connect directly to GPGPUs 2106A-D.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in multi-GPU computing system 2100 for inference or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 21 is used to implement the techniques and/or functions described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to train at least one untrained or partially trained neural network.
In at least one embodiment, the inference and/or training logic uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one inference operation. In at least one embodiment, multi-GPU computing system 2100 of FIG. 21 is utilized to implement the techniques and/or functions described with respect to FIGS. 1-6.

FIG. 22 is a block diagram of a graphics processor 2200, in accordance with at least one embodiment. In at least one embodiment, graphics processor 2200 includes a ring interconnect 2202, a pipeline front end 2204, a media engine 2237, and graphics cores 2280A-2280N. In at least one embodiment, ring interconnect 2202 couples graphics processor 2200 to other graphics processors or to other processing units, including one or more general purpose processor cores. In at least one embodiment, graphics processor 2200 is one of many processors integrated within a multi-core processing system.

In at least one embodiment, graphics processor 2200 receives batches of commands via ring interconnect 2202. In at least one embodiment, incoming commands are interpreted by a command streamer 2203 in pipeline front end 2204. In at least one embodiment, graphics processor 2200 includes scalable execution logic to perform 3D geometry processing and media processing via graphics cores 2280A-2280N. In at least one embodiment, for 3D geometry processing commands, command streamer 2203 supplies commands to geometry pipeline 2236. In at least one embodiment, for at least some media processing commands, command streamer 2203 supplies commands to a video front end 2234, which couples with media engine 2237. In at least one embodiment, media engine 2237 includes a Video Quality Engine (VQE) 2230 for video and image post-processing and a multi-format encode/decode (MFX) 2233 engine to provide hardware accelerated encoding and decoding of media data.
In at least one embodiment, geometry pipeline 2236 and media engine 2237 each generate execution threads for thread execution resources provided by at least one graphics core 2280.

In at least one embodiment, graphics processor 2200 includes scalable thread execution resources featuring graphics cores 2280A-2280N (which can be modular and are sometimes referred to as core slices), each having multiple sub-cores 2250A-2250N, 2260A-2260N (sometimes referred to as core sub-slices). In at least one embodiment, graphics processor 2200 can have any number of graphics cores 2280A through 2280N. In at least one embodiment, graphics processor 2200 includes a graphics core 2280A having at least a first sub-core 2250A and a second sub-core 2260A. In at least one embodiment, graphics processor 2200 is a low power processor with a single sub-core (e.g., 2250A). In at least one embodiment, graphics processor 2200 includes multiple graphics cores 2280A-2280N, each including a set of first sub-cores 2250A-2250N and a set of second sub-cores 2260A-2260N. In at least one embodiment, each sub-core in first sub-cores 2250A-2250N includes at least a first set of execution units 2252A-2252N and media/texture samplers 2254A-2254N. In at least one embodiment, each sub-core in second sub-cores 2260A-2260N includes at least a second set of execution units 2262A-2262N and samplers 2264A-2264N. In at least one embodiment, each sub-core 2250A-2250N, 2260A-2260N shares a set of shared resources 2270A-2270N. In at least one embodiment, shared resources include shared cache memory and pixel operation logic.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
In at least one embodiment, inference and/or training logic 715 may be used in graphics processor 2200 for inference or prediction operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 22 is used to implement the techniques and/or functions described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to train at least one untrained or partially trained neural network. In at least one embodiment, the inference and/or training logic uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one inference operation. In at least one embodiment, graphics processor 2200 of FIG. 22 is utilized to implement the techniques and/or functions described with respect to FIGS. 1-6.

FIG. 23 is a block diagram illustrating a micro-architecture for a processor 2300 that can include logic circuits to execute instructions, in accordance with at least one embodiment. In at least one embodiment, processor 2300 can execute instructions, including x86 instructions, ARM instructions, specialized instructions for application-specific integrated circuits (ASICs), etc.
In at least one embodiment, processor 2300 can include registers to store packed data, such as 64-bit wide MMX™ registers in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, Calif. In at least one embodiment, MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany single instruction multiple data ("SIMD") and streaming SIMD extensions ("SSE") instructions. In at least one embodiment, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, AVX, or beyond (referred to generically as "SSEx") technology can hold such packed data operands. In at least one embodiment, processor 2300 can execute instructions to accelerate machine learning or deep learning algorithms, training, or inferencing.

In at least one embodiment, processor 2300 includes an in-order front end ("front end") 2301 to fetch instructions to be executed and prepare instructions to be used later in a processor pipeline. In at least one embodiment, front end 2301 can include several units. In at least one embodiment, an instruction prefetcher 2326 fetches instructions from memory and feeds them to an instruction decoder 2328, which in turn decodes or interprets the instructions. For example, in at least one embodiment, instruction decoder 2328 decodes a received instruction into one or more operations called "micro-instructions" or "micro-operations" (also called "micro ops" or "uops") that a machine can execute. In at least one embodiment, instruction decoder 2328 parses an instruction into an opcode and corresponding data and control fields that may be used by the micro-architecture to perform operations in accordance with at least one embodiment. In at least one embodiment, a trace cache 2330 can assemble decoded uops into program-ordered sequences or traces in a uop queue 2334 for execution.
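The packed-data registers described above hold several narrow elements in one wide register, and a SIMD instruction operates on every lane at once. The sketch below models a 64-bit register holding four 16-bit lanes with a packed add that wraps per lane; the little-endian lane layout is an assumption made for the illustration.

```python
LANES, LANE_BITS = 4, 16            # four 16-bit lanes in a 64-bit "register"
LANE_MASK = (1 << LANE_BITS) - 1

def pack(values):
    """Pack four 16-bit values into one 64-bit register-like integer."""
    reg = 0
    for i, v in enumerate(values):
        reg |= (v & LANE_MASK) << (i * LANE_BITS)
    return reg

def unpack(reg):
    """Recover the four 16-bit lanes from the packed register."""
    return [(reg >> (i * LANE_BITS)) & LANE_MASK for i in range(LANES)]

def packed_add(a, b):
    """Packed 16-bit add: every lane is added independently, with per-lane
    wraparound, as a SIMD add on packed data would behave."""
    return pack([(x + y) & LANE_MASK for x, y in zip(unpack(a), unpack(b))])
```

Note that an overflow in one lane (e.g., 0xFFFF + 1) wraps within that lane only and never carries into its neighbor, which is what distinguishes a packed add from ordinary 64-bit addition.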
In at least one embodiment, when trace cache 2330 encounters a complex instruction, a microcode ROM 2332 provides the uops needed to complete the operation.

In at least one embodiment, some instructions can be converted into a single micro-op, whereas others need several micro-ops to complete a full operation. In at least one embodiment, if more than five micro-ops are needed to complete an instruction, instruction decoder 2328 can access microcode ROM 2332 to perform that instruction. In at least one embodiment, an instruction can be decoded into a small number of micro-ops for processing at instruction decoder 2328. In at least one embodiment, an instruction can be stored within microcode ROM 2332 should a number of micro-ops be needed to accomplish such operation. In at least one embodiment, trace cache 2330 refers to an entry point programmable logic array ("PLA") to determine a correct micro-instruction pointer for reading microcode sequences from microcode ROM 2332 to complete one or more instructions, in accordance with at least one embodiment. In at least one embodiment, after microcode ROM 2332 finishes sequencing micro-ops for an instruction, front end 2301 of a machine can resume fetching micro-ops from trace cache 2330.

In at least one embodiment, an out-of-order execution engine ("out of order engine") 2303 can prepare instructions for execution. In at least one embodiment, out-of-order execution logic has a number of buffers to smooth out and reorder the flow of instructions as they go down a pipeline and get scheduled for execution.
In at least one embodiment, out-of-order execution engine 2303 includes, without limitation, an allocator/register renamer 2340, a memory uop queue 2342, an integer/floating point uop queue 2344, a memory scheduler 2346, a fast scheduler 2302, a slow/general floating point scheduler ("slow/general FP scheduler") 2304, and a simple floating point scheduler ("simple FP scheduler") 2306. In at least one embodiment, fast scheduler 2302, slow/general floating point scheduler 2304, and simple floating point scheduler 2306 are also collectively referred to herein as "uop schedulers 2302, 2304, 2306." In at least one embodiment, allocator/register renamer 2340 allocates machine buffers and resources that each uop needs in order to execute. In at least one embodiment, allocator/register renamer 2340 renames logical registers onto entries in a register file. In at least one embodiment, allocator/register renamer 2340 also allocates an entry for each uop in one of two uop queues, in front of memory scheduler 2346 and uop schedulers 2302, 2304, 2306: memory uop queue 2342 for memory operations and integer/floating point uop queue 2344 for non-memory operations. In at least one embodiment, uop schedulers 2302, 2304, 2306 determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of execution resources the uops need to complete their operation. In at least one embodiment, fast scheduler 2302 can schedule on each half of a main clock cycle, while slow/general floating point scheduler 2304 and simple floating point scheduler 2306 can schedule once per main processor clock cycle.
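The dispatch decision described above, in which a uop is scheduled only when its input operands are ready and an execution resource it needs is free, can be modeled minimally as follows. The uop encoding and unit names are invented for illustration.

```python
def ready_uops(uops, completed, free_units):
    """Toy model of a uop scheduler's readiness check: pick uops whose
    source operands have all been produced AND whose execution unit has a
    free slot this cycle. `completed` is the set of produced operand tags;
    `free_units` maps unit name to available slots."""
    picked, free = [], dict(free_units)
    for uop in uops:
        if all(src in completed for src in uop["srcs"]) and free.get(uop["unit"], 0) > 0:
            picked.append(uop["name"])
            free[uop["unit"]] -= 1   # consume the dispatch slot
    return picked
```

A uop whose operand is still in flight, or whose unit has no free slot, simply waits for a later cycle, which is the arbitration the text describes.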
In at least one embodiment, uop schedulers 2302, 2304, 2306 arbitrate for dispatch ports to schedule uops for execution. In at least one embodiment, execution block 2311 includes, without limitation, integer register file/bypass network 2308, floating point register file/bypass network ("FP register file/bypass network") 2310, address generation units ("AGUs") 2312 and 2314, fast arithmetic logic units (ALUs) ("fast ALUs") 2316 and 2318, slow arithmetic logic unit ("slow ALU") 2320, floating point ALU ("FP") 2322, and floating point move unit ("FP move") 2324. In at least one embodiment, integer register file/bypass network 2308 and floating point register file/bypass network 2310 are also referred to herein as "register networks 2308, 2310." In at least one embodiment, AGUs 2312 and 2314, fast ALUs 2316 and 2318, slow ALU 2320, floating point ALU 2322, and floating point move unit 2324 are also referred to herein as "execution units 2312, 2314, 2316, 2318, 2320, 2322, and 2324." In at least one embodiment, execution block 2311 may include, without limitation, any number (including zero) and type of register files, bypass networks, address generation units, and execution units, in any combination. In at least one embodiment, register networks 2308, 2310 may be arranged between uop schedulers 2302, 2304, 2306 and execution units 2312, 2314, 2316, 2318, 2320, 2322, and 2324. In at least one embodiment, integer register file/bypass network 2308 performs integer operations. In at least one embodiment, floating point register file/bypass network 2310 performs floating point operations. In at least one embodiment, each of register networks 2308, 2310 may include, without limitation, a bypass network that can bypass or forward just-completed results that have not yet been written into the register file to new dependent uops.
In at least one embodiment, register networks 2308, 2310 may communicate data with each other. In at least one embodiment, integer register file/bypass network 2308 may include, without limitation, two separate register files: a first register file for the low-order 32 bits of data and a second register file for the high-order 32 bits of data. In at least one embodiment, floating point register file/bypass network 2310 may include, without limitation, 128-bit wide entries, because floating point instructions typically have operands from 64 to 128 bits in width. In at least one embodiment, execution units 2312, 2314, 2316, 2318, 2320, 2322, 2324 may execute instructions. In at least one embodiment, register networks 2308, 2310 store the integer and floating point data operand values that microinstructions need in order to execute. In at least one embodiment, processor 2300 may include, without limitation, any number and combination of execution units 2312, 2314, 2316, 2318, 2320, 2322, 2324. In at least one embodiment, floating point ALU 2322 and floating point move unit 2324 may execute floating point, MMX, SIMD, AVX, and SSE operations, or other operations, including special machine learning instructions. In at least one embodiment, floating point ALU 2322 may include, without limitation, a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro-ops. In at least one embodiment, instructions involving a floating point value may be handled by floating point hardware. In at least one embodiment, ALU operations may be passed to fast ALUs 2316, 2318. In at least one embodiment, fast ALUs 2316, 2318 may execute fast operations with an effective latency of half a clock cycle. In at least one embodiment, most complex integer operations go to slow ALU 2320, as slow ALU 2320 may include, without limitation, integer execution hardware for long-latency types of operations, such as a multiplier, shifts, flag logic, and branch processing.
In at least one embodiment, memory load/store operations may be executed by AGUs 2312, 2314. In at least one embodiment, fast ALU 2316, fast ALU 2318, and slow ALU 2320 may perform integer operations on 64-bit data operands. In at least one embodiment, fast ALU 2316, fast ALU 2318, and slow ALU 2320 may be implemented to support a variety of data bit sizes, including 16, 32, 128, 256, and so on. In at least one embodiment, floating point ALU 2322 and floating point move unit 2324 may be implemented to support a range of operands having bits of various widths, such as 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions. In at least one embodiment, uop schedulers 2302, 2304, 2306 dispatch dependent operations before a parent load has finished executing. In at least one embodiment, because uops may be speculatively scheduled and executed in processor 2300, processor 2300 may also include logic to handle memory misses. In at least one embodiment, if a data load misses in the data cache, there may be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. In at least one embodiment, a replay mechanism tracks and re-executes instructions that used incorrect data. In at least one embodiment, dependent operations may need to be replayed, while independent operations may be allowed to complete. In at least one embodiment, the schedulers and replay mechanism of at least one embodiment of a processor may also be designed to catch instruction sequences for text string comparison operations. In at least one embodiment, a "register" may refer to an on-board processor storage location that can be used as part of an instruction to identify an operand. In at least one embodiment, registers may be those that are usable from outside of a processor (from a programmer's perspective). In at least one embodiment, registers might not be limited to a particular type of circuit.
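The speculative-dispatch-and-replay behavior described above can be sketched in a few lines. This is a simplified hypothetical model, not the patented mechanism: the uop tuples, the `load_hits_cache` map, and the single-pass replay are invented for illustration of the idea that dependents of a missed load are re-executed while independent operations complete.

```python
# Hypothetical sketch (not the patented design): dependents are dispatched
# assuming the parent load hits the cache; when it misses, only the
# (transitively) dependent uops are tracked and replayed.

def execute(uops, load_hits_cache):
    """Each uop is (name, depends_on). Returns completion order, with
    dependents of a missed load replayed after the miss is resolved."""
    completed, replay = [], []
    for name, parent in uops:
        if parent is not None and parent in replay:
            replay.append(name)          # transitively poisoned: replay too
        elif parent is None or load_hits_cache.get(parent, True):
            completed.append(name)       # result is (speculatively) correct
        else:
            replay.append(name)          # consumed bad data: must replay
    # Miss resolved: re-execute the tracked uops with correct data.
    completed.extend(replay)
    return completed

order = execute(
    [("load_a", None), ("add_b", "load_a"), ("mul_c", "add_b"), ("sub_d", None)],
    load_hits_cache={"load_a": False},
)
assert order == ["load_a", "sub_d", "add_b", "mul_c"]
```

Note how `sub_d`, which is independent of the missed load, completes ahead of the replayed `add_b` and `mul_c`, matching the text's distinction between independent and dependent operations.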
Rather, in at least one embodiment, a register may store data, provide data, and perform the functions described herein. In at least one embodiment, the registers described herein may be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, and so on. In at least one embodiment, integer registers store 32-bit integer data. A register file of at least one embodiment also contains eight multimedia SIMD registers for packed data. Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into execution block 2311 and other memories or registers, shown or not shown. For example, in at least one embodiment, the training and/or inference techniques described herein may use one or more of the ALUs shown in execution block 2311. Moreover, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of execution block 2311 to perform one or more of the machine learning algorithms, neural network architectures, use cases, or training techniques described herein. In at least one embodiment, at least one component shown or described with respect to FIG. 23 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1.
In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inference operation using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, processor 2300 of FIG. 23 is utilized to implement the techniques and/or functions described with respect to FIGS. 1-6. FIG. 24 illustrates a deep learning application processor 2400, according to at least one embodiment. In at least one embodiment, deep learning application processor 2400 uses instructions that, when executed by deep learning application processor 2400, cause deep learning application processor 2400 to perform some or all of the processes and techniques described throughout this disclosure. In at least one embodiment, deep learning application processor 2400 is an application-specific integrated circuit (ASIC). In at least one embodiment, application processor 2400 performs matrix multiply operations either "hard-wired" into hardware, as a result of executing one or more instructions, or both.
In at least one embodiment, deep learning application processor 2400 includes, without limitation, processing clusters 2410(1)-2410(12), inter-chip links ("ICLs") 2420(1)-2420(12), inter-chip controllers ("ICCs") 2430(1)-2430(2), second generation high bandwidth memory ("HBM2") 2440(1)-2440(4), memory controllers ("Mem Ctrlrs") 2442(1)-2442(4), high bandwidth memory physical layer ("HBM PHY") 2444(1)-2444(4), a management-controller central processing unit ("management-controller CPU") 2450, a serial peripheral interface, inter-integrated circuit, and general purpose input/output block ("SPI, I2C, GPIO") 2460, a peripheral component interconnect express controller and direct memory access block ("PCIe controller and DMA") 2470, and a sixteen-lane peripheral component interconnect express port ("PCI Express x 16") 2480. In at least one embodiment, processing clusters 2410 may perform deep learning operations, including inference or prediction operations based on weight parameters computed with one or more training techniques, including those described herein. In at least one embodiment, each processing cluster 2410 may include, without limitation, any number and type of processors. In at least one embodiment, deep learning application processor 2400 may include any number and type of processing clusters 2410. In at least one embodiment, inter-chip links 2420 are bi-directional. In at least one embodiment, inter-chip links 2420 and inter-chip controllers 2430 enable multiple deep learning application processors 2400 to exchange information, including activation information resulting from execution of one or more machine learning algorithms embodied in one or more neural networks. In at least one embodiment, deep learning application processor 2400 may include any number (including zero) and type of ICLs 2420 and ICCs 2430. In at least one embodiment, HBM2s 2440 provide a total of 32 gigabytes (GB) of memory.
In at least one embodiment, HBM2 2440(i) is associated with both memory controller 2442(i) and HBM PHY 2444(i), where "i" is an arbitrary integer. In at least one embodiment, any number of HBM2s 2440 may provide any type and total amount of high bandwidth memory and may be associated with any number (including zero) and type of memory controllers 2442 and HBM PHYs 2444. In at least one embodiment, SPI, I2C, GPIO 2460, PCIe controller and DMA 2470, and/or PCIe 2480 may be replaced with any number and type of blocks that enable any number and type of communication standards in any technically feasible fashion. Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, deep learning application processor 2400 is used to train a machine learning model, such as a neural network, to predict or infer information provided to deep learning application processor 2400. In at least one embodiment, deep learning application processor 2400 is used to infer or predict information based on a trained machine learning model (e.g., a neural network) that has been trained by another processor or system or by deep learning application processor 2400. In at least one embodiment, processor 2400 may be used to perform one or more of the neural network use cases described herein. In at least one embodiment, at least one component shown or described with respect to FIG. 24 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1.
In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inference operation using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, deep learning application processor 2400 of FIG. 24 is utilized to implement the techniques and/or functions described with respect to FIGS. 1-6. FIG. 25 is a block diagram of a neuromorphic processor 2500, according to at least one embodiment. In at least one embodiment, neuromorphic processor 2500 may receive one or more inputs from sources external to neuromorphic processor 2500. In at least one embodiment, these inputs may be transmitted to one or more neurons 2502 within neuromorphic processor 2500. In at least one embodiment, neurons 2502 and components thereof may be implemented using circuitry or logic, including one or more arithmetic logic units (ALUs). In at least one embodiment, neuromorphic processor 2500 may include, without limitation, thousands or millions of instances of neurons 2502, although any suitable number of neurons 2502 may be used. In at least one embodiment, each instance of neuron 2502 may include a neuron input 2504 and a neuron output 2506. In at least one embodiment, neurons 2502 may generate outputs that may be transmitted to inputs of other instances of neurons 2502.
For example, in at least one embodiment, neuron inputs 2504 and neuron outputs 2506 may be interconnected via synapses 2508. In at least one embodiment, neurons 2502 and synapses 2508 may be interconnected such that neuromorphic processor 2500 operates to process or analyze information received by neuromorphic processor 2500. In at least one embodiment, neurons 2502 may transmit an output pulse (or "fire" or "spike") when inputs received through neuron input 2504 exceed a threshold. In at least one embodiment, neurons 2502 may sum or integrate signals received at neuron inputs 2504. For example, in at least one embodiment, neurons 2502 may be implemented as leaky integrate-and-fire neurons, wherein if a sum (referred to as a "membrane potential") exceeds a threshold value, neuron 2502 may generate an output (or "fire") using a transfer function, such as a sigmoid or threshold function. In at least one embodiment, a leaky integrate-and-fire neuron may sum signals received at neuron inputs 2504 into a membrane potential and may also apply a decay factor (or leak) to reduce the membrane potential. In at least one embodiment, a leaky integrate-and-fire neuron may fire if multiple input signals are received at neuron inputs 2504 rapidly enough to exceed a threshold value (i.e., before the membrane potential decays too low to fire). In at least one embodiment, neurons 2502 may be implemented using circuits or logic that receive inputs, integrate inputs into a membrane potential, and decay the membrane potential. In at least one embodiment, inputs may be averaged, or any other suitable transfer function may be used. Additionally, in at least one embodiment, neurons 2502 may include, without limitation, comparator circuits or logic that generate an output spike at neuron output 2506 when the result of applying a transfer function to neuron input 2504 exceeds a threshold.
In at least one embodiment, once neuron 2502 fires, it may disregard previously received input information by, for example, resetting the membrane potential to 0 or another suitable default value. In at least one embodiment, once the membrane potential is reset to 0, neuron 2502 may resume normal operation after a suitable period of time (or refractory period). In at least one embodiment, neurons 2502 may be interconnected through synapses 2508. In at least one embodiment, synapses 2508 may operate to transmit signals from an output of a first neuron 2502 to an input of a second neuron 2502. In at least one embodiment, neurons 2502 may transmit information over more than one instance of synapse 2508. In at least one embodiment, one or more instances of neuron output 2506 may be connected, via an instance of synapse 2508, to an instance of neuron input 2504 in the same neuron 2502. In at least one embodiment, an instance of neuron 2502 generating an output to be transmitted over an instance of synapse 2508 may be referred to as a "presynaptic neuron" with respect to that instance of synapse 2508. In at least one embodiment, an instance of neuron 2502 receiving an input transmitted over an instance of synapse 2508 may be referred to as a "postsynaptic neuron" with respect to that instance of synapse 2508. Because an instance of neuron 2502 may receive inputs from one or more instances of synapse 2508 and may also transmit outputs over one or more instances of synapse 2508, a single instance of neuron 2502 may therefore be both a "presynaptic neuron" and a "postsynaptic neuron" with respect to various instances of synapses 2508, in at least one embodiment. In at least one embodiment, neurons 2502 may be organized into one or more layers. In at least one embodiment, each instance of neuron 2502 may have one neuron output 2506 that may fan out through one or more synapses 2508 to one or more neuron inputs 2504.
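The integrate, decay, fire, and reset behavior of the leaky integrate-and-fire neuron described above can be sketched as follows. This is a minimal software model for illustration only, not the claimed circuitry; the threshold and decay values are arbitrary assumptions.

```python
# Hypothetical sketch (not the claimed circuit): a leaky integrate-and-fire
# neuron that sums inputs into a membrane potential, applies a decay factor
# each timestep, fires when a threshold is exceeded, then resets to 0.

class LIFNeuron:
    def __init__(self, threshold=1.0, decay=0.9):
        self.threshold = threshold
        self.decay = decay
        self.potential = 0.0             # "membrane potential"

    def step(self, inputs):
        """Integrate one timestep of inputs; return True if the neuron fires."""
        self.potential = self.potential * self.decay + sum(inputs)
        if self.potential > self.threshold:
            self.potential = 0.0         # reset: disregard prior inputs
            return True
        return False

n = LIFNeuron(threshold=1.0, decay=0.5)
assert n.step([0.6]) is False            # potential = 0.6, below threshold
assert n.step([0.8]) is True             # 0.3 + 0.8 = 1.1 > 1.0 -> fire
assert n.potential == 0.0                # membrane potential reset after firing
```

The two-step example shows the timing sensitivity noted in the text: a second input arriving before the potential has decayed away pushes the sum over the threshold, whereas a slower input train would not.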
In at least one embodiment, neuron outputs 2506 of neurons 2502 in a first layer 2510 may be connected to neuron inputs 2504 of neurons 2502 in a second layer 2512. In at least one embodiment, layer 2510 may be referred to as a "feed forward" layer. In at least one embodiment, each instance of neuron 2502 in an instance of first layer 2510 may fan out to each instance of neuron 2502 in second layer 2512. In at least one embodiment, first layer 2510 may be referred to as a "fully connected feed forward layer." In at least one embodiment, each instance of neuron 2502 in an instance of second layer 2512 may fan out to fewer than all instances of neuron 2502 in a third layer 2514. In at least one embodiment, second layer 2512 may be referred to as a "sparsely connected feed forward layer." In at least one embodiment, neurons 2502 in second layer 2512 may fan out to neurons 2502 in multiple other layers, including to neurons 2502 also in second layer 2512. In at least one embodiment, second layer 2512 may be referred to as a "recurrent layer." In at least one embodiment, neuromorphic processor 2500 may include, without limitation, any suitable combination of recurrent layers and feed forward layers, including, without limitation, both sparsely connected feed forward layers and fully connected feed forward layers. In at least one embodiment, neuromorphic processor 2500 may include, without limitation, a reconfigurable interconnect architecture or dedicated hard-wired interconnects to connect synapses 2508 to neurons 2502. In at least one embodiment, neuromorphic processor 2500 may include, without limitation, circuitry or logic that allows synapses to be allocated to different neurons 2502 as needed, based on neural network topology and neuron fan-in/fan-out. For example, in at least one embodiment, synapses 2508 may be connected to neurons 2502 using an interconnect fabric, such as a network-on-chip, or with dedicated connections.
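The distinction between a fully connected and a sparsely connected feed-forward layer can be made concrete with a short sketch. This is a hypothetical illustration, not the patented interconnect: synapses are modeled as simple `(presynaptic, postsynaptic)` pairs, and the round-robin sparse wiring pattern is an invented example of a fan-out smaller than the next layer.

```python
# Hypothetical sketch (not the patented interconnect): synapses as
# (pre, post) pairs, contrasting a fully connected feed-forward layer
# with a sparsely connected one.

def fully_connected(pre_layer, post_layer):
    """Every neuron output fans out to every neuron input in the next layer."""
    return [(pre, post) for pre in pre_layer for post in post_layer]

def sparsely_connected(pre_layer, post_layer, fan_out=2):
    """Each neuron output fans out to fewer than all neurons in the next layer."""
    return [(pre, post_layer[(i + k) % len(post_layer)])
            for i, pre in enumerate(pre_layer) for k in range(fan_out)]

layer1, layer2, layer3 = ["a", "b", "c"], ["d", "e", "f"], ["g", "h", "i"]
full = fully_connected(layer1, layer2)
sparse = sparsely_connected(layer2, layer3)
assert len(full) == 9                    # 3 x 3 synapses
assert len(sparse) == 6                  # 3 neurons x fan-out of 2
assert ("a", "d") in full and ("d", "g") in sparse
```

A recurrent layer, in this representation, would simply be one whose synapse list includes pairs with both endpoints in the same layer.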
In at least one embodiment, synaptic interconnects and components thereof may be implemented using circuitry or logic. In at least one embodiment, at least one component shown or described with respect to FIG. 25 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inference operation using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, neuromorphic processor 2500 of FIG. 25 is utilized to implement the techniques and/or functions described with respect to FIGS. 1-6. FIG. 26 is a block diagram of a processing system, according to at least one embodiment. In at least one embodiment, system 2600 includes one or more processors 2602 and one or more graphics processors 2608, and may be a single-processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 2602 or processor cores 2607.
In at least one embodiment, system 2600 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices. In at least one embodiment, system 2600 may include, or be incorporated within, a server-based gaming platform or a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In at least one embodiment, system 2600 is a mobile phone, a smart phone, a tablet computing device, or a mobile Internet device. In at least one embodiment, processing system 2600 may also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, a smart eyewear device, an augmented reality device, or a virtual reality device. In at least one embodiment, processing system 2600 is a television or set top box device having one or more processors 2602 and a graphical interface generated by one or more graphics processors 2608. In at least one embodiment, one or more processors 2602 each include one or more processor cores 2607 to process instructions which, when executed, perform operations for system and user software. In at least one embodiment, each of one or more processor cores 2607 is configured to process a specific instruction sequence 2609. In at least one embodiment, instruction sequence 2609 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). In at least one embodiment, processor cores 2607 may each process a different instruction sequence 2609, which may include instructions that facilitate emulation of other instruction sequences. In at least one embodiment, processor core 2607 may also include other processing devices, such as a digital signal processor (DSP). In at least one embodiment, processor 2602 includes cache memory 2604.
In at least one embodiment, processor 2602 may have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory is shared among various components of processor 2602. In at least one embodiment, processor 2602 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 2607 using known cache coherence techniques. In at least one embodiment, register file 2606 is additionally included in processor 2602 and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). In at least one embodiment, register file 2606 may include general-purpose registers or other registers. In at least one embodiment, one or more processors 2602 are coupled with one or more interface buses 2610 to transmit communication signals, such as address, data, or control signals, between processor 2602 and other components in system 2600. In at least one embodiment, interface bus 2610 may be a processor bus, such as a version of a Direct Media Interface (DMI) bus. In at least one embodiment, interface bus 2610 is not limited to a DMI bus and may include one or more peripheral component interconnect buses (e.g., PCI, PCI Express), memory buses, or other types of interface buses. In at least one embodiment, processor 2602 includes an integrated memory controller 2616 and a platform controller hub 2630. In at least one embodiment, memory controller 2616 facilitates communication between a memory device and other components of system 2600, while platform controller hub (PCH) 2630 provides connections to I/O devices via a local I/O bus.
In at least one embodiment, memory device 2620 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, a phase-change memory device, or some other memory device having suitable performance to serve as process memory. In at least one embodiment, memory device 2620 can operate as system memory for system 2600, to store data 2622 and instructions 2621 for use when one or more processors 2602 execute an application or process. In at least one embodiment, memory controller 2616 also couples with an optional external graphics processor 2612, which may communicate with one or more graphics processors 2608 in processor 2602 to perform graphics and media operations. In at least one embodiment, a display device 2611 can connect to processor 2602. In at least one embodiment, display device 2611 can include one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In at least one embodiment, display device 2611 can include a head mounted display (HMD), such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications. In at least one embodiment, platform controller hub 2630 enables peripherals to connect to memory device 2620 and processor 2602 via a high-speed I/O bus. In at least one embodiment, I/O peripherals include, but are not limited to, an audio controller 2646, a network controller 2634, a firmware interface 2628, a wireless transceiver 2626, touch sensors 2625, and a data storage device 2624 (e.g., a hard disk drive, flash memory, etc.). In at least one embodiment, data storage device 2624 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a peripheral component interconnect bus (e.g., PCI, PCI Express).
In at least one embodiment, touch sensors 2625 can include touch screen sensors, pressure sensors, or fingerprint sensors. In at least one embodiment, wireless transceiver 2626 can be a WiFi transceiver, a Bluetooth transceiver, or a mobile network transceiver, such as a 3G, 4G, or Long Term Evolution (LTE) transceiver. In at least one embodiment, firmware interface 2628 enables communication with system firmware and can be, for example, a unified extensible firmware interface (UEFI). In at least one embodiment, network controller 2634 can enable a network connection to a wired network. In at least one embodiment, a high-performance network controller (not shown) couples with interface bus 2610. In at least one embodiment, audio controller 2646 is a multi-channel high definition audio controller. In at least one embodiment, system 2600 includes an optional legacy I/O controller 2640 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to system 2600. In at least one embodiment, platform controller hub 2630 can also connect to one or more Universal Serial Bus (USB) controllers 2642 connecting input devices, such as keyboard and mouse 2643 combinations, a camera 2644, or other USB input devices. In at least one embodiment, an instance of memory controller 2616 and platform controller hub 2630 may be integrated into a discrete external graphics processor, such as external graphics processor 2612. In at least one embodiment, platform controller hub 2630 and/or memory controller 2616 may be external to one or more processors 2602. For example, in at least one embodiment, system 2600 can include an external memory controller 2616 and platform controller hub 2630, which may be configured as a memory controller hub and a peripheral controller hub within a system chipset that communicates with processor 2602. Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments.
Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into graphics processor 2608. For example, in at least one embodiment, the training and/or inference techniques described herein may use one or more of the ALUs embodied in a 3D pipeline. Moreover, in at least one embodiment, the inference and/or training operations described herein may be done using logic other than the logic illustrated in FIG. 7A or 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2608 to perform one or more of the machine learning algorithms, neural network architectures, use cases, or training techniques described herein. In at least one embodiment, at least one component shown or described with respect to FIG. 26 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inference operation using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, system 2600 of FIG.
26 is utilized to implement the techniques and/or functionality described with respect to FIGS. 1-6. FIG. 27 is a block diagram of a processor 2700 having one or more processor cores 2702A-2702N, an integrated memory controller 2714, and an integrated graphics processor 2708, according to at least one embodiment. In at least one embodiment, processor 2700 can include additional cores up to and including additional core 2702N, represented by the dashed boxes. In at least one embodiment, each of processor cores 2702A-2702N includes one or more internal cache units 2704A-2704N. In at least one embodiment, each processor core also has access to one or more shared cache units 2706. In at least one embodiment, internal cache units 2704A-2704N and shared cache units 2706 represent a cache memory hierarchy within processor 2700. In at least one embodiment, cache memory units 2704A-2704N may include at least one level of instruction and data cache within each processor core, as well as additional levels of cache, such as Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as an LLC. In at least one embodiment, cache coherence logic maintains coherence between various cache units 2706 and 2704A-2704N. In at least one embodiment, processor 2700 may also include a set of one or more bus controller units 2716 and a system agent core 2710. In at least one embodiment, bus controller units 2716 manage a set of peripheral buses, such as one or more PCI or PCI Express buses. In at least one embodiment, system agent core 2710 provides management functionality for various processor components. In at least one embodiment, system agent core 2710 includes one or more integrated memory controllers 2714 to manage access to various external memory devices (not shown). In at least one embodiment, one or more of processor cores 2702A-2702N include support for simultaneous multithreading.
In at least one embodiment, system agent core 2710 includes components for coordinating and operating cores 2702A-2702N during multithreaded processing. In at least one embodiment, system agent core 2710 may additionally include a power control unit (PCU), which includes logic and components for regulating one or more power states of processor cores 2702A-2702N and graphics processor 2708.

In at least one embodiment, processor 2700 additionally includes graphics processor 2708 for executing graphics processing operations. In at least one embodiment, graphics processor 2708 is coupled with shared cache units 2706 and system agent core 2710, including one or more integrated memory controllers 2714. In at least one embodiment, system agent core 2710 also includes a display controller 2711 for driving graphics processor output to one or more coupled displays. In at least one embodiment, display controller 2711 may also be a separate module coupled with graphics processor 2708 via at least one interconnect, or may be integrated within graphics processor 2708.

In at least one embodiment, a ring-based interconnect unit 2712 is used to couple internal components of processor 2700. In at least one embodiment, alternative interconnect units may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques. In at least one embodiment, graphics processor 2708 is coupled with ring interconnect 2712 via I/O link 2713.

In at least one embodiment, I/O link 2713 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect that facilitates communication between various processor components and a high-performance embedded memory module 2718, such as an eDRAM module.
In at least one embodiment, each of processor cores 2702A-2702N and graphics processor 2708 use embedded memory module 2718 as a shared last level cache.

In at least one embodiment, processor cores 2702A-2702N are homogeneous cores executing a common instruction set architecture. In at least one embodiment, processor cores 2702A-2702N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 2702A-2702N execute a common instruction set, while one or more other cores of processor cores 2702A-2702N execute a subset of the common instruction set or a different instruction set. In at least one embodiment, processor cores 2702A-2702N are heterogeneous in terms of microarchitecture, where one or more cores having relatively higher power consumption are coupled with one or more cores having lower power consumption. In at least one embodiment, processor 2700 may be implemented on one or more chips or as an SoC integrated circuit.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, some or all of inference and/or training logic 715 may be incorporated into graphics processor 2708. For example, in at least one embodiment, the training and/or inference techniques described herein may use one or more of the ALUs embodied in the 3D pipeline, processor cores 2702A-2702N, shared function logic, or other logic in FIG. 27. Further, in at least one embodiment, the inference and/or training operations described herein may be performed using logic other than that shown in FIG. 7A or 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of processor 2700 to execute one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
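As an illustrative aside, the ISA heterogeneity described above (some cores executing a common instruction set, others only a subset or a different set) implies that a scheduler may only place a workload on a core whose supported instruction set covers the instructions the workload uses. The sketch below is a hypothetical illustration of that constraint; the core and instruction names are invented.

```python
# Hypothetical sketch: selecting cores eligible to run a workload when
# cores are heterogeneous at the ISA level. A workload may only run on a
# core whose supported instruction set is a superset of the instructions
# the workload uses.
def eligible_cores(cores, workload_isa):
    """Return names of cores whose ISA covers every instruction used."""
    return [name for name, isa in cores.items() if workload_isa <= isa]

common = {"add", "mul", "ld", "st", "branch"}
cores = {
    "big0": common | {"vec_fma"},   # full common set plus an extension
    "little0": common - {"mul"},    # executes only a subset
}
print(eligible_cores(cores, {"add", "mul"}))   # only big0 qualifies
```

A workload that uses only the subset common to all cores remains eligible everywhere, which is the practical benefit of sharing a common base instruction set.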
In at least one embodiment, at least one component shown or described with respect to FIG. 27 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to train at least one untrained or partially trained neural network. In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one inference operation. In at least one embodiment, processor 2700 of FIG. 27 is utilized to implement the techniques and/or functionality described with respect to FIGS. 1-6.

FIG. 28 is a block diagram of a graphics processor 2800, which may be a discrete graphics processing unit or a graphics processor integrated with a plurality of processing cores. In at least one embodiment, graphics processor 2800 communicates via a memory-mapped I/O interface with registers on graphics processor 2800, using commands placed into memory. In at least one embodiment, graphics processor 2800 includes a memory interface 2814 for accessing memory. In at least one embodiment, memory interface 2814 is an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.

In at least one embodiment, graphics processor 2800 also includes a display controller 2802 for driving display output data to a display device 2820.
In at least one embodiment, display controller 2802 includes hardware for one or more overlay planes for display device 2820 and composition of multiple layers of video or user interface elements. In at least one embodiment, display device 2820 can be an internal or external display device. In at least one embodiment, display device 2820 is a head-mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. In at least one embodiment, graphics processor 2800 includes a video codec engine 2806 for encoding, decoding, or transcoding media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG and Motion JPEG (MJPEG) formats.

In at least one embodiment, graphics processor 2800 includes a block image transfer (BLIT) engine 2804 for performing two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in at least one embodiment, 2D graphics operations are performed using one or more components of a graphics processing engine (GPE) 2810. In at least one embodiment, GPE 2810 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

In at least one embodiment, GPE 2810 includes a 3D pipeline 2812 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). In at least one embodiment, 3D pipeline 2812 includes programmable and fixed function elements that perform various tasks and/or spawn execution threads to a 3D/Media subsystem 2815.
While 3D pipeline 2812 can be used to perform media operations, in at least one embodiment, GPE 2810 also includes a media pipeline 2816 that is used to perform media operations, such as video post-processing and image enhancement.

In at least one embodiment, media pipeline 2816 includes fixed function or programmable logic units for performing one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, video codec engine 2806. In at least one embodiment, media pipeline 2816 additionally includes a thread spawning unit for spawning threads for execution on 3D/Media subsystem 2815. In at least one embodiment, spawned threads perform computations for media operations on one or more graphics execution units included in 3D/Media subsystem 2815.

In at least one embodiment, 3D/Media subsystem 2815 includes logic for executing threads spawned by 3D pipeline 2812 and media pipeline 2816. In at least one embodiment, 3D pipeline 2812 and media pipeline 2816 send thread execution requests to 3D/Media subsystem 2815, which includes thread dispatch logic for arbitrating and dispatching various requests to available thread execution resources. In at least one embodiment, the execution resources include an array of graphics execution units for processing 3D and media threads. In at least one embodiment, 3D/Media subsystem 2815 includes one or more internal caches for thread instructions and data. In at least one embodiment, subsystem 2815 also includes shared memory, including registers and addressable memory, for sharing data between threads and storing output data.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
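As an illustrative aside, the thread dispatch logic described above (arbitrating requests from the 3D and media pipelines and pairing them with available execution resources) can be sketched as follows. The round-robin arbitration policy here is purely an assumption for illustration; the document does not specify how the hardware arbitrates.

```python
# Minimal sketch of dispatch arbitration: requests from two pipelines are
# alternated (a round-robin assumption) and each granted request is paired
# with a free execution unit until requests or units run out.
from collections import deque

def dispatch(requests_3d, requests_media, free_units):
    """Alternate between the two request queues, pairing each granted
    request with a free execution unit."""
    q3d, qm = deque(requests_3d), deque(requests_media)
    units = deque(free_units)
    assignments = []
    take_3d = True
    while units and (q3d or qm):
        queue = q3d if (take_3d and q3d) or not qm else qm
        assignments.append((queue.popleft(), units.popleft()))
        take_3d = not take_3d
    return assignments

print(dispatch(["vs0", "ps0"], ["decode0"], ["EU0", "EU1", "EU2"]))
```

When one queue empties, the arbiter drains the other; when units run out, remaining requests wait, which mirrors dispatching only to available thread execution resources.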
In at least one embodiment, some or all of inference and/or training logic 715 may be incorporated into graphics processor 2800. For example, in at least one embodiment, the training and/or inference techniques described herein may use one or more of the ALUs embodied in 3D pipeline 2812. Further, in at least one embodiment, the inference and/or training operations described herein may be performed using logic other than that shown in FIG. 7A or 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2800 to execute one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 28 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to train at least one untrained or partially trained neural network. In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one inference operation. In at least one embodiment, graphics processor 2800 of FIG. 28 is utilized to implement the techniques and/or functionality described with respect to FIGS. 1-6.

FIG.
29 is a block diagram of a graphics processing engine 2910 of a graphics processor in accordance with at least one embodiment. In at least one embodiment, graphics processing engine (GPE) 2910 is a version of GPE 2810 shown in FIG. 28. In at least one embodiment, media pipeline 2916 is optional and may not be explicitly included within GPE 2910. In at least one embodiment, a separate media and/or image processor is coupled to GPE 2910.

In at least one embodiment, GPE 2910 is coupled to or includes a command streamer 2903, which provides a command stream to a 3D pipeline 2912 and/or media pipeline 2916. In at least one embodiment, command streamer 2903 is coupled to memory, which may be system memory, or one or more of internal cache memory and shared cache memory. In at least one embodiment, command streamer 2903 receives commands from memory and sends the commands to 3D pipeline 2912 and/or media pipeline 2916. In at least one embodiment, the commands are instructions, primitives, or micro-operations fetched from a ring buffer, which stores commands for 3D pipeline 2912 and media pipeline 2916. In at least one embodiment, the ring buffer may additionally include batch command buffers storing batches of multiple commands. In at least one embodiment, commands for 3D pipeline 2912 may also include references to data stored in memory, such as, but not limited to, vertex and geometry data for 3D pipeline 2912 and/or image data and memory objects for media pipeline 2916. In at least one embodiment, 3D pipeline 2912 and media pipeline 2916 process the commands and data by performing operations or by dispatching one or more threads of execution to a graphics core array 2914. In at least one embodiment, graphics core array 2914 includes one or more blocks of graphics cores (e.g., graphics core(s) 2915A, graphics core(s) 2915B), each block including one or more graphics cores.
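As an illustrative aside, the ring buffer that command streamer 2903 fetches from can be sketched as a fixed-size circular array with read and write indices. The layout, method names, and command names below are assumptions for illustration only, not the actual command format.

```python
# Hedged sketch of a command ring buffer: fixed-size storage with
# wrapping head (read) and tail (write) indices, consumed in FIFO order.
class RingBuffer:
    def __init__(self, size):
        self.buf = [None] * size
        self.head = 0          # next slot to read
        self.tail = 0          # next slot to write
        self.count = 0

    def push(self, cmd):
        if self.count == len(self.buf):
            raise RuntimeError("ring full")
        self.buf[self.tail] = cmd
        self.tail = (self.tail + 1) % len(self.buf)
        self.count += 1

    def pop(self):
        if self.count == 0:
            return None        # nothing for the command streamer to fetch
        cmd = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return cmd

ring = RingBuffer(4)
for cmd in ["3D_CMD", "MEDIA_CMD", "FLUSH_CMD"]:   # hypothetical names
    ring.push(cmd)
print(ring.pop())   # commands come out in submission order
```

The wrapping indices let producers keep appending commands without moving data, which is why ring buffers are a common transport for command streams.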
In at least one embodiment, each graphics core includes a set of graphics execution resources that includes general-purpose and graphics-specific execution logic for performing graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic, including inference and/or training logic 715 of FIGS. 7A and 7B.

In at least one embodiment, 3D pipeline 2912 includes fixed function and programmable logic for processing one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing instructions and dispatching execution threads to graphics core array 2914. In at least one embodiment, graphics core array 2914 provides a unified block of execution resources for use in processing shader programs. In at least one embodiment, multi-purpose execution logic (e.g., execution units) within graphics cores 2915A-2915B of graphics core array 2914 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.

In at least one embodiment, graphics core array 2914 also includes execution logic for performing media functions, such as video and/or image processing. In at least one embodiment, the execution units additionally include general-purpose logic that is programmable to perform parallel general-purpose computational operations in addition to graphics processing operations.

In at least one embodiment, output data generated by threads executing on graphics core array 2914 can output data to memory in a unified return buffer (URB) 2918. In at least one embodiment, URB 2918 can store data for multiple threads. In at least one embodiment, URB 2918 may be used to send data between different threads executing on graphics core array 2914.
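As an illustrative aside, the role of the unified return buffer described above (threads write output records into shared storage that other threads can read back) can be sketched as a keyed shared buffer. The class, handle names, and record layout below are hypothetical, invented purely for this illustration.

```python
# Minimal sketch of a unified-return-buffer-style shared structure:
# a producing thread writes an output record under a handle, and a
# consuming thread (or fixed function logic) retrieves it by that handle.
class UnifiedReturnBuffer:
    def __init__(self):
        self.entries = {}

    def write(self, handle, data):
        self.entries[handle] = data      # producer stores its output

    def read(self, handle):
        return self.entries.get(handle)  # consumer retrieves it (or None)

urb = UnifiedReturnBuffer()
urb.write("vs_out_0", {"position": (0.5, 0.5, 0.0)})   # hypothetical record
print(urb.read("vs_out_0"))
```

Keying records by handle is what lets unrelated threads exchange data without sharing private registers, which is the synchronization role the URB also plays with fixed function logic.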
In at least one embodiment, URB 2918 may additionally be used for synchronization between threads on graphics core array 2914 and fixed function logic within shared function logic 2920.

In at least one embodiment, graphics core array 2914 is scalable, such that graphics core array 2914 includes a variable number of graphics cores, each having a variable number of execution units based on a target power and performance level of GPE 2910. In at least one embodiment, execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed.

In at least one embodiment, graphics core array 2914 is coupled to shared function logic 2920 that includes multiple resources that are shared between graphics cores in graphics core array 2914. In at least one embodiment, shared functions performed by shared function logic 2920 are embodied in hardware logic units that provide specialized supplemental functionality to graphics core array 2914. In at least one embodiment, shared function logic 2920 includes, but is not limited to, a sampler unit 2921, a math unit 2922, and inter-thread communication (ITC) logic 2923. In at least one embodiment, one or more caches 2925 are included in, or coupled to, shared function logic 2920.

In at least one embodiment, a shared function is used if demand for a specialized function is insufficient for inclusion within graphics core array 2914. In at least one embodiment, a single instantiation of a specialized function is used in shared function logic 2920 and shared among other execution resources within graphics core array 2914. In at least one embodiment, certain shared functions within shared function logic 2920 that are used by graphics core array 2914 may be included within shared function logic 2926 within graphics core array 2914.
In at least one embodiment, shared function logic 2926 within graphics core array 2914 can include some or all logic within shared function logic 2920. In at least one embodiment, all logic elements within shared function logic 2920 may be duplicated within shared function logic 2926 of graphics core array 2914. In at least one embodiment, shared function logic 2920 is excluded in favor of shared function logic 2926 within graphics core array 2914.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, some or all of inference and/or training logic 715 may be incorporated into graphics processing engine 2910. For example, in at least one embodiment, the training and/or inference techniques described herein may use one or more of the ALUs embodied in 3D pipeline 2912, graphics core(s) 2915A, shared function logic 2926, shared function logic 2920, or other logic. Further, in at least one embodiment, the inference and/or training operations described herein may be performed using logic other than that shown in FIG. 7A or 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processing engine 2910 to execute one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 29 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6.
In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to train at least one untrained or partially trained neural network. In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one inference operation. In at least one embodiment, graphics processing engine 2910 of FIG. 29 is utilized to implement the techniques and/or functionality described with respect to FIGS. 1-6.

FIG. 30 is a block diagram of hardware logic of a graphics processor core 3000 in accordance with at least one embodiment described herein. In at least one embodiment, graphics processor core 3000 is included within a graphics core array. In at least one embodiment, graphics processor core 3000, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. In at least one embodiment, graphics processor core 3000 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. In at least one embodiment, each graphics core 3000 can include a fixed function block 3030 coupled with multiple sub-cores 3001A-3001F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic.
In at least one embodiment, fixed function block 3030 includes a geometry and fixed function pipeline 3036 that can be shared by all sub-cores in graphics processor 3000, for example, in lower performance and/or lower power graphics processor implementations. In at least one embodiment, geometry and fixed function pipeline 3036 includes a 3D fixed function pipeline, a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers.

In at least one embodiment, fixed function block 3030 also includes a graphics SoC interface 3037, a graphics microcontroller 3038, and a media pipeline 3039. In at least one embodiment, graphics SoC interface 3037 provides an interface between graphics core 3000 and other processor cores within a system on a chip integrated circuit. In at least one embodiment, graphics microcontroller 3038 is a programmable sub-processor that is configurable to manage various functions of graphics processor 3000, including thread dispatch, scheduling, and preemption. In at least one embodiment, media pipeline 3039 includes logic to facilitate decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. In at least one embodiment, media pipeline 3039 implements media operations via requests to compute or sampling logic within sub-cores 3001A-3001F.

In at least one embodiment, SoC interface 3037 enables graphics core 3000 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, system RAM, and/or embedded on-chip or on-package DRAM.
In at least one embodiment, SoC interface 3037 can also enable communication with fixed function devices within an SoC, such as camera imaging pipelines, and enables use of and/or implements global memory atomics that may be shared between graphics core 3000 and CPUs within an SoC. In at least one embodiment, graphics SoC interface 3037 can also implement power management controls for graphics processor core 3000 and enable an interface between a clock domain of graphics processor core 3000 and other clock domains within an SoC. In at least one embodiment, SoC interface 3037 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. In at least one embodiment, commands and instructions can be dispatched to media pipeline 3039 when media operations are to be performed, or to a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 3036, and/or geometry and fixed function pipeline 3014) when graphics processing operations are to be performed.

In at least one embodiment, graphics microcontroller 3038 can be configured to perform various scheduling and management tasks for graphics core 3000. In at least one embodiment, graphics microcontroller 3038 can perform graphics and/or compute workload scheduling on various graphics parallel engines within execution unit (EU) arrays 3002A-3002F, 3004A-3004F within sub-cores 3001A-3001F. In at least one embodiment, host software executing on a CPU core of an SoC including graphics core 3000 can dispatch a workload to one of multiple graphics processor paths, which invokes a scheduling operation on an appropriate graphics engine.
In at least one embodiment, scheduling operations include determining which workload to run next, submitting a workload to a command streamer, preempting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In at least one embodiment, graphics microcontroller 3038 can also facilitate low-power or idle states for graphics core 3000, providing graphics core 3000 with an ability to save and restore registers within graphics core 3000 across low-power state transitions independently of an operating system and/or graphics driver software on a system.

In at least one embodiment, graphics core 3000 may have greater than or fewer than the illustrated sub-cores 3001A-3001F, up to N modular sub-cores. For each set of N sub-cores, in at least one embodiment, graphics core 3000 can also include shared function logic 3010, shared and/or cache memory 3012, geometry/fixed function pipeline 3014, as well as additional fixed function logic 3016 to accelerate various graphics and compute processing operations. In at least one embodiment, shared function logic 3010 can include logic units (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each of the N sub-cores within graphics core 3000. In at least one embodiment, shared and/or cache memory 3012 can be a last level cache for N sub-cores 3001A-3001F within graphics core 3000 and can also serve as shared memory that is accessible by multiple sub-cores. In at least one embodiment, geometry/fixed function pipeline 3014 can be included instead of geometry/fixed function pipeline 3036 within fixed function block 3030 and can include similar logic units.

In at least one embodiment, graphics core 3000 includes additional fixed function logic 3016 that can include various fixed function acceleration logic for use by graphics core 3000.
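As an illustrative aside, the scheduling steps enumerated above for graphics microcontroller 3038 (determine the next workload, submit it, monitor its progress, and notify host software on completion) can be sketched as a simple priority-driven loop. The priority ordering, tick-based progress model, and all names below are assumptions for illustration, not the hardware's actual policy.

```python
# Hypothetical sketch of the described scheduling flow: workloads are
# ordered by priority, each is "submitted" and "monitored" in turn, and a
# completion notification is produced for host software.
import heapq

def run_schedule(workloads):
    """workloads: list of (priority, name, duration_ticks) tuples; lower
    priority value runs first. Returns completion notifications in order."""
    heap = list(workloads)
    heapq.heapify(heap)                        # determine what runs next
    notifications = []
    while heap:
        prio, name, ticks = heapq.heappop(heap)   # submit to command streamer
        for _ in range(ticks):                    # monitor progress
            pass
        notifications.append(f"{name} complete")  # notify host software
    return notifications

print(run_schedule([(2, "compute_a", 3), (1, "render_ui", 1)]))
```

Preemption is omitted here for brevity; in the described flow it would amount to pushing a partially run workload back onto the queue when a higher-priority one arrives.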
In at least one embodiment, additional fixed function logic 3016 includes an additional geometry pipeline for use in position-only shading. In position-only shading, at least two geometry pipelines exist: a full geometry pipeline within geometry and fixed function pipelines 3014, 3036, and a cull pipeline, which is an additional geometry pipeline that may be included within additional fixed function logic 3016. In at least one embodiment, the cull pipeline is a trimmed-down version of the full geometry pipeline. In at least one embodiment, the full pipeline and the cull pipeline can execute different instances of an application, each instance having a separate context. In at least one embodiment, position-only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example, in at least one embodiment, cull pipeline logic within additional fixed function logic 3016 can execute position shaders in parallel with a main application and generally generates critical results faster than the full pipeline, as the cull pipeline fetches and shades position attributes of vertices, without performing rasterization and rendering of pixels to a frame buffer. In at least one embodiment, the cull pipeline can use generated critical results to compute visibility information for all triangles, without regard to whether those triangles are culled. In at least one embodiment, the full pipeline (which in this instance may be referred to as a replay pipeline) can consume the visibility information to skip culled triangles and to shade only visible triangles, which are finally passed to a rasterization phase.

In at least one embodiment, additional fixed function logic 3016 can also include machine learning acceleration logic, such as fixed function matrix multiplication logic, for implementations including optimizations for machine learning training or inference.
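As an illustrative aside, the two-pass position-only shading flow described above can be sketched as a cull pass that computes per-triangle visibility from positions alone, followed by a replay pass that shades only surviving triangles. Backface culling via signed area stands in for the visibility test here, purely as an assumption; function names are invented.

```python
# Sketch of the cull/replay split: the cull pass touches only vertex
# positions (no pixel work), and the replay pass consumes its visibility
# flags to skip culled triangles entirely.
def signed_area(tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

def cull_pass(triangles):
    """Position-only pass: one visibility flag per triangle."""
    return [signed_area(t) > 0 for t in triangles]

def replay_pass(triangles, visibility):
    """Full pass: shade only the triangles the cull pass kept."""
    return [t for t, visible in zip(triangles, visibility) if visible]

tris = [
    [(0, 0), (1, 0), (0, 1)],   # counter-clockwise: front-facing
    [(0, 0), (0, 1), (1, 0)],   # clockwise: back-facing, culled
]
vis = cull_pass(tris)
print(replay_pass(tris, vis))   # only the front-facing triangle survives
```

Because the cull pass never rasterizes, it can run ahead of the replay pass, which is the latency-hiding benefit the passage attributes to position-only shading.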
In at least one embodiment, each graphics sub-core 3001A-3001F includes a set of execution resources that may be used to perform graphics operations, media operations, and compute operations in response to requests by graphics pipelines, media pipelines, or shader programs. In at least one embodiment, graphics sub-cores 3001A-3001F include multiple EU arrays 3002A-3002F, 3004A-3004F, thread dispatch and inter-thread communication (TD/IC) logic 3003A-3003F, 3D (e.g., texture) samplers 3005A-3005F, media samplers 3006A-3006F, shader processors 3007A-3007F, and shared local memory (SLM) 3008A-3008F. In at least one embodiment, EU arrays 3002A-3002F, 3004A-3004F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs. In at least one embodiment, TD/IC logic 3003A-3003F performs local thread dispatch and thread control operations for execution units within a sub-core and facilitates communication between threads executing on execution units of a sub-core. In at least one embodiment, 3D samplers 3005A-3005F can read texture or other 3D graphics related data into memory. In at least one embodiment, the 3D samplers can read texture data differently based on a configured sample state and a texture format associated with a given texture. In at least one embodiment, media samplers 3006A-3006F can perform similar read operations based on a type and format associated with media data. In at least one embodiment, each graphics sub-core 3001A-3001F can alternately include a unified 3D and media sampler. In at least one embodiment, threads executing on execution units within each of sub-cores 3001A-3001F can make use of shared local memory 3008A-3008F within each sub-core, enabling threads executing within a thread group to execute using a common pool of on-chip memory.
Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, some or all of inference and/or training logic 715 may be incorporated into graphics processor 3000. For example, in at least one embodiment, the training and/or inference techniques described herein may use one or more of the ALUs embodied in a 3D pipeline, graphics microcontroller 3038, geometry and fixed function pipelines 3014 and 3036, or other logic. Further, in at least one embodiment, the inference and/or training operations described herein may be performed using logic other than that shown in FIG. 7A or 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 3000 to execute one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 30 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to train at least one untrained or partially trained neural network.
In at least one embodiment, the inference and/or training logic uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one inference operation. In at least one embodiment, graphics processor core 3000 of FIG. 30 is utilized to implement the techniques and/or functions described with respect to FIGS. 1-6.

FIGS. 31A-31B illustrate thread execution logic 3100 including an array of processing elements of a graphics processor core, according to at least one embodiment. FIG. 31A illustrates at least one embodiment in which thread execution logic 3100 is used. FIG. 31B is a diagram illustrating exemplary internal details of a graphics execution unit 3108, in accordance with at least one embodiment.

As shown in FIG. 31A, in at least one embodiment, thread execution logic 3100 includes a scalable execution unit architecture including shader processor 3102, thread dispatcher 3104, instruction cache 3106, a scalable array of multiple execution units 3107A-3107N and 3108A-3108N, sampler 3110, data cache 3112, and data port 3114. In at least one embodiment, the scalable execution unit array can scale dynamically by enabling or disabling one or more execution units (e.g., any of execution units 3108A-N or 3107A-N) based on, for example, the computational requirements of a workload. In at least one embodiment, the scalable execution units are interconnected via an interconnect fabric linked to each of the execution units. In at least one embodiment, thread execution logic 3100 includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 3106, data port 3114, sampler 3110, and execution units 3107 or 3108.
In at least one embodiment, each execution unit (e.g., 3107A) is a stand-alone programmable general-purpose computing unit capable of executing multiple concurrent hardware threads while processing multiple data elements in parallel for each thread. In at least one embodiment, the arrays of execution units 3107 and/or 3108 are scalable to include any number of individual execution units.

In at least one embodiment, execution units 3107 and/or 3108 are primarily used to execute shader programs. In at least one embodiment, shader processor 3102 can process various shader programs and dispatch threads of execution associated with the shader programs via thread dispatcher 3104. In at least one embodiment, thread dispatcher 3104 contains logic for arbitrating thread initiation requests from the graphics and media pipelines and for instantiating requested threads on one or more of execution units 3107 and/or 3108. For example, in at least one embodiment, the geometry pipeline can dispatch vertex shaders, tessellation shaders, or geometry shaders to the thread execution logic for processing. In at least one embodiment, thread dispatcher 3104 can also handle run-time thread spawning requests from executing shader programs.

In at least one embodiment, execution units 3107 and/or 3108 support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct3D and OpenGL) are executed with minimal translation. In at least one embodiment, the execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, and/or vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders).
In at least one embodiment, each execution unit 3107 and/or 3108, which includes one or more arithmetic logic units (ALUs), is capable of multi-issue single instruction multiple data (SIMD) execution, and multithreaded operation enables an efficient execution environment despite higher-latency memory accesses. In at least one embodiment, each hardware thread within each execution unit has a dedicated high-bandwidth register file and an associated independent thread state. In at least one embodiment, execution is multi-issue per clock to pipelines capable of integer operations, single- and double-precision floating-point operations, SIMD branch capability, logical operations, transcendental operations, and various other operations. In at least one embodiment, while waiting for data from memory or one of the shared functions, dependency logic within execution units 3107 and/or 3108 causes a waiting thread to sleep until the requested data has been returned. In at least one embodiment, while a waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, in at least one embodiment, during a delay associated with vertex shader operations, an execution unit can execute a pixel shader, fragment shader, or another type of shader program, including a different vertex shader.

In at least one embodiment, each execution unit of execution units 3107 and/or 3108 operates on arrays of data elements. In at least one embodiment, the number of data elements is the "execution size," or number of channels, for an instruction. In at least one embodiment, an execution channel is a logical unit of execution for data element access, masking, and flow control within an instruction. In at least one embodiment, the number of channels may be independent of the number of physical arithmetic logic units (ALUs) or floating-point units (FPUs) of a particular graphics processor.
In at least one embodiment, execution units 3107 and/or 3108 may support integer and floating-point data types.

In at least one embodiment, the execution unit instruction set includes SIMD instructions. In at least one embodiment, various data elements can be stored in a register as packed data types, and the execution units process the various elements based on the data size of the elements. For example, in at least one embodiment, when operating on a 256-bit-wide vector, the 256 bits of the vector are stored in a register, and an execution unit operates on the vector as four separate 64-bit packed data elements (quad-word (QW) size data elements), eight separate 32-bit packed data elements (double-word (DW) size data elements), sixteen separate 16-bit packed data elements (word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, in at least one embodiment, different vector widths and register sizes are possible.

In at least one embodiment, one or more execution units can be combined into fused execution units 3109A-3109N having thread control logic (3111A-3111N) common to the fused EUs, such as execution unit 3107A fused with execution unit 3108A into fused execution unit 3109A. In at least one embodiment, multiple EUs can be fused into an EU group. In at least one embodiment, each EU in a fused EU group can be configured to execute a separate SIMD hardware thread, and the number of EUs in a fused EU group can vary according to various embodiments. In at least one embodiment, various SIMD widths can be executed per EU, including but not limited to SIMD8, SIMD16, and SIMD32. In at least one embodiment, each fused graphics execution unit 3109A-3109N includes at least two execution units.
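The packed-element counts described above follow directly from dividing the register width by the element width. A minimal sketch of that arithmetic (illustrative only, not vendor code):

```python
# Illustrative sketch: how a 256-bit register can be viewed as packed data
# elements of different sizes, as described in the text above.
REGISTER_BITS = 256

def lanes(element_bits: int) -> int:
    """Number of packed elements of a given size in one 256-bit register."""
    return REGISTER_BITS // element_bits

print(lanes(64))  # quad-word (QW) elements -> 4
print(lanes(32))  # double-word (DW) elements -> 8
print(lanes(16))  # word (W) elements -> 16
print(lanes(8))   # byte (B) elements -> 32
```

The same arithmetic extends to the other vector widths and register sizes the text contemplates.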
For example, in at least one embodiment, fused execution unit 3109A includes a first EU 3107A, a second EU 3108A, and thread control logic 3111A common to first EU 3107A and second EU 3108A. In at least one embodiment, thread control logic 3111A controls threads executing on fused graphics execution unit 3109A, allowing each EU within fused execution units 3109A-3109N to execute using a common instruction pointer register.

In at least one embodiment, one or more internal instruction caches (e.g., 3106) are included in thread execution logic 3100 to cache thread instructions for the execution units. In at least one embodiment, one or more data caches (e.g., 3112) are included to cache thread data during thread execution. In at least one embodiment, sampler 3110 is included to provide texture sampling for 3D operations and media sampling for media operations. In at least one embodiment, sampler 3110 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.

During execution, in at least one embodiment, the graphics and media pipelines send thread initiation requests to thread execution logic 3100 via thread spawning and dispatch logic. In at least one embodiment, once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within shader processor 3102 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In at least one embodiment, a pixel shader or fragment shader computes the values of various vertex attributes that are to be interpolated across a rasterized object. In at least one embodiment, pixel processor logic within shader processor 3102 then executes a pixel or fragment shader program supplied via an application programming interface (API).
In at least one embodiment, to execute a shader program, shader processor 3102 dispatches threads to an execution unit (e.g., 3108A) via thread dispatcher 3104. In at least one embodiment, shader processor 3102 uses the texture sampling logic in sampler 3110 to access texture data in texture maps stored in memory. In at least one embodiment, arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.

In at least one embodiment, data port 3114 provides a memory access mechanism for thread execution logic 3100 to output processed data to memory for further processing on a graphics processor output pipeline. In at least one embodiment, data port 3114 includes or couples to one or more cache memories (e.g., data cache 3112) to cache data for memory access via the data port.

As shown in FIG. 31B, in at least one embodiment, graphics execution unit 3108 includes an instruction fetch unit 3137, a general register file array (GRF) 3124, an architectural register file array (ARF) 3126, a thread arbiter 3122, a send unit 3130, a branch unit 3132, a set of SIMD floating-point units (FPUs) 3134, and a set of dedicated integer SIMD ALUs 3135. In at least one embodiment, GRF 3124 and ARF 3126 include the set of general register files and architectural register files associated with each concurrent hardware thread that may be active in graphics execution unit 3108. In at least one embodiment, per-thread architectural state is maintained in ARF 3126, while data used during thread execution is stored in GRF 3124.
In at least one embodiment, the execution state of each thread, including the instruction pointers of each thread, can be held in thread-specific registers in ARF 3126.

In at least one embodiment, graphics execution unit 3108 has an architecture that is a combination of simultaneous multi-threading (SMT) and interleaved multi-threading (IMT). In at least one embodiment, the architecture has a modular configuration that can be fine-tuned at design time based on a target number of concurrent threads and number of registers per execution unit, where execution unit resources are divided across the logic used to execute multiple concurrent threads.

In at least one embodiment, graphics execution unit 3108 can co-issue multiple instructions, which may each be different instructions. In at least one embodiment, thread arbiter 3122 of graphics execution unit 3108 can dispatch instructions to one of send unit 3130, branch unit 3132, or SIMD FPU(s) 3134 for execution. In at least one embodiment, each execution thread can access 128 general-purpose registers within GRF 3124, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. In at least one embodiment, each execution unit thread has access to 4 kilobytes within GRF 3124, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. In at least one embodiment, up to seven threads can execute concurrently, although the number of threads per execution unit can also vary according to embodiments. In at least one embodiment, in which seven threads may access 4 kilobytes, GRF 3124 can store a total of 28 kilobytes.
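The register-file capacity figures above can be checked arithmetically. A minimal sketch using the example figures from the text (128 registers of 32 bytes each per thread, up to seven concurrent threads):

```python
# Illustrative arithmetic only (figures taken from the text, not vendor code):
# 128 general registers x 32 bytes each = 4 KB of GRF per thread;
# with up to 7 resident threads, the GRF holds 7 x 4 KB = 28 KB in total.
REGISTERS_PER_THREAD = 128
BYTES_PER_REGISTER = 32
MAX_THREADS = 7

grf_bytes_per_thread = REGISTERS_PER_THREAD * BYTES_PER_REGISTER
grf_bytes_total = grf_bytes_per_thread * MAX_THREADS

print(grf_bytes_per_thread // 1024)  # -> 4 (kilobytes per thread)
print(grf_bytes_total // 1024)       # -> 28 (kilobytes total)
```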
In at least one embodiment, flexible addressing modes permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures.

In at least one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via "send" instructions that are executed by message-passing send unit 3130. In at least one embodiment, branch instructions are dispatched to branch unit 3132 to facilitate SIMD divergence and eventual convergence.

In at least one embodiment, graphics execution unit 3108 includes one or more SIMD floating-point units (FPUs) 3134 to perform floating-point operations. In at least one embodiment, FPU(s) 3134 also support integer computation. In at least one embodiment, FPU(s) 3134 can SIMD-execute up to M 32-bit floating-point (or integer) operations, or SIMD-execute up to 2M 16-bit integer or 16-bit floating-point operations. In at least one embodiment, at least one FPU provides extended math capability to support high-throughput transcendental math functions and double-precision 64-bit floating point. In at least one embodiment, a set of 8-bit integer SIMD ALUs 3135 is also present and may be specifically optimized to perform operations associated with machine learning computations.

In at least one embodiment, arrays of multiple instances of graphics execution unit 3108 can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). In at least one embodiment, execution unit 3108 can execute instructions across a plurality of execution channels. In at least one embodiment, each thread executed on graphics execution unit 3108 is executed on a different channel.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
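The FPU throughput relationship noted above, up to M 32-bit operations or up to 2M 16-bit operations per SIMD issue, reflects that halving the element width doubles the lane count for a fixed datapath width. A small sketch of that scaling (M is a hypothetical value chosen purely for illustration):

```python
# Illustrative sketch: scaling SIMD operation count by element width.
# Halving the element width doubles the operations per issue, which is why
# an FPU performing up to M 32-bit operations can perform up to 2M
# 16-bit operations. M here is a made-up example value, not a hardware spec.
def ops_per_issue(m_32bit_ops: int, element_bits: int) -> int:
    """Scale the 32-bit operation count by the element-width ratio."""
    return m_32bit_ops * 32 // element_bits

M = 8  # hypothetical 32-bit SIMD operation count, for illustration only
print(ops_per_issue(M, 32))  # -> 8  (M 32-bit operations)
print(ops_per_issue(M, 16))  # -> 16 (2M 16-bit operations)
```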
In at least one embodiment, some or all of inference and/or training logic 715 may be incorporated into thread execution logic 3100. Further, in at least one embodiment, the inference and/or training operations described herein may be performed using logic other than that shown in FIG. 7A or 7B. In at least one embodiment, weight parameters that configure the ALUs of thread execution logic 3100 to execute one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein may be stored in on-chip or off-chip memory and/or registers (shown or not shown).

In at least one embodiment, at least one component shown or described with respect to FIGS. 31A and/or 31B is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to train at least one untrained or partially trained neural network. In at least one embodiment, the inference and/or training logic uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one inference operation. In at least one embodiment, thread execution logic 3100 of FIG. 31A and/or graphics execution unit 3108 of FIG. 31B are utilized to implement the techniques and/or functions described with respect to FIGS. 1-6.

FIG. 32 illustrates a parallel processing unit ("PPU") 3200, according to at least one embodiment.
In at least one embodiment, PPU 3200 is configured with machine-readable code that, when executed by PPU 3200, causes PPU 3200 to perform some or all of the processes and techniques described throughout this disclosure. In at least one embodiment, PPU 3200 is a multi-threaded processor that is implemented on one or more integrated circuit devices and that utilizes multithreading as a latency-hiding technique designed to process computer-readable instructions (also referred to as machine-readable instructions or simply instructions) in parallel on multiple threads. In at least one embodiment, a thread refers to a thread of execution and is an instantiation of a set of instructions configured to be executed by PPU 3200. In at least one embodiment, PPU 3200 is a graphics processing unit ("GPU") configured to implement a graphics rendering pipeline for processing three-dimensional ("3D") graphics data in order to generate two-dimensional ("2D") image data for display on a display device, such as a liquid crystal display ("LCD") device. In at least one embodiment, PPU 3200 is utilized to perform computations such as linear algebra operations and machine learning operations. FIG. 32 illustrates an example parallel processor for illustrative purposes only; it should be construed as a non-limiting example of processor architectures contemplated within the scope of this disclosure, and it should be understood that any suitable processor may be employed in addition thereto and/or in place thereof.

In at least one embodiment, one or more PPUs 3200 are configured to accelerate high-performance computing ("HPC"), data center, and machine learning applications.
In at least one embodiment, PPU 3200 is configured to accelerate deep learning systems and applications including, without limitation, autonomous vehicle platforms, deep learning, high-accuracy speech, image, and text recognition systems, intelligent video analytics, molecular simulation, drug discovery, disease diagnosis, weather forecasting, big data analytics, astronomy, molecular dynamics simulation, financial modeling, robotics, factory automation, real-time language translation, online search optimization, personalized user recommendations, and the like.

In at least one embodiment, PPU 3200 includes, without limitation, an input/output ("I/O") unit 3206, a front end unit 3210, a scheduler unit 3212, a work distribution unit 3214, a hub 3216, a crossbar ("XBar") 3220, one or more general processing clusters ("GPCs") 3218, and one or more partition units ("memory partition units") 3222. In at least one embodiment, PPU 3200 is connected to a host processor or other PPUs 3200 via one or more high-speed GPU interconnects ("GPU interconnects") 3208. In at least one embodiment, PPU 3200 is connected to a host processor or other peripheral devices via a system bus 3202. In at least one embodiment, PPU 3200 is connected to local memory comprising one or more memory devices ("memory") 3204. In at least one embodiment, memory devices 3204 include, without limitation, one or more dynamic random access memory ("DRAM") devices. In at least one embodiment, one or more DRAM devices may be configured and/or configurable as high-bandwidth memory ("HBM") subsystems, with multiple DRAM dies stacked within each device.

In at least one embodiment, high-speed GPU interconnect 3208 may refer to a wire-based multi-lane communication link that is used by systems to scale and that includes one or more PPUs 3200 combined with one or more central processing units ("CPUs"), supporting cache coherence between PPUs 3200 and CPUs, as well as CPU mastering.
In at least one embodiment, data and/or commands are transmitted by high-speed GPU interconnect 3208 through hub 3216 to/from other units of PPU 3200, such as one or more copy engines, video encoders, video decoders, power management units, and other components that may not be explicitly shown in FIG. 32.

In at least one embodiment, I/O unit 3206 is configured to transmit and receive communications (e.g., commands, data) from a host processor (not shown in FIG. 32) over system bus 3202. In at least one embodiment, I/O unit 3206 communicates with the host processor directly via system bus 3202 or through one or more intermediate devices, such as a memory bridge. In at least one embodiment, I/O unit 3206 may communicate with one or more other processors, such as one or more of PPUs 3200, via system bus 3202. In at least one embodiment, I/O unit 3206 implements a Peripheral Component Interconnect Express ("PCIe") interface for communications over a PCIe bus. In at least one embodiment, I/O unit 3206 implements interfaces for communicating with external devices.

In at least one embodiment, I/O unit 3206 decodes packets received via system bus 3202. In at least one embodiment, at least some packets represent commands configured to cause PPU 3200 to perform various operations. In at least one embodiment, I/O unit 3206 transmits decoded commands to various other units of PPU 3200 as specified by the commands. In at least one embodiment, commands are transmitted to front end unit 3210 and/or hub 3216, or to other units of PPU 3200 such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly shown in FIG. 32). In at least one embodiment, I/O unit 3206 is configured to route communications between and among the various logical units of PPU 3200.

In at least one embodiment, a program executed by the host processor encodes a command stream in a buffer that provides workloads to PPU 3200 for processing.
In at least one embodiment, a workload comprises instructions and data to be processed by those instructions. In at least one embodiment, a buffer is a region in memory that is accessible (e.g., read/write) by both the host processor and PPU 3200; a host interface unit may be configured to access the buffer in system memory connected to system bus 3202 via memory requests transmitted over system bus 3202 by I/O unit 3206. In at least one embodiment, the host processor writes a command stream to the buffer and then transmits a pointer to the start of the command stream to PPU 3200, whereupon front end unit 3210 receives pointers to one or more command streams, manages the one or more command streams, reads commands from the streams, and forwards the commands to the various units of PPU 3200.

In at least one embodiment, front end unit 3210 is coupled to scheduler unit 3212, which configures the various GPCs 3218 to process tasks defined by the one or more command streams. In at least one embodiment, scheduler unit 3212 is configured to track state information related to the various tasks managed by scheduler unit 3212, where the state information may indicate which GPC 3218 a task is assigned to, whether the task is active or inactive, a priority level associated with the task, and so forth. In at least one embodiment, scheduler unit 3212 manages the execution of a plurality of tasks on one or more of GPCs 3218.

In at least one embodiment, scheduler unit 3212 is coupled to work distribution unit 3214, which is configured to dispatch tasks for execution on GPCs 3218. In at least one embodiment, work distribution unit 3214 tracks a number of scheduled tasks received from scheduler unit 3212, and work distribution unit 3214 manages a pending task pool and an active task pool for each of GPCs 3218.
In at least one embodiment, the pending task pool comprises a number of slots (e.g., 32 slots) containing tasks assigned to be processed by a particular GPC 3218, and the active task pool comprises a number of slots (e.g., 4 slots) for tasks that are actively being processed by GPCs 3218, such that when one of GPCs 3218 completes execution of a task, that task is evicted from the active task pool for GPC 3218, and another task from the pending task pool is selected and scheduled for execution on GPC 3218. In at least one embodiment, if an active task is idle on GPC 3218, such as while waiting for a data dependency to be resolved, the active task is evicted from GPC 3218 and returned to the pending task pool, while another task in the pending task pool is selected and scheduled for execution on GPC 3218.

In at least one embodiment, work distribution unit 3214 communicates with one or more GPCs 3218 via XBar 3220. In at least one embodiment, XBar 3220 is an interconnect network that couples many of the units of PPU 3200 to other units of PPU 3200 and can be configured to couple work distribution unit 3214 to a particular GPC 3218. In at least one embodiment, one or more other units of PPU 3200 may also be connected to XBar 3220 via hub 3216.

In at least one embodiment, tasks are managed by scheduler unit 3212 and dispatched to one of GPCs 3218 by work distribution unit 3214. In at least one embodiment, GPC 3218 is configured to process the task and generate results. In at least one embodiment, the results may be consumed by other tasks within GPC 3218, routed to a different GPC 3218 via XBar 3220, or stored in memory 3204. In at least one embodiment, the results can be written to memory 3204 via partition units 3222, which implement a memory interface for reading and writing data to/from memory 3204. In at least one embodiment, the results can be transmitted to another PPU or CPU via high-speed GPU interconnect 3208.
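The pending/active task pool scheme described above, with fixed slot counts, eviction on completion, and refill from the pending pool, can be sketched as the following toy model (illustrative only; the slot counts follow the example figures in the text, not any actual hardware configuration):

```python
# Toy sketch of a pending/active task pool (illustrative only).
# Slot counts follow the example figures in the text (32 pending, 4 active).
from collections import deque

PENDING_SLOTS = 32
ACTIVE_SLOTS = 4

pending = deque()  # tasks assigned to a GPC but not yet executing
active = []        # tasks currently executing on the GPC

def submit(task):
    """Place a task in the pending pool if a slot is free."""
    if len(pending) < PENDING_SLOTS:
        pending.append(task)

def fill_active():
    """Move tasks from the pending pool into free active slots."""
    while pending and len(active) < ACTIVE_SLOTS:
        active.append(pending.popleft())

def complete(task):
    """On completion, evict the task and schedule a pending one."""
    active.remove(task)
    fill_active()

for t in ["t0", "t1", "t2", "t3", "t4", "t5"]:
    submit(t)
fill_active()
print(active)    # -> ['t0', 't1', 't2', 't3']
complete("t1")
print(active)    # -> ['t0', 't2', 't3', 't4']
```

Eviction of an idle task back into the pending pool, as the text also describes, would be a similar move in the opposite direction.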
In at least one embodiment, PPU 3200 includes, without limitation, a number U of partition units 3222 that is equal to the number of separate and distinct memory devices 3204 coupled to PPU 3200, as described in further detail herein in conjunction with FIG.

In at least one embodiment, the host processor executes a driver kernel that implements an application programming interface ("API") enabling one or more applications executing on the host processor to schedule operations for execution on PPU 3200. In at least one embodiment, multiple compute applications are executed concurrently by PPU 3200, and PPU 3200 provides isolation, quality of service ("QoS"), and independent address spaces for the multiple compute applications. In at least one embodiment, an application generates instructions (e.g., in the form of API calls) that cause the driver kernel to generate one or more tasks for execution by PPU 3200, and the driver kernel outputs the tasks to one or more streams that are processed by PPU 3200. In at least one embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In at least one embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel. In at least one embodiment, cooperating threads may refer to a plurality of threads that include instructions to perform a task and that exchange data through shared memory. In at least one embodiment, threads and cooperating threads are described in further detail in conjunction with FIG.

Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, a deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to PPU 3200.
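Grouping a task's related threads into warps, as described above, determines how many warps must be scheduled. A small sketch using the example warp size of 32 threads (illustrative arithmetic only):

```python
# Illustrative sketch: how many warps cover a given thread count, using the
# example warp size of 32 threads from the text. A partially filled final
# warp still occupies a whole warp.
import math

WARP_SIZE = 32

def warps_needed(thread_count: int) -> int:
    """Number of warps required to cover `thread_count` threads."""
    return math.ceil(thread_count / WARP_SIZE)

print(warps_needed(32))   # -> 1
print(warps_needed(100))  # -> 4 (last warp is only partially filled)
```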
In at least one embodiment, the deep learning application processor is used to infer or predict information based on a trained machine learning model (e.g., a neural network) that has been trained by another processor or system or by PPU 3200. In at least one embodiment, PPU 3200 may be used to perform one or more of the neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 32 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or operates at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to train at least one untrained or partially trained neural network. In at least one embodiment, the inference and/or training logic uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one inference operation. In at least one embodiment, parallel processing unit 3200 of FIG. 32 is utilized to implement the techniques and/or functionality described with respect to FIGS. 1-6.

FIG. 33 illustrates a general processing cluster ("GPC") 3300, according to at least one embodiment. In at least one embodiment, GPC 3300 is GPC 3218 of FIG. 32.
In at least one embodiment, each GPC 3300 includes, without limitation, a number of hardware units for processing tasks; in at least one embodiment, each GPC 3300 includes, without limitation, a pipeline manager 3302, a pre-raster operations unit ("preROP") 3304, a raster engine 3308, a work distribution crossbar ("WDX") 3316, a memory management unit ("MMU") 3318, one or more data processing clusters ("DPCs") 3306, and any suitable combination of parts.

In at least one embodiment, the operation of GPC 3300 is controlled by pipeline manager 3302. In at least one embodiment, pipeline manager 3302 manages the configuration of the one or more DPCs 3306 to process tasks assigned to GPC 3300. In at least one embodiment, pipeline manager 3302 configures at least one of the one or more DPCs 3306 to implement at least a portion of a graphics rendering pipeline. In at least one embodiment, DPC 3306 is configured to execute vertex shader programs on a programmable streaming multiprocessor ("SM") 3314. In at least one embodiment, pipeline manager 3302 is configured to route packets received from a work distribution unit to the appropriate logical units within GPC 3300; some packets may be routed to preROP 3304 and/or raster engine 3308, while other packets may be routed to DPCs 3306 for processing by primitive engine 3312 or SM 3314. In at least one embodiment, pipeline manager 3302 configures at least one of DPCs 3306 to implement a neural network model and/or a computing pipeline.

In at least one embodiment, preROP unit 3304 is configured to route data generated by raster engine 3308 and DPCs 3306 to a raster operations ("ROP") unit. In at least one embodiment, preROP unit 3304 is configured to perform optimizations for color blending, organize pixel data, perform address translations, and the like.
In at least one embodiment, raster engine 3308 includes, without limitation, a number of fixed-function hardware units configured to perform various raster operations; in at least one embodiment, raster engine 3308 includes, without limitation, a setup engine, a coarse raster engine, a culling engine, a clipping engine, a fine raster engine, a tile coalescing engine, and any suitable combination thereof. In at least one embodiment, the setup engine receives transformed vertices and generates plane equations associated with the geometric primitives defined by the vertices; the plane equations are transmitted to the coarse raster engine to generate coverage information for the primitives (e.g., an x, y coverage mask for a tile); the output of the coarse raster engine is transmitted to the culling engine, where fragments associated with primitives that fail a z-test are culled, and transmitted to the clipping engine, where fragments lying outside a viewing frustum are clipped. In at least one embodiment, fragments that survive clipping and culling are passed to the fine raster engine to generate attributes for pixel fragments based on the plane equations generated by the setup engine. In at least one embodiment, the output of raster engine 3308 comprises fragments to be processed by any suitable entity, such as by a fragment shader implemented within DPC 3306.

In at least one embodiment, each DPC 3306 included in GPC 3300 includes, without limitation, an M-pipe controller ("MPC") 3310, a primitive engine 3312, one or more SMs 3314, and any suitable combination thereof. In at least one embodiment, MPC 3310 controls the operation of DPC 3306, routing packets received from pipeline manager 3302 to the appropriate units in DPC 3306. In at least one embodiment, packets associated with vertices are routed to primitive engine 3312, which is configured to fetch vertex attributes associated with the vertices from memory; by contrast, packets associated with shader programs are handled separately.
Packets may be sent to SM 3314 .In at least one embodiment, SM 3314 includes, without limitation, a programmable streaming processor configured to process tasks represented by several threads. In at least one embodiment, the SM 3314 is multi-threaded, configured to execute multiple threads from a particular group of threads (e.g., 32 threads) simultaneously, and uses single instruction multiple data (SIMD) An architecture is implemented in which each thread in a group of threads (warp) is configured to process different data sets based on the same instruction set. In at least one embodiment, all threads within a thread group execute a common set of instructions. In at least one embodiment, the SM 3314 implements a Single Instruction Multiple Thread (SIMT) architecture, where each thread of a thread group is configured to process different data sets based on a common set of instructions. However, individual threads within a thread group are allowed to diverge during execution. In at least one embodiment, a program counter, call stack, and execution state are maintained for each warp to allow concurrency between warps and serial execution within a warp when threads within a warp diverge. become. In another embodiment, program counters, call stacks, and execution state are maintained for each individual thread to allow equal concurrency among all threads, within warps, and between warps. In at least one embodiment, execution state is maintained for each individual thread, and threads executing common instructions may converge and execute in parallel to be more efficient. At least one embodiment of the SM3314 is described in further detail herein.In at least one embodiment, MMU 3318 provides an interface between GPC 3300 and a memory partition unit (eg, partition unit 3222 of FIG. 32), and MMU 3318 performs virtual to physical address translation, memory Provides protection and arbitration of memory requests. 
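The SIMT divergence behavior described above can be illustrated with a minimal host-side sketch. This is a hypothetical simulation, not hardware behavior: a "warp" of threads shares one instruction stream, the two sides of a branch execute serially under complementary active masks, and the warp reconverges afterwards.

```python
# Hypothetical sketch of SIMT branch divergence: threads of a warp share
# an instruction stream; on a divergent branch, each side executes in
# turn with inactive threads masked off, then all threads reconverge.

WARP_SIZE = 8  # a real warp has 32 threads; 8 keeps the example small

def simt_execute(data):
    results = [None] * WARP_SIZE
    taken = [data[t] % 2 == 0 for t in range(WARP_SIZE)]  # per-thread predicate

    # Branch side 1: only threads whose predicate is true are active.
    for t in range(WARP_SIZE):
        if taken[t]:
            results[t] = data[t] // 2
    # Branch side 2: the complementary mask executes serially afterwards.
    for t in range(WARP_SIZE):
        if not taken[t]:
            results[t] = data[t] * 3 + 1
    # Reconvergence point: all threads continue together from here.
    return results

print(simt_execute(list(range(WARP_SIZE))))
```

The serialization of the two branch sides is why divergence within a warp costs throughput, while divergence between warps does not.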
In at least one embodiment, MMU 3318 provides one or more translation lookaside buffers ("TLBs") for performing translation of virtual addresses into physical addresses in memory.

Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, a deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to GPC 3300. In at least one embodiment, GPC 3300 is used to infer or predict information based on a trained machine learning model (e.g., a neural network) that has been trained by another processor or system or by GPC 3300. In at least one embodiment, GPC 3300 may be used to perform one or more of the neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 33 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 performs at least one inferencing operation using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, general processing cluster 3300 of FIG. 33 is utilized to implement the techniques and/or functionality described in connection with FIGS. 1-6.

FIG. 34 illustrates a memory partition unit 3400 of a parallel processing unit ("PPU"), in accordance with at least one embodiment. In at least one embodiment, memory partition unit 3400 includes, without limitation, a raster operations ("ROP") unit 3402, a level two ("L2") cache 3404, a memory interface 3406, and any suitable combination thereof. In at least one embodiment, memory interface 3406 is coupled to memory. In at least one embodiment, memory interface 3406 may implement a 32-, 64-, 128-, or 1024-bit data bus, or the like, for high-speed data transfer. In at least one embodiment, the PPU incorporates U memory interfaces 3406, where U is a positive integer, with one memory interface 3406 per pair of partition units 3400, where each pair of partition units 3400 is connected to a corresponding memory device. For example, in at least one embodiment, the PPU may include up to Y memory devices.

In at least one embodiment, memory interface 3406 implements a second-generation high bandwidth memory ("HBM2") memory interface, and Y equals half of U. In at least one embodiment, HBM2 memory stacks are located in a same physical package as the PPU, providing substantial power and area savings compared with conventional GDDR5 SDRAM systems. In at least one embodiment, each HBM2 stack includes, without limitation, four memory dies, with Y=4, and each HBM2 stack includes two 128-bit channels per die, for a total of eight channels and a data bus width of 1024 bits. In at least one embodiment, the memory supports single-error correcting, double-error detecting ("SECDED") error correction code ("ECC") to protect data. In at least one embodiment, ECC may provide higher reliability for compute applications that are sensitive to data corruption.

In at least one embodiment, the PPU implements a multi-level memory hierarchy.
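The HBM2 stack geometry described above implies the stated 1024-bit bus width directly. The following sketch only restates that arithmetic; all numbers come from the description.

```python
# Arithmetic from the HBM2 description: 4 memory dies per stack,
# 2 channels per die, 128 bits per channel.
dies_per_stack = 4
channels_per_die = 2
bits_per_channel = 128

channels_per_stack = dies_per_stack * channels_per_die   # 8 channels
bus_width_bits = channels_per_stack * bits_per_channel   # 1024-bit bus

print(channels_per_stack, bus_width_bits)  # prints: 8 1024
```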
In at least one embodiment, memory partition unit 3400 supports unified memory to provide a single unified virtual address space for central processing unit ("CPU") and PPU memory, enabling sharing of data between virtual memory systems. In at least one embodiment, the frequency of accesses by a PPU to memory located on other processors is tracked to ensure that memory pages are moved to the physical memory of the PPU that is accessing the pages more frequently. In at least one embodiment, high-speed GPU interconnect 3208 supports address translation services, allowing the PPU to access a CPU's page tables directly and providing the PPU full access to CPU memory.

In at least one embodiment, copy engines transfer data between multiple PPUs or between a PPU and a CPU. In at least one embodiment, a copy engine can generate a page fault for an address that is not mapped into the page tables, and memory partition unit 3400 then services the page fault by mapping the address into the page table, after which the copy engine performs the transfer. In at least one embodiment, memory is pinned (i.e., non-pageable) for copy engine operations across multiple processors, substantially reducing available memory. In at least one embodiment, with hardware page faulting, addresses can be passed to the copy engine without regard to whether the memory pages are resident, and the copy process is transparent.

In accordance with at least one embodiment, data from memory 3204 of FIG. 32 or other system memory is fetched by memory partition unit 3400 and stored in L2 cache 3404, which is located on-chip and is shared between the various GPCs. In at least one embodiment, each memory partition unit 3400 includes, without limitation, at least a portion of the L2 cache associated with a corresponding memory device. In at least one embodiment, lower-level caches are implemented in various units within the GPCs. In at least one embodiment, each SM 3314 of FIG. 33 may implement a level one ("L1") cache, and data is fetched and stored in each of the L1 caches for processing by the functional units of SM 3314. In at least one embodiment, L2 cache 3404 is coupled to memory interface 3406 and XBar 3220 shown in FIG. 32.

In at least one embodiment, ROP unit 3402 performs graphics raster operations related to pixel color, such as color compression, pixel blending, and the like. In at least one embodiment, ROP unit 3402 implements depth testing in conjunction with raster engine 3308, receiving a depth for a sample location associated with a pixel fragment from the culling engine of raster engine 3308. In at least one embodiment, the depth is tested against a corresponding depth in a depth buffer for the sample location associated with the fragment. In at least one embodiment, if the fragment passes the depth test for the sample location, ROP unit 3402 updates the depth buffer and transmits the result of the depth test to raster engine 3308. It will be appreciated that the number of partition units 3400 may differ from the number of GPCs and, therefore, each ROP unit 3402 may, in at least one embodiment, be coupled to each of the GPCs. In at least one embodiment, ROP unit 3402 tracks packets received from the different GPCs and determines whether a result generated by ROP unit 3402 is to be routed through XBar 3220.

FIG. 35 illustrates a streaming multiprocessor ("SM") 3500, in accordance with at least one embodiment. In at least one embodiment, SM 3500 is the SM 3314 of FIG. 33.
In at least one embodiment, SM 3500 includes, without limitation, an instruction cache 3502, one or more scheduler units 3504, a register file 3508, one or more processing cores ("cores") 3510, one or more special function units ("SFUs") 3512, one or more load/store units ("LSUs") 3514, an interconnect network 3516, a shared memory/level one ("L1") cache 3518, and/or any suitable combination thereof.

In at least one embodiment, a work distribution unit dispatches tasks for execution on general processing clusters ("GPCs") of parallel processing units ("PPUs"), and each task is allocated to a particular Data Processing Cluster ("DPC") within a GPC; if the task is associated with a shader program, the task is allocated to one of SMs 3500. In at least one embodiment, scheduler unit 3504 receives tasks from the work distribution unit and manages instruction scheduling for one or more thread blocks assigned to SM 3500. In at least one embodiment, scheduler unit 3504 schedules thread blocks for execution as warps of parallel threads, where each thread block is allocated at least one warp. In at least one embodiment, each warp executes threads. In at least one embodiment, scheduler unit 3504 manages a plurality of different thread blocks, allocating warps to the different thread blocks and then dispatching instructions from a plurality of different cooperative groups to various functional units (e.g., processing cores 3510, SFUs 3512, and LSUs 3514) during each clock cycle.

In at least one embodiment, a cooperative group may refer to a programming model for organizing groups of communicating threads that allows developers to express the granularity at which threads are communicating, enabling the expression of richer, more efficient parallel decompositions. In at least one embodiment, cooperative launch APIs support synchronization among thread blocks for execution of parallel algorithms.
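The block-wide barrier semantics referenced above (a syncthreads()-style barrier, discussed in the following paragraph) can be sketched on the host with ordinary threads. This is an illustrative analogy, not device code: every "thread" of a block produces a partial result, waits at a barrier, and only then reads a neighbor's result, which the barrier guarantees is visible.

```python
# Hypothetical host-side analogy of a thread-block barrier: produce,
# synchronize, then consume another thread's result safely.
import threading

BLOCK = 4
barrier = threading.Barrier(BLOCK)
partial = [0] * BLOCK
combined = [0] * BLOCK

def worker(tid):
    partial[tid] = tid * tid                     # phase 1: produce
    barrier.wait()                               # all threads reach this point
    combined[tid] = partial[(tid + 1) % BLOCK]   # phase 2: consume neighbor

threads = [threading.Thread(target=worker, args=(t,)) for t in range(BLOCK)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(combined)  # prints: [1, 4, 9, 0]
```

Without the barrier, a thread could read a neighbor's slot before it was written; the barrier is what makes the cross-thread read well defined.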
In at least one embodiment, applications of conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (e.g., the syncthreads() function). However, in at least one embodiment, programmers may define groups of threads at smaller than thread-block granularities and synchronize within the defined groups, which may enable greater performance, design flexibility, and software reuse in the form of collective group-wide function interfaces. In at least one embodiment, cooperative groups enable programmers to define groups of threads explicitly at sub-block (i.e., as small as a single thread) and multi-block granularities, and to perform collective operations, such as synchronization, on the threads in a cooperative group. In at least one embodiment, the programming model supports clean composition across software boundaries, so that libraries and utility functions can synchronize safely within their local context without having to make assumptions about convergence. In at least one embodiment, cooperative group primitives enable new patterns of cooperative parallelism, including, without limitation, producer-consumer parallelism, opportunistic parallelism, and global synchronization across an entire grid of thread blocks.

In at least one embodiment, dispatch unit 3506 is configured to transmit instructions to one or more functional units, and scheduler unit 3504 includes, without limitation, two dispatch units 3506 that enable two different instructions from a common warp to be dispatched during each clock cycle. In at least one embodiment, each scheduler unit 3504 includes a single dispatch unit 3506 or additional dispatch units 3506.

In at least one embodiment, each SM 3500 includes, without limitation, register file 3508, which provides a set of registers for the functional units of SM 3500.
In at least one embodiment, register file 3508 is divided between each of the functional units, such that each functional unit is allocated a dedicated portion of register file 3508. In at least one embodiment, register file 3508 is divided between the different warps being executed by SM 3500, and register file 3508 provides temporary storage for operands connected to the data paths of the functional units. In at least one embodiment, each SM 3500 includes, without limitation, a plurality of L processing cores 3510, where L is a positive integer. In at least one embodiment, each SM 3500 includes, without limitation, a large number (e.g., 128 or more) of distinct processing cores 3510. In at least one embodiment, each processing core 3510 includes, without limitation, a fully-pipelined, single-precision, double-precision, and/or mixed-precision processing unit. In at least one embodiment, the floating point arithmetic logic units implement the IEEE 754-2008 standard for floating point arithmetic. In at least one embodiment, processing cores 3510 include, without limitation, 64 single-precision (32-bit) floating point cores, 64 integer cores, 32 double-precision (64-bit) floating point cores, and 8 tensor cores.

Tensor cores are configured to perform matrix operations in accordance with at least one embodiment. In at least one embodiment, one or more tensor cores are included in processing cores 3510. In at least one embodiment, tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing. In at least one embodiment, each tensor core operates on a 4×4 matrix and performs a matrix multiply and accumulate operation, D=A×B+C, where A, B, C, and D are 4×4 matrices.

In at least one embodiment, matrix multiply inputs A and B are 16-bit floating point matrices, and accumulation matrices C and D are 16-bit floating point or 32-bit floating point matrices.
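The 4×4 multiply-accumulate D=A×B+C described above can be written out explicitly. This is a plain-Python sketch of the mathematical operation only; the hardware performs it on FP16 inputs with FP32 accumulation, whereas Python floats stand in for both here.

```python
# Sketch of the tensor-core operation D = A x B + C on 4x4 matrices.
N = 4

def mma_4x4(A, B, C):
    D = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            acc = C[i][j]                    # start from the accumulator C
            for k in range(N):
                acc += A[i][k] * B[k][j]     # 4x4x4 = 64 multiplies in total
            D[i][j] = acc
    return D

# Usage: identity x identity + all-ones adds 1.0 to each diagonal entry.
I = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
ones = [[1.0] * N for _ in range(N)]
D = mma_4x4(I, I, ones)
print(D[0][0], D[0][1])  # prints: 2.0 1.0
```

The 64-multiply count in the inner loops matches the 4×4×4 matrix multiply mentioned in the following paragraph.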
In at least one embodiment, tensor cores operate on 16-bit floating point input data with 32-bit floating point accumulation. In at least one embodiment, the 16-bit floating point multiply uses 64 operations and results in a full-precision product that is then accumulated, using 32-bit floating point addition, with the other intermediate products of a 4×4×4 matrix multiply. Tensor cores are used, in at least one embodiment, to perform much larger two-dimensional or higher-dimensional matrix operations built up from these smaller elements. In at least one embodiment, an API, such as the CUDA 9 C++ API, exposes specialized matrix load, matrix multiply-accumulate, and matrix store operations to use tensor cores efficiently from a CUDA C++ program. In at least one embodiment, at the CUDA level, a warp-level interface assumes 16×16 size matrices spanning all 32 threads of a warp.

In at least one embodiment, each SM 3500 includes, without limitation, M SFUs 3512 that perform special functions (e.g., attribute evaluation, reciprocal square root, and the like). In at least one embodiment, SFUs 3512 include, without limitation, a tree traversal unit configured to traverse a hierarchical tree data structure. In at least one embodiment, SFUs 3512 include, without limitation, a texture unit configured to perform texture map filtering operations. In at least one embodiment, texture units are configured to load texture maps (e.g., a 2D array of texels) from memory and to sample the texture maps to produce sampled texture values for use in shader programs executed by SM 3500. In at least one embodiment, texture maps are stored in shared memory/L1 cache 3518. In at least one embodiment, texture units implement texture operations, such as filtering operations, using mip-maps (e.g., texture maps of varying levels of detail).
In at least one embodiment, each SM 3500 includes, without limitation, two texture units.

Each SM 3500, in at least one embodiment, includes, without limitation, N LSUs 3514 that implement load and store operations between shared memory/L1 cache 3518 and register file 3508. In at least one embodiment, interconnect network 3516 connects each functional unit to register file 3508 and connects LSU 3514 to register file 3508 and shared memory/L1 cache 3518. In at least one embodiment, interconnect network 3516 is a crossbar that can be configured to connect any of the functional units to any of the registers in register file 3508, and to connect LSUs 3514 to register file 3508 and to memory locations in shared memory/L1 cache 3518.

In at least one embodiment, shared memory/L1 cache 3518 is an array of on-chip memory that enables data storage and communication between SM 3500 and the primitive engine, and between threads in SM 3500. In at least one embodiment, shared memory/L1 cache 3518 comprises, without limitation, 128 KB of storage capacity and is in the path from SM 3500 to the partition unit. In at least one embodiment, shared memory/L1 cache 3518 is used to cache reads and writes. In at least one embodiment, one or more of shared memory/L1 cache 3518, L2 cache, and memory serve as backing stores.

In at least one embodiment, combining data cache and shared memory functionality into a single memory block provides improved performance for both types of memory accesses. In at least one embodiment, the capacity is used as a cache, or is usable as a cache, by programs that do not use shared memory; for example, if shared memory is configured to use half of the capacity, texture and load/store operations can use the remaining capacity.
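The configurable split described above (shared memory carved out of a 128 KB combined block, with the remainder available for caching) can be restated as a small sketch. The function name and dictionary layout are illustrative, not part of the described hardware; only the 128 KB total comes from the text.

```python
# Illustrative carve-out of a 128 KB combined shared-memory/L1 block:
# capacity not claimed as shared memory remains available for caching
# texture and load/store traffic.
TOTAL_KB = 128

def carve_out(shared_kb):
    if not 0 <= shared_kb <= TOTAL_KB:
        raise ValueError("shared-memory carve-out exceeds block capacity")
    return {"shared_kb": shared_kb, "cache_kb": TOTAL_KB - shared_kb}

print(carve_out(64))  # half as shared memory leaves half for caching
```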
In accordance with at least one embodiment, integration within shared memory/L1 cache 3518 enables shared memory/L1 cache 3518 to function as a high-throughput conduit for streaming data while simultaneously providing high-bandwidth, low-latency access to frequently reused data. In at least one embodiment, when configured for general purpose parallel computation, a simpler configuration can be used compared with graphics processing. In at least one embodiment, fixed-function graphics processing units are bypassed, creating a much simpler programming model. In the general purpose parallel computation configuration, the work distribution unit assigns and distributes blocks of threads directly to the DPCs, in at least one embodiment. In at least one embodiment, the threads in a block execute a common program, using a unique thread ID in the computation to ensure that each thread generates unique results; SM 3500 is used to execute the program and perform computations, shared memory/L1 cache 3518 is used to communicate between threads, and LSU 3514 is used to read and write global memory through shared memory/L1 cache 3518 and the memory partition unit. In at least one embodiment, when configured for general purpose parallel computation, SM 3500 writes commands that scheduler unit 3504 can use to launch new work on the DPCs.

In at least one embodiment, the PPU is included in a desktop computer, a laptop computer, a tablet computer, a server, a supercomputer, a smart phone (e.g., a wireless, hand-held device), a personal digital assistant ("PDA"), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, or the like. In at least one embodiment, the PPU is embodied on a single semiconductor substrate. In at least one embodiment, the PPU is included in a system-on-a-chip ("SoC") along with one or more other devices, such as an additional PPU, memory, a reduced instruction set computer ("RISC") CPU, a memory management unit ("MMU"), a digital-to-analog converter ("DAC"), and the like.

In at least one embodiment, the PPU may be included on a graphics card that includes one or more memory devices. In at least one embodiment, the graphics card may be configured to interface with a PCIe slot on a motherboard of a desktop computer. In at least one embodiment, the PPU may be an integrated graphics processing unit ("iGPU") included in a chipset of a motherboard.

Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, a deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to SM 3500. In at least one embodiment, SM 3500 is used to infer or predict information based on a trained machine learning model (e.g., a neural network) that has been trained by another processor or system or by SM 3500. In at least one embodiment, SM 3500 may be used to perform one or more of the neural network use cases described herein.

In at least one embodiment, at least one component shown or described with respect to FIG. 35 is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, inference and/or training logic 715 includes and/or runs at least one aspect (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112) described with respect to FIG. 1. In at least one embodiment, inference and/or training logic 715 trains at least one untrained or partially trained neural network using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6.
In at least one embodiment, the inference and/or training logic performs at least one inferencing operation using a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6. In at least one embodiment, SM 3500 of FIG. 35 is utilized to implement the techniques and/or functionality described with respect to FIGS. 1-6.

Embodiments are disclosed related to a virtualized computing platform for advanced computing, such as image inferencing and image processing in medical applications. Without limitation, embodiments may include radiography, magnetic resonance imaging (MRI), nuclear medicine imaging, ultrasound, sonography, elastography, photoacoustic imaging, tomography, echocardiography, functional near-infrared spectroscopy, and magnetic particle imaging, or a combination thereof. In at least one embodiment, the virtualized computing platform and associated processes described herein may additionally or alternatively be used, without limitation, in forensic science analysis, sub-surface detection and imaging (e.g., oil exploration, archaeology, paleontology, etc.), topography, oceanography, geology, osteology, meteorology, intelligence fields, or in object tracking and surveillance, sensor data processing (e.g., RADAR, SONAR, LIDAR, etc.), and/or genomics and gene sequencing.

With reference to FIG. 36, FIG. 36 is an example data flow diagram for a process 3600 of generating and deploying an image processing and inferencing pipeline, in accordance with at least one embodiment. In at least one embodiment, process 3600 may be deployed for use with imaging devices, processing devices, genomics devices, gene sequencing devices, radiology devices, and/or other device types at one or more facilities 3602, such as medical facilities, hospitals, healthcare institutes, clinics, research or diagnostic labs, and so on.
In at least one embodiment, process 3600 may be deployed to perform genomics analysis and inferencing on sequencing data. Examples of genomic analyses that may be performed using the systems and processes described herein include, without limitation, variant calling, mutation detection, and gene expression quantification.

In at least one embodiment, process 3600 may be executed within a training system 3604 and/or a deployment system 3606. In at least one embodiment, training system 3604 may be used to perform training, deployment, and implementation of machine learning models (e.g., neural networks, object detection algorithms, computer vision algorithms, etc.) for use in deployment system 3606. In at least one embodiment, deployment system 3606 may be configured to offload processing and compute resources among a distributed computing environment to reduce infrastructure requirements at facility 3602. In at least one embodiment, deployment system 3606 may provide a streamlined platform for selecting, customizing, and implementing virtual instruments for use with imaging devices (e.g., MRI, CT scan, X-ray, ultrasound, etc.) or sequencing devices at facility 3602. In at least one embodiment, virtual instruments may include software-defined applications for performing one or more processing operations with respect to imaging data generated by imaging devices, sequencing devices, radiology devices, and/or other device types. In at least one embodiment, one or more applications in a pipeline may use or call upon services (e.g., inference, virtualization, compute, AI, etc.) of deployment system 3606 during execution of the applications.

In at least one embodiment, some of the applications used in the advanced processing and inferencing pipelines may use machine learning models or other AI to perform one or more processing steps.
In at least one embodiment, machine learning models may be trained at facility 3602 using data 3608 (such as imaging data) generated at facility 3602 (and stored on one or more picture archiving and communication system (PACS) servers at facility 3602), may be trained using imaging or sequencing data 3608 from one or more other facilities (e.g., a different hospital, lab, clinic, etc.), or a combination thereof. In at least one embodiment, training system 3604 may be used to provide applications, services, and/or other resources for generating working, deployable machine learning models for deployment system 3606.

In at least one embodiment, a model registry 3624 may be backed by object storage that may support versioning and object metadata. In at least one embodiment, object storage may be accessible through, for example, a cloud storage (e.g., cloud 3726 of FIG. 37) compatible application programming interface (API) from within a cloud platform. In at least one embodiment, machine learning models within model registry 3624 may be uploaded, listed, modified, or deleted by developers or partners of a system interacting with the API. In at least one embodiment, the API may provide access to methods that allow users with appropriate credentials to associate models with applications, such that the models may be executed as part of the execution of containerized instantiations of the applications.

In at least one embodiment, training pipeline 3704 (FIG. 37) may include a scenario in which facility 3602 is training its own machine learning model, or has an existing machine learning model that needs to be optimized or updated. In at least one embodiment, imaging data 3608 generated by imaging devices, sequencing devices, and/or other device types may be received.
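The model-registry operations described above (upload, list, modify, delete, with versioning and object metadata) can be sketched as a minimal in-memory registry. The class and method names here are entirely hypothetical illustrations; they do not represent an actual API of model registry 3624.

```python
# Hypothetical sketch of a versioned model registry supporting the
# upload / list / latest / delete operations described in the text.
class ModelRegistry:
    def __init__(self):
        self._models = {}  # model name -> list of version entries

    def upload(self, name, metadata):
        versions = self._models.setdefault(name, [])
        entry = {"version": len(versions) + 1, "metadata": metadata}
        versions.append(entry)
        return entry["version"]

    def list_models(self):
        return sorted(self._models)

    def latest(self, name):
        return self._models[name][-1]

    def delete(self, name):
        self._models.pop(name, None)

# Usage: retraining at a second facility produces a new version of
# the same model, and both versions remain tracked.
reg = ModelRegistry()
reg.upload("ct-segmentation", {"trained_at": "facility-a"})
v = reg.upload("ct-segmentation", {"trained_at": "facility-b"})
print(v, reg.list_models())  # prints: 2 ['ct-segmentation']
```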
In at least one embodiment, once imaging data 3608 is received, AI-assisted annotation 3610 may be used to aid in generating annotations corresponding to imaging data 3608 to be used as ground truth data for a machine learning model. In at least one embodiment, AI-assisted annotation 3610 may include one or more machine learning models (e.g., convolutional neural networks (CNNs)) that may be trained to generate annotations corresponding to certain types of imaging data 3608 (e.g., from certain devices) and/or certain types of anomalies in imaging data 3608. In at least one embodiment, AI-assisted annotations 3610 may then be used directly, or may be adjusted or fine-tuned using an annotation tool (e.g., by a researcher, a clinician, a doctor, a scientist, etc.), to generate ground truth data. In at least one embodiment, in some examples, labeled clinic data 3612 (e.g., annotations provided by a clinician, doctor, scientist, technician, etc.) may be used as ground truth data for training a machine learning model. In at least one embodiment, AI-assisted annotations 3610, labeled clinic data 3612, or a combination thereof may be used as ground truth data for training a machine learning model. In at least one embodiment, a trained machine learning model may be referred to as output model 3616, and may be used by deployment system 3606, as described herein.

In at least one embodiment, training pipeline 3704 (FIG. 37) may include a scenario in which facility 3602 needs a machine learning model for use in performing one or more processing tasks for one or more applications in deployment system 3606, but facility 3602 may not currently have such a machine learning model (or may not have a model that is optimized, efficient, or effective for such purposes). In at least one embodiment, an existing machine learning model may be selected from model registry 3624.
In at least one embodiment, model registry 3624 may contain machine learning models trained to perform a variety of different inference tasks on imaging data. In at least one embodiment, the machine learning models in model registry 3624 may have been trained on imaging data from a facility different from facility 3602 (eg, a remote facility). In at least one embodiment, the machine learning model may be trained on imaging data from one location, two locations, or any number of locations. In at least one embodiment, when training on imaging data from a particular location, the training may occur at that location, or at least in a manner that protects the confidentiality of the imaging data or may be done in a manner that restricts data from being transferred off-premises (eg, to comply with HIPPA regulations, privacy regulations). In at least one embodiment, a machine learning model may be added to model registry 3624 once the model has been trained, or partially trained, at one location. In at least one embodiment, the machine learning model may then be retrained or updated at any number of other facilities, and the retrained or updated model may be made available in model registry 3624. . In at least one embodiment, machine learning models, which may then be selected from model registry 3624, may be referred to as output models 3616, and may be used in deployment system 3606 to provide information about one or more applications of the deployment system. may perform one or more processing tasks for .In at least one embodiment, training pipeline 3704 (FIG. 37) is a machine learning pipeline for use by facility 3602 in performing one or more processing tasks for one or more applications in deployment system 3606. In need of a model, facility 3602 may not currently have such a machine learning model (or may not have an optimized, efficient, or effective model for such purposes). 
In at least one embodiment, a machine learning model selected from model registry 3624 may not be fine-tuned or optimized for imaging data 3608 generated at facility 3602 because of differences in populations, genetic variations, robustness of the training data used to train the machine learning model, diversity in anomalies of the training data, and/or other issues with the training data. In at least one embodiment, AI-assisted annotation 3610 may be used to assist in generating annotations corresponding to imaging data 3608 to be used as ground truth data for retraining or updating a machine learning model. In at least one embodiment, labeled clinic data 3612 (e.g., annotations provided by clinicians, physicians, scientists, engineers, etc.) may be used as ground truth data for training a machine learning model. In at least one embodiment, retraining or updating a machine learning model may be referred to as model training 3614. In at least one embodiment, model training 3614 may use AI-assisted annotations 3610, labeled clinic data 3612, or a combination thereof as ground truth data for retraining or updating a machine learning model.

In at least one embodiment, deployment system 3606 may include software 3618, services 3620, hardware 3622, and/or other components, features, and functionality. In at least one embodiment, deployment system 3606 may include a software "stack," such that software 3618 may be built on top of services 3620 and may use services 3620 to perform some or all processing tasks, and services 3620 and software 3618 may be built on top of hardware 3622 and use hardware 3622 to execute processing, storage, and/or other computing tasks of deployment system 3606.

In at least one embodiment, software 3618 may include any number of different containers, where each container may execute an instantiation of an application.
In at least one embodiment, each application may perform one or more processing tasks of an advanced processing and inference pipeline (e.g., inferencing, object detection, feature detection, segmentation, image enhancement, calibration, etc.). In at least one embodiment, there may be any number of containers capable of performing data processing tasks on imaging data 3608 (or other types of data, such as those described herein) generated by a device. In at least one embodiment, an advanced processing and inference pipeline may be defined based on selections of different containers that are desired or required to process imaging data 3608, in addition to containers that receive and configure imaging data (e.g., digital imaging and communications in medicine (DICOM) data, radiology information system (RIS) data, clinical information system (CIS) data, remote procedure call (RPC) data, data substantially compliant with a representational state transfer (REST) interface, data substantially compliant with a file-based interface, and/or raw data) for use by each container and/or for use by facility 3602 after processing through the pipeline (e.g., to convert outputs back to a usable data type for storage and display at facility 3602). In at least one embodiment, a combination of containers within software 3618 (e.g., that make up a pipeline) may be referred to as a virtual instrument (as described in more detail herein), and a virtual instrument may leverage services 3620 and hardware 3622 to execute some or all processing tasks of applications instantiated in containers.

In at least one embodiment, a data processing pipeline may receive input data (e.g., imaging data 3608) in a DICOM, RIS, CIS, REST-compliant, RPC, raw, and/or other format in response to an inference request (e.g., a request from a user of deployment system 3606, such as a clinician, a doctor, a radiologist, etc.).
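A pipeline defined as a selection of containers, with adapters at the input and output, can be sketched as a simple composition of callables. The stage names below (`to_internal`, `infer`, `to_dicom_like`) are hypothetical stand-ins for containerized applications, not actual components of the described system.

```python
def make_pipeline(*stages):
    """Compose per-container processing steps into one pipeline,
    mirroring a deployment pipeline built from selected containers."""
    def pipeline(data):
        for stage in stages:
            data = stage(data)
        return data
    return pipeline

# Hypothetical stages: an input adapter, an inference step, an output adapter.
to_internal = lambda d: {"pixels": d["raw"]}
infer = lambda d: {**d, "finding": "anomaly" if max(d["pixels"]) > 0.5 else "none"}
to_dicom_like = lambda d: {"result": d["finding"]}

run = make_pipeline(to_internal, infer, to_dicom_like)
print(run({"raw": [0.1, 0.9]}))  # {'result': 'anomaly'}
```

Swapping, adding, or removing a stage changes the pipeline without touching the other containers, which is the point of defining the pipeline as a selection of independent units.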
In at least one embodiment, input data may represent one or more images, video, and/or other data representations generated by one or more imaging devices, sequencing devices, radiology devices, genomics devices, and/or other device types. In at least one embodiment, data may undergo pre-processing as part of the data processing pipeline to prepare the data for processing by one or more applications. In at least one embodiment, post-processing may be performed on the output of one or more inferencing tasks or other processing tasks of the pipeline to prepare output data for a next application, and/or to prepare output data for transmission and/or use by a user (e.g., as a response to an inference request). In at least one embodiment, inferencing tasks may be performed by one or more machine learning models, such as trained or deployed neural networks, which may include output models 3616 of training system 3604.

In at least one embodiment, tasks of the data processing pipeline may be encapsulated in containers, each of which represents a discrete, fully functional instantiation of an application and a virtualized computing environment that is able to reference machine learning models. In at least one embodiment, containers or applications may be published into a private (e.g., limited-access) area of a container registry (described in more detail herein), and trained or deployed models may be stored in model registry 3624 and associated with one or more applications. In at least one embodiment, images of applications (e.g., container images) may be available in a container registry, and once selected by a user from the container registry for deployment in a pipeline, an image may be used to generate a container for an instantiation of an application for use by the user's system.

In at least one embodiment, a developer (e.g., a software developer, a clinician, a physician, etc.)
may develop, publish, and store applications (e.g., as containers) for performing image processing and/or inferencing on supplied data. In at least one embodiment, development, publishing, and/or storing may be performed using a software development kit (SDK) associated with the system (e.g., to ensure that an application and/or container developed is compliant or compatible with the system). In at least one embodiment, a developed application may be tested locally (e.g., at a first facility, on data from the first facility). In at least one embodiment, a DICOM object may contain anywhere from one to hundreds of images or other data types, and due to this variation in data, a developer may be responsible for managing extraction and preparation of incoming DICOM data (e.g., setting constructs for the application, building pre-processing into the application, etc.). In at least one embodiment, once validated by system 3700 (e.g., for accuracy, safety, patient privacy, etc.), an application may be made available in a container registry for selection and/or deployment by a user (e.g., a hospital, clinic, lab, healthcare provider, etc.) to perform one or more processing tasks with respect to data at the user's facility (e.g., a second facility).

In at least one embodiment, developers may then share applications or containers through a network for access and use by users of the system (e.g., system 3700 of FIG. 37). In at least one embodiment, completed and validated applications or containers may be stored in a container registry, and associated machine learning models may be stored in model registry 3624. In at least one embodiment, a requesting entity (e.g., a user at a medical facility) that provides an inference or image processing request may browse the container registry and/or model registry 3624.
In at least one embodiment, the requesting entity may search for applications, containers, datasets, machine learning models, etc., select a desired combination of elements for inclusion in a data processing pipeline, and submit an imaging processing request. In at least one embodiment, a request may include input data (and, in some examples, associated patient data) that is necessary to perform the request, and/or may include a selection of applications and/or machine learning models to be executed when processing the request. In at least one embodiment, the request may then be passed to one or more components of deployment system 3606 (e.g., a cloud) to perform processing of the data processing pipeline. In at least one embodiment, processing by deployment system 3606 may include referencing selected elements (e.g., applications, containers, models, etc.) from the container registry and/or model registry 3624. In at least one embodiment, once results are generated by a pipeline, the results may be returned to the user for reference (e.g., for viewing in a viewing application suite executing on a local, on-premises workstation or terminal). In at least one embodiment, a radiologist may receive results from a data processing pipeline that includes any number of applications and/or containers, where the results may include anomaly detection in X-rays, CT scans, MRIs, etc.

In at least one embodiment, to assist in processing or execution of applications or containers in pipelines, services 3620 may be leveraged. In at least one embodiment, services 3620 may include compute services, artificial intelligence (AI) services, visualization services, and/or other service types. In at least one embodiment, services 3620 may provide functionality that is common to one or more applications in software 3618, so functionality may be abstracted to a service that may be called upon or leveraged by applications.
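The request flow just described (select elements from the registries, submit the request with input data, have the deployment side reference those elements and return results) can be sketched as follows. All names here (`submit_request`, the registry dictionaries, the threshold "model") are hypothetical illustrations, not the system's actual API.

```python
def submit_request(container_registry, model_registry, selection, input_data):
    """Sketch of an imaging processing request: selected applications and
    a model are looked up from registries, then applied to input data."""
    apps = [container_registry[name] for name in selection["apps"]]
    model = model_registry[selection["model"]]
    data = input_data
    for app in apps:            # deployment side runs the chosen pipeline
        data = app(data, model)
    return data                 # results returned to the requesting user

# Hypothetical registries: one application and one "model" (a threshold).
containers = {"anomaly_detect": lambda d, m: {"anomaly": d["value"] > m}}
models = {"ct_baseline": 0.7}

result = submit_request(containers, models,
                        {"apps": ["anomaly_detect"], "model": "ct_baseline"},
                        {"value": 0.9})
print(result)  # {'anomaly': True}
```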
In at least one embodiment, where a service 3620 provides functionality, the functionality may be executed dynamically and more efficiently, while also scaling well by allowing applications to process data in parallel (e.g., using parallel computing platform 3730 (FIG. 37)). In at least one embodiment, rather than requiring that each application sharing the same functionality offered by a service 3620 have a respective instance of the service 3620, the service 3620 may be shared between and among various applications. In at least one embodiment, services may include, as non-limiting examples, an inference server or engine that may be used for executing detection or segmentation tasks. In at least one embodiment, a model training service may be included that may provide machine learning model training and/or retraining capabilities. In at least one embodiment, a data augmentation service may further be included that may provide GPU-accelerated extraction, resizing, scaling, and/or other augmentation of data (e.g., DICOM, RIS, CIS, REST-compliant, RPC, raw, etc.). In at least one embodiment, a visualization service may be used that may add image rendering effects, such as ray tracing, rasterization, denoising, sharpening, etc., to add realism to two-dimensional (2D) and/or three-dimensional (3D) models. In at least one embodiment, virtual instrument services may be included that provide beamforming, segmentation, inferencing, imaging, and/or support for other applications within pipelines of virtual instruments.

In at least one embodiment, where a service 3620 includes an AI service (e.g., an inference service), one or more machine learning models associated with an application for anomaly detection (e.g., tumors, growth abnormalities, scarring, etc.) may be executed by calling upon (e.g., as an API call) the inference service (e.g., an inference server) to execute the machine learning models, or processing thereof, as part of application execution.
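The sharing pattern described above, in which applications use one common service instance rather than each owning a private copy, is essentially a shared-instance (singleton-style) design. The sketch below illustrates it with a hypothetical `InferenceService`; the class and its trivial `infer` method are illustrative only.

```python
class InferenceService:
    """Sketch of a shared inference service: applications call one shared
    instance instead of each holding a private instance of the service."""
    _instance = None

    def __init__(self):
        self.calls = 0

    @classmethod
    def shared(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def infer(self, data):
        self.calls += 1                 # one counter across all callers
        return {"prediction": sum(data) / len(data)}

# Two different "applications" end up holding the same service instance.
segmentation_app = InferenceService.shared()
detection_app = InferenceService.shared()
print(segmentation_app is detection_app)  # True: one instance, shared
```

Because both applications route through the same instance, the service can batch, cache, and load-balance across callers, which is what makes the abstraction more efficient than per-application copies.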
In at least one embodiment, where another application includes one or more machine learning models for segmentation tasks, the application may call upon the inference service to execute machine learning models for performing one or more of the processing operations associated with segmentation tasks. In at least one embodiment, software 3618 implementing an advanced processing and inference pipeline that includes a segmentation application and an anomaly detection application may be streamlined because each application may call upon the same inference service to perform one or more inference tasks.

In at least one embodiment, hardware 3622 may include GPUs, CPUs, graphics cards, an AI/deep learning system (e.g., an AI supercomputer, such as NVIDIA's DGX supercomputer system), a cloud platform, or a combination thereof. In at least one embodiment, different types of hardware 3622 may be used to provide efficient, purpose-built support for software 3618 and services 3620 in deployment system 3606. In at least one embodiment, use of GPU processing may be implemented for processing locally (e.g., at facility 3602), within an AI/deep learning system, in a cloud system, and/or in other processing components of deployment system 3606, to improve the efficiency, accuracy, and efficacy of image processing, image reconstruction, segmentation, MRI exams, stroke or heart attack detection (e.g., in real-time), image quality in rendering, etc. In at least one embodiment, a facility may include imaging devices, genomics devices, sequencing devices, and/or other device types on-premises that may leverage GPUs to generate imaging data representative of a subject's anatomy.

In at least one embodiment, software 3618 and/or services 3620 may be optimized for GPU processing with respect to deep learning, machine learning, and/or high-performance computing, as non-limiting examples.
In at least one embodiment, at least some of the computing environment of deployment system 3606 and/or training system 3604 may be executed in a datacenter or in one or more supercomputers or high-performance computing systems with GPU-optimized software (e.g., the hardware and software combination of NVIDIA's DGX system). In at least one embodiment, datacenters may be compliant with provisions of HIPAA, such that receipt, processing, and transmission of imaging data and/or other patient data are securely handled with respect to privacy of patient data. In at least one embodiment, hardware 3622 may include any number of GPUs that may be called upon to perform processing of data in parallel, as described herein. In at least one embodiment, a cloud platform may further include GPU processing for GPU-optimized execution of deep learning tasks, machine learning tasks, or other computing tasks. In at least one embodiment, a cloud platform (e.g., NVIDIA's NGC) may be executed using AI/deep learning supercomputers and/or GPU-optimized software (e.g., as provided on NVIDIA's DGX systems) as a hardware abstraction and scaling platform. In at least one embodiment, a cloud platform may integrate an application container clustering system or orchestration system (e.g., KUBERNETES) on multiple GPUs to enable seamless scaling and load balancing.

In at least one embodiment, at least one component shown or described with respect to FIG. 36 is used to implement techniques and/or functions described in connection with FIGS. 1-6. In at least one embodiment, training system 3604 and/or deployment system 3606 includes or operates at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112).
In at least one embodiment, training system 3604 trains at least one untrained or partially trained neural network using a representation of a computer program that indicates operations and/or instructions that can be performed speculatively, such as described with respect to one or more of FIGS. 1-6. In at least one embodiment, deployment system 3606 performs at least one inferencing operation using a representation of a computer program that indicates operations and/or instructions that can be performed speculatively, such as described with respect to one or more of FIGS. 1-6.

FIG. 37 is a system diagram for an example system 3700 for generating and deploying an imaging deployment pipeline, in accordance with at least one embodiment. In at least one embodiment, system 3700 may be used to implement process 3600 of FIG. 36 and/or other processes, including advanced processing and inference pipelines. In at least one embodiment, system 3700 may include training system 3604 and deployment system 3606. In at least one embodiment, training system 3604 and deployment system 3606 may be implemented using software 3618, services 3620, and/or hardware 3622, as described herein.

In at least one embodiment, system 3700 (e.g., training system 3604 and/or deployment system 3606) may be implemented in a cloud computing environment (e.g., using cloud 3726). In at least one embodiment, system 3700 may be implemented locally with respect to a healthcare services facility, or as a combination of both cloud and local computing resources. In at least one embodiment, in embodiments where cloud computing is implemented, patient data may be separated from, or unprocessed by, one or more components of system 3700 that would render processing non-compliant with HIPAA and/or other data handling and privacy regulations or laws. In at least one embodiment, access to APIs in cloud 3726 may be restricted to authorized users through enacted security measures or protocols.
In at least one embodiment, a security protocol may include web tokens that may be signed by an authentication service (e.g., AuthN, AuthZ, Gluecon, etc.) and may carry appropriate authorization. In at least one embodiment, APIs of virtual instruments (described herein), or other instantiations of system 3700, may be restricted to a set of public IPs that have been vetted or authorized for interaction.

In at least one embodiment, the various components of system 3700 may communicate between and among one another using any of a variety of different network types, including but not limited to local area networks (LANs) and/or wide area networks (WANs), via wired and/or wireless communication protocols. In at least one embodiment, communication between facilities and components of system 3700 (e.g., for transmitting inference requests, for receiving results of inference requests, etc.) may take place over one or more data buses, wireless data protocols (e.g., Wi-Fi), wired data protocols (e.g., Ethernet), etc.

In at least one embodiment, training system 3604 may execute training pipelines 3704, similar to those described herein with respect to FIG. 36. In at least one embodiment, where one or more machine learning models are to be used in deployment pipelines 3710 by deployment system 3606, training pipelines 3704 may be used to train or retrain one or more (e.g., pre-trained) models, and/or to implement one or more of pre-trained models 3706 (e.g., without a need for retraining or updating). In at least one embodiment, as a result of training pipelines 3704, output models 3616 may be generated. In at least one embodiment, training pipelines 3704 may include any number of processing steps, such as, but not limited to, conversion or adaptation of imaging data (or other input data) using DICOM adapter 3702A, AI-assisted annotation 3610, labeling or annotating of imaging data 3608 to generate labeled clinic data 3612, model training 3614 for training, retraining, or updating models, and/or other processing steps.
In at least one embodiment, different training pipelines 3704 may be used for different machine learning models used by deployment system 3606. In at least one embodiment, a training pipeline 3704 similar to a first example described with respect to FIG. 36 may be used for a first machine learning model, a training pipeline 3704 similar to a second example described with respect to FIG. 36 may be used for a second machine learning model, and a training pipeline 3704 similar to a third example described with respect to FIG. 36 may be used for a third machine learning model. In at least one embodiment, any combination of tasks within training system 3604 may be used, depending on what is required for each respective machine learning model. In at least one embodiment, one or more of the machine learning models may already be trained and ready for deployment, so the machine learning models may not be subjected to any processing by training system 3604 and may be implemented by deployment system 3606.

In at least one embodiment, output models 3616 and/or pre-trained models 3706 may include any type of machine learning model, depending on the implementation or embodiment. In at least one embodiment, and without limitation, machine learning models used by system 3700 may include models using linear regression, logistic regression, decision trees, support vector machines (SVM), Naive Bayes, k-nearest neighbor (KNN), k-means clustering, random forests, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, long/short term memory (LSTM), Hopfield, Boltzmann, deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models.

In at least one embodiment, training pipelines 3704 may include AI-assisted annotation, as described in more detail herein with respect to at least FIG. 40B.
In at least one embodiment, labeled clinic data 3612 (e.g., traditional annotation) may be generated by any number of techniques. In at least one embodiment, labels or other annotations may be generated within a drawing program (e.g., an annotation program), a computer aided design (CAD) program, a labeling program, or another type of program suitable for generating annotations or labels for ground truth, and/or may be hand drawn in some examples. In at least one embodiment, ground truth data may be synthetically produced (e.g., generated from computer models or renderings), real produced (e.g., designed and produced from real-world data), machine-automated (e.g., using feature analysis and learning to extract features from data and then generate labels), human annotated (e.g., a labeler, or annotation expert, defines the locations of labels), and/or a combination thereof. In at least one embodiment, for each instance of imaging data 3608 (or other data type used by machine learning models), there may be corresponding ground truth data generated by training system 3604. In at least one embodiment, AI-assisted annotation may be performed as part of deployment pipelines 3710, either in addition to, or in lieu of, AI-assisted annotation included in training pipelines 3704. In at least one embodiment, system 3700 may include a multi-layer platform that may include a software layer (e.g., software 3618) of diagnostic applications (or other application types) that may perform one or more medical imaging and diagnostic functions. In at least one embodiment, system 3700 may be communicatively coupled (e.g., via encrypted links) to PACS server networks of one or more facilities. In at least one embodiment, system 3700 may be configured to access and reference data (e.g., DICOM data, RIS data, raw data, CIS data, REST-compliant data, RPC data, etc.) from PACS servers (e.g., via DICOM adapter 3702, or another data type adapter, such as RIS, CIS, REST-compliant, RPC, raw, etc.)
and may be configured to reference such data to perform actions, such as training machine learning models, deploying machine learning models, image processing, inferencing, and/or other actions.

In at least one embodiment, a software layer may be implemented as a secure, encrypted, and/or authenticated API through which applications or containers may be invoked (e.g., called) from external environments (e.g., facility 3602). In at least one embodiment, applications may then call or execute one or more services 3620 for performing compute, AI, or visualization tasks associated with the respective applications, and software 3618 and/or services 3620 may leverage hardware 3622 to perform processing tasks in an effective and efficient manner.

In at least one embodiment, deployment system 3606 may execute deployment pipelines 3710. In at least one embodiment, deployment pipelines 3710 may include any number of applications, including the AI-assisted annotation described above, that may be sequentially, non-sequentially, or otherwise applied to imaging data (and/or other data types) generated by imaging devices, sequencing devices, genomics devices, etc. In at least one embodiment, as described herein, a deployment pipeline 3710 for an individual device may be referred to as a virtual instrument for the device (e.g., a virtual ultrasound instrument, a virtual CT scan instrument, a virtual sequencing instrument, etc.). In at least one embodiment, for a single device, there may be more than one deployment pipeline 3710, depending on the information desired from data generated by the device. In at least one embodiment, where detection of anomalies is desired from an MRI machine, there may be a first deployment pipeline 3710, and where image enhancement is desired from the output of the MRI machine, there may be a second deployment pipeline 3710.
In at least one embodiment, applications available for deployment pipelines 3710 may include any application that may be used for performing processing tasks on imaging data or other data from devices. In at least one embodiment, different applications may be responsible for image enhancement, segmentation, reconstruction, anomaly detection, object detection, feature detection, treatment planning, dosimetry, beam planning (or other radiation treatment procedures), and/or other analysis, image processing, or inferencing tasks. In at least one embodiment, deployment system 3606 may define constructs for each of the applications, so that users of deployment system 3606 (e.g., medical facilities, training centers, clinics, etc.) may understand the constructs and adapt the applications for implementation within their respective facilities. In at least one embodiment, an application for image reconstruction may be selected for inclusion in a deployment pipeline 3710, but the data type generated by an imaging device may differ from the data type used within the application. In at least one embodiment, DICOM adapter 3702B (and/or a DICOM reader), or another data type adapter or reader (e.g., RIS, CIS, REST-compliant, RPC, raw, etc.), may be used within a deployment pipeline 3710 to convert data to a form usable by applications within deployment system 3606. In at least one embodiment, access to DICOM, RIS, CIS, REST-compliant, RPC, raw, and/or other data type libraries may be accumulated and pre-processed, including decoding data, extracting data, and/or performing any convolutions, color corrections, sharpening, gamma, and/or other augmentations of the data. In at least one embodiment, DICOM, RIS, CIS, REST-compliant, RPC, and/or raw data may be unordered, and a pre-pass may be executed to organize or sort the collected data.
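The pre-pass over unordered data and the adapter conversion just described can be sketched together. The record layout (`instance`, `pixel_data`) is a hypothetical stand-in for real DICOM attributes such as Instance Number, and the `adapt` function is illustrative, not the actual DICOM adapter 3702B interface.

```python
def pre_pass(records):
    """Sketch of a pre-pass over unordered DICOM-like records: sort by
    instance number so downstream applications see an organized series."""
    return sorted(records, key=lambda r: r["instance"])

def adapt(record):
    """Sketch of a data-type adapter: convert a DICOM-like record into
    a plain form usable by applications (field names are hypothetical)."""
    return {"image": record["pixel_data"], "index": record["instance"]}

# Slices arrive out of order, as they may from a scanner or PACS export.
series = [{"instance": 3, "pixel_data": "c"},
          {"instance": 1, "pixel_data": "a"},
          {"instance": 2, "pixel_data": "b"}]

prepared = [adapt(r) for r in pre_pass(series)]
print([p["image"] for p in prepared])  # ['a', 'b', 'c']
```

Running the pre-pass before the adapter means every application in the pipeline can assume an ordered, uniform input format.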
In at least one embodiment, because various applications may share common image operations, in some embodiments a data augmentation library (e.g., as one of services 3620) may be used to accelerate these operations. In at least one embodiment, to avoid the bottlenecks of conventional processing approaches that rely on CPU processing, parallel computing platform 3730 may be used for GPU acceleration of these processing tasks.

In at least one embodiment, an image reconstruction application may include a processing task that includes the use of a machine learning model. In at least one embodiment, a user may desire to use their own machine learning model, or to select a machine learning model from model registry 3624. In at least one embodiment, users may implement their own machine learning model or select a machine learning model for inclusion in an application for performing a processing task. In at least one embodiment, applications may be selectable and customizable, and by defining the constructs of applications, deployment and implementation of applications for a particular user is presented as a more seamless user experience. In at least one embodiment, by leveraging other features of system 3700, such as services 3620 and hardware 3622, deployment pipelines 3710 may be even more user friendly, provide for easier integration, and produce more accurate, efficient, and timely results.

In at least one embodiment, deployment system 3606 may include a user interface 3714 (e.g., a graphical user interface, a web interface, etc.) that may be used to select applications for inclusion in deployment pipelines 3710, arrange applications, modify or change applications or parameters or constructs thereof, use and interact with deployment pipelines 3710 during set-up and/or deployment, and/or otherwise interact with deployment system 3606.
In at least one embodiment, although not illustrated with respect to training system 3604, user interface 3714 (or a different user interface) may be used for selecting models for use in deployment system 3606, for selecting models for training or retraining in training system 3604, and/or for otherwise interacting with training system 3604.

In at least one embodiment, pipeline manager 3712 may be used, in addition to application orchestration system 3728, to manage interaction between the applications or containers of deployment pipelines 3710 and services 3620 and/or hardware 3622. In at least one embodiment, pipeline manager 3712 may be configured to facilitate interactions from application to application, from application to service 3620, and/or from application or service to hardware 3622. In at least one embodiment, although illustrated as included in software 3618, this is not intended to be limiting, and in some examples (e.g., as illustrated in FIG. 38) pipeline manager 3712 may be included in services 3620. In at least one embodiment, application orchestration system 3728 (e.g., Kubernetes, DOCKER, etc.) may include a container orchestration system that may group applications into containers as logical units for coordination, management, scaling, and deployment. In at least one embodiment, by associating applications from deployment pipelines 3710 (e.g., a reconstruction application, a segmentation application, etc.) with individual containers, each application may execute in a self-contained environment (e.g., at a kernel level) to increase speed and efficiency.

In at least one embodiment, each application and/or container (or image thereof) may be individually developed, modified, and deployed (e.g., a first user or developer may develop, modify, and deploy a first application, and a second user or developer may develop, modify, and deploy a second application separate from the first user or developer), which may allow for focus on, and attention to, a task of a single application and/or container without being hindered by tasks of other applications or containers.
In at least one embodiment, communication and cooperation between different containers or applications may be aided by pipeline manager 3712 and application orchestration system 3728. In at least one embodiment, application orchestration system 3728 and/or pipeline manager 3712 may facilitate communication among and between each of the applications or containers, and sharing of resources among and between them. In at least one embodiment, because one or more of the applications or containers in deployment pipelines 3710 may share the same services and resources, application orchestration system 3728 may orchestrate, load balance, and determine the sharing of services or resources between and among the various applications or containers. In at least one embodiment, a scheduler may be used to track the resource requirements of applications or containers, current or planned usage of these resources, and resource availability. In at least one embodiment, the scheduler may thus allocate resources to different applications and distribute resources between and among applications in view of the requirements and availability of the system. In some examples, the scheduler (and/or other component of application orchestration system 3728) may determine resource availability and distribution based on constraints imposed on the system (e.g., user constraints), such as quality of service (QoS) or the urgency of the need for data outputs (e.g., to determine whether to execute real-time processing or delayed processing).

In at least one embodiment, services 3620 leveraged and shared by applications or containers in deployment system 3606 may include compute services 3716, AI services 3718, visualization services 3720, and/or other service types. In at least one embodiment, applications may call (e.g., execute) one or more of services 3620 to perform processing operations for an application.
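The priority-aware resource distribution described above, where urgency (e.g., real-time vs. delayed processing) decides what runs first, can be illustrated with a minimal priority-queue scheduler. This is a generic sketch of the idea, not the scheduler used by application orchestration system 3728; the `Scheduler` class and its priority encoding are hypothetical.

```python
import heapq

class Scheduler:
    """Sketch of a scheduler that distributes work by priority, e.g. a
    high-priority/low-latency path ahead of a standard-priority path."""

    def __init__(self):
        self._queue = []
        self._order = 0  # tie-breaker keeps submission order stable

    def submit(self, priority, task):
        """Lower priority value = more urgent (runs first)."""
        heapq.heappush(self._queue, (priority, self._order, task))
        self._order += 1

    def run_next(self):
        _, _, task = heapq.heappop(self._queue)
        return task()

sched = Scheduler()
sched.submit(1, lambda: "routine scan")          # standard priority
sched.submit(0, lambda: "emergency inference")   # urgent path jumps the queue
print(sched.run_next())  # emergency inference
```

A real orchestrator would also weigh current resource usage and availability, but the ordering principle (urgent requests preempt deferred analysis) is the same.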
In at least one embodiment, compute services 3716 may be leveraged by applications to perform supercomputing or other high-performance computing (HPC) tasks. In at least one embodiment, compute services 3716 may be leveraged to perform parallel processing (e.g., using parallel computing platform 3730) for processing data through one or more of the applications substantially simultaneously and/or for processing one or more tasks of a single application substantially simultaneously. In at least one embodiment, parallel computing platform 3730 (e.g., NVIDIA's CUDA) may enable general-purpose computing on GPUs (GPGPU) (e.g., GPUs 3722). In at least one embodiment, a software layer of parallel computing platform 3730 may provide access to virtual instruction sets and parallel computational elements of GPUs for execution of compute kernels. In at least one embodiment, parallel computing platform 3730 may include memory, and in some embodiments, memory may be shared between and among multiple containers and/or between and among different processing tasks within a single container. In at least one embodiment, inter-process communication (IPC) calls may be generated for multiple containers and/or for multiple processes within a container to use the same data from a shared segment of memory of parallel computing platform 3730 (e.g., where multiple different stages of an application, or multiple applications, are processing the same information). In at least one embodiment, rather than making copies of data and moving data to different locations in memory (e.g., a read/write operation), the same data in the same location of memory may be used for any number of processing tasks (e.g., at a same time, at different times, etc.). In at least one embodiment, as data is used to generate new data as a result of processing, this information of a new location of data may be stored in and shared between various applications.
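The idea of multiple processes using the same data at the same memory location, rather than copying it, can be illustrated with a small shared-memory sketch. This is a minimal stand-in using Python's standard library, not the parallel computing platform's actual IPC mechanism:

```python
# Two processes read the same buffer from one shared-memory segment
# instead of copying the data between them.
from multiprocessing import Process, shared_memory

def consumer(name, length):
    # Attach to the existing segment by name; no data is copied.
    view = shared_memory.SharedMemory(name=name)
    total = sum(view.buf[:length])       # read shared bytes in place
    view.close()
    print("sum of shared bytes:", total)

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=4)
    shm.buf[:4] = bytes([1, 2, 3, 4])    # producer writes once
    p = Process(target=consumer, args=(shm.name, 4))
    p.start(); p.join()
    shm.close(); shm.unlink()            # free the segment
```

In a GPU setting the analogous mechanism would be CUDA IPC handles to device memory, but the principle is the same: share a location, not copies.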
In at least one embodiment, a location of data, and a location of updated or modified data, may be part of a definition of how a payload is understood within containers. In at least one embodiment, AI services 3718 may be leveraged to perform inference services for executing machine learning models associated with applications (e.g., tasked with performing one or more processing tasks of an application). In at least one embodiment, AI services 3718 may leverage AI system 3724 to execute machine learning models (e.g., neural networks, such as CNNs) for segmentation, reconstruction, object detection, feature detection, classification, and/or other inference tasks. In at least one embodiment, applications of deployment pipeline 3710 may use one or more of output models 3616 from training system 3604 and/or other models of applications to perform inference on imaging data (e.g., DICOM data, RIS data, CIS data, REST-compliant data, RPC data, raw data, etc.). In at least one embodiment, two or more examples of inferencing using application orchestration system 3728 (e.g., a scheduler) may be available. In at least one embodiment, a first category may include a high-priority/low-latency path that may achieve higher service level agreements, such as for performing inference on urgent requests during an emergency, or for a radiologist during diagnosis. In at least one embodiment, a second category may include a standard-priority path that may be used for requests that are not urgent or where analysis may be performed at a later time. In at least one embodiment, application orchestration system 3728 may distribute resources (e.g., services 3620 and/or hardware 3622) based on priority paths for different inferencing tasks of AI services 3718. In at least one embodiment, shared storage may be mounted to AI services 3718 within system 3700.
In at least one embodiment, shared storage may operate as a cache (or other storage device type) and may be used to process inference requests from applications. In at least one embodiment, when an inference request is submitted, the request may be received by a set of API instances of deployment system 3606, and one or more instances may be selected (e.g., for best fit, for load balancing, etc.) to process the request. In at least one embodiment, to process a request, the request may be entered into a database, a machine learning model may be located from model registry 3624 if not already in a cache, a validation step may ensure that an appropriate machine learning model is loaded into a cache (e.g., shared storage), and/or a copy of the model may be saved to a cache. In at least one embodiment, if an application is not already running, or if there are not enough instances of the application, a scheduler (e.g., of pipeline manager 3712) may be used to launch the application referenced in the request. In at least one embodiment, if an inference server for executing a model is not already launched, an inference server may be launched. In at least one embodiment, any number of inference servers may be launched per model. In at least one embodiment, in a pull model in which inference servers are clustered, models may be cached whenever load balancing is advantageous. In at least one embodiment, inference servers may be statically loaded onto corresponding, distributed servers. In at least one embodiment, inferencing may be performed using an inference server that runs in a container. In at least one embodiment, an instance of an inference server may be associated with a model (and, optionally, multiple versions of the model). In at least one embodiment, if an instance of an inference server does not exist when a request to perform inference on a model is received, a new instance may be loaded.
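The load-on-first-request pattern described above (check the cache, fetch from the registry if absent, launch a server instance if none exists) can be sketched as follows. Every name here (`ModelRegistry`, `InferenceService`) is invented for illustration and does not correspond to any real API:

```python
# Hedged sketch of lazy model loading and server launch on first request.
class ModelRegistry:
    def __init__(self, models):
        self._models = models          # model name -> model artifact

    def fetch(self, name):
        return self._models[name]

class InferenceService:
    def __init__(self, registry):
        self.registry = registry
        self.cache = {}                # stand-in for shared storage
        self.servers = {}              # model name -> running "server"

    def infer(self, model_name, data):
        # Validation step: ensure the model is loaded into the cache.
        if model_name not in self.cache:
            self.cache[model_name] = self.registry.fetch(model_name)
        # Launch an instance for the model if none is running yet.
        if model_name not in self.servers:
            self.servers[model_name] = self.cache[model_name]
        return self.servers[model_name](data)

registry = ModelRegistry({"segmentation": lambda x: [v > 0 for v in x]})
service = InferenceService(registry)
print(service.infer("segmentation", [-1, 2]))  # model is loaded on first use
```

Subsequent requests for the same model hit the cache and the running instance, which is the latency win the passage describes.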
In at least one embodiment, when an inference server is launched, a model may be passed to the inference server such that a same container may be used to serve different models, so long as the inference server runs as a different instance. In at least one embodiment, while an application is executing, an inference request for a given application may be received, a container (e.g., hosting an instance of an inference server) may be loaded (if not already loaded), and a start procedure may be called. In at least one embodiment, pre-processing logic in a container may load, decode, and/or perform any additional pre-processing on incoming data (e.g., using CPU(s) and/or GPU(s)). In at least one embodiment, once data is prepared for inference, the container may perform inference on the data as needed. In at least one embodiment, this may include a single inference call on one image (e.g., a hand X-ray), or may require inference on hundreds of images (e.g., a chest CT). In at least one embodiment, an application may summarize results before completing, which may include, without limitation, a single confidence score, pixel-level segmentation, voxel-level segmentation, a visualization to summarize findings, or text to summarize findings. In at least one embodiment, different models or applications may be assigned different priorities; for example, some models may have a real-time priority (TAT of less than one minute), while other models may have a lower priority (e.g., a TAT of less than ten minutes). In at least one embodiment, model execution times may be measured from the requesting institution or entity, and may include partner network traversal time as well as execution on the inference service. In at least one embodiment, transfer of requests between services 3620 and inference applications may be hidden behind a software development kit (SDK), and robust transport may be provided through a queue.
In at least one embodiment, a request is placed in a queue via an API for an individual application/tenant ID combination, and an SDK pulls the request from the queue and gives the request to an application. In at least one embodiment, a name of a queue may be provided in an environment from which the SDK picks it up. In at least one embodiment, asynchronous communication through a queue may be useful because it may allow any instance of an application to pick up work as it becomes available. In at least one embodiment, results may be transferred back through a queue, to ensure no data is lost. In at least one embodiment, queues may also provide an ability to segment work, as highest-priority work may go to a queue with most instances of an application connected to it, while lowest-priority work may go to a queue with a single instance connected to it that processes tasks in an order received. In at least one embodiment, an application may run on a GPU-accelerated instance generated in cloud 3726, and an inference service may perform inferencing on a GPU. In at least one embodiment, visualization services 3720 may be leveraged to generate visualizations for viewing outputs of applications and/or deployment pipeline(s) 3710. In at least one embodiment, GPUs 3722 may be leveraged by visualization services 3720 to generate visualizations. In at least one embodiment, rendering effects, such as ray tracing, may be implemented by visualization services 3720 to generate higher-quality visualizations. In at least one embodiment, visualizations may include, without limitation, 2D image renderings, 3D volume renderings, 3D volume reconstructions, 2D tomographic slices, virtual reality displays, augmented reality displays, etc. In at least one embodiment, virtualized environments may be used to generate a virtual interactive display or environment (e.g., a virtual environment) for interaction by users of a system.
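The work-segmentation idea above (a high-priority queue drained by many instances, a low-priority queue drained by one, each in arrival order) can be sketched with plain queues. This is a toy, single-threaded stand-in for the queue-and-SDK transport the passage describes:

```python
# High-priority requests go to a queue with several "workers"; low-priority
# requests share one worker, so urgent work completes first.
from queue import Queue

high, low = Queue(), Queue()
for req in ["stat_ct_1", "stat_ct_2"]:
    high.put(req)
low.put("research_batch")

def drain(q, workers):
    # Each worker pulls one request per round, preserving arrival order.
    done = []
    while not q.empty():
        for _ in range(workers):
            if q.empty():
                break
            done.append(q.get())
    return done

completed = drain(high, workers=3) + drain(low, workers=1)
print(completed)  # → ['stat_ct_1', 'stat_ct_2', 'research_batch']
```

In a real deployment the workers would be concurrent application instances subscribed to named queues; the sketch only shows the ordering guarantee.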
In at least one embodiment, visualization services 3720 may include an internal visualizer, cinematics, and/or other rendering or image processing capabilities or functionality (e.g., ray tracing, rasterization, internal optics, etc.). In at least one embodiment, hardware 3622 may include GPUs 3722, AI system 3724, cloud 3726, and/or any other hardware used for executing training system 3604 and/or deployment system 3606. In at least one embodiment, GPUs 3722 (e.g., NVIDIA's TESLA and/or QUADRO GPUs) may include any number of GPUs that may be used for executing processing tasks of compute services 3716, AI services 3718, visualization services 3720, other services, and/or any of the features or functionality of software 3618. For example, with respect to AI services 3718, GPUs 3722 may be used to perform pre-processing on imaging data (or other data types used by machine learning models), post-processing on outputs of machine learning models, and/or inferencing (e.g., to execute machine learning models). In at least one embodiment, cloud 3726, AI system 3724, and/or other components of system 3700 may use GPUs 3722. In at least one embodiment, cloud 3726 may include a GPU-optimized platform for deep learning tasks. In at least one embodiment, AI system 3724 may use GPUs, and cloud 3726, or at least a portion tasked with deep learning or inferencing, may be executed using one or more AI systems 3724. As such, although hardware 3622 is illustrated as discrete components, this is not intended to be limiting, and any components of hardware 3622 may be combined with, or leveraged by, any other components of hardware 3622. In at least one embodiment, AI system 3724 may include a purpose-built computing system (e.g., a supercomputer or an HPC) configured for inferencing, deep learning, machine learning, and/or other artificial intelligence tasks.
In at least one embodiment, AI system 3724 (e.g., NVIDIA's DGX) may include GPU-optimized software (e.g., a software stack) that may be executed using a plurality of GPUs 3722, in addition to CPUs, RAM, storage, and/or other components, features, or functionality. In at least one embodiment, one or more AI systems 3724 may be implemented in cloud 3726 (e.g., in a data center) for performing some or all of the AI-based processing tasks of system 3700. In at least one embodiment, cloud 3726 may include a GPU-accelerated infrastructure (e.g., NVIDIA's NGC) that may provide a GPU-optimized platform for executing processing tasks of system 3700. In at least one embodiment, cloud 3726 may include AI system(s) 3724 for performing one or more of the AI-based tasks of system 3700 (e.g., as a hardware abstraction and scaling platform). In at least one embodiment, cloud 3726 may integrate with application orchestration system 3728, leveraging multiple GPUs, to enable seamless scaling and load balancing between and among applications and services 3620. In at least one embodiment, cloud 3726 may be tasked with executing at least some of services 3620 of system 3700, including compute services 3716, AI services 3718, and/or visualization services 3720, as described herein. In at least one embodiment, cloud 3726 may perform small- and large-batch inference (e.g., executing NVIDIA's TensorRT), provide an accelerated parallel computing API and platform 3730 (e.g., NVIDIA's CUDA), execute application orchestration system 3728 (e.g., KUBERNETES), provide a graphics rendering API and platform (e.g., for ray tracing, 2D graphics, 3D graphics, and/or other rendering techniques to produce higher-quality cinematics), and/or may provide other functionality for system 3700. In at least one embodiment, to preserve patient confidentiality (e.g., where patient data or records are to be used off-premises), cloud 3726 may include a registry, such as a deep learning container registry. In at least one embodiment, the registry may store containers for instantiations of applications that may perform pre-processing, post-processing, or other processing tasks on patient data. In at least one embodiment, cloud 3726 may receive data that includes patient data as well as sensor data in containers, perform requested processing on only the sensor data in those containers, and then forward a resultant output and/or visualization to appropriate parties and/or devices (e.g., on-premises medical devices used for visualization or diagnosis), all without requiring extraction, storage, or other access to the patient data. In at least one embodiment, confidentiality of the patient data is preserved in compliance with HIPAA and/or other data regulations. In at least one embodiment, at least one component shown or described with respect to FIG. 37 is used to implement techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, training system 3704 and/or deployment system 3706 include and/or run at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). In at least one embodiment, training system 3704 trains at least one untrained or partially trained neural network using a computer program representation of speculatively executable operations and/or instructions, such as described with respect to one or more of FIGS. 1-6.
In at least one embodiment, deployment system 3706 performs at least one inference operation using a computer program representation of speculatively executable operations and/or instructions, such as described with respect to one or more of FIGS. 1-6. In at least one embodiment, at least one component of hardware 3722 includes and/or runs at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). FIG. 38 includes an example illustration of a deployment pipeline 3710A for processing imaging data, in accordance with at least one embodiment. In at least one embodiment, system 3700, and specifically deployment system 3606, may be used to customize, update, and/or integrate deployment pipeline(s) 3710A into one or more production environments. In at least one embodiment, deployment pipeline 3710A of FIG. 38 includes a non-limiting example of a deployment pipeline 3710A that may be custom-defined by a particular user (or team of users) at a facility (e.g., at a hospital, clinic, lab, research environment, etc.). In at least one embodiment, to define deployment pipeline 3710A for a CT scanner 3802, a user may select, from a container registry for example, one or more applications that perform specific functions or tasks with respect to imaging data generated by CT scanner 3802. In at least one embodiment, applications may be applied to deployment pipeline 3710A as containers that may leverage services 3620 and/or hardware 3622 of system 3700. In addition, deployment pipeline 3710A may include additional processing tasks or applications that may be implemented to prepare data for use by applications (e.g., DICOM adapter 3702B and DICOM reader 3806 may be used in deployment pipeline 3710A to prepare data for use by CT reconstruction 3808, organ segmentation 3810, etc.).
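The composition step just described (selecting containerized applications from a registry and chaining them so each stage consumes the prior stage's output) can be sketched as follows. The registry contents and stage functions are purely illustrative stand-ins, not real container images:

```python
# Hypothetical sketch of composing a pipeline from registry applications.
CONTAINER_REGISTRY = {
    "dicom_reader": lambda d: {"image": d["dicom"]},
    "ct_reconstruction": lambda d: {**d, "recon": "image from " + d["image"]},
    "organ_segmentation": lambda d: {**d, "mask": "mask of " + d["recon"]},
}

def build_pipeline(app_names):
    apps = [CONTAINER_REGISTRY[name] for name in app_names]
    def run(data):
        for app in apps:           # each stage consumes the prior stage's output
            data = app(data)
        return data
    return run

pipeline = build_pipeline(
    ["dicom_reader", "ct_reconstruction", "organ_segmentation"])
result = pipeline({"dicom": "raw sinogram"})
print(result["mask"])  # → mask of image from raw sinogram
```

Because stages are looked up by name, a different facility could assemble a different pipeline from the same registry, which is the customization property the passage emphasizes.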
In at least one embodiment, deployment pipeline 3710A may be customized or selected for consistent use, one-time use, or another frequency or interval. In at least one embodiment, a user may desire to have CT reconstruction 3808 and organ segmentation 3810 performed for several subjects over specific intervals, and thus may deploy pipeline 3710A for that period of time. In at least one embodiment, a user may select, for each request from system 3700, the applications that the user wants to perform processing on the data for that request. In at least one embodiment, deployment pipeline 3710A may be adjusted at any interval, and because of the adaptability and scalability of the container structure within system 3700, this may be a seamless process. In at least one embodiment, deployment pipeline 3710A of FIG. 38 may include a CT scanner 3802 generating imaging data of a patient or subject. In at least one embodiment, imaging data from CT scanner 3802 may be stored on a PACS server 3804 associated with a facility housing CT scanner 3802. In at least one embodiment, PACS server 3804 may include software and/or hardware components that may directly interface with imaging modalities (e.g., CT scanner 3802) at a facility. In at least one embodiment, DICOM adapter 3702B may enable sending and receipt of DICOM objects using DICOM protocols. In at least one embodiment, DICOM adapter 3702B may aid in preparation or configuration of DICOM data from PACS server 3804 for use by deployment pipeline 3710A. In at least one embodiment, once DICOM data is processed through DICOM adapter 3702B, pipeline manager 3712 may route the data through to deployment pipeline 3710A. In at least one embodiment, DICOM reader 3806 may extract image files and any associated metadata from DICOM data (e.g., raw sinogram data, as illustrated in visualization 3816A).
In at least one embodiment, working files that are extracted may be stored in a cache for faster processing by other applications in deployment pipeline 3710A. In at least one embodiment, once DICOM reader 3806 has finished extracting and/or storing data, a signal of completion may be communicated to pipeline manager 3712. In at least one embodiment, pipeline manager 3712 may then initiate or call upon one or more other applications or containers in deployment pipeline 3710A. In at least one embodiment, a CT reconstruction 3808 application and/or container may be executed once data (e.g., raw sinogram data) is available for processing by the CT reconstruction 3808 application. In at least one embodiment, CT reconstruction 3808 may read raw sinogram data from a cache, reconstruct an image file out of the raw sinogram data (e.g., as illustrated in visualization 3816B), and store the resulting image file in the cache. In at least one embodiment, at completion of reconstruction, pipeline manager 3712 may be signaled that the reconstruction task is complete. In at least one embodiment, once reconstruction is complete, and a reconstructed image file is stored in a cache (or other storage device), an organ segmentation 3810 application and/or container may be triggered by pipeline manager 3712. In at least one embodiment, the organ segmentation 3810 application and/or container may read an image file from a cache, normalize or convert the image file to a format suitable for inference (e.g., convert the image file to an input resolution of a machine learning model), and run inference against the normalized image. In at least one embodiment, to run inference on a normalized image, the organ segmentation 3810 application and/or container may rely on services 3620, and pipeline manager 3712 and/or application orchestration system 3728 may facilitate use of services 3620 by the organ segmentation 3810 application and/or container.
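The normalize-or-convert step just described can be sketched in miniature: rescale pixel intensities and pad the image to the fixed input resolution a model expects. This is a toy stand-in, not the actual organ segmentation preprocessing; the function name and padding policy are invented for the example:

```python
# Rescale intensities to [0, 1] and zero-pad to a model's input resolution.
def normalize_for_inference(image, target_size):
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    scaled = [[(v - lo) / (hi - lo) for v in row] for row in image]
    h, w = target_size
    padded = [row + [0.0] * (w - len(row)) for row in scaled]
    padded += [[0.0] * w for _ in range(h - len(padded))]
    return padded

img = [[0, 50], [100, 200]]
norm = normalize_for_inference(img, (3, 3))
print(norm[0])  # → [0.0, 0.25, 0.0]
```

A production pipeline would typically also handle resampling, windowing, and channel layout, but the cache-read/normalize/infer sequence is the same.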
In at least one embodiment, the organ segmentation 3810 application and/or container may leverage AI services 3718 to perform inference on a normalized image, and AI services 3718 may leverage hardware 3622 (e.g., AI system 3724) to execute AI services 3718. In at least one embodiment, a result of an inference may be a mask file (e.g., as illustrated in visualization 3816C) that may be stored in a cache (or other storage device). In at least one embodiment, once applications that process DICOM data and/or data extracted from DICOM data have completed processing, a signal may be generated for pipeline manager 3712. In at least one embodiment, pipeline manager 3712 may then execute DICOM writer 3812 to read results from a cache (or other storage device) and package the results into a DICOM format (e.g., as DICOM output 3814) for use by users at a facility who generated the request. In at least one embodiment, DICOM output 3814 may then be transmitted to DICOM adapter 3702B to prepare DICOM output 3814 for storage on PACS server 3804 (e.g., for viewing by a DICOM viewer at a facility). In at least one embodiment, in response to a request for reconstruction and segmentation, visualizations 3816B and 3816C may be generated and made available to a user for diagnostics, research, and/or other purposes. Although illustrated as consecutive applications in deployment pipeline 3710A, CT reconstruction 3808 and organ segmentation 3810 applications may be processed in parallel, in at least one embodiment. In at least one embodiment, where the applications do not have dependencies on one another, and data is available for each application (e.g., after DICOM reader 3806 extracts data), applications may be executed at a same time, substantially at a same time, or with some overlap.
In at least one embodiment, where two or more applications require similar services 3620, a scheduler of system 3700 may be used to load balance and distribute compute or processing resources between and among the various applications. In at least one embodiment, parallel computing platform 3730 may, in some embodiments, be used to perform parallel processing for applications to decrease runtime of deployment pipeline 3710A and provide real-time results. In at least one embodiment, and with reference to FIGS. 39A-39B, deployment system 3606 may be implemented as one or more virtual instruments to perform different functionalities, such as image processing, segmentation, enhancement, AI, visualization, and inferencing, with imaging devices (e.g., CT scanners, X-ray machines, MRI machines, etc.), sequencing devices, genomics devices, and/or other device types. In at least one embodiment, system 3700 may allow for creation and provision of virtual instruments that may include a software-defined deployment pipeline 3710 that may receive raw/unprocessed input data generated by device(s) and output processed/reconstructed data. In at least one embodiment, deployment pipelines 3710 (e.g., 3710A and 3710B) representing virtual instruments may implement intelligence into a pipeline, such as by leveraging machine learning models, to provide containerized inference support to a system. In at least one embodiment, virtual instruments may execute any number of containers, each including instantiations of applications.
In at least one embodiment, such as where real-time processing is desired, deployment pipelines 3710 representing virtual instruments may be static (e.g., containers and/or applications may be set), while in other examples, containers and/or applications for virtual instruments may be selected (e.g., on a per-request basis) from a pool of applications or resources (e.g., within a container registry). In at least one embodiment, system 3700 may be instantiated or executed as one or more virtual instruments on-premises at a facility, for example, in a computing system deployed next to, or otherwise in communication with, a radiology machine, an imaging device, and/or another device type at the facility. In at least one embodiment, however, an on-premises installation may be instantiated or executed within a computing system of a device itself (e.g., a computing system integral to an imaging device), in a local datacenter (e.g., a datacenter on-premises), and/or in a cloud environment (e.g., in cloud 3726). In at least one embodiment, deployment system 3606, operating as a virtual instrument, may be instantiated by a supercomputer or other HPC system in some examples. In at least one embodiment, on-premises installation may allow for high-bandwidth uses (via, for example, higher-throughput local communication interfaces, such as RF over Ethernet) for real-time processing. In at least one embodiment, real-time or near real-time processing may be particularly useful where a virtual instrument supports an ultrasound device or other imaging modality where immediate visualizations are expected or required for accurate diagnoses and analyses. In at least one embodiment, a cloud-computing architecture may be capable of dynamic bursting to a cloud computing service provider or other compute cluster when local demand exceeds on-premises capacity or capability.
In at least one embodiment, a cloud architecture, when implemented, may be tuned for training neural networks or other machine learning models, as described herein with respect to training system 3604. In at least one embodiment, with training pipelines in place, machine learning models may continuously learn and improve as they process additional data from the devices they support. In at least one embodiment, virtual instruments may be continually improved using additional data, new data, existing machine learning models, and/or new or updated machine learning models. In at least one embodiment, a computing system may include some or all of hardware 3622 described herein, and hardware 3622 may be distributed in any of a number of ways, including within a device, as part of a computing device coupled to and located proximate to a device, in a local datacenter at a facility, and/or in cloud 3726. In at least one embodiment, because deployment system 3606 and associated applications or containers are created in software (e.g., as discrete containerized instantiations of applications), the behavior, operation, and configuration of virtual instruments, as well as outputs generated by virtual instruments, may be modified or customized as desired, without having to change or alter the raw output of a device that a virtual instrument supports. In at least one embodiment, at least one component shown or described with respect to FIG. 38 is used to implement techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, at least one component shown or described with respect to FIG. 38 includes and/or runs at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). In at least one embodiment, at least one component shown or described with respect to FIG. 38 performs at least one speculative operation using a computer program representation of speculatively executable operations and/or instructions, such as described with respect to one or more of FIGS. 1-6. FIG.
39A includes an example data flow diagram of a virtual instrument supporting an ultrasound device, in accordance with at least one embodiment. In at least one embodiment, deployment pipeline 3710B may leverage one or more of services 3620 of system 3700. In at least one embodiment, deployment pipeline 3710B and services 3620 may leverage hardware 3622 of a system, either locally or in cloud 3726. In at least one embodiment, although not illustrated, process 3900 may be facilitated by pipeline manager 3712, application orchestration system 3728, and/or parallel computing platform 3730. In at least one embodiment, process 3900 may include receipt of imaging data from an ultrasound device 3902. In at least one embodiment, imaging data may be stored on PACS server(s) in a DICOM format (or other format, such as RIS, CIS, REST compliant, RPC, raw, etc.), and may be received by system 3700 for processing through deployment pipeline 3710 selected or customized as a virtual instrument (e.g., a virtual ultrasound) for ultrasound device 3902. In at least one embodiment, imaging data may be received directly from an imaging device (e.g., ultrasound device 3902) and processed by a virtual instrument. In at least one embodiment, a transducer or other signal converter communicatively coupled between an imaging device and a virtual instrument may convert signal data generated by the imaging device to image data that may be processed by the virtual instrument. In at least one embodiment, raw data and/or image data may be applied to DICOM reader 3806 to extract data for use by applications or containers of deployment pipeline 3710B. In at least one embodiment, DICOM reader 3806 may be used as a service 3620 (e.g., as one of compute services 3716) for extracting, resizing, rescaling, and/or otherwise preparing data for use by applications or containers.
In at least one embodiment, a data augmentation library 3914 (e.g., NVIDIA's DALI) may be leveraged for this purpose. In at least one embodiment, once the data is prepared, a reconstruction 3906 application and/or container may be executed to reconstruct data from ultrasound device 3902 into an image file. In at least one embodiment, after reconstruction 3906, or at a same time as reconstruction 3906, a detection 3908 application and/or container may be executed for anomaly detection, object detection, feature detection, and/or other detection tasks related to the data. In at least one embodiment, an image file generated during reconstruction 3906 may be used during detection 3908 to identify anomalies, objects, features, etc. In at least one embodiment, the detection 3908 application may leverage an inference engine 3916 (e.g., as one of AI services 3718) to perform inference on data to generate detections. In at least one embodiment, one or more machine learning models (e.g., from training system 3604) may be executed or called by the detection 3908 application. In at least one embodiment, once reconstruction 3906 and/or detection 3908 is/are complete, data output from these applications and/or containers may be used to generate visualizations 3910, such as visualization 3912 (e.g., a grayscale output), displayed on a workstation or display terminal. In at least one embodiment, the visualization may allow a technician or other user to visualize results of deployment pipeline 3710B with respect to ultrasound device 3902. In at least one embodiment, visualization 3910 may be executed by leveraging a render component 3918 of system 3700 (e.g., one of visualization services 3720).
In at least one embodiment, render component 3918 may execute a 2D, OpenGL, or ray-tracing service to generate visualization 3912. In at least one embodiment, at least one component shown or described with respect to FIG. 39A is used to implement techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, at least one component shown or described with respect to FIG. 39A includes and/or runs at least one aspect described with respect to FIG. 1 (e.g., deep learning compiler 102, stream scheduler 110, memory allocator 112). In at least one embodiment, at least one component shown or described with respect to FIG. 39A performs at least one speculative operation using a computer program representation of speculatively executable operations and/or instructions, such as described with respect to one or more of FIGS. 1-6. FIG. 39B includes an example data flow diagram of a virtual instrument supporting a CT scanner, in accordance with at least one embodiment. In at least one embodiment, deployment pipeline 3710C may leverage one or more of services 3620 of system 3700. In at least one embodiment, deployment pipeline 3710C and services 3620 may leverage hardware 3622 of a system, either locally or in cloud 3726. In at least one embodiment, although not illustrated, process 3920 may be facilitated by pipeline manager 3712, application orchestration system 3728, and/or parallel computing platform 3730. In at least one embodiment, process 3920 may include CT scanner 3922 generating raw data that may be received by DICOM reader 3806 (e.g., directly, via a PACS server 3804, after processing, etc.). In at least one embodiment, a virtual CT (instantiated by deployment pipeline 3710C) may include a first, real-time pipeline for monitoring a patient (e.g., using patient movement detection AI 3926) and/or for adjusting or optimizing exposure of CT scanner 3922 (e.g., using exposure control AI 3924). In at least one embodiment, one or more of the applications (e.g., 3924 and 3926) may leverage a service 3620, such as AI services 3718.
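The real-time exposure-control loop just introduced can be sketched as a simple feedback iteration: a model scores each acquisition and its output is fed back to adjust a scanner setting before the next one. Everything here, including the stand-in `exposure_control_ai` model and its brightness heuristic, is invented for illustration:

```python
# Toy feedback loop: model output adjusts a scanner setting between scans.
def exposure_control_ai(image_brightness):
    # Stand-in model: positive output means "increase exposure".
    target = 0.5
    return target - image_brightness

exposure = 1.0
for brightness in [0.9, 0.7, 0.5]:        # successive acquisitions
    adjustment = exposure_control_ai(brightness)
    exposure = max(0.1, exposure + 0.5 * adjustment)
print(round(exposure, 2))  # → 0.7
```

A real exposure-control model would operate on image content rather than a single brightness scalar, but the loop structure (infer, feed back, re-acquire) is the point the passage makes.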
In at least one embodiment, the output of the exposure control AI 3924 application (or container) and/or the patient motion detection AI 3926 application (or container) may be used as feedback to the CT scanner 3922 and/or a technician to adjust the exposure (or other settings of the CT scanner 3922) and/or to tell a patient not to move too much. In at least one embodiment, deployment pipeline 3710C may include a non-real-time pipeline for analyzing data generated by CT scanner 3922. In at least one embodiment, the second pipeline may include an application and/or container of CT reconstruction 3808, an application and/or container of coarse detection AI 3928, an application and/or container of precision detection AI 3932 (e.g., executed when certain results are detected by coarse detection AI 3928), an application and/or container of visualization 3930, and an application and/or container of DICOM writer 3812 (and/or another data type writer, such as RIS, CIS, REST compliant, RPC, raw, etc.). In at least one embodiment, raw data generated by CT scanner 3922 may be passed through the pipelines of deployment pipeline 3710C (instantiated as a virtual CT instrument) to generate results. In at least one embodiment, results from DICOM writer 3812 may be sent for display and/or stored on PACS server 3804 for later retrieval, analysis, or display by a technician, practitioner, or other user. In at least one embodiment, at least one component shown or described with respect to FIG. 39B is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, at least one component shown or described with respect to FIG. 39B includes and/or runs at least one aspect described with respect to one or more of FIGS. 1-6. In at least one embodiment, at least one component shown or described with respect to FIG.
39B uses a computer program representation of speculatively executable operations and/or instructions, such as described with respect to one or more of FIGS. 1-6, to perform at least one speculative operation. FIG. 40A shows a data flow diagram of a process 4000 for training, retraining, or updating a machine learning model, according to at least one embodiment. In at least one embodiment, process 4000 may be performed using system 3700 of FIG. 37 as a non-limiting example. In at least one embodiment, process 4000 may utilize services 3620 and/or hardware 3622 of system 3700 described herein. In at least one embodiment, the refined model 4012 generated by process 4000 may be executed by deployment system 3606 for one or more containerized applications in deployment pipeline 3710. In at least one embodiment, model training 3614 may include retraining or updating an initial model 4004 (e.g., a pre-trained model) using new training data (e.g., new input data such as customer dataset 4006, and/or new ground truth data associated with the input data). In at least one embodiment, to retrain or update the initial model 4004, the output or loss layers of the initial model 4004 may be reset or deleted, and/or updated or new output or loss layers may be substituted. In at least one embodiment, the initial model 4004 may have previously fine-tuned parameters (e.g., weights and/or biases) left over from previous training, so that training or retraining 3614 may not take as long or require as much processing as training a model from scratch. In at least one embodiment, during model training 3614, by having the output or loss layers of the initial model 4004 reset or replaced, the parameters
may be updated and re-tuned for a new dataset based on loss calculations associated with the accuracy of the output or loss layers in generating predictions on the new customer dataset 4006 (e.g., image data 3608 of FIG. 36). In at least one embodiment, pre-trained models 3706 may be stored in a data store or registry (e.g., model registry 3624 of FIG. 36). In at least one embodiment, pre-trained models 3706 may have been trained, at least in part, at one or more facilities other than the facility performing process 4000. In at least one embodiment, to protect the privacy and rights of patients, subjects, or customers of different facilities, pre-trained models 3706 may have been trained on-premises using customer or patient data generated on-premises. In at least one embodiment, pre-trained models 3706 may be trained using cloud 3726 and/or other hardware 3622, while privacy-protected sensitive patient data may not be transferred to, used by, or accessible to any components of cloud 3726 (or other off-premises hardware). In at least one embodiment, where a pre-trained model 3706 is trained using patient data from more than one facility, the pre-trained model 3706 may have been individually trained on patient or customer data from each facility before being trained on patient or customer data from another facility. In at least one embodiment, such as where the customer or patient data is free from privacy concerns (e.g., by waiver, for experimental use, etc.) or where the customer or patient data is included in a public dataset, customer or patient data from any number of facilities may be used to train a pre-trained model 3706 on-premises and/or off-premises, such as in a data center or other cloud computing infrastructure. In at least one embodiment, when selecting an application for use in deployment pipeline 3710, a user may also select a machine learning model to be used with the particular application. In at least one embodiment, the user may not have a model to use, so the user may select a pre-trained model 3706 to use with the application.
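The retraining flow described above — keeping the previously fine-tuned weights of an initial model, resetting its output layer, and updating only the new layer against a loss on the customer dataset — can be sketched with a toy one-weight-per-layer model. The "model" and training loop are illustrative stand-ins, not a real framework API:

```python
# Hypothetical sketch of retraining an initial (pre-trained) model: the
# output layer is reset and re-learned on new data, while the previously
# fine-tuned hidden weight is kept frozen.

def refine(initial_model, dataset, lr=0.1, epochs=200):
    model = dict(initial_model)
    model["w_out"] = 0.0                     # reset/replace the output layer
    for _ in range(epochs):
        for x, y in dataset:
            hidden = model["w_hidden"] * x   # frozen, pre-trained weight
            pred = model["w_out"] * hidden
            grad = 2 * (pred - y) * hidden   # d(loss)/d(w_out), squared loss
            model["w_out"] -= lr * grad      # only the new layer is updated
    return model

pretrained = {"w_hidden": 2.0, "w_out": -5.0}
customer_data = [(1.0, 4.0), (2.0, 8.0)]     # target: y = 4x, so w_out -> 2
refined = refine(pretrained, customer_data)
print(round(refined["w_out"], 3))  # -> 2.0
```

Because only the replaced output layer is trained, convergence is fast relative to training from scratch — the point made in the text about retraining requiring less time and processing.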
In at least one embodiment, the pre-trained model 3706 may not be optimized to generate accurate results on the customer dataset 4006 at the user's facility (e.g., based on patient diversity, demographics, types of medical imaging devices used, etc.). In at least one embodiment, prior to deploying pre-trained model 3706 into deployment pipeline 3710 for use with an application, pre-trained model 3706 may be updated, retrained, and/or fine-tuned for use at a respective facility. In at least one embodiment, a user may select a pre-trained model 3706 that is to be updated, retrained, and/or fine-tuned, and the pre-trained model 3706 may be referred to as the initial model 4004 for training system 3604 within process 4000. In at least one embodiment, the customer dataset 4006 (e.g., imaging data, genomics data, sequencing data, or other types of data generated by devices at the facility) may be used to perform model training 3614 (which may include, without limitation, transfer learning) on the initial model 4004 to generate the refined model 4012. In at least one embodiment, ground truth data corresponding to customer dataset 4006 may be generated by training system 3604. In at least one embodiment, ground truth data may be generated, at least in part, by clinicians, scientists, doctors, or practitioners at a facility (e.g., as labeled clinic data 3612 of FIG. 36). In at least one embodiment, AI-assisted annotation 3610 may be used in some examples to generate ground truth data. In at least one embodiment, AI-assisted annotation 3610 (e.g., implemented using an AI-assisted annotation SDK) may leverage machine learning models (e.g., neural networks) to generate suggested or predicted
ground truth data for the customer dataset. In at least one embodiment, user 4010 may use annotation tools within a user interface (a graphical user interface (GUI)) on computing device 4008. In at least one embodiment, user 4010 may interact with the GUI via computing device 4008 to edit or fine-tune annotations or automatic annotations. In at least one embodiment, a polygon editing feature may be used to move vertices of a polygon to more accurate or fine-tuned locations. In at least one embodiment, once customer dataset 4006 has associated ground truth data, the ground truth data (e.g., from AI-assisted annotation, manual labeling, etc.) may be used during model training 3614 to generate the refined model 4012. In at least one embodiment, customer dataset 4006 may be applied to the initial model 4004 any number of times, and the ground truth data may be used to update the parameters of the initial model 4004 until an acceptable level of accuracy is attained for the refined model 4012. In at least one embodiment, once the refined model 4012 is generated, the refined model 4012 may be deployed within one or more deployment pipelines 3710 at a facility for performing one or more processing tasks with respect to medical imaging data. In at least one embodiment, refined model 4012 may be uploaded to pre-trained models 3706 in model registry 3624 to be selected by another facility. In at least one embodiment, this process may be completed at any number of facilities, such that the refined model 4012 may be further refined on new datasets any number of times to generate a more universal model. In at least one embodiment, at least one component shown or described with respect to FIG. 40A is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, at least one component shown or described with respect to FIG. 40A includes and/or runs at least one aspect described with respect to one or more of FIGS. 1-6.
In at least one embodiment, at least one component shown or described with respect to FIG. 40A (e.g., one or more components of model training system 4004) uses a computer program representation of speculatively executable operations and/or instructions, as described with respect to one or more of FIGS. 1-6, to perform at least one training operation. FIG. 40B is an illustration of an example client-server architecture for enhancing annotation tools with pre-trained annotation models, according to at least one embodiment. In at least one embodiment, AI-assisted annotation tools 4036 may be instantiated based on a client-server architecture 4032. In at least one embodiment, annotation tools 4036 in imaging applications may aid radiologists, for example, in identifying organs and abnormalities. In at least one embodiment, imaging applications may include software tools that help user 4010 identify, as a non-limiting example, a few extreme points on a particular organ of interest in raw images 4034 (e.g., in a 3D MRI or CT scan) and receive auto-annotated results for all 2D slices of the particular organ. In at least one embodiment, results may be stored in a data store as training data 4038 and used (for example, and without limitation) as ground truth data for training. In at least one embodiment, when computing device 4008 sends extreme points for AI-assisted annotation 3610, a deep learning model, for example, may receive this data as input and return inference results of a segmented organ or abnormality. In at least one embodiment, pre-instantiated annotation tools, such as AI-assisted annotation tool 4036B of FIG. 40B, may be enhanced by making API calls (e.g., API call 4044) to a server, such as an annotation assistance server 4040.
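The client-server flow just described — a client sends a few user-clicked extreme points, and a server-side model returns a segmentation — can be sketched as follows. The "server" here is a local stub standing in for the remote API call, and none of the names correspond to a real AI-assisted-annotation SDK:

```python
# Hypothetical sketch of the annotation client-server round trip described
# above. The server stub returns a bounding-box mask instead of running a
# real deep learning segmentation model.

def annotation_server(extreme_points, width, height):
    # Stub model: mark the box spanned by the extreme points as a binary mask.
    xs = [p[0] for p in extreme_points]
    ys = [p[1] for p in extreme_points]
    return [
        [1 if min(xs) <= x <= max(xs) and min(ys) <= y <= max(ys) else 0
         for x in range(width)]
        for y in range(height)
    ]

def client_annotate(points, width=4, height=3):
    # Stand-in for an API call (such as "API call 4044") to the server.
    return annotation_server(points, width, height)

mask = client_annotate([(1, 0), (2, 2)])
print(mask)  # -> [[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0]]
```

In the architecture of FIG. 40B the returned mask would then be displayed in the GUI, where the user could edit or fine-tune it (e.g., by moving polygon vertices) before it is stored as ground truth data.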
In at least one embodiment, the annotation assistance server may include an annotation model registry that may store pre-trained models 4042 (e.g., machine learning models, such as deep learning models) that are pre-trained to perform AI-assisted annotation on a particular organ or abnormality. In at least one embodiment, these models may be further updated using training pipelines 3704. In at least one embodiment, pre-installed annotation tools may be improved over time as new labeled clinic data 3612 is added. Inference and/or training logic 715 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, at least one component shown or described with respect to FIG. 40B is used to implement the techniques and/or functionality described with respect to FIGS. 1-6. In at least one embodiment, at least one component shown or described with respect to FIG. 40B includes and/or runs at least one aspect described with respect to one or more of FIGS. 1-6. In at least one embodiment, at least one component shown or described with respect to FIG. 40B (e.g., AI-assisted annotation tool 4036 and/or annotation assistance server 4040) uses a computer program representation of speculatively executable operations and/or instructions, such as those described with respect to one or more of FIGS. 1-6, to perform at least one speculative operation. At least one embodiment of the disclosure may be described in light of the following clauses. 1. A processor comprising one or more circuits for implementing one or more instructions identified by a compiler to be performed speculatively in parallel. 2.
The processor of clause 1, wherein the one or more instructions were identified by the compiler to be performed speculatively in parallel based at least in part on identifying a copy operation, and wherein the one or more circuits are to implement the one or more instructions based at least in part on receiving a command from another processor. 3. The processor of any one of clauses 1-2, wherein the instructions were identified by the compiler to be performed speculatively in parallel based at least in part on identifying copy operations between a parallel processing unit and a host computer system and labeling safe operations following the one or more identified copy operations. 4. The processor of any one of clauses 1-3, wherein the instructions include extended live ranges for variables used by operations associated with the instructions identified as being speculatively performed in parallel. 5. The processor of any one of clauses 1-4. 6. The processor of any one of clauses 1-5, wherein the instruction is part of a while loop. 7. The processor of any one of clauses 1-6, wherein the instructions implement part of an inference operation using a recurrent neural network. 8. A system, comprising: one or more processors for implementing one or more instructions identified by a compiler to be performed speculatively in parallel; and one or more memories for storing the one or more instructions. 9. The system of clause 8, wherein the instructions were identified by the compiler to be performed speculatively in parallel based at least in part on identifying a copy operation from a parallel processing unit to a host computer system. 10. The system of any one of clauses 8-9, wherein the instructions were identified by the compiler to be performed speculatively in parallel based at least in part on finding one or more conditional branches in a representation of a computer program that uses a neural network. 11.
The system of any one of clauses 8-10, wherein the one or more processors are a first one or more processors, the system further comprising a second one or more processors for launching the one or more instructions for execution by the first one or more processors. 12. The system of any one of clauses 8-11, wherein the one or more processors are a first one or more processors, the system further comprising a second one or more processors for launching the one or more instructions for execution by the first one or more processors, the second one or more processors to cease speculatively launching instructions in response to receiving a value copied via a conditional copy operation preceding the one or more instructions in a representation of a computer program. 13. The system of any one of clauses 8-12, wherein the instructions were identified by the compiler to be performed speculatively in parallel based at least in part on labeling operations that are safe to be performed speculatively. 14. The system of any one of clauses 8-13, wherein the instructions were identified by the compiler to be performed speculatively in parallel based at least in part on searching a representation of a computer program for copy operations and identifying operations following the copy operations that are safe to be performed speculatively. 15. The system of any one of clauses 8-14, wherein the instructions are part of a while loop implementing part of an inference operation using a neural network. 16. A method comprising implementing one or more instructions identified by a compiler to be performed speculatively in parallel. 17. The method of clause 16, wherein the instructions were identified by the compiler to be performed speculatively in parallel based at least in part on identifying operations that do not change random state, overwrite outputs, use signal instructions, or use wait instructions. 18.
The method of any one of clauses 16-17, wherein the instructions were identified by the compiler to be performed speculatively in parallel based at least in part on identifying a conditional branch and selecting a path from multiple paths following the conditional branch. 19. The method of any one of clauses 16-18, wherein the instructions were identified by the compiler to be performed speculatively in parallel based at least in part on identifying copy operations. 20. The method of any one of clauses 16-19, wherein the instructions include extended live ranges for variables used in speculatively performed operations. 21. The method of any one of clauses 16-20, wherein the instructions were identified by the compiler to be performed speculatively in parallel based at least in part on identifying a copy operation, and wherein the instructions implement part of an inference operation using a neural network. 22. A machine-readable medium storing a set of instructions that, when performed by one or more processors, cause the one or more processors to at least identify one or more instructions to be performed speculatively in parallel. 23. The machine-readable medium of clause 22, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to identify the instructions as being performed speculatively in parallel based at least in part on identifying copy operations between a parallel processing unit and a host computer system in a representation of a computer program. 24. The machine-readable medium of any one of clauses 22-23, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least identify operations following the copy operations that are safe to execute. 25.
The machine-readable medium of any one of clauses 22-24, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least label operations that are safe to be performed speculatively. 26. The machine-readable medium of any one of clauses 22-25, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least label operations that are safe to be performed speculatively and to extend the live range of variables associated with the operations labeled as safe to be performed speculatively. 27. The machine-readable medium of any one of clauses 22-26, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least search a representation of a computer program for copy operations between a graphics processing unit and a host computer system, and to identify operations following the copy operations that are safe to be performed speculatively. 28. The machine-readable medium of any one of clauses 22-27, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least extend the live range of variables associated with operations identified as being performed speculatively in parallel. 29. The machine-readable medium of any one of clauses 22-28, wherein the set of instructions, when performed by the one or more processors, further causes the one or more processors to at least: find a conditional branch in a representation of a computer program based at least in part on identifying a copy operation in the representation of the computer program; select a path from a plurality of paths following the conditional branch; and identify instructions in the selected path that are safe to be performed speculatively. 30.
A vehicle, comprising: a computer vision system including one or more processors for identifying one or more trajectories of a corresponding one or more objects based at least in part on performing one or more speculative operations using a representation of a computer program containing one or more instructions identified by a compiler to be performed speculatively in parallel; and one or more of a propulsion system, a directional control system, and a vehicle operator notification system for performing one or more actions based at least in part on the one or more trajectories. 31. The vehicle of clause 30, wherein the one or more processors comprise one or more first processors in a host computer system and one or more second processors in a parallel processing unit, the one or more second processors to speculatively execute instructions based at least in part on receiving commands from the host computer system that launches kernels containing the instructions on the parallel processing unit. 32. The vehicle of any one of clauses 30-31, wherein the one or more instructions were identified by the compiler to be performed speculatively based at least in part on identifying a copy operation. 33. The vehicle of any one of clauses 30-32, wherein the one or more instructions were identified by the compiler to be performed speculatively based at least in part on labeling safe operations. 34. The vehicle of any one of clauses 30-33, wherein the instructions include extended live ranges for variables used by operations associated with the instructions identified as being speculatively performed in parallel. 35. The vehicle of any one of clauses 30-34, wherein the instructions implement part of an inference operation using a recurrent neural network. 36. A processor comprising one or more circuits for identifying one or more instructions to be performed speculatively in parallel. 37.
The processor of clause 36, wherein the one or more circuits are to identify the instructions as to be performed speculatively in parallel based at least in part on identifying copy operations between a parallel processing unit and a host computer system in a representation of a computer program. 38. The processor of any one of clauses 36-37, wherein the one or more circuits are further to at least identify operations following the copy operations that are safe to execute. 39. The processor of any one of clauses 36-38, wherein the one or more circuits are further to at least label operations that are safe to be performed speculatively. 40. The processor of any one of clauses 36-39, wherein the one or more circuits are further to at least label operations that are safe to be performed speculatively and to extend the live range of variables associated with the operations labeled as safe to be performed speculatively. 41. The processor of any one of clauses 36-40, wherein the one or more circuits are further to at least search a representation of a computer program for copy operations between a graphics processing unit and a host computer system, and to identify operations following the copy operations that are safe to be performed speculatively. 42. The processor of any one of clauses 36-41, wherein the one or more circuits are further to at least extend the live range of variables associated with operations identified as being performed speculatively in parallel. 43. The processor of any one of clauses 36-42, wherein the one or more circuits are further to at least: find a conditional branch in a representation of a computer program based at least in part on identifying a copy operation in the representation of the computer program; select a path from a plurality of paths following the conditional branch; and identify instructions in the selected path that are safe for speculative execution. 44. A system, comprising: one or more processors for identifying one or more instructions as to be performed speculatively in parallel; and one or more memories for storing the one or more instructions. 45.
The system of clause 44, wherein the one or more processors are to identify the instructions as to be performed speculatively in parallel based at least in part on identifying copy operations between a parallel processing unit and a host computer system in a representation of a computer program. 46. The system of any one of clauses 44-45, wherein the one or more processors are to at least identify operations following the copy operations that are safe to execute. 47. The system of any one of clauses 44-46, wherein the one or more processors are to at least label operations that are safe to be performed speculatively. 48. The system of any one of clauses 44-47, wherein the one or more processors are to at least label operations that are safe to be performed speculatively and to extend the live range of variables associated with the operations labeled as safe to be performed speculatively. 49. The system of any one of clauses 44-48, wherein the one or more processors are to at least search a representation of a computer program for copy operations between a graphics processing unit and a host computer system, and to identify operations following the copy operations that are safe to be performed speculatively. 50. The system of any one of clauses 44-49, wherein the one or more processors are to at least extend the live range of variables associated with operations identified as being performed speculatively in parallel. 51. The system of any one of clauses 44-50, wherein the one or more processors are to at least: find a conditional branch in a representation of a computer program based at least in part on identifying a copy operation in the representation of the computer program; select a path from a plurality of paths following the conditional branch; and identify instructions in the selected path that are safe to be performed speculatively. 52. A method comprising identifying one or more instructions as to be performed speculatively in parallel. 53.
The method of clause 52, wherein identifying the instructions as to be performed speculatively in parallel is based, at least in part, on identifying copy operations between a parallel processing unit and a host computer system in a representation of a computer program. 54. The method of any one of clauses 52-53, wherein identifying the instructions as to be performed speculatively in parallel comprises identifying operations following a copy operation that are safe to execute. 55. The method of any one of clauses 52-54, further comprising labeling operations that are safe to be performed speculatively. 56. The method of any one of clauses 52-55, further comprising: labeling operations that are safe to be performed speculatively; and extending the live range of variables associated with the operations labeled as safe to be performed speculatively. 57. The method of any one of clauses 52-56, further comprising: searching a representation of a computer program for copy operations between a graphics processing unit and a host computer system; and identifying operations following the copy operations that are safe to be performed speculatively. 58. The method of any one of clauses 52-57, further comprising extending the live range of variables associated with operations identified as being performed speculatively in parallel. 59. The method of any one of clauses 52-58, further comprising: finding a conditional branch in a representation of a computer program based at least in part on identifying a copy operation in the representation of the computer program; selecting a path from a plurality of paths following the conditional branch; and identifying instructions in the selected path that are safe for speculative execution. In at least one embodiment, a single semiconductor platform may refer to a sole unitary semiconductor-based integrated circuit or chip. In at least one embodiment, multi-chip modules may be used with increased connectivity, which simulate on-chip operation and make substantial improvements over utilizing a conventional central processing unit ("CPU") and bus implementation, and
significantly improve utilization. In at least one embodiment, the various modules may also be situated separately or in various combinations of semiconductor platforms per the desires of the user. In at least one embodiment, referring back to FIG. 13, computer programs in the form of machine-readable executable code or computer control logic algorithms are stored in main memory 1304 and/or secondary storage. Computer programs, if executed by one or more processors, enable system 1300 to perform various functions in accordance with at least one embodiment. In at least one embodiment, memory 1304, storage, and/or any other storage are possible examples of computer-readable media. In at least one embodiment, secondary storage may refer to any suitable storage device or system, such as a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, a digital versatile disk ("DVD") drive, a recording device, Universal Serial Bus ("USB") flash memory, and the like. In at least one embodiment, the architecture and/or functionality of the various previous figures are implemented in the context of an integrated circuit, a chipset (e.g., a group of integrated circuits designed to work and be sold as a unit for performing related functions), and/or any suitable combination of integrated circuits. In at least one embodiment, the architecture and/or functionality of the various previous figures are implemented in the context of a general purpose computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and the like.
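The compiler analysis recited in the clauses above — finding a copy operation from a parallel processor to the host and then labeling the operations that follow it as safe or unsafe to perform speculatively — can be sketched on a toy program representation. The IR format and safety rules below are invented for illustration; a real compiler would operate on a genuine intermediate representation:

```python
# Toy sketch of the speculative-execution analysis described in the clauses
# above. Operation kinds that change random state, overwrite outputs, or use
# signal/wait instructions are treated as unsafe, mirroring clause 17.

UNSAFE_KINDS = {"rand", "signal", "wait", "overwrite_output"}

def mark_speculative(ops):
    """ops: list of (kind, name) tuples in program order.
    Returns (kind, name, safe_to_speculate) triples."""
    marked, seen_copy = [], False
    for kind, name in ops:
        if kind == "copy_d2h":          # copy from parallel processor to host
            seen_copy = True
            marked.append((kind, name, False))
        else:
            # Only operations *following* an identified copy are candidates.
            safe = seen_copy and kind not in UNSAFE_KINDS
            marked.append((kind, name, safe))
    return marked

program = [
    ("compute", "k1"),
    ("copy_d2h", "flag"),
    ("compute", "k2"),       # safe: follows the copy, no visible side effects
    ("signal", "sem"),       # unsafe: signals another stream
]
print([name for _, name, safe in mark_speculative(program) if safe])  # -> ['k2']
```

In the clauses, operations marked safe in this way could then be launched speculatively while the copied value is in flight, with speculation ceasing once the copied value arrives and resolves the branch.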
In at least one embodiment, computer system 1300 may take the form of a desktop computer, a laptop computer, a tablet computer, a server, a supercomputer, a smart phone (e.g., a wireless handheld device), a personal digital assistant ("PDA"), a digital camera, a vehicle, a head-mounted display, a portable electronic device, a mobile phone device, a television, a workstation, a game console, an embedded system, and/or any other type of logic. In at least one embodiment, parallel processing system 1312 includes, without limitation, a plurality of parallel processing units ("PPUs") 1314 and associated memories 1316. In at least one embodiment, PPUs 1314 are connected to a host processor or other peripheral devices via interconnect 1318 and switch 1320 or a multiplexer. In at least one embodiment, parallel processing system 1312 distributes computational tasks across PPUs 1314, which may be parallelizable, for example, as part of the distribution of computational tasks across thread blocks of multiple graphics processing units ("GPUs"). In at least one embodiment, memory is shared and accessible (e.g., for read and/or write access) across some or all of PPUs 1314, although such shared memory may incur performance penalties relative to the use of local memory and registers resident on a PPU 1314. In at least one embodiment, the operation of PPUs 1314 is synchronized through the use of a command such as _syncthreads(), wherein all threads in a block (e.g., executed across multiple PPUs 1314) reach a certain point of execution in code before proceeding. Other variations are within the spirit of this disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail.
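The _syncthreads() behavior described above — every thread in a block must reach the barrier before any thread proceeds — can be illustrated with a CPU-side analogy using Python's threading.Barrier. This only models the barrier semantics; it is not how GPU thread blocks are actually scheduled:

```python
import threading

# CPU-side analogy for _syncthreads(): no thread passes the barrier until
# all N threads in the "block" have reached it.

N = 4
barrier = threading.Barrier(N)
order = []
lock = threading.Lock()

def worker(tid):
    with lock:
        order.append(("before", tid))
    barrier.wait()               # analogous to _syncthreads()
    with lock:
        order.append(("after", tid))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every "before" event precedes every "after" event.
first_after = min(i for i, (phase, _) in enumerate(order) if phase == "after")
assert all(phase == "before" for phase, _ in order[:first_after])
print(sorted(phase for phase, _ in order))
# -> ['after', 'after', 'after', 'after', 'before', 'before', 'before', 'before']
```

The ordering guarantee (all "before" entries precede all "after" entries) is exactly the property a barrier provides, regardless of how the threads are interleaved otherwise.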
It should be understood, however, that there is no intention to limit the disclosure to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the disclosure. The use of the terms "a" and "an" and "the" and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context, and is not to be taken as a definition of a term. The terms "comprising," "having," "including," and "containing" are to be construed as open-ended terms (meaning "including, but not limited to") unless otherwise noted. The term "connected," when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. In at least one embodiment, use of the term "set" (e.g., "a set of items") or "subset," unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, the term "subset" of a corresponding set does not necessarily denote a proper subset of the corresponding set, but the subset and the corresponding set may be equal. Conjunctive language, such as phrases of the form "at least one of A, B, and C," or "at least one of A, B and C," unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with the context as used in general to present that an item, term, etc.
is A or B or C, or any non-empty subset of a set of A and B and C. understood in the context of For example, in the illustrative example of a set having three members, the conjunctive phrases "at least one of A, B, and C" and "at least one of A, B, and C" are: {A}, {B}, {C}, {A,B}, {A,C}, {B,C}, {A,B,C}. Thus, such conjunctions do not generally imply that certain embodiments require the presence of each of at least one A, at least one B, and at least one C. Further, unless stated otherwise or contradicted by context, the term "plurality" refers to the state of being plural (e.g., "a plurality of items" refers to multiple items). items)). In at least one embodiment, the number of items in the plurality is at least two, but may be more if explicitly or otherwise indicated by context. Further, unless stated otherwise or otherwise apparent from the context, the phrase "based on" means "based at least in part" and does not mean "based solely on." .The operations of processes described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In at least one embodiment, a process, such as a process described herein (or variations and/or combinations thereof), executes under the control of one or more computer systems made up of executable instructions, as code (e.g., executable instructions, one or more computer programs, or one or more applications) that is collectively executed by hardware on one or more processors, or by combinations thereof Implemented. In at least one embodiment, the code is stored on a computer-readable storage medium, eg, in the form of a computer program comprising instructions executable by one or more processors. In at least one embodiment, the computer-readable storage medium excludes transitory signals (e.g., propagating transitory electrical or electromagnetic transmissions), but non-transitory data storage within transitory signal transceivers. 
A non-transitory computer-readable storage medium that contains circuits (eg, buffers, caches, and queues). In at least one embodiment, the code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media, which storage media are stored in a computer system. Executable instructions are stored (or executable instructions ). In at least one embodiment, the set of non-transitory computer-readable storage media comprises a plurality of non-transitory computer-readable storage media, each non-transitory computer-readable storage medium of the plurality of non-transitory computer-readable storage media One or more of the non-transitory storage media are devoid of all code, but multiple non-transitory computer-readable storage media collectively store all code. In at least one embodiment, the executable instructions are executed such that different instructions are executed by different processors, e.g., a non-transitory computer-readable storage medium stores the instructions and a main central processing unit ("CPU ”) executes some instructions, and the graphics processing unit (“GPU”) executes other instructions. In at least one embodiment, different components of the computer system have separate processors, and different processors execute different subsets of instructions.In at least one embodiment, an arithmetic logic unit is a set of combinatorial logic circuits that take one or more inputs and produce a result. In at least one embodiment, arithmetic logic units are used by processors to implement mathematical operations such as addition, subtraction, or multiplication. In at least one embodiment, arithmetic logic units are used to implement logical operations such as logical AND/OR or XOR. In at least one embodiment, the arithmetic logic unit is stateless and consists of physical switch components, such as semiconductor transistors, arranged to form logic gates. 
In at least one embodiment, an arithmetic logic unit may operate internally as a stateful logic circuit with an associated clock. In at least one embodiment, an arithmetic logic unit may be constructed as an asynchronous logic circuit with an internal state not maintained in an associated register set. In at least one embodiment, an arithmetic logic unit is used by a processor to combine operands stored in one or more registers of the processor and produce an output that can be stored by the processor in another register or a memory location. In at least one embodiment, as a result of processing an instruction retrieved by the processor, the processor presents one or more inputs or operands to the arithmetic logic unit, causing the arithmetic logic unit to produce a result based at least in part on an instruction code provided to the inputs of the arithmetic logic unit. In at least one embodiment, the instruction codes provided by the processor to the ALU are based at least in part on the instruction executed by the processor. In at least one embodiment, combinational logic in the ALU processes the inputs and produces an output which is placed on a bus within the processor. In at least one embodiment, the processor selects a destination register, memory location, output device, or output storage location on an output bus so that clocking the processor causes the result produced by the ALU to be sent to the desired location. Within the scope of this application, the term arithmetic logic unit, or ALU, is used to refer to any computational logic circuit that processes operands to produce a result.
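As an illustration of the stateless, combinational behavior described above, the following Python sketch models an ALU as a pure mapping from an opcode and two operands to a result. The opcode names and the 8-bit register width are illustrative assumptions, not part of any particular processor described here.

```python
def alu(opcode: str, a: int, b: int, width: int = 8) -> int:
    """Minimal combinational ALU model: the result depends only on the
    current inputs (stateless), truncated to a fixed register width."""
    mask = (1 << width) - 1  # model fixed-width registers by truncation
    ops = {
        "ADD": lambda: a + b,
        "SUB": lambda: a - b,
        "MUL": lambda: a * b,
        "AND": lambda: a & b,
        "OR":  lambda: a | b,
        "XOR": lambda: a ^ b,
    }
    return ops[opcode]() & mask

# 200 + 100 = 300, which wraps to 44 in an 8-bit register.
assert alu("ADD", 200, 100) == 44
assert alu("XOR", 0b1100, 0b1010) == 0b0110
```

Because the mapping has no internal state, repeated calls with the same inputs always produce the same output, mirroring the combinational-logic description above.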
For example, as used herein, the term ALU can refer to a floating point unit, a DSP, a tensor core, a shader core, a coprocessor, or a CPU. Accordingly, in at least one embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein, and such computer systems are configured with applicable hardware and/or software that enable the performance of the operations. Further, a computer system that implements at least one embodiment of the present disclosure is a single device in one embodiment, and in another embodiment is a distributed computer system comprising multiple devices that operate differently, such that the distributed computer system performs the operations described herein and a single device does not perform all of the operations. Use of any and all examples, or exemplary language (e.g., "such as"), provided herein is intended merely to better illuminate embodiments of the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. In the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms may not be intended as synonyms for each other. Rather, in particular examples, "connected" or "coupled" may be used to indicate that two or more elements are in direct or indirect physical or electrical contact with each other.
"Coupled" may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other. Unless specifically stated otherwise, terms such as "processing," "computing," "calculating," "determining," or the like, throughout the specification refer to actions and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers, or other such information storage, transmission, or display devices. Similarly, the term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory and transforms that electronic data into other electronic data that can be stored in registers and/or memory. As non-limiting examples, a "processor" may be a CPU or a GPU. A "computing platform" may comprise one or more processors. As used herein, "software" processes may include software and/or hardware entities that perform work over time, such as tasks, threads, and intelligent agents. Also, each process may refer to multiple processes for carrying out instructions in sequence or in parallel, continuously or intermittently. In at least one embodiment, the terms "system" and "method" are used herein interchangeably insofar as a system may embody one or more methods and methods may be considered a system. In the present document, reference may be made to obtaining, acquiring, receiving, or inputting analog or digital data into a subsystem, computer system, or computer-implemented machine.
In at least one embodiment, the process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished in a variety of ways, such as by receiving the data as a parameter of a function call or a call to an application programming interface. In at least one embodiment, the process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring the data via a serial or parallel interface. In at least one embodiment, the process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring the data via a computer network from the providing entity to the acquiring entity. In at least one embodiment, reference may also be made to providing, outputting, transmitting, sending, or presenting analog or digital data. In various examples, processes of providing, outputting, transmitting, sending, or presenting analog or digital data can be accomplished by transferring the data as an input or output parameter of a function call, a parameter of an application programming interface, or an interprocess communication mechanism. Although the descriptions herein set forth example implementations of the described techniques, other architectures may be used to implement the described functionality and are intended to be within the scope of this disclosure. Furthermore, although specific distributions of responsibilities may be defined above for purposes of description, various functions and responsibilities might be distributed and divided in different ways, depending on circumstances. Furthermore, although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter claimed in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.
The invention discloses hardware load hardening for speculative side-channel attacks. Embodiments of methods and apparatuses for hardware load hardening are disclosed. In an embodiment, a processor includes safety logic, data forwarding hardware, and data fetching hardware. The safety logic is to determine whether a load is safe. The data forwarding hardware is to, in response to a determination that the load is safe, forward data requested by the load. The data fetching hardware is to fetch the data requested by the load, regardless of the determination of whether the load is safe.
1. A processor for hardware load hardening, comprising: safety logic to determine whether a load is safe; data forwarding hardware to, in response to a determination that the load is safe, forward data requested by the load; and data fetching hardware to fetch the data requested by the load regardless of the determination of whether the load is safe.
2. The processor of claim 1, wherein the data forwarding hardware is further to prevent forwarding of the data in response to a determination that the load is unsafe.
3. The processor of claim 1, wherein the data forwarding hardware includes a load queue.
4. The processor of claim 1, wherein the data fetching hardware includes a miss queue.
5. The processor of claim 1, wherein the safety logic is to determine whether the load is safe based on information from a reservation station or an out-of-order execution cluster.
6. The processor of claim 1, further comprising a translation lookaside buffer to store an address translation performed in response to the load regardless of the determination of whether the load is safe.
7. The processor of claim 1, wherein the safety logic is to determine that the load is safe when the load is no longer speculative.
8. The processor of claim 1, wherein the load is performed in response to a load instruction.
9. The processor of claim 8, wherein the safety logic is to determine that the load is safe when the load instruction is ready to be retired.
10. The processor of claim 1, wherein the data is to be forwarded to one or more dependent instructions.
11. The processor of claim 1, wherein the load is to be squashed in response to a determination that speculative execution of the load is on a wrong path.
12. The processor of claim 1, wherein the load is performed in response to a branch prediction.
13. The processor of claim 12, wherein the safety logic is to determine that the load is safe when a condition of the branch prediction is satisfied.
14. A method for hardware load hardening, comprising: determining whether a load is safe; in response to determining that the load is unsafe, preventing forwarding of data requested by the load; and fetching the data requested by the load regardless of the determination that the load is unsafe.
15. The method of claim 14, further comprising, in response to determining that the load is safe, forwarding the data.
16. The method of claim 14, further comprising, regardless of the determination that the load is unsafe, performing an address translation and storing a result in a translation lookaside buffer.
17. The method of claim 14, wherein the load is on a speculative execution path.
18. The method of claim 17, further comprising: determining that the speculative execution path is wrong; and, in response to determining that the speculative execution path is wrong, squashing the load.
19. A system for hardware load hardening, comprising: a system memory; and a processor including: safety logic to determine whether a load is safe; data forwarding hardware to, in response to a determination that the load is safe, forward data requested by the load; and data fetching hardware to fetch the data requested by the load from the system memory regardless of the determination of whether the load is safe.
20. The system of claim 19, wherein the data forwarding hardware is further to prevent forwarding of the data in response to the determination that the load is unsafe.
Hardware load hardening for speculative side-channel attacks

Technical Field
The technical field relates generally to computers, and more specifically to computer system security.

Background
Computer systems may be vulnerable to attempts by attackers to obtain confidential, private, or secret information. For example, attacks such as Spectre and Meltdown exploit the speculative and out-of-order execution capabilities of a processor to illegally read data and analyze it through a side channel.

Brief Description of the Drawings
The present invention is illustrated by way of example, and not limitation, in the accompanying drawings, in which like references indicate similar elements, and in which:
Figure 1 illustrates an example of a public gadget and a public primitive;
Figure 2 illustrates preventing information from being speculatively consumed by an access instruction, to prevent the information from being transmitted through a side channel;
Figure 3 is a block diagram of a processor pipeline and cache hierarchy that may be used to execute load instructions;
Figure 4 is a block diagram of a processor pipeline and cache hierarchy including support for hardware load hardening according to an embodiment of the present invention;
Figure 5 is a flowchart of a method for hardware load hardening according to an embodiment of the present invention;
Figure 6A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register-renaming, out-of-order issue/execution pipeline according to embodiments of the present invention;
Figure 6B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register-renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the present invention;
Figure 7 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the present invention;
Figures 8-11 are block diagrams of exemplary computer architectures;
Figure 8 shows a block diagram of a system in accordance with an embodiment of the present invention;
Figure 9 is a block diagram of a first more specific exemplary system in accordance with an embodiment of the present invention;
Figure 10 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present invention;
Figure 11 is a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present invention; and
Figure 12 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the present invention.

Detailed Description
In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details.
In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description. References in the specification to "one embodiment," "an embodiment," "example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Many processors and processor cores support capabilities to increase performance, such as caching, multithreading, out-of-order execution, branch prediction, and speculative execution. Attackers have found ways to exploit these capabilities to illegally read data. For example, an attacker might intentionally attempt to read data (e.g., secret data) from a memory location that it should not be allowed to read (e.g., out of bounds). The read might be allowed to proceed speculatively until it is determined whether the access is out of bounds. The architectural correctness of the system may be ensured by not committing any results until the determination is made, but the speculative execution may cause the microarchitectural state of the processor to change before the determination is made, and the attacker might be able to perform side-channel analysis to infer the value of the secret data from differences in the microarchitectural state of the processor. Many variations of this type of speculative attack are possible.
In one scenario, the attacker might speculatively use the secret data as part of a memory address, and, by using timing analysis to determine which memory locations are being loaded into a cache, infer the value. Embodiments of the present invention include systems, methods, and apparatuses providing features or characteristics that may be desirable for use in a variety of computer systems for a variety of reasons, including to reduce vulnerability to attacks based on speculation and side-channel analysis; to reduce vulnerability to such analysis with less cost, in performance or otherwise, than alternative approaches; and/or to improve security in general. Embodiments may provide for a load instruction or operation to be decoupled into two separate operations: a prefetch operation that may be performed speculatively, and a data forwarding operation that may be delayed until the load instruction is no longer speculative. Embodiments may be desired to avoid the complexity and performance cost of software approaches to mitigating side-channel attacks. As discussed above, the speculative execution capability of a processor may leave it vulnerable to exploitation while it is executing on a speculative path. A speculation mechanism that causes a processor to begin executing on a speculative path may be referred to as a speculation primitive. Speculation primitives may leave a processor vulnerable to exploitation because, for example, the processor may begin executing on a speculative path before the conditions used to determine whether the speculative path is correct (e.g., branch prediction) and/or allowable (e.g., bounds checking) have been resolved. An exploit may also use or rely on an open-window gadget, which creates a delay sufficient for exploitation before the speculation is resolved.
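The cache-based transmission and timing inference mentioned above can be illustrated with a toy model. In the following Python sketch, all names are illustrative assumptions: the "cache" is modeled as a set of line indices, and the probe step checks set membership in place of the real attack's access-time measurement.

```python
# Toy model of encoding a secret byte in cache state and recovering it with
# a probe. In a real attack, the probe infers the same information from
# access timing; here the cache is simply a set of cached line indices.
PROBE_LINES = 256        # one probe-array line per possible byte value

cache = set()            # which probe-array lines are currently cached

def load(line_index: int) -> None:
    """Model a memory access: the touched line becomes cached."""
    cache.add(line_index)

def transfer(secret_byte: int) -> None:
    """Transfer step: use the secret as part of an address calculation."""
    load(secret_byte)    # e.g., touching probe_array[secret_byte * 64]

def receive() -> int:
    """Receiving primitive: probe which line is 'fast' (i.e., cached)."""
    for value in range(PROBE_LINES):
        if value in cache:
            return value
    raise RuntimeError("no cached probe line found")

transfer(0x2A)           # the gadget leaks the secret into cache state
assert receive() == 0x2A # the attacker recovers the value without reading it
```

The point of the sketch is that the secret never travels through architectural registers to the attacker; it is reconstructed entirely from which cache line was touched.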
For example, if a branch condition depends on data to be loaded into a cache, execution on the speculative path may continue at least until that data is loaded. During speculative execution, a first instruction, referred to as an access instruction, may speculatively read secret data, and a second instruction, referred to as a transfer instruction, may encode the secret data in the state of the processor or otherwise affect the processor or its operation in a way that may be observable (e.g., by an attacker). Together, these two instructions may be referred to as a public gadget. An exploit may also use or rely on a public primitive, which may be used by an attacker to receive information through a side channel after the information has been leaked and transferred. Figure 1 illustrates an example of a public gadget 110, executed in the context of a victim or in the context of an attacker, and a public primitive 120, executed in the context of an attacker. The public gadget 110 includes an access instruction 112 and a transfer instruction 114. The access instruction 112 reads secret data, and the transfer instruction 114 encodes the secret data into microarchitectural state. A public primitive can receive the secret data because changes to microarchitectural state are visible to software (for example, through timing and/or performance monitoring units). Embodiments of the present invention involve changes to a processor core (for example, core 690 in Figure 6, or any one of cores 702A-N in Figure 7 or Figure 11) or a processor (for example, processor 700 in Figure 7; either of processors 810 or 815 in Figure 8; any one of processors 970, 980, or 915 in Figure 9 or Figure 10; or processor 1110 in Figure 11) to mitigate vulnerability to such exploits and/or attacks. Figure 2 illustrates preventing information read by an access instruction from being speculatively consumed, thereby preventing the information from being transmitted through a side channel. As shown in Figure 2, if the information accessed by the access instruction 212 is not speculatively consumed, the information is not transmitted through a side channel, regardless of what transfer instruction 214 or public primitive follows and/or is attempted to be used. For example, when the access instruction is a load instruction performing an unauthorized memory access, any instruction may serve as the transfer instruction. The transfer instruction may be a load or store instruction that allows information to be transferred through a secret-dependent data flow, as shown in the following pseudocode. Alternatively, the transfer instruction may be any instruction that allows information to be transferred through a secret-dependent control flow (for example, by changing the state of an instruction cache, or by causing a vector processing unit to be powered on and/or used), as shown in the following pseudocode. Figure 3 is a block diagram of a processor pipeline (which may represent a portion of pipeline 600 in Figure 6A) and a cache hierarchy that may be used to execute load instructions. Preventing speculative load instructions from being dispatched into the pipeline prevents them from becoming usable access instructions, but may have an undesirably large negative impact on performance. Therefore, embodiments of the present invention enable a speculative load instruction to be executed as two separate operations: a speculative cache data fetch operation and a non-speculative data forwarding operation. The processor pipeline includes safety logic (e.g., safety logic 410 in Figure 4, described below) for determining whether a load is speculative. Figure 4 is a block diagram of a processor pipeline (which may represent a portion of pipeline 600 in Figure 6A) and a cache hierarchy including support for hardware load hardening according to an embodiment of the present invention. When data requested by a load instruction misses in the level 1 (L1) cache 450, the cache line including the data needs to be fetched. The data fetch operation is decoupled from the data forwarding operation so that it may be performed speculatively. The speculative data fetch operation may also include looking up an address translation in the translation lookaside buffer (TLB) 440. The data forwarding operation may be delayed until the load is no longer speculative, or may be squashed if the speculation is on the wrong path. The safety logic 410 may include hardware and/or logic to determine whether and when the data forwarding operation is safe. In various embodiments, safety logic 410 may determine that the data forwarding operation is safe when any one or any combination of the following conditions is true: the load is no longer speculative; the load can no longer be squashed; all older branches have been resolved (for example, when the speculation is due to branch prediction); the load is ready to retire without any faults; or the load is ready to retire, faults notwithstanding.
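One way to picture how safety logic such as safety logic 410 might combine conditions like those listed above is the following Python sketch. The field names and the particular combination of conditions are illustrative assumptions for this sketch, not a definitive implementation.

```python
from dataclasses import dataclass

@dataclass
class LoadState:
    """Illustrative per-load state tracked by the sketch's safety check."""
    unresolved_older_branches: int   # older branches that could still squash the load
    faulted: bool                    # the load raised a fault
    ready_to_retire: bool            # the load is next to retire in program order

def is_safe(load: LoadState, allow_faulting_retire: bool = False) -> bool:
    """One possible combination of the safety conditions: a load is safe
    once no older branch can squash it and it is ready to retire
    (optionally even if it faulted)."""
    if load.unresolved_older_branches > 0:
        return False                 # still speculative: could be on a wrong path
    if load.faulted and not allow_faulting_retire:
        return False
    return load.ready_to_retire

# Still speculative behind an unresolved branch: forwarding must wait.
assert not is_safe(LoadState(unresolved_older_branches=1, faulted=False, ready_to_retire=True))
# All older branches resolved and ready to retire: forwarding may proceed.
assert is_safe(LoadState(unresolved_older_branches=0, faulted=False, ready_to_retire=True))
```

In hardware, such a check would of course be a few gates fed by the reservation station or reorder buffer rather than a function call; the sketch only captures the decision logic.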
In various embodiments, safety logic 410 may make these determinations based on information from a reservation station or out-of-order execution cluster 420 and/or any hardware and/or logic that manages or is involved in out-of-order execution (e.g., a reorder buffer). The safety condition, as determined by safety logic 410, may be used by a load queue 430 that maintains load ordering and/or a miss queue 460 that manages data requests that miss in the L1 cache 450. While the safety condition is false, the load is blocked (for example, by load queue 430), and the data requested by the load instruction is not forwarded to dependent instructions, regardless of whether the request hits or misses in L1 450. However, if the request misses in L1 450, a fetch of the data (e.g., from L2 cache 470, L3 cache 480, or system memory) is performed (e.g., by miss queue 460), and, if the address translation for the data misses in the TLB, a page table walk is performed and the translation is inserted into the TLB. Only when the safety condition is or becomes true is data found in the L1 cache 450 forwarded to dependent instructions, and data not found in the L1 cache 450 fetched and then forwarded to dependent instructions. Thus, the load instruction is converted into a data fetch operation that may be performed speculatively and a data forwarding operation that is not performed speculatively. The speculative data fetch operation may include fetching the requested data, including loading the cache line containing the data into the L1 cache and changing its cache coherency state if necessary, as well as performing an address translation and loading the address translation into the TLB.
Therefore, unlike software and other approaches in which a load instruction is not executed speculatively at all, the data requested by the load instruction is more likely to be available (for example, in the L1 cache) for forwarding as soon as the load instruction is no longer speculative. Figure 5 is a flowchart of a method 500 according to an example of an embodiment of the present invention for hardware load hardening. Various method embodiments may include all or any of the actions shown in Figure 5, in various combinations and orders, with or without other actions not shown (including actions related to the preceding or following descriptions). In 510, a load instruction is received by a processor. In 512, safety logic determines whether the load is safe. In 520, in response to a determination that the load is not safe, data forwarding is blocked. In 522, it is determined whether the requested data is available (e.g., a hit in the L1 cache). In 524, in response to a determination that the requested data is not available, a fetch of the requested data is performed. The method 500 returns from 522 (if the data is determined to be available) and from 524 (if the data is determined not to be available) to 512, until the load is determined to be safe (or the load is squashed, not shown). In 532, in response to a determination that the load is safe, it is determined whether the requested data is available (for example, a hit in the L1 cache). In 534, in response to a determination that the requested data is not available, a fetch of the requested data is performed. In 536, in response to a determination that the requested data is available, the data is forwarded to dependent operations. Embodiments may include the capability to selectively enable and disable hardware load hardening, for example, to harden (e.g., convert into a speculative data fetch operation and a non-speculative data forwarding operation) only speculative, security-critical loads.
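The loop of method 500 can be sketched in Python as follows. The dictionary-based L1 and memory models and the polled is_safe callable are illustrative assumptions: fetching may proceed while the load is unsafe, but the value is returned, i.e., forwarded, only once the load becomes safe.

```python
def hardened_load(addr, l1, memory, is_safe):
    """Sketch of method 500: the fetch half of a load may run speculatively,
    while forwarding to dependents waits until the load is safe.

    l1 is a dict modeling the L1 cache, memory a dict modeling backing
    storage, and is_safe a callable polled each iteration."""
    while not is_safe():             # 512: safety check
        if addr not in l1:           # 522: L1 miss while still speculative?
            l1[addr] = memory[addr]  # 524: speculative fetch fills the cache...
        # 520: ...but forwarding is blocked; dependents see nothing yet
    if addr not in l1:               # 532: safe now; fetch if still missing
        l1[addr] = memory[addr]      # 534
    return l1[addr]                  # 536: forward to dependent operations

# The load becomes safe on the third poll; the data was prefetched meanwhile.
polls = iter([False, False, True])
l1, memory = {}, {0x1000: 42}
assert hardened_load(0x1000, l1, memory, lambda: next(polls)) == 42
assert 0x1000 in l1  # the cache line was filled speculatively, before forwarding
```

The sketch makes the key property of the decoupling visible: by the time the safety condition becomes true, the requested line is already cached, so forwarding incurs no additional miss latency.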
The determination of whether to harden a load operation may be based on whether the load attempts to access protected data, or whether the load is otherwise unauthorized or requires an authorization that has not been obtained. The determination may be performed dynamically, leveraging existing processor features (for example, in a memory execution unit), such as protection-key technology. For example, loads requesting data from a protected page for which a key is not (or not yet) held may be hardened. In embodiments, selective enabling may be used (e.g., only for conditional branches) based on a desire to reduce vulnerability to particular exploits and/or attacks (e.g., Spectre v1, assuming other techniques are used for other variants). Embodiments may include techniques that use more aggressive prefetching to improve performance. For example, a speculative prefetch associated with a load may be triggered not only in response to an L1 miss, but also in response to an L1 hit under certain conditions. Any known technique may be used, including those used by hardware prefetchers, such as using a cache-line hit as a trigger to prefetch the next sequential cache line. Embodiments may also include using and/or extending the load queue to store prefetched data, to reduce the likelihood that a speculatively loaded cache line will be evicted before the safety logic determines that the load is safe. Embodiments may include compiler support for hardware load hardening. For example, a compiler may identify critical loads (e.g., loads that have long dependence chains or feed branch conditions) and insert prefetch instructions before them to reduce the performance impact of delaying the forwarding of the data requested by these loads. In an embodiment, a processor may include safety logic, data forwarding hardware, and data fetching hardware. The safety logic is to determine whether a load is safe.
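A minimal sketch of such a selective-hardening decision, assuming a page-granular check of the load address against a set of protected pages (the predicate, the page size, and the set-based bookkeeping are illustrative assumptions, not part of any particular protection-key mechanism):

```python
def should_harden(load_addr: int, protected_pages: set, page_size: int = 4096) -> bool:
    """Harden only loads that touch a protected page (e.g., one guarded by
    a protection key the current context does not hold); other loads run
    as ordinary speculative loads."""
    return (load_addr // page_size) in protected_pages

protected = {0x4}                        # page number 4 holds sensitive data
assert should_harden(0x4000, protected)  # 0x4000 // 4096 == 4: harden this load
assert not should_harden(0x2000, protected)  # ordinary load, no hardening cost
```

Restricting the fetch/forward split to such loads keeps the forwarding delay off the common path while still covering the accesses most worth protecting.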
The data forwarding hardware is used to forward the data requested by the load in response to a determination that the load is safe. The data fetching hardware is used to fetch the data requested by the load regardless of the determination of whether the load is safe.

The data forwarding hardware can also be used to block the forwarding of the data in response to a determination that the load is unsafe. The data forwarding hardware may include a load queue. The data fetching hardware may include a miss queue. The security logic can be used to determine whether the load is safe based on information from a reservation station or an out-of-order execution cluster. The processor may further include a translation lookaside buffer for storing an address translation, the address translation being performed in response to the load regardless of the determination of whether the load is safe. The security logic can be used to determine that the load is safe when the load is no longer speculative. The load can be performed in response to a load instruction. The security logic can be used to determine that the load is safe when the load instruction is ready to retire. The data can be forwarded to one or more dependent instructions. The load can be suppressed in response to a determination that speculative execution of the load is on a wrong path. The load can be performed in response to a branch prediction. The security logic can be used to determine that the load is safe when the condition of the branch prediction is resolved.

In an embodiment, a method may include: determining whether a load is safe; in response to determining that the load is unsafe, blocking the forwarding of the data requested by the load; and fetching the data requested by the load regardless of the determination that the load is unsafe. The method may further include forwarding the data in response to determining that the load is safe.
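A minimal structural sketch of the three units just described (class and method names are illustrative assumptions, not the actual design): security logic that decides safety, fetch hardware with a miss queue that always fetches, and forwarding hardware with a load queue that releases data only once the load is safe.

```python
from collections import deque

class SecurityLogic:
    """Tracks which loads have been determined safe (e.g., at retire)."""
    def __init__(self):
        self.safe_loads = set()
    def retire(self, load_id):
        self.safe_loads.add(load_id)      # load is no longer speculative
    def is_safe(self, load_id):
        return load_id in self.safe_loads

class FetchHardware:
    """Fetches requested data regardless of the safety determination."""
    def __init__(self, memory):
        self.memory = memory
        self.miss_queue = deque()         # records outstanding fills
    def fetch(self, load_id, addr):
        self.miss_queue.append(load_id)
        return self.memory[addr]

class ForwardingHardware:
    """Buffers fetched data and forwards it only when the load is safe."""
    def __init__(self, security):
        self.security = security
        self.load_queue = {}              # load_id -> fetched data
    def buffer(self, load_id, data):
        self.load_queue[load_id] = data
    def forward(self, load_id):
        if not self.security.is_safe(load_id):
            return None                   # forwarding blocked
        return self.load_queue.pop(load_id)

sec = SecurityLogic()
fwd = ForwardingHardware(sec)
fetch = FetchHardware({0x10: 42})

fwd.buffer(1, fetch.fetch(1, 0x10))  # data fetched while speculative
blocked = fwd.forward(1)             # None: load not yet safe
sec.retire(1)                        # load no longer speculative
data = fwd.forward(1)                # now forwarded to dependents
```

The separation of the fetch path from the forwarding path is the essential point: the microarchitectural fill happens early, but architectural visibility to dependents is gated on the safety determination.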
The method may further include performing an address translation and storing the result in a translation lookaside buffer, regardless of the determination that the load is unsafe. The method may include performing the load on a speculative execution path. The method may further include determining that the speculative execution path is wrong and, in response, suppressing the load.

In an embodiment, a system may include a system memory and a processor as described above, wherein the data may be fetched from the system memory.

In an embodiment, an apparatus may include: means for determining whether a load is safe; means for forwarding the data requested by the load in response to a determination that the load is safe; and means for fetching the data requested by the load regardless of the determination of whether the load is safe.

The forwarding means may also be used to block the forwarding of the data in response to a determination that the load is unsafe. The forwarding means may include a load queue. The fetching means may include a miss queue. The safety determination means may be used to determine whether the load is safe based on information from a reservation station or an out-of-order execution cluster. The apparatus may also include a translation lookaside buffer for storing an address translation, the address translation being performed in response to the load regardless of the determination of whether the load is safe. The safety determination means may be used to determine that the load is safe when the load is no longer speculative. The load can be performed in response to a load instruction. The safety determination means may be used to determine that the load is safe when the load instruction is ready to retire. The data can be forwarded to one or more dependent instructions. The load can be suppressed in response to a determination that speculative execution of the load is on a wrong path.
The load can be performed in response to a branch prediction. The safety determination means may be used to determine that the load is safe when the condition of the branch prediction is resolved.

In an embodiment, an apparatus may include a data storage device that stores code that, when executed by a hardware processor, causes the hardware processor to perform any method disclosed herein. The apparatus may be as described in the detailed description. The method may be as described in the detailed description.

In an embodiment, a non-transitory machine-readable medium may store code that, when executed by a machine, causes the machine to perform a method including any of the methods disclosed herein.

Exemplary Core, Processor, and System Architectures

Embodiments of the present invention have been described and depicted with reference to a processor, which may represent any one of many different processors in which the invention is embodied in different ways and/or for different purposes. These processors and cores, for example as described below, may include hardware such as caches and branch predictors that improve performance but may also make the processor and/or core more susceptible to the attacks defended against by embodiments of the present invention.

For example, implementations of cores in a processor in which the invention may be embodied can include: a general-purpose in-order core intended for general-purpose computing; a high-performance general-purpose out-of-order core intended for general-purpose computing; and a special-purpose core intended primarily for graphics and/or scientific (throughput) computing.
Implementations of a processor in which the invention may be embodied can include: a central processing unit (CPU) including one or more general-purpose in-order cores intended for general-purpose computing and/or one or more general-purpose out-of-order cores intended for general-purpose computing; and a coprocessor including one or more special-purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: the coprocessor on a chip separate from the CPU; the coprocessor in the same package as the CPU but on a separate die; the coprocessor on the same die as the CPU (in which case such a coprocessor is sometimes referred to as special-purpose logic or as special-purpose cores, such as integrated graphics and/or scientific (throughput) logic); and a system on a chip (SoC) that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above-described coprocessor, and additional functionality.

Exemplary core architectures are described next, followed by exemplary processor and computer architectures. Each processor may include one or more cores, where each core and/or combination of cores may be constructed and designed to execute one or more threads, processes, or other instruction sequences at various times.
Core architectures and design techniques can provide for and/or support the concurrent execution of multiple threads according to any of a class of approaches referred to as simultaneous (or symmetric) multithreading (SMT), or any other approach.

Further, as mentioned above and explained in more detail below, embodiments of the present disclosure can apply to any type of processor or processing element, including: general-purpose processors, server processors or processing elements for use in a server environment, coprocessors (e.g., security coprocessors), high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed-function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device. The processor or processors can be implemented on one or more chips. The processor or processors may be part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.
The processors and processing devices listed above and described herein are exemplary; as explained herein, the present disclosure applies to any processor or processing device.

Further, as described above and explained in more detail below, embodiments of the present disclosure can apply to processors or processing elements using any of a number of instruction sets and instruction set architectures, including, for example: the x86 instruction set (optionally including extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, California; the ARM instruction set of ARM Holdings of Sunnyvale, California (with optional additional extensions such as NEON); IBM's "Power" instruction set; or any other instruction set, including both RISC and CISC instruction sets. The instruction sets and instruction set architectures listed above and described herein are exemplary; as explained herein, the present disclosure applies to any instruction set or instruction set architecture.

Exemplary Core Architectures

FIG. 6A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. FIG. 6B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid-lined boxes in FIGS. 6A-6B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed-lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG.
6A, the processor pipeline 600 includes a fetch stage 602, a length decode stage 604, a decode stage 606, an allocation stage 608, a renaming stage 610, a scheduling (also known as dispatch or issue) stage 612, a register read/memory read stage 614, an execute stage 616, a write back/memory write stage 618, an exception handling stage 622, and a commit stage 624.

FIG. 6B shows a processor core 690 including a front end unit 630 coupled to an execution engine unit 650, with both the front end unit 630 and the execution engine unit 650 coupled to a memory unit 670. The core 690 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 690 may be a special-purpose core, such as, for example, a network or communication core, a compression engine, a coprocessor core, a general-purpose computing graphics processing unit (GPGPU) core, a graphics core, or the like.
For example, as explained above, the core 690 may be any member of a set containing: general-purpose processors, server processors or processing elements for use in a server environment, coprocessors (e.g., security coprocessors), high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed-function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device.

The front end unit 630 includes a branch prediction unit 632, which is coupled to a micro-op cache 633 and an instruction cache unit 634; the instruction cache unit 634 is coupled to an instruction translation lookaside buffer (TLB) 636, which is coupled to an instruction fetch unit 638, which in turn is coupled to a decode unit 640. The decode unit 640 (or decoder) may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The micro-operations, micro-code entry points, microinstructions, etc. may be stored in at least the micro-op cache 633. The decode unit 640 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc.
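The role of the micro-op cache 633 can be illustrated with a toy sketch (an illustrative assumption, not the actual decode logic; the two-way split of each instruction into fake micro-ops is purely for demonstration): decode results are memoized by instruction address, so re-fetching a hot instruction skips the decoder.

```python
def make_decoder():
    uop_cache = {}            # address -> list of micro-ops (633's role)
    stats = {"decodes": 0}    # counts invocations of the decoder proper

    def decode(addr, instruction):
        if addr in uop_cache:
            return uop_cache[addr]          # served from the micro-op cache
        stats["decodes"] += 1
        # Stand-in for real decoding: split into two fake micro-ops.
        uops = [f"{instruction}.uop{i}" for i in range(2)]
        uop_cache[addr] = uops
        return uops

    return decode, stats

decode, stats = make_decoder()
first = decode(0x100, "add")
again = decode(0x100, "add")   # hit: the decoder is not re-invoked
```

The benefit modeled here is the one the text implies: a hit in 633 avoids repeated decode work on hot code paths.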
In one embodiment, the core 690 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 640 or otherwise within the front end unit 630). The micro-op cache 633 and the decode unit 640 are coupled to a rename/allocator unit 652 in the execution engine unit 650. In various embodiments, a micro-op cache such as 633 may also or instead be referred to as an op-cache, u-op cache, uop cache, or μop cache; and micro-operations may be referred to as micro-ops, u-ops, uops, and μops.

The execution engine unit 650 includes the rename/allocator unit 652 coupled to a retirement unit 654 and a set 656 of one or more scheduler units. The scheduler unit(s) 656 represents any number of different schedulers, including reservation stations, central instruction windows, etc. The scheduler unit(s) 656 is coupled to the physical register file unit(s) 658. Each of the physical register file unit(s) 658 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file unit(s) 658 comprises a vector register unit, a write mask register unit, and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file unit(s) 658 is overlapped by the retirement unit 654 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.).
The retirement unit 654 and the physical register file unit(s) 658 are coupled to the execution cluster(s) 660. The execution cluster(s) 660 includes a set 662 of one or more execution units and a set 664 of one or more memory access units. The execution units 662 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 656, physical register file unit(s) 658, and execution cluster(s) 660 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file unit(s), and/or execution cluster; in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of that pipeline has the memory access unit(s) 664). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 664 is coupled to the memory unit 670, which includes a data TLB unit 672 coupled to a data cache unit 674 coupled to a level 2 (L2) cache unit 676.
In an exemplary embodiment, the memory access units 664 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 672 in the memory unit 670. The instruction cache unit 634 is further coupled to the level 2 (L2) cache unit 676 in the memory unit 670. The L2 cache unit 676 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 600 as follows: 1) the instruction fetch unit 638 performs the fetch and length decode stages 602 and 604; 2) the decode unit 640 performs the decode stage 606; 3) the rename/allocator unit 652 performs the allocation stage 608 and the renaming stage 610; 4) the scheduler unit(s) 656 performs the scheduling stage 612; 5) the physical register file unit(s) 658 and the memory unit 670 perform the register read/memory read stage 614, and the execution cluster(s) 660 performs the execute stage 616; 6) the memory unit 670 and the physical register file unit(s) 658 perform the write back/memory write stage 618; 7) various units may be involved in the exception handling stage 622; and 8) the retirement unit 654 and the physical register file unit(s) 658 perform the commit stage 624.

The core 690 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, California; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, California; IBM's "Power" instruction set; or any other instruction set, including both RISC and CISC instruction sets), including the instruction(s) described herein.
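The stage-to-unit mapping enumerated above can be written out as data so the ordering of pipeline 600 is easy to check at a glance (documentation-as-code rather than a simulator; the unit names follow FIG. 6B):

```python
# Each entry: (pipeline-600 stage, unit(s) from FIG. 6B that perform it).
PIPELINE_600 = [
    ("fetch",                     "instruction fetch unit 638"),
    ("length decode",             "instruction fetch unit 638"),
    ("decode",                    "decode unit 640"),
    ("allocation",                "rename/allocator unit 652"),
    ("renaming",                  "rename/allocator unit 652"),
    ("scheduling",                "scheduler unit(s) 656"),
    ("register read/memory read", "physical register file unit(s) 658 and memory unit 670"),
    ("execute",                   "execution cluster(s) 660"),
    ("write back/memory write",   "memory unit 670 and physical register file unit(s) 658"),
    ("exception handling",        "various units"),
    ("commit",                    "retirement unit 654 and physical register file unit(s) 658"),
]

stages = [stage for stage, _ in PIPELINE_600]
```

Reading the list top to bottom reproduces the in-order front of the pipeline through to commit, matching the numbering in FIG. 6A.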
In one embodiment, the core 690 includes logic to support a packed data instruction set extension (e.g., AVX, AVX2, AVX-512), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, SMT (e.g., where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding with SMT thereafter, such as in Hyper-Threading Technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 634/674 and a shared L2 cache unit 676, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache(s) may be external to the core and/or the processor.

Exemplary Processor Architectures

FIG. 7 is a block diagram of a processor 700 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid-lined box in FIG.
7 illustrates a processor 700 with a single core 702A, a system agent 710, and a set 716 of one or more bus controller units, while the optional addition of the dashed-lined boxes illustrates an alternative processor 700 with multiple cores 702A-N, a set 714 of one or more integrated memory controller units in the system agent unit 710, and special-purpose logic 708.

Thus, different implementations of the processor 700 may include: 1) a CPU with the special-purpose logic 708 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 702A-N being one or more general-purpose cores (e.g., general-purpose in-order cores, general-purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 702A-N being a large number of special-purpose cores intended primarily for graphics and/or scientific (throughput) computing; 3) a coprocessor with the cores 702A-N being a large number of general-purpose in-order cores; and 4) the cores 702A-N representing any number of disaggregated cores with a separate input/output (I/O) block. Thus, the processor 700 may be a general-purpose processor, a server processor or processing element for use in a server environment, a coprocessor (e.g., a security coprocessor), a high-throughput MIC processor, a GPGPU, an accelerator (such as, e.g., a graphics accelerator or digital signal processing (DSP) unit, a cryptographic accelerator, a fixed-function accelerator, a machine learning accelerator, a networking accelerator, or a computer vision accelerator), a field programmable gate array, or any other processor or processing device. The processor may be implemented on one or more chips.
The processor 700 may be part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 706, and external memory (not shown) coupled to the set 714 of integrated memory controller units. The set of shared cache units 706 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 712 interconnects the integrated graphics logic 708 (integrated graphics logic 708 is an example of, and is also referred to herein as, special-purpose logic), the set of shared cache units 706, and the system agent unit 710/integrated memory controller unit(s) 714, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 706 and the cores 702A-N.

In some embodiments, one or more of the cores 702A-N are capable of multithreading. The system agent 710 includes those components coordinating and operating the cores 702A-N. The system agent unit 710 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed for regulating the power state of the cores 702A-N and the integrated graphics logic 708.
The display unit is for driving one or more externally connected displays.

The cores 702A-N may be homogeneous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 702A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

FIGS. 8-11 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), general-purpose processors, server processors or processing elements for use in a server environment, coprocessors (e.g., security coprocessors), high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators, cryptographic accelerators, fixed-function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device, graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a wide variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 8, shown is a block diagram of a system 800 in accordance with an embodiment of the present invention. The system 800 may include one or more processors 810, 815, which are coupled to a controller hub 820.
In one embodiment, the controller hub 820 includes a graphics memory controller hub (GMCH) 890 and an input/output hub (IOH) 850 (which may be on separate chips); the GMCH 890 includes memory and graphics controllers to which are coupled a memory 840 and a coprocessor 845; the IOH 850 couples I/O devices 860 to the GMCH 890. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 840 and the coprocessor 845 are coupled directly to the processor 810, and the controller hub 820 is in a single chip with the IOH 850.

The optional nature of the additional processor 815 is denoted in FIG. 8 with broken lines. Each processor 810, 815 may include one or more of the processing cores described herein and may be some version of the processor 700.

The memory 840 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 820 communicates with the processor(s) 810, 815 via a multi-drop bus such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 895.

In one embodiment, the coprocessor 845 is a special-purpose processor (including, e.g., general-purpose processors, server processors or processing elements for use in a server environment, coprocessors such as security coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed-function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device).
In one embodiment, the controller hub 820 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 810, 815 in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, and power consumption characteristics.

In one embodiment, the processor 810 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 810 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 845. Accordingly, the processor 810 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 845. The coprocessor(s) 845 accepts and executes the received coprocessor instructions.

Referring now to FIG. 9, shown is a block diagram of a first more specific exemplary system 900 in accordance with an embodiment of the present invention. As shown in FIG. 9, the multiprocessor system 900 is a point-to-point interconnect system and includes a first processor 970 and a second processor 980 coupled via a point-to-point interconnect 950. Each of the processors 970 and 980 may be some version of the processor 700. In one embodiment of the invention, the processors 970 and 980 are respectively the processors 810 and 815, while the coprocessor 938 is the coprocessor 845. In another embodiment, the processors 970 and 980 are respectively the processor 810 and the coprocessor 845.

The processors 970 and 980 are shown including integrated memory controller (IMC) units 972 and 982, respectively. The processor 970 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 976 and 978; similarly, the second processor 980 includes P-P interfaces 986 and 988.
The processors 970, 980 may exchange information via a P-P interface 950 using point-to-point (P-P) interface circuits 978, 988. As shown in FIG. 9, the IMCs 972 and 982 couple the processors to respective memories, namely a memory 932 and a memory 934, which may be portions of main memory locally attached to the respective processors.

The processors 970, 980 may each exchange information with a chipset 990 via individual P-P interfaces 952, 954 using point-to-point interface circuits 976, 994, 986, 998. The chipset 990 may optionally exchange information with the coprocessor 938 via a high-performance interface 992. In one embodiment, the coprocessor 938 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

The chipset 990 may be coupled to a first bus 916 via an interface 996. In one embodiment, the first bus 916 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG. 9, various I/O devices 914 may be coupled to the first bus 916, along with a bus bridge 918 that couples the first bus 916 to a second bus 920.
In one embodiment, one or more additional processors 915, such as general-purpose processors, server processors or processing elements for use in a server environment, coprocessors (e.g., security coprocessors), high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed-function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device, are coupled to the first bus 916. In one embodiment, the second bus 920 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 920 including, for example, a keyboard and/or mouse 922, communication devices 927, and a storage unit 928, such as a disk drive or other mass storage device, which may include instructions/code and data 930 in one embodiment. Further, an audio I/O 924 may be coupled to the second bus 920. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 9, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 10, shown is a block diagram of a second more specific exemplary system 1000 in accordance with an embodiment of the present invention. Like elements in FIGS. 9 and 10 bear like reference numerals, and certain aspects of FIG. 9 have been omitted from FIG. 10 in order to avoid obscuring other aspects of FIG. 10.

FIG. 10 illustrates that the processors 970, 980 may include integrated memory and I/O control logic ("CL") 972 and 982, respectively. Thus, the CL 972, 982 include integrated memory controller units and include I/O control logic. FIG. 10 illustrates that not only are the memories 932, 934 coupled to the CL 972, 982, but also that the I/O devices 1014 are coupled to the control logic 972, 982.
Legacy I/O devices 1015 are coupled to the chipset 990.

Referring now to FIG. 11, shown is a block diagram of an SoC 1100 in accordance with an embodiment of the present invention. Similar elements in FIG. 7 bear similar reference numerals. Also, dashed-line boxes are optional features on more advanced SoCs. In FIG. 11, the interconnect unit(s) 1102 is coupled to: an application processor 1110 which includes a set of one or more cores 702A-N, which include cache units 704A-N, and shared cache unit(s) 706; a system agent unit 710; bus controller unit(s) 716; integrated memory controller unit(s) 714; a set of one or more coprocessors 1120 which may include integrated graphics logic, an image processor, audio and video processors, general-purpose processors, server processors or processing elements used in a server environment, security coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, for example, graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device; a static random access memory (SRAM) unit 1130; a direct memory access (DMA) unit 1132; and a display unit 1140 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1120 include a dedicated processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

CONCLUSION

The various embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches.
Embodiments of the present invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor (including, for example, a general-purpose processor, a server processor or processing element used in a server environment, a coprocessor (e.g., a security coprocessor), a high-throughput MIC processor, a GPGPU, an accelerator (such as, for example, a graphics accelerator or digital signal processing (DSP) unit, a cryptographic accelerator, a fixed function accelerator, a machine learning accelerator, a networking accelerator, or a computer vision accelerator), a field programmable gate array, or any other processor or processing device), a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 930 illustrated in FIG. 9, may be applied to input instructions to perform the functions described herein and to generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language.
In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores", may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the present invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein.
Such embodiments may also be referred to as program products.

The instructions to be executed by a processor core according to embodiments of the invention may be embodied in a "generic vector friendly instruction format" which is detailed below. In other embodiments, such a format is not utilized and another instruction format is used; however, the description below of the write mask registers, various data transformations (swizzle, broadcast, etc.), addressing, etc. is generally applicable to the description of the embodiments of the instruction(s) above. Additionally, exemplary systems, architectures, and pipelines are detailed below. Instructions may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG. 12 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG.
12 shows that a program in a high level language 1202 may be compiled using an x86 compiler 1204 to generate x86 binary code 1206 that may be natively executed by a processor 1216 with at least one x86 instruction set core. The processor 1216 with at least one x86 instruction set core represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1204 represents a compiler that is operable to generate x86 binary code 1206 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 1216 with at least one x86 instruction set core. Similarly, FIG. 12 shows that the program in the high level language 1202 may be compiled using an alternative instruction set compiler 1208 to generate alternative instruction set binary code 1210 that may be natively executed by a processor 1214 without at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, California and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, California). The instruction converter 1212 is used to convert the x86 binary code 1206 into code that may be natively executed by the processor 1214 without an x86 instruction set core.
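As a rough illustration of the FIG. 12 flow (not the patent's implementation; all names here are invented), the sketch below models a binary as a list of (isa, instruction) pairs: a processor runs a binary natively only when every instruction matches its host ISA, and the instruction converter rewrites x86 code for the alternative instruction set.

```python
# Hypothetical sketch of the FIG. 12 flow. A binary is a list of
# (isa, instruction) pairs; run() stands in for native execution and
# convert_x86_to_alt() stands in for instruction converter 1212.

def convert_x86_to_alt(x86_code):
    """Stand-in for instruction converter 1212 (e.g. binary translation)."""
    return [("alt", op) for isa, op in x86_code if isa == "x86"]

def run(code, host_isa):
    """Pretend-execute: refuse binaries that do not match the host ISA."""
    if any(isa != host_isa for isa, _ in code):
        raise ValueError("binary does not match host instruction set")
    return len(code)  # stand-in result: number of instructions "retired"

x86_binary = [("x86", "mov eax, 1"), ("x86", "add eax, 2")]

assert run(x86_binary, "x86") == 2          # processor 1216: native execution
alt_binary = convert_x86_to_alt(x86_binary)
assert run(alt_binary, "alt") == 2          # processor 1214: after conversion
```

As the surrounding text notes, a real converter producing code identical to natively compiled output is difficult; this toy merely shows the dispatch: run natively when the ISA matches, otherwise convert first.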
The converted code is not likely to be the same as the alternative instruction set binary code 1210, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1212 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1206.

The operations in the flow diagrams may have been described with reference to the exemplary embodiments of the other figures. However, it should be understood that the operations of the flow diagrams can be performed by embodiments of the invention other than those discussed with reference to the other figures, and the embodiments of the invention discussed with reference to the other figures can perform operations different than those discussed with reference to the flow diagrams. Furthermore, while the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.).

One or more parts of embodiments of the invention may be implemented using different combinations of software, firmware, and/or hardware.
The embodiments may be implemented using an electronic device that stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals, such as carrier waves or infrared signals). Thus, an electronic device (e.g., a computer) may include hardware and software, such as a set of one or more processors coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data. For instance, an electronic device may include non-volatile memory containing the code, since the non-volatile memory can persist the code/data even when the electronic device is turned off (when power is removed); and while the electronic device is turned on, the part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device.
Typical electronic devices also include a set of one or more physical network interfaces to establish network connections (to transmit and/or receive code and/or data using propagated signals) with other electronic devices.

While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, and can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
A processor is configured to operate in modes which utilize segmentation and modes which do not utilize segmentation. The processor includes circuitry which is configured to detect and respond to mode and state changes. The circuitry is configured to determine whether a segmentation state of the processor changes in response to execution of a control transfer operation. If the segmentation state does not change as a result of the transfer instruction, execution of instructions may continue sequentially and a corresponding first check is performed. However, if the segmentation state does change as a result of the transfer instruction, a flush of the pipeline is initiated prior to performing a corresponding second check. When a first mode of operation is detected a limit check may be performed, while a canonical check may be performed when a second mode of operation is detected. A special register is defined which is configured to indicate changes in segmentation state subsequent to a control transfer operation. A read of the special register may then be performed in order to determine whether a state change is indicated.
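The mode-dependent check summarized above can be sketched as follows. This is a minimal illustration assuming x86-64-style semantics (a hypothetical 48-bit virtual address width); all function and constant names are invented, not taken from the patent.

```python
# Toy model of the abstract's dispatch: a segment-limit check in the
# segmented (legacy) mode, and a canonical-address check in 64-bit mode
# (upper bits 63:48 must be copies of bit 47 for 48-bit virtual addresses).

VADDR_BITS = 48  # assumed, implementation-dependent virtual address width

def is_canonical(addr):
    sign = (addr >> (VADDR_BITS - 1)) & 1
    upper = addr >> VADDR_BITS
    return upper == (0 if sign == 0 else (1 << (64 - VADDR_BITS)) - 1)

def check_address(addr, mode, seg_limit=None):
    if mode == "64bit":                 # second mode: canonical check
        return is_canonical(addr)
    return addr <= seg_limit            # first mode: segment limit check

assert check_address(0x0000_7FFF_FFFF_FFFF, "64bit")            # canonical
assert not check_address(0x0000_8000_0000_0000, "64bit")        # non-canonical
assert check_address(0xFFFF_8000_0000_0000, "64bit")            # canonical
assert check_address(0x1000, "legacy", seg_limit=0xFFFF)        # within limit
assert not check_address(0x1_0000, "legacy", seg_limit=0xFFFF)  # over limit
```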
1. A method comprising:detecting a control transfer operation in a processor; determining whether an operating mode of the processor changes from a first mode to a second mode in response to execution of the transfer operation; performing a first check in response to detecting the operating mode is the first mode as a result of the transfer operation; and performing a second check in response to detecting the operating mode is the second mode as a result of the transfer operation; wherein the first check comprises a limit check and the second check comprises a canonical check; and flushing a pipeline of the processor in response to detecting the operating mode of the processor changes as a result of the operation. 2. The method of claim 1, wherein the control transfer operation comprises a far transfer instruction.3. The method of claim 2, wherein segmentation is enabled in the first mode and segmentation is not enabled in the second mode.4. The method of claim 3, wherein the second mode is a 64-bit addressing mode.5. The method of claim 1, further comprising reading a register configured to indicate a change in segmentation state as a result of said transfer operation.6. The method of claim 5, wherein said register is further configured to indicate a change in privilege level as a result of said transfer operation.7. The method of claim 5, wherein said register is a programmer invisible special register which is accessed via a special register bus.8. 
A processor comprising:a memory configured to store instructions; first circuitry configured to decode and detect a control transfer instruction; and second circuitry configured to: determine whether an operating mode of the processor changes from a first mode to a second mode in response to execution of the transfer operation; perform a first check in response to detecting the operating mode is the first mode as a result of the transfer operation; and perform a second check in response to detecting the operating mode is the second mode as a result of the transfer operation; and flush a pipeline of the processor in response to detecting the operating mode of the processor changes as a result of the operation; wherein the first check comprises a limit check and the second check comprises a canonical check. 9. The processor of claim 8, wherein the control transfer operation comprises a far transfer instruction.10. The processor of claim 9, wherein the second circuitry comprises a load/store unit.11. The processor of claim 10, wherein the load/store unit is configured to read a register which is configured to indicate a change in segmentation state as a result of the transfer operation.12. The processor of claim 11, wherein said register is further configured to indicate a privilege level subsequent to execution of the transfer operation.13. The processor of claim 11, further comprising a special register file, and wherein said register is a programmer invisible special register included in the special register file and is accessed via a special register bus.14. The processor of claim 8, wherein segmentation is enabled in the first mode and segmentation is not enabled in the second mode.15. The processor of claim 14, wherein the second mode is a 64-bit addressing mode.16. 
A computer system comprising:a main memory; and a processor coupled to the main memory, wherein said processor includes: an instruction cache configured to store instructions; a first circuit configured to detect a control transfer instruction; and a second circuit configured to: determine whether an operating mode of the processor changes from a first mode to a second mode in response to execution of the transfer operation; perform a first check in response to detecting the operating mode is the first mode as a result of the transfer operation; and perform a second check in response to detecting the operating mode is the second mode as a result of the transfer operation; and flush a pipeline of the processor in response to detecting the operating mode of the processor changes as a result of the operation; wherein the first check comprises a limit check and the second check comprises a canonical check. 17. The computer system of claim 16, wherein the control transfer instruction comprises a far transfer, and wherein segmentation is enabled in the first mode and segmentation is not enabled in the second mode.18. The computer system of claim 17, wherein the second mode comprises a 64-bit addressing mode. |
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention is related to the field of processors and, more particularly, to efficiently detecting mode changes in a processor.

2. Description of the Related Art

Processor architectures often provide a variety of modes, typically programmable in configuration registers and/or memory locations read by the processor during operation. The selected mode generally controls the operation of certain aspects of the processor, as defined by the processor architecture. Other modes may cause different operation in those aspects.

As the processor architecture evolves, it may be desirable to add new modes. Sometimes, as these new modes are added, it is difficult to reliably establish the mode during operation of a processor implementing the architecture. The difficulty may arise from interactions between the mode and other, previously defined modes, or may arise from a different definition of the previously defined modes when the newly defined mode is established. As the processor is transitioned from one mode to another, it is frequently necessary to minimize the activity occurring in the processor to eliminate undefined states from occurring as the mode change takes effect.

Unfortunately, in some circumstances, it may be impossible to eliminate the undefined states. In such cases, the newly defined mode may not be implementable (limiting the ability of the processor architecture to be extended), or one of the previously defined modes may have to be changed or eliminated (which may reduce compatibility with previous processors which implemented the architecture). Also, many of the combinations of the newly defined mode and the previously defined modes may not be useful, but supporting all of the combinations may complicate implementation of the processor architecture.
Complicating the implementation merely to allow all combinations of newly defined modes and previously defined modes is undesirable.

Still further, particular functions performed by certain operations in the previously existing mode of operation may not be desirable or appropriate while operating in a new mode. Rather, new functions may be defined and performed by these operations while operating in the new mode. Consequently, it would be desirable to efficiently detect these mode changes and respond accordingly.

SUMMARY OF THE INVENTION

The problems outlined above are in large part solved by a processor as described herein. The processor generates a mode indication based on two or more other indications. The mode indication is indicative of whether or not a particular mode is active in the processor. Each indication is stored in a storage location which is addressable via a different instruction. By generating the mode indication based on the values of the two or more indications, undefined states in which the mode is active and the two or more indications are not in defined states for that mode may be eliminated. Furthermore, undesirable (e.g. non-useful) combinations of indications while the mode is active may also be avoided.

In one embodiment, a long mode in which a 64 bit operating mode is selectable in addition to 32 bit and 16 bit modes may be activated via a long mode active indication. The long mode active indication may be generated by the processor, and may indicate that long mode is active if paging is enabled and a long mode enable indication indicates that long mode is enabled. In this manner, long mode may be activated after paging is enabled (with a set of long mode page tables indicated by the page table base address).
Additionally, long mode may only be active when paging is enabled, eliminating a state of the processor in which long mode is active but paging is disabled.

Broadly speaking, a method and an apparatus are contemplated which comprise a memory configured to store instructions; first circuitry configured to decode and detect a control transfer instruction; and second circuitry configured to detect and respond to mode and state changes. In general, the second circuitry is configured to determine whether a segmentation state of the processor changes in response to execution of the transfer operation. If the segmentation state does not change as a result of the transfer instruction, then execution of instructions may continue sequentially and a corresponding check performed. However, if the segmentation state does change as a result of the transfer instruction, a flush of the pipeline is initiated prior to performing a corresponding check.

In one particular embodiment, a special register is defined which is configured to indicate changes in segmentation state subsequent to a far transfer operation. A read of the special register may then be performed in order to determine whether a state change is indicated. Further, when a first mode of operation is detected a limit check may be performed, while a canonical check may be performed when a second mode of operation is detected.

BRIEF DESCRIPTION OF THE DRAWINGS

Other objects and advantages of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings in which:

FIG. 1 is a block diagram of one embodiment of a processor.
FIG. 2 is a block diagram of one embodiment of a segment descriptor for 32/64 mode.
FIG. 3 is a block diagram of one embodiment of a segment descriptor for compatibility mode.
FIG. 4 is a block diagram of operation in compatibility mode and in legacy mode according to one embodiment of the processor shown in FIG. 1.
FIG. 5 is a table illustrating one embodiment of operating modes as a function of segment descriptor and control register values.
FIG. 6 is a table illustrating one embodiment of the use of instruction prefixes to override default operating modes.
FIG. 7 is a block diagram of one embodiment of a register.
FIG. 8 is a block diagram illustrating one embodiment of generation of a mode indicator.
FIG. 9 is a table illustrating one embodiment of consistency checks.
FIG. 10 is a flowchart illustrating one embodiment of entering long mode.
FIG. 11 is a flowchart illustrating one embodiment of exiting long mode.
FIG. 12 is a flowchart illustrating one embodiment of an interpreter.
FIG. 13 is a flowchart illustrating one embodiment of a translator.
FIG. 14 is a block diagram illustrating one embodiment of mapping non-native architected state.
FIG. 15 is a block diagram illustrating a second embodiment of mapping non-native architected state.
FIG. 16 is a block diagram illustrating a third embodiment of mapping non-native architected state.
FIG. 17 illustrates one embodiment which utilizes gates.
FIG. 18 shows one embodiment of a call gate descriptor.
FIG. 19 is a block diagram illustrating one embodiment of a processor.
FIG. 20 shows one embodiment of a method for detecting a state change.
FIG. 21 is a block diagram of one embodiment of a carrier medium.
FIG. 22 is a block diagram of one embodiment of a computer system including the processor shown in FIG. 1.
FIG. 23 is a block diagram of another embodiment of a computer system including the processor shown in FIG. 1.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail.
It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Turning now to FIG. 1, a block diagram illustrating one embodiment of a processor 10 is shown. Other embodiments are possible and contemplated. In the embodiment of FIG. 1, processor 10 includes an instruction cache 12, an execution core 14, a data cache 16, an external interface unit 18, a memory management unit (MMU) 20, and a register file 22. In the illustrated embodiment, MMU 20 includes a set of segment registers 24, a first control register 26, a second control register 28, a local descriptor table register (LDTR) 30, a global descriptor table register (GDTR) 32, and a page table base address register (CR3) 34. Instruction cache 12 is coupled to external interface unit 18, execution core 14, and MMU 20. Execution core 14 is further coupled to MMU 20, register file 22, and data cache 16. Data cache 16 is further coupled to MMU 20 and external interface unit 18. External interface unit 18 is further coupled to MMU 20 and to an external interface.

Processor 10 may employ a processor architecture compatible with the x86 architecture (also known as the IA-32 architecture) and including additional architectural features to support 64 bit processing. More particularly, the processor architecture employed by processor 10 may define a mode, referred to below as "long mode". Long mode is a mode in which 64 bit processing is selectable as an operating mode, as well as 32 bit or 16 bit processing as specified in the x86 architecture. More particularly, long mode may provide for an operating mode in which virtual addresses may be greater than 32 bits in size.
In the 32 bit and 16 bit modes, the maximum size of the virtual address may be 32 bits. In order to support the larger virtual addresses, the page table structure may be defined differently when long mode is active than when long mode is inactive (since there are more address bits to be translated). Therefore, as part of the processing of switching to and from long mode, an instruction which updates the page table base address register 34 (e.g. the CR3 register in the x86 architecture) may be executed. The page table base address register 34 stores an address locating the page tables in memory. If switching to long mode, the instruction may update the page table base address register 34 to locate a long mode page table. If switching from long mode, the instruction may update the page table base address register 34 to locate non-long mode page tables (non-long mode page tables are referred to herein as legacy page tables).

During the time period between the instructions changing the LME indication and changing the page table base register, no translations may be performed, regardless of which of the two registers is changed first. For example, activating long mode may include changing the LME indication to indicate that long mode is desired and changing the page table base address register to indicate the long mode page tables. If the page table base address is changed before the LME indication and a translation is attempted between the changing of the page table base address and the changing of the LME indication, the long mode page tables would be used for the translation before long mode is activated (i.e. while the processor is performing a non-long mode translation). If the LME indication is changed before the page table base address is changed and a translation is attempted between the changing of the LME indication and the changing of the page table base address, the processor would be performing a long mode translation using legacy page tables.
In either case, the translation may not be performed properly.

While some embodiments of processor 10 could employ translation lookaside buffers (TLBs) to mitigate the occurrence of translations (in which processor 10 traverses the page tables) during the transition to and from long mode, a TLB miss during the transition may not be completely ruled out. Accordingly, processor 10 may implement a mechanism allowing for orderly transition to and from long mode, even though multiple registers may be changed to perform the transition. Particularly, processor 10 may employ a long mode active (LMA) indication in a control register (e.g. control register 26 in the present embodiment, although the LMA indication may be stored in any control register, including control registers not storing the LME indication). The processor 10 may use the LMA indication as the indication of whether or not long mode is active (i.e. whether or not the processor is operating in long mode). However, the LMA indication may not be modified directly via an instruction. Instead, an instruction is used to change the state of the LME indication to indicate whether or not long mode is desired. Long mode may be activated (as indicated by the LMA indication) via the combination of enabling paging (as indicated by the PG indication in control register 28 and described in more detail below) and the LME indication indicating that long mode is desired. Viewed in another way, the LME indication may be used to enable the transition to long mode.
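A toy model of the LMA generation just described can be sketched as below. The register and bit names are assumptions for illustration (loosely modeled on EFER.LME and CR0.PG); this is not the patent's circuitry. The key property is that LMA is derived from LME AND PG rather than being directly writable, so long mode can never be active while paging is disabled.

```python
# Sketch: LMA is a generated indication (LME AND PG), never written directly.

class MMU:
    def __init__(self):
        self.lme = False   # long mode enable (set by instruction)
        self.pg = False    # paging enable (set by instruction)
        self.cr3 = None    # page table base address register 34

    @property
    def lma(self):         # long mode active: derived, not directly writable
        return self.lme and self.pg

def activate_long_mode(mmu, long_mode_tables):
    mmu.pg = False                 # 1. disable paging
    mmu.lme = True                 # 2. request long mode
    mmu.cr3 = long_mode_tables     # 3. point at the long mode page tables
    mmu.pg = True                  # 4. re-enable paging; LMA becomes active

mmu = MMU()
assert not mmu.lma
activate_long_mode(mmu, long_mode_tables=0x1000)
assert mmu.lma
mmu.pg = False                     # disabling paging deactivates long mode
assert not mmu.lma
```

Because the page table base is updated while paging is disabled, no translation can be attempted with a mismatched mode/page-table combination, which is the hazard the surrounding text describes.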
The LMA indication may indicate whether or not the transition has successfully occurred, and thus indicates whether processor 10 is operating according to the long mode definition or processor 10 is operating according to the legacy definition of the x86 processor architecture.

To activate long mode, paging may be disabled, then the LME indication may be set to indicate that long mode is desired, the page table base address register 34 may be updated to locate the long mode page tables, and paging may be enabled. To deactivate long mode, paging may be disabled, the LME indication may be set to indicate that long mode is not desired, the page table base address register 34 may be updated to locate the legacy page tables, and paging may be enabled again. In this manner, translations may be performed using the correct page tables at any given point. Additionally, a mode in which long mode is active and paging is not enabled may be avoided (reducing the overall number of modes and thus simplifying processor 10).

Processor 10 is configured to establish an operating mode in response to information stored in a code segment descriptor corresponding to the currently executing code and further in response to one or more enable indications stored in one or more control registers. As used herein, an "operating mode" specifies default values for various programmably selectable processor attributes. For example, the operating mode may specify a default operand size and a default address size. The default operand size specifies the number of bits in an operand of an instruction, unless an instruction's encoding overrides the default. The default address size specifies the number of bits in an address of a memory operand of an instruction, unless an instruction's encoding overrides the default. The default address size specifies the size of at least the virtual address of memory operands.
As used herein, a "virtual address" is an address generated prior to translation through an address translation mechanism (e.g. a paging mechanism) to a "physical address", which is the address actually used to access a memory. Additionally, as used herein, a "segment descriptor" is a data structure created by software and used by the processor to define access control and status for a segment of memory. A "segment descriptor table" is a table in memory having multiple entries, each entry capable of storing a segment descriptor.

In the illustrated embodiment, MMU 20 generates an operating mode and conveys the operating mode to execution core 14. Execution core 14 executes instructions using the operating mode. More particularly, execution core 14 fetches operands having the default operand size from register file 22 or memory (through data cache 16, if the memory operands are cacheable and hit therein, or through external interface unit 18 if the memory operands are noncacheable or miss data cache 16) unless a particular instruction's encoding overrides the default operand size, in which case the overriding operand size is used. Similarly, execution core 14 generates addresses of memory operands, wherein the addresses have the default address size unless a particular instruction's encoding overrides the default address size, in which case the overriding address size is used. In other embodiments, the information used to generate the operating mode may be shadowed locally in the portions of processor 10 which use the operating mode (e.g. execution core 14), and the operating mode may be determined from the local shadow copies.

As mentioned above, MMU 20 generates the operating mode responsive to a code segment descriptor corresponding to the code being executed and further responsive to one or more values in control registers. Information from the code segment descriptor is stored in one of the segment registers 24 (a register referred to as CS, or code segment).
Additionally, control register 26 stores an enable indication (LME) which is used to enable transition to long mode and the LMA indication indicating whether or not long mode is active. In long mode, an operating mode in which the default address size is greater than 32 bits ("32/64 mode") as well as certain compatibility modes for the 32 bit and 16 bit operating modes may be available using the segment descriptor indications. The default operand size may be 32 bits in 32/64 mode, but instructions may override the default 32 bit operand size with a 64 bit operand size when desired. If the LME indication is in an enabled state, then long mode may be activated. If the LME indication is in a disabled state, then long mode may not be activated. In one embodiment, the default address size in 32/64 mode may be implementation-dependent but may be any value up to and including 64 bits. Furthermore, the size of the virtual address may differ in a given implementation from the size of the physical address in that implementation.

It is noted that various indications are described herein (e.g. LMA, LME, etc.). Generally, an indication is a value which may be placed into two or more states. Each state may be assigned a meaning. Some of the indications described herein (including some enable indications) may be described as bits. The bit being set may be one state (e.g. the enabled state for enable indications) and the bit being clear may be the other state (e.g. the disabled state for enable indications). However, other encodings are possible, including encodings in which multiple bits are used and encodings in which the enabled state is the clear state and the disabled state is the set state. Accordingly, the remainder of this description may refer to the LME indication in control register 26 as the LME bit, with the enabled state being set and the disabled state being clear. However, other encodings of the LME indication are contemplated, as set forth above.
Similarly, the LMA indication may be referred to as the LMA bit, with the set state indicating that long mode is active and the clear state indicating that long mode is inactive. However, other encodings of the LMA indication are contemplated, as set forth above.

Segment registers 24 store information from the segment descriptors currently being used by the code being executed by processor 10. As mentioned above, CS is one of segment registers 24 and specifies the code segment of memory. The code segment stores the code being executed. Other segment registers may define various data segments (e.g. a stack data segment defined by the SS segment register, and up to four data segments defined by the DS, ES, FS, and GS segment registers). FIG. 1 illustrates the contents of an exemplary segment register 24A, including a selector field 24AA and a descriptor field 24AB. Selector field 24AA is loaded with a segment selector to activate a particular segment in response to certain segment load instructions executed by execution core 14. The segment selector identifies the segment descriptor in a segment descriptor table in memory. More particularly, processor 10 may employ two segment descriptor tables: a local descriptor table and a global descriptor table. The base address of the local descriptor table is stored in the LDTR 30. Similarly, the base address of the global descriptor table is stored in GDTR 32. A bit within the segment selector (the table indicator bit) selects the descriptor table, and an index within the segment selector is used as an index into the selected table. When an instruction loads a segment selector into one of segment registers 24, MMU 20 reads the corresponding segment descriptor from the selected segment descriptor table and stores information from the segment descriptor into the segment descriptor field (e.g. segment descriptor field 24AB for segment register 24A).
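The selector lookup just described can be sketched as follows. The field layout assumed here (index in the upper bits, table indicator bit, and a two-bit requested privilege level in the low bits) follows the conventional x86 selector format; the RPL field is not discussed above and is included only for completeness.

```python
def parse_selector(sel16):
    """Split a 16-bit segment selector (conventional x86 layout assumed)."""
    rpl = sel16 & 0x3          # requested privilege level (assumed field)
    ti = (sel16 >> 2) & 0x1    # table indicator: 0 = GDT, 1 = LDT
    index = sel16 >> 3         # index into the selected descriptor table
    return index, ti, rpl

# Each descriptor entry occupies 8 bytes, so the descriptor's byte offset
# within the selected table is index * 8.
index, ti, rpl = parse_selector(0x002F)
assert (index, ti, rpl) == (5, 1, 3)   # entry 5 of the LDT
assert index * 8 == 40                 # byte offset within the table
```

The table indicator bit chooses between the base addresses held in LDTR 30 and GDTR 32, and the index then selects an entry within that table.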
The information stored in the segment descriptor field may comprise any suitable subset of the segment descriptor, including all of the segment descriptor, if desired. Additionally, other information derived from the segment descriptor or other sources may be stored in the segment descriptor field, if desired. For example, an embodiment may decode the operating mode indications from the code segment descriptor and store the decoded value rather than the original values of the operating mode indications. If an instruction causes CS to be loaded with a segment selector, the code segment may change and thus the operating mode of processor 10 may change. Segment descriptor tables are described in more detail below.

In one embodiment, only the CS segment register is used in 32/64 mode. The data segment registers are ignored. In 16 and 32 bit modes, the code segment and data segments may be active. Furthermore, a second enable indication (PE) in control register 28 may affect the operation of MMU 20. The PE enable indication may be used to enable protected mode, in which segmentation and/or paging address translation mechanisms may be used. If the PE enable indication is in the disabled state, segmentation and paging mechanisms are disabled and processor 10 is in "real mode" (in which addresses generated by execution core 14 are physical addresses). Similar to the LME indication, the PE indication may be a bit in which the enabled state is the bit being set and the disabled state is the bit being clear. However, other embodiments are contemplated as described above.

Control register 28 is further illustrated in FIG. 1 as storing a paging enable indication (PG). The PG indication may indicate whether or not paging is enabled. As mentioned above, the LMA bit is set once paging is enabled and the LME bit is set.
As used herein, the term "paging" or "paging address translation" refers to the translation of virtual addresses to physical addresses using mappings stored in a page table structure indicated by the page table base address register 34. A given page mapping maps any virtual address having the same virtual page number to a corresponding physical address in a page of physical memory. The page table is a predefined table of entries stored in memory. Each of the entries store information used to map virtual addresses to physical addresses.

It is noted that MMU 20 may employ additional hardware mechanisms, as desired. For example, MMU 20 may include paging hardware to implement paging address translation from virtual addresses to physical addresses. The paging hardware may include a translation lookaside buffer (TLB) to store page translations.

It is noted that control registers 26 and 28 may be implemented as architected control registers (e.g. control register 26 may be CR4 and control register 28 may be CR0). Alternatively, one or both of the control registers may be implemented as model specific registers to allow for other uses of the architected control registers without interfering with 32/64 mode. Generally, the control registers are each addressable by one or more instructions defined in the processor architecture, so that the registers may be changed as desired.

Generally, instruction cache 12 is a high speed cache memory for storing instruction bytes. Execution core 14 fetches instructions from instruction cache 12 for execution. Instruction cache 12 may employ any suitable cache organization, including direct-mapped, set associative, and fully associative configurations. If an instruction fetch misses in instruction cache 12, instruction cache 12 may communicate with external interface unit 18 to fill the missing cache line into instruction cache 12.
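Returning to the paging definition above, the rule that any virtual address with the same virtual page number maps to the same physical page can be shown with a toy single-level table. This sketch assumes 4 KB pages; a real MMU instead walks the multi-level table structure located by register 34.

```python
PAGE_SHIFT = 12           # assuming 4 KB pages
PAGE_MASK = (1 << PAGE_SHIFT) - 1

def translate(vaddr, page_table):
    """Toy paging walk: virtual page number -> physical frame number.
    page_table is a flat dict standing in for the tree walked by hardware."""
    vpn = vaddr >> PAGE_SHIFT       # virtual page number selects the mapping
    offset = vaddr & PAGE_MASK      # offset within the page passes through
    frame = page_table[vpn]
    return (frame << PAGE_SHIFT) | offset

pt = {0x12345: 0x007AB}
# Two addresses sharing a virtual page number land in the same frame:
assert translate(0x12345678, pt) == 0x007AB678
assert translate(0x12345000, pt) == 0x007AB000
```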
Additionally, instruction cache 12 may communicate with MMU 20 to receive physical address translations for virtual addresses fetched from instruction cache 12.

Execution core 14 executes the instructions fetched from instruction cache 12. Execution core 14 fetches register operands from register file 22 and updates destination registers in register file 22. The size of the register operands is controlled by the operating mode and any overrides of the operating mode for a particular instruction. Similarly, execution core 14 fetches memory operands from data cache 16 and updates destination memory locations in data cache 16, subject to the cacheability of the memory operands and hitting in data cache 16. The size of the memory operands is similarly controlled by the operating mode and any overrides of the operating mode for a particular instruction. Furthermore, the size of the addresses of the memory operands generated by execution core 14 is controlled by the operating mode and any overrides of the operating mode for a particular instruction.

Execution core 14 may employ any suitable construction. For example, execution core 14 may be a superpipelined core, a superscalar core, or a combination thereof. Execution core 14 may employ out of order speculative execution or in order execution, according to design choice. Additionally, embodiments of execution core 14 may employ any of the above constructions and may include microcoding, as desired.

Register file 22 may include 64 bit registers which may be accessed as 64 bit, 32 bit, 16 bit, or 8 bit registers as indicated by the operating mode of processor 10 and any overrides for a particular instruction. The register format for one embodiment is described below with respect to FIG. 7.
The registers included in register file 22 may include the RAX, RBX, RCX, RDX, RDI, RSI, RSP, and RBP registers (which may be 64 bit versions of the EAX, EBX, ECX, EDX, EDI, ESI, ESP, and EBP registers defined in the x86 processor architecture, respectively). Additionally, in one embodiment, register file 22 may include additional registers addressed using a register extension (REX) prefix byte, described in more detail below. Register file 22 may further include the RIP register, which may be a 64 bit version of the EIP register. Alternatively, execution core 14 may employ a form of register renaming in which any register within register file 22 may be mapped to an architected register. The number of registers in register file 22 may be implementation dependent for such an embodiment.

Data cache 16 is a high speed cache memory configured to store data. Data cache 16 may employ any suitable cache organization, including direct-mapped, set associative, and fully associative configurations. If a data fetch or update misses in data cache 16, data cache 16 may communicate with external interface unit 18 to fill the missing cache line into data cache 16. Additionally, if data cache 16 employs a writeback caching policy, updated cache lines which are being cast out of data cache 16 may be communicated to external interface unit 18 to be written back to memory. Data cache 16 may communicate with MMU 20 to receive physical address translations for virtual addresses presented to data cache 16.

External interface unit 18 communicates with portions of the system external to processor 10. External interface unit 18 may communicate cache lines for instruction cache 12 and data cache 16 as described above, and may communicate with MMU 20 as well. For example, external interface unit 18 may access the segment descriptor tables and/or paging tables on behalf of MMU 20.

It is noted that processor 10 may include an integrated level 2 (L2) cache, if desired.
Furthermore, external interface unit 18 may be configured to communicate with a backside cache in addition to communicating with the system.

It is noted that the term "mode" refers to a state of the processor which governs one or more aspects of the processor operation. The governed aspects of the processor operate differently based on the selected mode. A "mode indication" is a value or values which indicate the current mode. As mentioned above with respect to indications in general, a mode indication may be a single bit or a multibit value, as desired. The LMA bit may be an example of a mode indication. Additionally, a mode is "active" if the processor is operating according to the mode. A mode is "inactive" if the processor is not operating according to the mode (e.g. the processor may be operating according to some other mode).

While the processor architecture described herein may be compatible with the x86 processor architecture for 16 and 32 bit modes in one embodiment, other embodiments may employ any 16 and 32 bit modes. The other embodiments may or may not be compatible with the x86 processor architecture or any other processor architecture. It is further noted that, while a specific set of information is described herein as being used to generate the operating mode, any combination of indications and/or information from memory data structures such as segment descriptor tables and page tables may be used to generate the operating mode in various embodiments.

Turning now to FIG. 2, a block diagram of one embodiment of a code segment descriptor 40 for 32/64 mode is shown. Other embodiments are possible and contemplated. In the embodiment of FIG. 2, code segment descriptor 40 comprises 8 bytes with the most significant 4 bytes illustrated above the least significant 4 bytes. The most significant four bytes are stored at a numerically larger address than the least significant four bytes.
The most significant bit of each group of four bytes is illustrated as bit 31 in FIG. 2 (and FIG. 3 below), and the least significant bit is illustrated as bit 0. Short vertical lines within the four bytes delimit each bit, and the long vertical lines delimit a bit but also delimit a field (both in FIG. 2 and in FIG. 3).

Unlike the 32 bit and 16 bit code segment descriptors illustrated in FIG. 3 below, code segment descriptor 40 does not include a base address or limit. Processor 10 employs a flat virtual address space for 32/64 mode (rather than the segmented linear address space employed in 32 bit and 16 bit modes). Accordingly, the portions of code segment descriptor 40 which would otherwise store the base address and limit are reserved in segment descriptor 40. It is noted that a virtual address provided through segmentation may also be referred to herein as a "linear address". The term "virtual address" encompasses any address which is translated through a translation mechanism to a physical address actually used to address memory, including linear addresses and other virtual addresses generated in non-segmented architectures.

Segment descriptor 40 includes a D bit 42, an L bit 44 (set to one for a 32/64 mode code segment), an available bit (AVL) 46, a present (P) bit 48, a descriptor privilege level (DPL) 50, and a type field 52. D bit 42 and L bit 44 are used to determine the operating mode of processor 10, as illustrated in FIG. 5 below. AVL bit 46 is available for use by system software (e.g. the operating system). P bit 48 is used to indicate whether or not the segment is present in memory. If P bit 48 is set, the segment is present and code may be fetched from the segment. If P bit 48 is clear, the segment is not present and an exception is generated to load the segment into memory (e.g. from disk storage or through a network connection). The DPL indicates the privilege level of the segment.
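Extracting these descriptor fields can be sketched as bit tests on the upper four bytes. The bit positions used here follow the conventional x86 descriptor layout (type in bits 11:8, DPL in 14:13, P at 15, AVL at 20, L at 21, D at 22 of the high dword); the figure itself is not reproduced above, so treat the exact positions as assumptions of this sketch.

```python
def decode_high_dword(hi):
    """Pull the fields named above out of the upper 4 bytes of a code
    segment descriptor (conventional x86 bit positions assumed)."""
    return {
        "type": (hi >> 8) & 0xF,   # top two type bits set => code segment
        "S":    (hi >> 12) & 1,    # 1 = code/data, 0 = system segment
        "DPL":  (hi >> 13) & 0x3,  # privilege level 0 (most) .. 3 (least)
        "P":    (hi >> 15) & 1,    # present bit
        "AVL":  (hi >> 20) & 1,    # available to system software
        "L":    (hi >> 21) & 1,    # 1 => 32/64 mode code segment
        "D":    (hi >> 22) & 1,    # default size bit
    }

d = decode_high_dword(0x00209A00)  # a plausible 32/64-mode code descriptor
assert d["L"] == 1 and d["D"] == 0 and d["P"] == 1 and d["DPL"] == 0
assert d["type"] == 0xA            # execute/read code segment type
```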
Processor 10 employs four privilege levels (encoded as 0 through 3 in the DPL field, with level 0 being the most privileged level). Certain instructions and processor resources (e.g. configuration and control registers) are only executable or accessible at the more privileged levels, and attempts to execute these instructions or access these resources at the lower privilege levels result in an exception. When information from code segment 40 is loaded into the CS segment register, the DPL becomes the current privilege level (CPL) of processor 10. Type field 52 encodes the type of segment. For code segments, the most significant two bits of type field 52 may be set (the most significant bit distinguishing a code or data segment from a system segment, and the second most significant bit distinguishing a code segment from a data segment), and the remaining bits may encode additional segment type information (e.g. execute only, execute and read, or execute and read only, conforming, and whether or not the code segment has been accessed).

It is noted that, while several indications in the code segment descriptor are described as bits, with set and clear values having defined meanings, other embodiments may employ the opposite encodings and may use multiple bits, as desired. Thus, for example, the D bit 42 and the L bit 44 may each be an example of an operating mode indication which may be one or more bits as desired, similar to the discussion of enable indications above.

Turning now to FIG. 3, a block diagram of one embodiment of a code segment descriptor 54 for 32 and 16 bit compatibility mode is shown. Other embodiments are possible and contemplated. As with the embodiment of FIG.
2, code segment descriptor 54 comprises 8 bytes with the most significant 4 bytes illustrated above the least significant 4 bytes.

Code segment descriptor 54 includes D bit 42, L bit 44, AVL bit 46, P bit 48, DPL 50, and type field 52 similar to the above description of code segment descriptor 40. Additionally, code segment descriptor 54 includes a base address field (reference numerals 56A, 56B, and 56C), a limit field (reference numerals 57A and 57B) and a G bit 58. The base address field stores a base address which is added to the logical fetch address (stored in the RIP register) to form the linear address of an instruction, which may then optionally be translated to a physical address through a paging translation mechanism. The limit field stores a segment limit which defines the size of the segment. Attempts to access a byte at a logical address greater than the segment limit are disallowed and cause an exception. G bit 58 determines the scaling of the segment limit field. If G bit 58 is set, the limit is scaled to 4 K byte pages (e.g. 12 least significant zeros are appended to the limit in the limit field). If G bit 58 is clear, the limit is used as is.

It is noted that code segment descriptors for 32 and 16 bit modes when long mode is not active may be similar to code segment descriptor 54, except the L bit is reserved and defined to be zero. It is further noted that, in 32 and 16 bit modes (both compatibility mode with the LMA bit set and modes with the LMA bit clear) according to one embodiment, data segments are used as well. Data segment descriptors may be similar to code segment descriptor 54, except that the D bit 42 is defined to indicate the upper bound of the segment or to define the default stack size (for stack segments).

Turning next to FIG. 4, a diagram illustrating exemplary uses of the LMA bit and the compatibility modes to allow for a high degree of flexibility in implementing the 32/64 mode and the 32 and 16 bit modes is shown.
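Before examining FIG. 4, the G bit scaling described above can be sketched in two lines. This follows the text's description of appending 12 zero bits (shipping x86 parts instead fill the low 12 bits with ones, so this is the patent's rule, not the hardware's).

```python
def effective_limit(limit20, g_bit):
    # G set: the 20-bit limit counts 4 KB pages, so 12 zero bits are
    # appended (per the description above); G clear: limit used as is.
    return (limit20 << 12) if g_bit else limit20

assert effective_limit(0x00100, 0) == 0x100      # byte granularity
assert effective_limit(0x00100, 1) == 0x100000   # 4 KB page granularity
```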
A box 60 illustrates exemplary operation when the LMA bit is set (long mode is active), and a box 62 illustrates exemplary operation when the LMA bit is clear (long mode is not active).

As illustrated in box 60, the compatibility modes supported when long mode is active may allow for a 64 bit operating system (i.e. an operating system designed to take advantage of the virtual and physical address spaces in excess of 32 bits and/or data operands of 64 bits) to operate with a 32 bit application program (i.e. an application program written using 32 bit operand and address sizes). The code segment for the operating system may be defined by the 32/64 mode code segment descriptor 40 illustrated in FIG. 2, and thus the L bit may be set. Accordingly, the operating system may take advantage of the expanded virtual address space and physical address space for the operating system code and the data structures maintained by the operating system (including, e.g. the segment descriptor tables and the paging translation tables). The operating system may also use the 64 bit data type defined in 32/64 mode using instruction encodings which override the default 32 bit operand size. Furthermore, the operating system may launch a 32 bit application program by establishing one or more 32 bit compatibility mode segment descriptors (L bit cleared, D bit set, e.g. segment descriptor 54 shown in FIG. 3) in the segment descriptor table and branching into one of the compatibility mode segments. Similarly, the operating system may launch a 16 bit application program by establishing one or more 16 bit compatibility mode segment descriptors (L bit cleared, D bit cleared, e.g. segment descriptor 54 shown in FIG. 3) in the segment descriptor table and branching into one of the compatibility mode segments. Accordingly, a 64 bit operating system may retain the ability to execute existing 32 bit and 16 bit application programs in the compatibility mode.
A particular application program may be ported to 32/64 mode if the expanded capabilities are desired for that program, or may remain 32 bit or 16 bit.

While processor 10 is executing the 32 bit application program, the operating mode of processor 10 is 32 bit. Thus, the application program may generally execute in the same fashion as it does in 32 bit mode with the LMA bit clear (e.g. when the operating system is a 32 bit operating system as well). However, the application program may call an operating system service, experience an exception, or terminate. In each of these cases, processor 10 may return to executing operating system code (as illustrated by arrow 64 in FIG. 4). Since the operating system code operates in 32/64 mode, the address of the operating system service routine, exception handler, etc. may exceed 32 bits. Thus, processor 10 may need to generate an address greater than 32 bits prior to returning to the operating system code. The LMA bit provides processor 10 with an indication that the operating system may be operating in 32/64 mode even though the current operating mode is 32 bit, and thus processor 10 may provide the larger address space for operating system calls and exceptions.

In one embodiment, exceptions are handled using interrupt segment descriptors stored in an interrupt segment descriptor table. If the LMA bit is set, the interrupt segment descriptors may be 16 byte entries which include a 64 bit address of the operating system routine which handles the exception. If the LMA bit is clear, the interrupt segment descriptors may be eight byte entries which include a 32 bit address. Accordingly, processor 10 accesses the interrupt descriptor table responsive to the LMA indication (i.e. reading a 16 byte entry if the LMA bit is set and reading an eight byte entry if the LMA bit is clear). Therefore, exceptions may be handled by the 64 bit operating system even though the application program is executing in 32 bit compatibility mode.
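The LMA-dependent interrupt table indexing just described reduces to a small calculation; the function names here are illustrative, not taken from the text.

```python
def interrupt_entry_size(lma):
    # 16-byte entry (holds a 64-bit handler address) when long mode is
    # active; 8-byte entry (32-bit handler address) otherwise.
    return 16 if lma else 8

def interrupt_entry_offset(lma, vector):
    """Byte offset of an interrupt descriptor within the table."""
    return vector * interrupt_entry_size(lma)

assert interrupt_entry_offset(True, 3) == 48   # long mode active
assert interrupt_entry_offset(False, 3) == 24  # long mode inactive
```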
Furthermore, processor 10 supports a 32 bit (or 16 bit) operating system if the LMA bit is clear.

Similarly, the call mechanisms within processor 10 may operate in different fashions based on the state of the LMA bit. Since the operating system typically executes at a higher privilege level than the application program, transfers from the application program to the operating system are carefully controlled to ensure that the application program is only able to execute permitted operating system routines. More generally, changes in privilege level are carefully controlled. In one embodiment, processor 10 may support at least two mechanisms for performing operating system calls. One method may be through a call gate in the segment descriptor tables (described in more detail below). Another method may be the SYSCALL instruction supported by processor 10, which uses a model specific register as the source of the address of the operating system routine. Updating the model specific registers is a privileged operation, and thus only code executing at a higher privilege level (e.g. operating system code) may establish the address in the model specific register used by the SYSCALL instruction. For the SYSCALL method, a second model specific register may be defined to store the most significant 32 bits of the address of the operating system routine. Thus, if the LMA bit is set, the address may be read from the two model specific registers. If the LMA bit is clear, the address may be read from the model specific register storing the least significant 32 bits.
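The two-register scheme described above might be modeled as follows; `msr_lo32` and `msr_hi32` are hypothetical names for the pair of model specific registers holding the low and high halves of the routine's address.

```python
def syscall_target(lma, msr_lo32, msr_hi32):
    """Assemble the OS entry point per the two-MSR scheme sketched above."""
    if lma:
        # Long mode active: concatenate both halves into a 64-bit target
        return (msr_hi32 << 32) | msr_lo32
    # Long mode inactive: only the 32-bit half is used
    return msr_lo32

assert syscall_target(False, 0x80102000, 0xDEADBEEF) == 0x80102000
assert syscall_target(True, 0x80102000, 0x00007FFF) == 0x7FFF80102000
```

The alternative discussed next (one 64-bit model specific register, truncated to 32 bits when LMA is clear) would reduce to masking the same value with `0xFFFFFFFF`.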
Alternatively, the model specific register used by the SYSCALL instruction may be expanded to 64 bits and the address may be 32 bits (the least significant 32 bits of the model specific register) or 64 bits based on the state of the LMA bit.

As illustrated above, having the LMA bit set may allow for processor 10 to operate in a system in which the operating system is 64 bit and one or more application programs are not 64 bit (e.g. 32 bit as shown or 16 bit, which operates in a similar fashion to the above description). Generally, even though the processor may be operating in 32 or 16 bit mode, the LMA bit informs the processor that the operating system data structures are as defined for the 64 bit mode, and the processor may access the structures appropriately. Additionally, as illustrated by box 62, having the LMA bit clear may allow for processor 10 to operate in 32 bit or 16 bit modes compatible with the x86 architecture. As described above, the mechanisms for handling exceptions and operating system calls are designed to handle the LMA bit being set or clear, and thus the 32 bit and 16 bit modes may operate unmodified, even though processor 10 is capable of operating in 32/64 mode. Furthermore, by providing the x86 compatible 16 and 32 bit modes when the LMA bit is clear, (and ignoring the L bit, which is reserved in these modes) processor 10 may operate in a system in which the L bit is defined for some other purpose than for 32/64 mode and may still support 32/64 mode if the LMA bit is set. Accordingly, a system employing a 32 bit operating system and 32 bit or 16 bit application programs may employ processor 10. Subsequently, the system could be upgraded to a 64 bit operating system without having to change processor 10.

Not illustrated in FIG. 4 is a 64 bit operating system and a 64 bit application program operating with the LMA bit set.
The mechanisms for calling operating system routines described above for the 64 bit operating system and 32 bit application program may apply equally to the 64 bit application program as well. Additionally, call gates which support 64 bits of offset are supported (as will be described in more detail below).

Turning next to FIG. 5, a table 70 is shown illustrating the states of the LMA bit, the L bit in the code segment descriptor, and the D bit in the code segment descriptor and the corresponding operating mode of processor 10 according to one embodiment of processor 10. Other embodiments are possible and contemplated. As table 70 illustrates, if the LMA bit is clear, then the L bit is reserved (and defined to be zero). However, processor 10 may treat the L bit as a don't care if the LMA bit is clear. Thus, the x86 compatible 16 bit and 32 bit modes may be provided by processor 10 if the LMA bit is clear. If the LMA bit is set and the L bit in the code segment is clear, then a compatibility operating mode is established by processor 10 and the D bit selects 16 bit or 32 bit mode. If the LMA bit and the L bit are set and the D bit is clear, 32/64 mode is selected for processor 10. Finally, the mode which would be selected if the LMA, L and D bits are all set is reserved.

As mentioned above and illustrated in FIG. 6 below, the 32/64 operating mode includes a default address size in excess of 32 bits (implementation dependent but up to 64 bits) and a default operand size of 32 bits. The default operand size of 32 bits may be overridden to 64 bits via a particular instruction's encoding. The default operand size of 32 bits is selected to minimize average instruction length (since overriding to 64 bits involves including an instruction prefix in the instruction encoding which may increase the instruction length) for programs in which 32 bits are sufficient for many of the data manipulations performed by the program.
For such programs (which may be a substantial number of the programs currently in existence), moving to a 64 bit operand size may actually reduce the execution performance achieved by the program (i.e. increased execution time). In part, this reduction may be attributable to the doubling in size in memory of the data structures used by the program when 64 bit values are stored. If 32 bits is sufficient, these data structures would store 32 bit values. Thus, the number of bytes accessed when the data structure is accessed increases if 64 bit values are used where 32 bit values would be sufficient, and the increased memory bandwidth (and increased cache space occupied by each value) may cause increased execution time. Accordingly, 32 bits is selected as the default operand size and the default may be overridden via the encoding of a particular instruction.

Turning next to FIG. 6, a table 72 is shown illustrating one embodiment of the use of instruction prefixes to override the operating mode for a particular instruction. Other embodiments are possible and contemplated. Execution core 14 determines the address size and operand size for a particular instruction according to table 72. In particular for the embodiment illustrated in FIG. 6, an instruction prefix byte (the address size override prefix byte) may be used to override the default address size and another instruction prefix byte (the operand size override prefix byte) may be used to override the default operand size. Additionally, a REX prefix byte may be used to override the default operand size as well. The address size override prefix byte is encoded as 67 (in hexadecimal) and the operand size override prefix byte is encoded as 66 (in hexadecimal). The override prefix used in a particular instruction forms the columns of the table.
The rows of the table indicate the operand size and address size of the particular instruction, based on the operating mode and the override prefix in the corresponding column.

The column labeled "None" illustrates the default operand size and address size for each operating mode. It is noted that the 32 bit and 16 bit mode rows refer to both the compatibility modes (LMA set) and the standard modes (LMA clear). Furthermore, while the default address size is 64 bits in 32/64 mode, the actual number of address bits may be implementation dependent, as discussed above.

The inclusion of the address size override prefix in 32/64 bit mode changes the address size from 64 bit (which may be less than 64 bits for a given implementation but is greater than 32 bits) to 32 bit, as shown in table 72. Additionally, the inclusion of the operand size override prefix in 32/64 bit mode changes the operand size from 32 bit to 16 bit. It may be desirable to provide for a 16 bit operand (e.g. to support the short integer data type in the "C" programming language). The inclusion of the REX prefix may be used to override the operand size to 64 bits in 32/64 mode. In one embodiment, the REX prefix is a byte in which the most significant four bits are "4" and the most significant bit of the least significant four bits is set. In the illustrated embodiment, the REX prefix byte does not apply ("DNA" in FIG. 6) except for operand size override in 32/64 mode.

For the 32 bit modes, the inclusion of an override prefix toggles the default 32 bit size to 16 bit. Similarly, for 16 bit modes, the inclusion of an override prefix toggles the default 16 bit size to 32 bit.

Turning now to FIG. 7, a diagram illustrating one embodiment of the RAX register 74 is shown. Other registers within register file 22 may be similar. Other embodiments are possible and contemplated. In the embodiment of FIG.
7, register 74 includes 64 bits, with the most significant bit labeled as bit 63 and the least significant bit labeled as bit 0. FIG. 7 illustrates the portions of the RAX register accessed based upon the operand size of an instruction (if the A register is selected as an operand). More particularly, the entirety of register 74 is accessed if the operand size is 64 bits (as illustrated by the brace labeled "RAX" in FIG. 7). If the operand size is 32 bits, bits 31:0 of register 74 are accessed (as illustrated by the brace labeled "EAX" in FIG. 7). If the operand size is 16 bits, bits 15:0 of the register are accessed (as illustrated by the brace labeled "AX" in FIG. 7). The above operand sizes may be selected based on the operating mode and the inclusion of any override prefixes. However, certain instruction opcodes are defined which access an eight bit register (AH or AL in FIG. 7).

Turning next to FIG. 8, a block diagram is shown illustrating one embodiment of control registers 26 and 28, a circuit 80, and an operating mode generation circuit 82. Other embodiments are possible and contemplated. Control registers 26 and 28 are coupled to circuit 80, and control register 26 is further coupled to operating mode generation circuit 82. Operating mode generation circuit 82 is further coupled to receive the L bit and D bit from the segment descriptor corresponding to the code segment register (CS) and to provide an operating mode.

Circuit 80 is configured to generate the LMA bit from the LME and PG bits. Thus, circuit 80 is coupled to receive the LME and PG bits, and is coupled to provide the LMA bit. In the illustrated embodiment, circuit 80 is represented by an AND gate, since the LMA bit is defined to be set if the PG bit is set and the LME bit is set and the LMA bit is defined to be cleared otherwise. Other embodiments may use different circuitry for circuit 80, depending on the definition of the LME, PG, and LMA indications.
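The register portions of FIG. 7 (RAX, EAX, AX) amount to masking the low-order bits of the 64 bit register; a minimal sketch with a hypothetical helper name:

```python
def read_register(value, operand_size):
    """Return the portion of a 64-bit register accessed at the given operand
    size, per FIG. 7: 64 -> RAX (whole register), 32 -> EAX (bits 31:0),
    16 -> AX (bits 15:0). Illustrative only; not the patent's hardware."""
    return value & ((1 << operand_size) - 1)
```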
Furthermore, embodiments are contemplated in which the circuit 80 comprises microcoding to change the LMA bit based on changes to the LME and PG bits, and embodiments are contemplated in which the functionality of circuit 80 is realized in software (e.g. the software embodiments described below). Generally circuit 80 may generate the LMA indication to indicate that long mode is active if the paging indication indicates paging is active and the LME indication indicates that long mode is desired, and may generate the LMA indication to indicate that long mode is inactive otherwise.

Operating mode generation circuit 82 is configured to generate an operating mode (e.g. for execution core 14 shown in FIG. 1) responsive to the LMA bit and the L bit and D bit from the code segment descriptor. More particularly, operating mode generation circuit 82 may be configured to generate the operating mode according to the table shown in FIG. 5. Accordingly, long mode is not active in the embodiment of FIG. 8 unless paging is enabled and the LME bit is set, and the operating modes available when long mode is active (LMA bit set in FIG. 5) are not available unless long mode is active.

As mentioned above, while the LMA bit is shown in FIG. 8 as being stored in the same register as the LME bit, other embodiments may store the LMA bit in any register, as desired.

In addition to generating the operating mode as shown in FIGS. 5 and 8, processor 10 may implement certain checks for invalid combinations of indications when changing one of the indications in response to executing an instruction addressing the register storing that indication. If the combination (including the changed indication) is invalid, processor 10 may signal an exception instead of changing the indication. In this manner, processor 10 may prevent entering an undefined state (i.e. a combination for which the behavior of the processor is not specified in the processor architecture).
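The AND-gate behavior of circuit 80 is simple enough to render directly; the one-liner below is an illustrative software rendering (in the spirit of the software embodiments mentioned above), not the hardware itself:

```python
def lma_bit(lme, pg):
    """Circuit 80 sketch: the LMA indication is set only when both the LME
    indication and the paging (PG) indication are set."""
    return int(bool(lme) and bool(pg))
```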
In other words, processor 10 ensures that, when an indication is changed, the state the processor enters is consistent with the defined states in the processor architecture. Thus, these checks may be termed "consistency checks". FIG. 9 is a table 90 illustrating exemplary consistency checks for one embodiment of processor 10. Other embodiments are possible and contemplated.

Particularly, if the LME bit is being changed from 0 to 1 (disabled to enabled), then an exception is signalled if paging is enabled (PG bit is 1). Similarly, if the LME bit is being changed from 1 to 0, an exception is signalled if paging is enabled. In this manner, processor 10 enforces the requirement that the LME bit be changed only when paging is disabled. Otherwise, the definition of the page tables to be used for translation would change (since the LMA bit would change state in response to the LME change) without changing the page table base address register to point to the appropriate set of page tables.

If the PG bit is being changed from 0 to 1, an exception is signalled if the LME bit is 1 and the physical address extension (PAE) bit is zero. The PAE bit is defined in the x86 architecture (as a bit in control register CR4) and is indicative, when set, that physical address extensions are enabled. The physical address extension in the x86 architecture extends the physical addresses from 32 bits to 36 bits, and thus page table entries are larger to accommodate the additional physical address bits. The physical address extension is required to be enabled for long mode to be active, since physical addresses in long mode may exceed 32 bits as well. Similarly, if the PAE bit is changed from 1 to 0 and the LMA bit is set (indicating long mode is active), an exception is signalled.

FIGS. 10 and 11 illustrate a set of operations which may be used to enter long mode and leave long mode, respectively.
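The consistency checks of table 90 might be rendered in software roughly as follows; the function and exception names are assumptions, and only the three checks described above are modeled:

```python
class ConsistencyError(Exception):
    """Hypothetical stand-in for the exception processor 10 would signal."""

def check_lme_write(new_lme, old_lme, pg):
    # Table 90: the LME bit may only change while paging is disabled.
    if new_lme != old_lme and pg:
        raise ConsistencyError("LME changed while PG=1")

def check_pg_write(new_pg, lme, pae):
    # Table 90: enabling paging with LME=1 requires PAE=1.
    if new_pg and lme and not pae:
        raise ConsistencyError("PG set with LME=1 and PAE=0")

def check_pae_write(new_pae, lma):
    # Table 90: PAE may not be cleared while long mode is active.
    if not new_pae and lma:
        raise ConsistencyError("PAE cleared while LMA=1")
```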
Each operation may be performed by executing one or more instructions defined by the processor architecture implemented by processor 10. Particularly, the one or more instructions may include an instruction addressing the register which stores the corresponding indication (or page table base address). An instruction addresses a register if the instruction specifies the register as an operand. Thus, execution of the instruction may result in reading or writing the addressed register.

Turning next to FIG. 10, a flowchart is shown illustrating a set of operations to enter long mode. Other embodiments are possible and contemplated.

Control register 28 is written to clear the PG bit (block 100), thus disabling paging. Paging is disabled prior to attempting to enter long mode so that the LME bit may be set and the page table base address register 34 may be programmed to point to the long mode page tables without putting processor 10 into any inconsistent states. It is noted that block 100 is optional. If the code sequence represented by FIG. 10 is executed at a time when it is known that paging is disabled, block 100 may be omitted.

The control register storing the PAE bit is written to set the PAE bit (block 102), enabling physical address extensions. The page table base address register (e.g. CR3, in the x86 architecture) is written to point to the long mode page tables (block 104). The control register 26 is written to set the LME bit (block 106). The operations represented by blocks 102, 104, and 106 may be performed in any order.

Finally, the control register 28 is written to set the PG bit (block 108). Setting the PG bit enables paging, and thus may cause the transition from long mode being inactive (LMA bit clear) to long mode being active (LMA bit set).

Turning now to FIG. 11, a flowchart is shown illustrating a set of operations to leave long mode.
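The FIG. 10 entry sequence (blocks 100 through 108) can be sketched as a sequence of control-register writes; the dict representation of the control bits and the page-table address constant are hypothetical:

```python
LONG_MODE_PAGE_TABLES = 0x100000  # hypothetical physical address

def enter_long_mode(cpu):
    """Sketch of the FIG. 10 sequence; `cpu` is a plain dict of control bits."""
    cpu["pg"] = 0                           # block 100: disable paging
    cpu["pae"] = 1                          # block 102: enable PAE
    cpu["cr3"] = LONG_MODE_PAGE_TABLES      # block 104: long mode page tables
    cpu["lme"] = 1                          # block 106: enable long mode
    cpu["pg"] = 1                           # block 108: enable paging
    cpu["lma"] = int(cpu["lme"] and cpu["pg"])  # LMA becomes set (circuit 80)
    return cpu
```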
Other embodiments are possible and contemplated.

Similar to the operations for entering long mode, leaving long mode includes writing the control register 28 to clear the PG bit (block 110), thus disabling paging. The disabling of paging at block 110 also causes long mode to become inactive (LMA bit clears).

The page table base address register (e.g. CR3, in the x86 architecture) is written to point to the legacy page tables (block 112). The control register 26 is written to clear the LME bit (block 114). The operations represented by blocks 112 and 114 may be performed in any order.

Optionally, if the mode being entered upon leaving long mode includes paging, control register 28 is written to set the PG bit (block 116), thus enabling paging. Since the LME bit was cleared in block 114, enabling paging at block 116 does not activate long mode.

Software Embodiments

While the above description may generally have described a processor which may directly support, in hardware, the processor architecture having the features described above, it is contemplated that other processor embodiments may not directly implement the processor architecture. Instead, such embodiments may directly implement a different processor architecture (referred to below as a native processor architecture, which may define a native instruction set including native instructions). Any native processor architecture may be used. For example, the MIPS, Power PC, Alpha, Sparc, ARM, etc. architectures may be used.
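The FIG. 11 sequence for leaving long mode (blocks 110 through 116), described above, admits a similar sketch; again the dict fields and the address constant are hypothetical:

```python
LEGACY_PAGE_TABLES = 0x8000  # hypothetical physical address

def leave_long_mode(cpu, enable_paging_after=True):
    """Sketch of the FIG. 11 sequence; `cpu` is a plain dict of control bits."""
    cpu["pg"] = 0                    # block 110: disable paging...
    cpu["lma"] = 0                   # ...which also clears the LMA bit
    cpu["cr3"] = LEGACY_PAGE_TABLES  # block 112: legacy page tables
    cpu["lme"] = 0                   # block 114: long mode disabled
    if enable_paging_after:          # block 116 (optional)
        cpu["pg"] = 1                # LME is clear, so LMA stays clear
    return cpu
```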
The processor architecture described above may be implemented in software executing on any native processor architecture in a variety of fashions (such as, for example, that implemented by the Crusoe products of Transmeta Corporation).

Generally, a processor embodiment implementing a native processor architecture different than the processor architecture described above (referred to below as the non-native processor architecture) may support the non-native processor architecture in a variety of fashions. For example, such a processor embodiment may execute interpreter software which reads each non-native instruction in a non-native code sequence as data, and executes various software routines which emulate the defined operation of the non-native instruction as defined in the non-native processor architecture. Alternatively, translator software may be executed. The translator software may translate the non-native instructions in the code sequence to an equivalent set of native instructions defined by the native instruction set architecture. The native code sequence may be stored in memory, and may be executed instead of the corresponding non-native code sequence. In yet another alternative, a mixture of interpretation and translation may be used. For example, the code sequence may be interpreted, but the interpreter may also generate statistics about which parts of the code sequence are being most frequently executed. The most frequently executed portions may then be translated to native code sequences.

In any of the above methods, the architected state defined by the non-native processor architecture may be maintained by the combination of the processor and the software (interpreter or translator) in a variety of fashions.
For example, the non-native architected state may be mapped to memory locations in a memory addressable by the processor, to general registers defined by the native processor architecture (by software convention, either in the interpreter or in the translator), or the processor may directly support the non-native architected state by defining registers or other storage hardware within the processor that corresponds to the non-native architected state. The non-native architected state may be stored using any combination of the above methods, as desired.

Generally, the architected state includes any state defined to exist by the architecture. For example, in the above described embodiment, the non-native architected state may include general registers (e.g. RAX, RBX, etc.), segment registers, control registers, other registers such as the model specific registers (MSRs), etc. Additionally, the architected state may include data structures defined for the operating system to create, such as the descriptor tables, page tables, task state segments, etc.

Turning to FIG. 12, a flowchart illustrating an exemplary interpreter which may be used to interpret non-native instructions is shown. Other embodiments are possible and contemplated. While the blocks shown are illustrated in a particular order for ease of understanding, any suitable order may be used. Furthermore, blocks may be performed in parallel, as desired.

The blocks shown in FIG. 12 illustrate the emulation of one non-native instruction. Generally, the interpreter may execute the blocks shown in FIG. 12 for each non-native instruction to be executed according to the non-native code sequence to be executed.

The interpreter may determine the operating mode for the non-native instruction (block 1000). As described above, the operating mode may be determined from the LMA bit in control register 26 and the L bit and D bit from the code segment descriptor indicated by the CS segment register.
The operating mode may be determined anew from the LMA, L bit, and D bit for each non-native instruction, or the resulting operating mode may be stored in a temporary register for access by the interpreter for each non-native instruction. If the resulting operating mode is stored, the interpreter may update the stored operating mode if an instruction modifies the CS segment register or interrupt or exception handling causes the operating mode to change. As mentioned above, the CS segment register and the control register(s) (which are part of the non-native architected state) may actually be memory locations, general registers, or special purpose registers, or any combination thereof.

The interpreter may read the current non-native instruction from memory, and may analyze the non-native instruction to determine the operations to be taken to emulate the non-native instruction (block 1002). The interpreter may read the non-native instruction one byte at a time, or may read a suitable set of consecutive bytes and process the bytes. For example, a native processor architecture in which operands are 32 bit may read 32 bits (4 bytes) of the non-native instruction at a time, and then may process the four bytes before reading any additional bytes.

Generally, the interpreter software may decode the non-native instruction in a manner analogous to processor 10 decoding the instruction in hardware. Thus, for the illustrated non-native processor architecture, which is compatible with the x86 processor architecture, the analyzing of the non-native instruction includes analyzing any prefix bytes which may precede the opcode byte, analyzing the opcode byte, analyzing the addressing mode (Mod R/M) byte (if present), and analyzing the scale-index-base (SIB) byte (if present). Prefix bytes may override the operating mode, and may also include register specifier bits (e.g. the REX prefix byte).
The opcode byte specifies the operation to be performed, and in some cases may include a register specifier or may implicitly specify an operand (e.g. the stack or the stack pointer). The Mod R/M byte specifies operands (including any displacement or immediate operands which may follow the Mod R/M byte or the SIB byte, if the SIB byte is present) and may include register specifiers. Finally, the SIB byte may include register specifiers. From the information gained from analyzing the non-native instruction, the interpreter has the information to emulate the non-native instruction (including operating mode for the non-native instruction, which specifies the operand size and address size of the non-native instruction, operands, the operation to be performed, etc.).

If the non-native instruction includes a memory operand (decision block 1004), the interpreter may calculate the effective address of the instruction (block 1006). If the non-native instruction has a memory operand, some of the operands identified in block 1002 may be address operands used to generate the effective address. Thus, the interpreter may read the address operands from the non-native architected state and may add them to generate an effective address. The size of the effective address may be determined by the address size for the instruction, as determined at blocks 1000 and 1002. It is noted that the native processor architecture may support an address size which is less than the address size supported by the non-native processor architecture. For example, in one exemplary embodiment described above, the virtual address size may be 48 bits in 32/64 mode. The native processor may, for example, support a virtual address size of 32 bits. In such an embodiment, block 1006 may represent a series of calculations in which the least significant bits (e.g.
32 bits) of the virtual address may be calculated, and any carry from the least significant bits may be carried into a calculation of the most significant bits of the virtual address.

The interpreter may then perform the operation specified by the non-native instruction (block 1008). If the non-native instruction includes a memory operand as a source operand, the interpreter may read the memory operand from the effective address calculated at block 1006. Other operands may be read from the non-native architected state. The operation may include an arithmetic operation, a logical operation, a shift, a move to another storage location, etc. The native processor architecture may support an operand size smaller than the operand size of the instruction. In such cases, performing the operation may include multiple calculations on portions of the operand to calculate the result. Additionally, if the non-native instruction updates one of the registers storing information used to generate the operating mode, the consistency checks of table 90 may be applied by the interpreter. Also, the interpreter may update the LMA indication if the LME indication or the PG indication is changed by a non-native instruction.

The interpreter determines if the non-native instruction resulted in an exception (decision block 1010). Generally, exceptions may occur throughout the execution of the operations specified by the non-native instruction. For example, accessing a source memory operand may result in a page fault before any of the actual instruction operation is performed. During the operations, various architecturally-defined exceptions may also occur. The interpreter may interrupt processing of the non-native instruction upon detecting an exception, and may branch to exception handler instructions (block 1012). The exception handler may be native code or non-native code, as desired.
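The interpreter loop of FIG. 12 might be skeletonized as follows; every callable and state field here is a hypothetical stand-in for the steps described above, and the details of decoding and execution are elided:

```python
def interpret(fetch, decode, execute, state):
    """Skeleton of the FIG. 12 interpreter loop. `fetch`, `decode`, and
    `execute` are hypothetical callables; `state` is a dict holding the
    non-native architected state plus interpreter bookkeeping."""
    while state.get("running", True):
        mode = state["operating_mode"]                 # block 1000
        insn = decode(fetch(state["rip"]), mode)       # blocks 1002/1004/1006
        try:
            execute(insn, state)                       # block 1008
        except Exception:
            # blocks 1010/1012: branch to exception handler instructions
            state["rip"] = state["exception_handler"]
            continue
        # block 1016: next fetch address (sequential unless a taken branch)
        state["rip"] = insn.get("next_rip", state["rip"] + insn["length"])
```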
If the non-native processor architecture specifies the update of any architected state when an exception is taken (e.g. various control registers may store the address of the exception causing instruction, the exception reason, etc.), the interpreter may update the non-native architected state as defined.

It is noted that the interpreter software is executing on the native processor, and thus is subject to experiencing exceptions as defined in the native processor architecture. These exceptions may generally be different from the exceptions detected by the interpreter software, which are exceptions experienced by the non-native code being interpreted according to the non-native processor architecture.

If no exception occurs during emulation of the non-native instruction, the interpreter may update the non-native architected state according to the definition of the non-native instruction (block 1014). Finally, the interpreter may calculate the next non-native instruction fetch address to fetch the next instruction (block 1016). The next fetch address may be sequential to the current non-native instruction, or may be a different address (e.g. if the current non-native instruction is a taken branch, the next fetch address may be the target address of the branch instruction).

It is noted that the interpreter may operate in protected mode, using virtual addresses. In other words, the effective address calculated at block 1006 may be a virtual address which is translated by the translation mechanism specified by the non-native processor architecture to a physical address. The processor may include a translation lookaside buffer (TLB) used to cache translations. The processor may either support reload of the TLB from the non-native translation tables (page tables), or may take an exception on a TLB miss to allow software reload of the TLB.

Turning to FIG.
13, a flowchart is shown illustrating an exemplary translator which may be used to translate non-native instructions in the non-native processor architecture to native instructions in the native processor architecture. Other embodiments are possible and contemplated. While the blocks shown are illustrated in a particular order for ease of understanding, any suitable order may be used. Furthermore, blocks may be performed in parallel, as desired.

The blocks shown in FIG. 13 illustrate the translation of one non-native code sequence responsive to a fetch address for the first instruction in the non-native code sequence. The code translator may translate any number of non-native instructions to produce a translated code sequence having native instructions. For example, the translator may translate from the initial non-native instruction to a basic block boundary (i.e. a branch instruction). Alternatively, the translator may speculatively translate two or more basic blocks or may translate up to a maximum number of non-native or resulting native instructions, if desired.

Generally, the translator may maintain a translation cache which stores translated code sequences previously produced by the translator. The translation cache may identify translated code sequences by the fetch address of the first non-native instruction in the corresponding non-native code sequences. Thus, the translator may determine if a translated code sequence corresponding to the fetch address is stored in the translation cache (decision block 1030). If there is a translated code sequence in the translation cache, the translator may cause the processor to branch to that translated code sequence (block 1032).
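The cache lookup of decision block 1030, together with the translate-on-miss of block 1034 and the branch of block 1032, might be sketched as follows; the dictionary-based cache and the `translate` callable are illustrative assumptions:

```python
translation_cache = {}  # fetch address -> translated (native) code sequence

def translate_or_reuse(fetch_addr, translate):
    """Sketch of blocks 1030-1034: look up the translation cache by the fetch
    address of the first non-native instruction; translate only on a miss."""
    if fetch_addr not in translation_cache:               # decision block 1030
        translation_cache[fetch_addr] = translate(fetch_addr)  # block 1034
    return translation_cache[fetch_addr]                  # branch target, block 1032
```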
On the other hand, if there is no translated code sequence, the translator may translate one or more non-native instructions from the non-native code sequence into native instructions in a translated code sequence (block 1034).

Generally, the translator may translate each non-native instruction into one or more native instructions which, when executed, may perform the same operation on the non-native architected state that the non-native instruction would have performed. The translator may generally perform the same decoding of instructions as is performed by the interpreter (block 1002 in FIG. 12) to determine what operations may need to be performed. For example, if the native processor architecture is a load/store architecture in which memory operands are accessed using explicit load/store instructions and other instructions use only register operands, load and store instructions may be used to access the memory operands and other instructions may be used to perform the explicit operation of a non-native instruction having a memory operand. The translated instructions may make use of temporary registers to hold intermediate values corresponding to the execution of the non-native instruction. Additionally, the translated instructions may access the non-native architected state to retrieve operands and may update the non-native architected state with the final results of the non-native instruction. Generally, the native instructions corresponding to the non-native instruction may perform all of the operations defined for the instruction (e.g. blocks 1006, 1008, 1010, 1014, and 1016 in FIG. 12).

Once the translator has determined to terminate translation and save the translated sequence for execution, the translator may optionally optimize the translated code sequence (block 1036). The optimizations may include reordering the translated instructions for quicker execution, eliminating redundancies (e.g.
redundant memory references, which may occur if multiple non-native instructions in the source code sequence accessed the same memory location), etc. Any suitable set of optimizations may be used. The resulting translated code sequence may then be stored into the translation cache. Additionally, the processor may branch to the translated code sequence and execute the sequence (block 1032).

It is noted that, while the above description may refer to accessing and/or updating non-native architected state, including various registers, the non-native architected state may be stored in any suitable fashion. For example, architected registers may actually be stored in memory locations, as highlighted above. The mapping of architected registers from the non-native processor architecture to memory locations may be used in either of the interpreter or the translator embodiments, or combinations thereof, to locate the non-native architected state used during execution of the non-native instruction or affected by the execution of the non-native instruction. Thus, instructions which access the non-native architected state may perform memory reads/writes or register reads/writes, as the case may be.

Turning next to FIG. 14, a block diagram is shown illustrating one exemplary mapping of non-native architected state to either memory locations in a memory 1040 or to processor resources in a native processor 1042. Native processor 1042 includes a register file 1044 including the architected general registers of the native processor architecture. Any number of registers may be provided.

In the embodiment of FIG. 14, all of the non-native architected state is mapped to memory 1040.
For example, descriptor tables 1046 (which may include a global descriptor table, a local descriptor table, and an interrupt descriptor table), page tables 1048 (which store virtual to physical address translations), task state segments 1050, general registers 1052, segment registers 1054, control registers 1056, and other registers 1058 may represent non-native architected state.

Thus, in the embodiment of FIG. 14, to access any non-native architected state, a memory access may be performed. For example, if a non-native instruction has one of the general registers as an operand, the interpreter or translated native instruction performs a memory access to the memory location mapped to that general register to access or update that general register. The registers in register file 1044 may be used by the interpreter or translator as temporary registers to hold intermediate results or for other local interpreter/translator state.

General registers 1052 may include integer general registers (e.g. RAX, RBX, etc. as described above), the additional integer general registers defined by the REX prefix byte, floating point registers, Streaming Single Instruction, Multiple Data (SIMD) Extension (SSE) registers, and the additional SSE registers defined by the REX prefix byte. Segment registers 1054 may include storage locations corresponding to the segment registers 24 shown in FIG. 1 above.

Control registers 1056 may include storage locations corresponding to various control registers defined in the non-native processor architecture. For example, control registers storing the LMA, LME, PG and PE bits, as well as the LDTR and GDTR registers and the CR3 register (which stores the base address of the page tables 1048) are shown. Other control registers may be included as well.

Other registers 1058 includes any remaining architected registers.
For example, the EFLAGS register (which stores condition code information), the instruction pointer (RIP) register (which stores the address of the instruction to be executed), and the model specific registers (MSRs) may be included in other registers 1058.

While the example of FIG. 14 maps all of the non-native architected state to memory 1040, other embodiments may implement other mappings. In FIG. 15, for example, some of the general registers in register file 1044 are mapped to the general registers 1052. Accordingly, if a non-native instruction has a general register as an operand, the interpreter accesses the corresponding register in register file 1044. Similarly, the translator generates a translated instruction having the corresponding register in register file 1044 as an operand. Other architected state may still be accessed via memory operations in the embodiment of FIG. 15. Other registers in register file 1044 which are not assigned to non-native architected state may again be used as temporary registers for interpreter or translator use, as described above.

While the embodiment of FIG. 15 illustrates mapping the general registers 1052 to registers in register file 1044, any other non-native architected state may be mapped to registers in register file 1044. For example, any of segment registers 1054, control registers 1056, or other registers 1058 (or portions of any of these registers) may be mapped to register file 1044, as desired.

FIG. 16 illustrates another example of an embodiment in which the general registers 1052 and the EFLAGS and RIP registers are mapped to registers in register file 1044. Additionally, while other embodiments are possible, in the example of FIG. 16 the segment registers 1054 are implemented in hardware in processor 1042.
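One way to picture the per-register choice made in FIGS. 14 through 16 is a small dispatch table mapping each non-native register to either a memory location or a native register; every name, address, and register number below is purely hypothetical:

```python
# Hypothetical mapping in the spirit of FIGS. 14 and 15: each non-native
# register is backed either by a memory location (memory 1040) or by a
# native register (register file 1044).
STATE_MAP = {
    "RAX": ("native_reg", 4),   # FIG. 15 style: mapped to the register file
    "CR3": ("memory", 0x2000),  # FIG. 14 style: mapped to a memory location
}

def read_arch_state(name, native_regs, memory):
    """Read a piece of non-native architected state via the dispatch table."""
    kind, where = STATE_MAP[name]
    return native_regs[where] if kind == "native_reg" else memory[where]
```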
More specifically, processor 1042 may not only implement storage for segment registers 1054, but may also include logic to generate the operating mode for instructions based on the information in the segment registers. Furthermore, the logic may include limit, attribute, privilege, or other checks to ensure that accesses to the segment attempted by the non-native instructions (or the native instructions in the interpreter or the translated code sequence which correspond to the non-native instructions) are permitted. Also shown in FIG. 16 are special registers 1602 and descriptor cache 1604 (described in more detail below). In one embodiment, special registers 1602 may be implementation dependent and may be configured to store data pertaining to the state of processor 1042 and/or other operational attributes.

Similarly, other embodiments may implement various control registers 1056 or other registers 1058 in hardware, including corresponding logic to act on the contents of the registers as defined in the non-native architecture. Generally, various embodiments of processor 1042 may implement any non-native architected state in hardware. Certain architected state may generally be implemented in memory since the non-native processor architecture defines the state to be in memory (e.g. descriptor tables 1046, page tables 1048, and task state segments 1050). Such memory-based architected state may be cached in caches within processor 1042 (e.g. TLBs for page table information, hidden segment register portions for segment descriptor information, etc.).

As the above discussion illustrates, the non-native architected state may be stored in any suitable storage location. Generally, a storage location is a location capable of storing a value.
Suitable storage locations may include, in various embodiments, a memory location, a general register mapped to the non-native architected state, or a special purpose register (which may include additional hardware to interpret the contents of the register), depending upon the embodiment. Additionally, suitable storage locations could include a scratch pad RAM (such as a portion of a cache predetermined to be used as scratch pad RAM).
Turning next to FIG. 17, a block diagram is shown illustrating one embodiment of a global descriptor table 1780 and a local descriptor table 1782. Other embodiments are possible and contemplated. As illustrated in FIG. 17 and mentioned above, the base address of global descriptor table 1780 is provided by GDTR 32 and the base address of local descriptor table 1782 is provided by LDTR 30. Accordingly, to support placing global descriptor table 1780 and local descriptor table 1782 arbitrarily within the virtual address space, GDTR 32 and LDTR 30 may store 64 bit base addresses. If the LME bit is clear, the least significant 32 bits of the base address may be used to locate the descriptor tables.
Both global descriptor table 1780 and local descriptor table 1782 are configured to store segment descriptors of various types. For example, 32/64 mode code segment descriptors 1784, 1786, and 1790 and compatibility mode descriptors 1792 and 1794 are illustrated in FIG. 17. Each of descriptors 1784, 1786, and 1790 occupies an entry in the corresponding descriptor table, where an entry is capable of storing one segment descriptor (e.g. 8 bytes for the embodiments illustrated in FIGS. 2 and 3). Another type of descriptor in global descriptor table 1780 is a local descriptor table descriptor 1796, which defines a system segment for the local descriptor table 1782 and provides the base address stored in LDTR 30.
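The selector-to-entry lookup implied by the description above can be sketched as follows. This is a simplified illustrative model, not the patented hardware: the selector field layout is the standard x86 one (index in bits 15:3, table indicator in bit 2, requestor privilege level in bits 1:0), and the 8-byte entry size follows the embodiments of FIGS. 2 and 3.

```c
#include <assert.h>
#include <stdint.h>

/* An x86 segment selector is 16 bits: bits 15:3 are the table index,
 * bit 2 is the table indicator (TI, 0 = GDT, 1 = LDT), and bits 1:0
 * are the requestor privilege level (RPL). */
#define SELECTOR_RPL(sel)   ((sel) & 0x3u)
#define SELECTOR_TI(sel)    (((sel) >> 2) & 0x1u)
#define SELECTOR_INDEX(sel) ((sel) >> 3)

#define DESCRIPTOR_ENTRY_BYTES 8u  /* one table entry (FIGS. 2 and 3) */

/* Compute the address of the descriptor table entry named by a
 * selector, given the 64-bit base addresses held in GDTR and LDTR. */
uint64_t descriptor_entry_address(uint16_t sel,
                                  uint64_t gdt_base, uint64_t ldt_base)
{
    uint64_t base = SELECTOR_TI(sel) ? ldt_base : gdt_base;
    return base + (uint64_t)SELECTOR_INDEX(sel) * DESCRIPTOR_ENTRY_BYTES;
}
```

The TI bit is what selects between the two tables of FIG. 17; the RPL bits are the ones used in the privilege checks discussed below.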
LDTR 30 is initialized using an LLDT instruction having as an operand a segment selector locating descriptor 1796 in global descriptor table 1780. Global descriptor table 1780 may store multiple LDT descriptors locating different local descriptor tables, if desired. Since the LDT descriptor 1796 may store a 64 bit offset if the LME bit is set, LDT descriptor 1796 may occupy two entries in global descriptor table 1780. It is noted that the lower half of LDT descriptor 1796 may be similar to the 32 bit LDT descriptor and the upper half of LDT descriptor 1796 may be similar to the upper half of call gate descriptor 120 described in FIG. 18 below. If the LME bit is clear, LDT descriptor 1796 may occupy a single entry in global descriptor table 1780. Similarly, each task may have a task state segment (TSS) descriptor in one of descriptor tables 1780 and 1782 to store certain information related to the task. Accordingly, a TSS descriptor may occupy two entries to allow for TSS information to be stored anywhere in the 64 bit address space.
The local and global descriptor tables may also store call gate descriptors. For example, FIG. 17 illustrates call gate descriptors 1700, 1702, and 1704. Call gate descriptors support a 64 bit offset as well, and thus may likewise occupy two entries in the corresponding descriptor table. While other types of gates are possible, an exemplary 32/64 call gate descriptor is described in FIG. 18 below for illustrative purposes.
By maintaining 8 byte entries in segment descriptor tables 1780 and 1782 and using two entries for descriptors which include 64 bit offsets, descriptors for 16 and 32 bit modes may be stored in the same tables as the descriptors which include 64 bit offsets.
Thus, applications operating in compatibility modes may have appropriate descriptors in the same segment descriptor tables as the 64 bit operating systems.
Generally, gates may be used to manage the transition between a code segment having a lesser privilege level and a code segment having a greater privilege level (e.g. an application program calling an operating system routine). The lesser privileged code includes a call or other branch instruction specifying, as a target, a segment selector (and an offset into the segment, which is ignored in this case). The segment selector identifies a call gate descriptor within the descriptor tables, which includes a minimum privilege level required to execute the greater privilege level code. When processor 10 executes the call or other branch instruction, processor 10 indexes the descriptor tables with the segment selector and locates the call gate. If the current privilege level of processor 10 and the requestor privilege level (which is part of the segment selector, and may be used to lower the current privilege level for privilege checking purposes) both reflect sufficient privilege (e.g. the privilege levels are numerically less than or equal to the minimum privilege level in the call gate descriptor), then the call may proceed. The call gate descriptor includes a segment selector for the target segment (the code segment having the greater privilege level) and the offset within the target segment at which code fetching is to begin. Processor 10 extracts the segment selector and the offset from the call gate descriptor and reads the target segment descriptor to begin fetching the code having the greater privilege level. On the other hand, if either the current privilege level or the requestor privilege level is a lesser privilege level than the minimum privilege level in the call gate descriptor (e.g.
either the current or requestor privilege level is numerically greater than the minimum privilege level), processor 10 signals an exception after accessing the call gate descriptor and without accessing the target descriptor. Thus, access to code executing at greater privilege levels is carefully controlled.
As mentioned above, the call gate descriptor includes a target segment selector and offset within the segment. The reference to the target segment descriptor is illustrated in FIG. 17 as an arrow from a call gate descriptor to another descriptor. For example, call gate descriptor 1700 references 32/64 mode descriptor 1790; call gate descriptor 1702 references 32/64 mode descriptor 1786; and call gate descriptor 1704 references 32/64 mode descriptor 1784. As FIG. 17 illustrates, a call gate descriptor may be stored in either descriptor table and may reference a descriptor in the other table or in the same table. Furthermore, a call gate descriptor may reference either a 32/64 mode descriptor or a compatibility mode descriptor.
Generally, when processor 10 reads a descriptor from one of the descriptor tables using a segment selector, one descriptor table entry is read. However, if the LME bit is set and processor 10 detects that the entry is a call gate descriptor, an LDT descriptor, or a TSS descriptor, processor 10 reads the next succeeding entry in the table to obtain the remainder of the descriptor. Accordingly, call gate descriptors, LDT descriptors, and TSS descriptors may coexist in a table with compatibility mode descriptors (or standard mode descriptors) which are of a different size, without redefining the size of the table entries or how the table is managed for descriptors which occupy one entry.
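The privilege test and the two-entry read decision described above can be sketched as follows. This is an illustrative model only: in the x86 privilege scheme a numerically smaller level is more privileged, so the gate check passes when both CPL and RPL are numerically less than or equal to the gate's DPL. The `desc_type` enumerators are placeholders, not the architected type encodings.

```c
#include <assert.h>
#include <stdbool.h>

/* Gate privilege check: the call may proceed only if both the current
 * privilege level (CPL) and the requestor privilege level (RPL) are
 * numerically <= the gate's minimum privilege level (DPL). */
bool call_gate_access_ok(unsigned cpl, unsigned rpl, unsigned gate_dpl)
{
    return cpl <= gate_dpl && rpl <= gate_dpl;
}

/* Descriptor types that occupy two table entries when the LME bit is
 * set (they carry 64-bit offsets or base addresses): call gates, LDT
 * descriptors, and TSS descriptors. */
enum desc_type { DESC_CODE, DESC_DATA, DESC_CALL_GATE, DESC_LDT, DESC_TSS };

bool descriptor_needs_second_entry(enum desc_type t, bool lme)
{
    return lme && (t == DESC_CALL_GATE || t == DESC_LDT || t == DESC_TSS);
}
```

For example, an application at CPL 3 calling through a gate with DPL 3 passes the check, while code at CPL 3 targeting a gate with DPL 0 raises an exception.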
Furthermore, since the second portion of the call gate descriptor, the LDT descriptor, or the TSS descriptor may itself be accessed as a segment descriptor, the bits of the second portion which would correspond to the type field of a segment descriptor are set to an invalid type when the descriptor is stored into the descriptor table, as shown below in FIG. 18. Alternatively, processor 10 may read two consecutive entries from a descriptor table each time a descriptor table read is performed, and the second entry may be used if the first entry is a call gate, LDT descriptor type, or TSS descriptor type.
It is noted that code operating in any operating mode (32/64 mode, 32 bit compatibility mode, or 16 bit compatibility mode) may reference a call gate descriptor when the LME bit is set. Thus, a 32 or 16 bit application may call an operating system routine using the call gate mechanism even if the address of the routine is outside the 32 bit or 16 bit address space. Additionally, a call gate descriptor may reference a code segment having any operating mode. The operating system may ensure that the most significant 32 bits of the offset in the call gate are zero (for a 32 bit target segment) or the most significant 48 bits of the offset in the call gate are zero (for a 16 bit target segment).
Turning now to FIG. 18, a block diagram of one embodiment of a call gate descriptor 120 is shown. Other embodiments are possible and contemplated. Similar to FIGS. 2 and 3, the most significant bytes are illustrated above the least significant bytes. The most significant bit of each group of four bytes is illustrated as bit 31 and the least significant bit is illustrated as bit 0. Short vertical lines within the four bytes delimit each bit, and the long vertical lines delimit a bit but also delimit a field. As mentioned above, a call gate descriptor occupies two entries in a descriptor table. The horizontal dashed line in FIG.
18 divides call gate descriptor 120 into an upper portion (above the line) and a lower portion (below the line). The lower portion is stored in the entry indexed by the call gate's segment selector, and the upper portion is stored in the next succeeding entry.
Call gate descriptor 120 includes a target segment selector (field 122), an offset (fields 124A, 124B, and 124C), a present (P) bit 126, a descriptor privilege level (DPL) 128, a type field 130, and a pseudo-type field 132. The P bit is similar to P bit 48 described in FIGS. 2 and 3 above. The target segment selector identifies an entry within one of the descriptor tables at which the target segment descriptor (having the greater privilege level) is stored. The offset identifies the address at which code fetching is to begin. In 32/64 mode, since the code segment has no base address and flat linear addressing is used, the offset is the address at which code fetching begins. In other modes, the offset is added to the segment base defined by the target segment descriptor to generate the address at which code fetching begins. As mentioned above, the offset may comprise 64 bits in the present embodiment.
DPL 128 stores the minimum privilege level the calling routine must have (in both the current privilege level and the requestor privilege level) in order to successfully pass through the call gate and execute the called routine at the privilege level specified in the target segment descriptor.
Type field 130 is coded to a call gate descriptor type. In one embodiment, this type is coded as the 32 bit call gate type defined in the x86 architecture. Alternatively, other encodings may be used. Finally, pseudo-type field 132 is coded to an invalid type (e.g.
zero) to ensure that if a segment selector identifying the segment table entry storing the upper half of call gate descriptor 120 is presented, then an exception will be signaled by processor 10.
In order to improve performance, processor 1042 may store the base address of each segment in hidden descriptor-cache registers 1604. Each time a segment register 1054 is loaded, the segment's base address, size limit, and access attributes are loaded into these hidden registers 1604. Subsequent memory references may then utilize these hidden registers 1604 in order to more quickly form addresses. In the absence of these registers 1604, additional memory accesses may be required. For example, while operating in protected mode, the segment base and other values must be obtained via the appropriate descriptor table 1046. However, these descriptor-table values reside in memory 1040. Consequently, without the descriptor-cache registers 1604, each memory access would require additional accesses to memory.
While operating in long mode, execution of certain instructions may result in a change in the processor's operating mode. For example, in one embodiment, a far transfer instruction (i.e., a control transfer instruction which may include a transfer of control between different segments) may result in a switch between 64-bit mode and compatibility mode. As previously mentioned, segmentation may be disabled while in 64-bit mode and enabled while in compatibility mode.
As in the x86 architecture, during execution of a far transfer instruction, the CS register may be loaded with a new value, which could potentially change the segmentation state from being disabled to being enabled, or vice-versa. However, there may be particular operations (including microcode operations) whose operation depends on the segmentation state. Certain operations may behave one way when segmentation is enabled, and behave another way when segmentation is disabled.
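The descriptor-cache address formation described above can be sketched as follows. The structure layout is hypothetical (the hidden registers' actual format is implementation dependent); the point illustrated is that once the base, limit, and attributes are cached on segment-register load, later references can form a linear address and apply the limit check without touching the descriptor tables in memory.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical view of one hidden descriptor-cache register: the
 * values captured when the corresponding segment register is loaded. */
struct seg_cache {
    uint64_t base;   /* segment base address */
    uint32_t limit;  /* segment size limit   */
    uint16_t attrs;  /* access attributes    */
};

/* Form a linear address from a segment offset using the cached base,
 * avoiding a descriptor-table read from memory on every reference.
 * Returns false (no translation) if the offset exceeds the cached
 * limit, which in hardware would raise an exception. */
bool form_linear_address(const struct seg_cache *c, uint64_t offset,
                         uint64_t *linear)
{
    if (offset > c->limit)
        return false;
    *linear = c->base + offset;
    return true;
}
```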
Because operations which change the segmentation state and operations which depend on the segmentation state may be speculatively executed out of order, proper synchronization of those operations is desired.
In one embodiment, when performing a far transfer instruction, a target limit check may be performed to ensure that the attempted transfer operation is permitted. In one embodiment, a far transfer instruction initiates a microcode routine which performs both a load to the CS register and a corresponding check (e.g., a limit check). Generally speaking, in the x86 architecture, a limit check is a protection check which involves checking the offset used in the address calculation against the segment's limit. If an operation tries to address beyond the limit, an exception is raised. As already mentioned, during the far transfer operation, a new value may be loaded into the CS register. In one embodiment this may be accomplished by using a move instruction or similar operation. Subsequent to the CS register load, the target instruction address may be read and a limit check performed. For example, in one embodiment a read of the target RIP is done, followed by a limit check. As described above, the RIP is a 64-bit instruction pointer to support 64-bit mode.
However, because segmentation may be disabled while operating in 64-bit mode, the above described limit check may not be appropriate or necessary. Consequently, in one embodiment the type of check performed depends on the segmentation state. If segmentation is enabled, then a normal limit check is performed. On the other hand, if segmentation is not enabled, a check of the target address is performed to ensure it is in correct form. In one embodiment, an address is in correct form when it is in "canonical" address form. Consequently, when segmentation is not enabled, a check may be performed to ensure that address references are in "canonical address form".
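The canonical-form test just mentioned can be sketched as follows, assuming (as in the embodiment described below) an implementation that supports 48 bits of virtual address: bits 63 through 47 must all equal bit 47, i.e. be all zeros or all ones.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Canonical-form check for a 48-bit virtual address implementation:
 * bits 63:47 must be a sign extension of bit 47, i.e. all zeros or
 * all ones. A non-canonical reference would raise an exception. */
bool is_canonical48(uint64_t va)
{
    uint64_t upper = va >> 47;      /* bits 63:47 -- 17 bits */
    return upper == 0 || upper == 0x1FFFF;
}
```

For example, `0x0000_7FFF_FFFF_FFFF` and `0xFFFF_8000_0000_0000` are canonical, while an address with only bit 47 set is not.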
While only limit and canonical checks are mentioned above, numerous other checks are possible and are contemplated.
While long mode may define 64 bits of virtual address, particular implementations may support fewer than 64 bits. For example, in one embodiment 48 bits of virtual address may be supported. Although implementations might not use all 64 bits of the virtual address, addresses may be required to adhere to a particular format. In such implementations, addresses may be checked to ensure they are in a correct, or "canonical", address form. In one embodiment, bits 63 through the most-significant implemented bit are checked to see if they are all zeros or all ones. If a virtual memory reference does not conform to this format, the address is not in canonical form and an exception may be generated.
During execution of a far transfer, a first operation configured to load the CS with a new value may be dispatched, followed by a second operation configured to perform either a limit or a canonical check. As previously mentioned, the result of the first operation to load the CS with a new value could cause the segmentation state to change. However, if the first operation is not retired (i.e., the CS has not yet been written to) before the second operation is dispatched, and the segmentation state has changed, the second operation may utilize the old CS value to get the original segmentation information and could potentially perform a wrong check.
Another potential problem related to the segmentation state concerns stack pushes. After a new CS is loaded (pursuant to a control transfer instruction), there may be several stack pushes to store the original processor state. If segmentation is enabled, then the SS base should be added to the offset to get the correct stack address. However, if segmentation is disabled, the SS base should be ignored. Consequently, the segmentation state for stack pushes should be based on the new CS.
Depending on whether the CS load operation has retired or not, the stack pushes could get the wrong segmentation information as well.
Various alternatives are proposed herein for detecting changes in segmentation state and ensuring synchronization of state changes with corresponding check operations. In one embodiment, an exception handler may be used to ensure synchronization of segmentation information with corresponding operations. For example, operations (e.g., microcode in one embodiment) configured to perform the above described limit/canonical checks may include an exception handler. Prior to performing a limit or canonical check, a determination is made as to whether or not the segmentation state has changed. If a change in segmentation state is detected, an exception is generated which is configured to flush the pipeline. Operations following the CS load operation may then be re-executed in order to ensure the new segmentation state is picked up correctly.
In a second embodiment, specific reads of the current and new segmentation state followed by a comparison may be performed in order to detect segmentation state changes. In such an embodiment, a read of the code segment descriptor corresponding to the new CS selector is performed in order to ascertain attribute(s) corresponding to the new CS selector. In addition, a read to the GDT table is performed in order to ascertain attribute(s) corresponding to the original CS selector. A comparison of the original CS.L and CS.D bits may then be made to the new CS.L and CS.D bits in order to determine if a segmentation state change is indicated. Based on the result of this comparison, if the segmentation state is changed, a branch abort may be used to ensure that the pipeline is flushed and the new segmentation state is picked up.
FIG. 19 illustrates yet another embodiment configured to detect segmentation state changes and ensure synchronization with subsequent operations. FIG.
19 shows a portion of processor 10, including MMU 20, data cache 16, instruction cache 12, microcode (MROM) unit 1910, register file 22, execution core 14, descriptor cache 1604, and special registers 1602. Execution core 14 includes load/store unit 1900 configured to perform load/store operations. Alternative embodiments which utilize other than a load/store architecture may utilize other circuitry to perform similar functionality. Also illustrated is a special register bus 1970 for providing access to special registers 1602.
Because corresponding descriptor cache registers 1604 may be loaded with new values each time a segment register is loaded, and because certain embodiments of processor 10 may employ speculative execution of operations, temporary storage of a new segment descriptor may be desired for speculative operations. If the speculative operation is aborted, the temporary descriptor may be discarded. In the embodiment of FIG. 19, special registers 1602 include temporary descriptor storage 1904 for use in temporarily storing a new descriptor until the corresponding operation is either aborted or retired.
During execution of a far transfer, the load/store unit 1900 is configured to load the CS register with a new value. In one embodiment, a far transfer operation may correspond to a microcode routine in MROM unit 1910 which includes a sequence of microinstructions or similar operations. In response to a far transfer operation, load/store unit 1900 may also be configured to generate a temporary segment descriptor corresponding to the new selector. In the embodiment shown, temporary segment descriptor values may be stored in temporary descriptor storage 1904. Because load/store unit 1900 is configured to generate this new temporary descriptor 1904, load/store unit 1900 has ready access to the segmentation state corresponding to the new selector.
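The temporary-descriptor discipline described above can be sketched as follows. This is an illustrative model, not the hardware: a speculatively generated descriptor is held in temporary storage and is copied into the architected descriptor cache only when the corresponding operation retires; on abort it is simply discarded, leaving architected state untouched.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative descriptor layout, not the hardware format. */
struct descriptor { uint64_t base; uint32_t limit; uint16_t attrs; };

struct temp_slot {
    struct descriptor value;
    bool valid;               /* a speculative descriptor is pending */
};

/* Record a speculatively generated descriptor in temporary storage. */
void speculate(struct temp_slot *t, struct descriptor d)
{
    t->value = d;
    t->valid = true;
}

/* On retire, commit the pending descriptor to the architected cache. */
void retire(struct temp_slot *t, struct descriptor *arch_cache)
{
    if (t->valid)
        *arch_cache = t->value;
    t->valid = false;
}

/* On abort, drop the pending descriptor; architected state untouched. */
void abort_op(struct temp_slot *t)
{
    t->valid = false;
}
```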
Load/store unit 1900 may also access temporary descriptor 1904 via special register bus 1970.
In order to determine the original segmentation state, an additional read via the GDT may be performed. Subsequent to receiving the original segmentation state information, a comparison of the new and original states is done in order to determine if a change in state has occurred. If a change in state is detected, a branch is taken to flush the pipeline. Operations following the CS load operation may then be re-executed in order to ensure the new segmentation state is picked up correctly.
In yet another embodiment, a special register is created to report the segmentation state change. In this embodiment, the newly created register is located in the load/store unit 1900 or a location readily accessible to load/store unit 1900, such as special registers 1602. The checking operation need only read this new register to determine if the segmentation state has changed. If there has been no change in state, sequential execution of instructions may continue. Otherwise, a branch may be taken which is configured to flush the pipe and ensure the new segmentation state is picked up. By creating and utilizing a special register to report segmentation state changes, the above described additional read via the GDT to obtain the original segmentation state is not needed. Consequently, the latency associated with the GDT read may be eliminated.
In order to create and utilize this special register in an efficient manner, certain observations may be made with respect to existing x86 associated operations. For example, for INT and far CALLs through a call gate (both of which may perform checks for changes in privilege level), a read of the new CS descriptor may be performed in order to determine the new CPL. In one embodiment, the new CPL may be read from the new descriptor corresponding to the new selector.
As described above, a temporary descriptor 1904 may be created and the new descriptor values stored in temporary registers. Among the temporary registers may be a CPL register to indicate a new CPL, and an attribute register to indicate the corresponding segmentation state information. Using the second embodiment described above, two descriptor reads would then be performed: a first read to the CPL register to get the CPL, and another to the attribute register to get the CS.D and CS.L attributes (for use in checking for segmentation state changes). If only one CS descriptor access is allowed at a time, the added read results in additional cycles.
Rather than doing two register reads to obtain the descriptor values, an extra bit SEG_STATE_CHG may be added to the existing CPL register. The new bit SEG_STATE_CHG is used to indicate a change of the segmentation state. However, because other operations may use and rely on the CPL register, simply expanding the CPL register by a bit may require additional changes in order to ensure that operations which rely on accessing the CPL operate correctly. For example, for those operations which rely on the CPL, a mask of the new bit may need to be performed. As an alternative to this added bit/masking approach, an entirely new register SEGCHG_CPL 1940 may be defined which serves both purposes. The new register SEGCHG_CPL 1940 is a combination of CPL and SEG_STATE_CHG. In the new register SEGCHG_CPL 1940, bits 1:0 may be used to represent the current CPL 1950, and bit 2 1960 may be used to indicate a change in the segmentation state. Utilizing this newly defined register, a single register read may be used to obtain both the new CPL and determine whether a segmentation state change is indicated.
Turning now to FIG. 20, a method for detecting and responding to a segmentation change is shown. In response to detecting a far transfer 2000, a load selector operation is initiated 2002.
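The SEGCHG_CPL encoding described above (bits 1:0 holding the CPL, bit 2 reporting a segmentation state change), and the decision structure of the check sequence, can be sketched as follows. The three actions are stand-ins for the real pipeline flush, limit check, and canonical check; the constants simply mirror the bit positions named in the text.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* SEGCHG_CPL: bits 1:0 = current privilege level, bit 2 = a
 * segmentation state change occurred. One read yields both values. */
#define SEGCHG_CPL_MASK  0x3u
#define SEGCHG_CHG_BIT   0x4u

unsigned segchg_cpl(uint32_t reg)     { return reg & SEGCHG_CPL_MASK; }
bool     segchg_changed(uint32_t reg) { return (reg & SEGCHG_CHG_BIT) != 0; }

/* Decision structure of the far-transfer check: a detected state
 * change forces a pipeline flush; otherwise the type of check depends
 * on whether segmentation is enabled. */
enum action { FLUSH_PIPELINE, LIMIT_CHECK, CANONICAL_CHECK };

enum action far_transfer_check(uint32_t segchg_cpl_reg, bool seg_enabled)
{
    if (segchg_changed(segchg_cpl_reg))
        return FLUSH_PIPELINE;   /* branch to pick up the new state */
    return seg_enabled ? LIMIT_CHECK : CANONICAL_CHECK;
}
```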
As described above, in one embodiment the load selector operation may correspond to a number of microinstructions in microcode unit 1910. In addition to the load selector operation 2002, a check operation is initiated 2004. It is noted that in an architecture capable of out-of-order execution, the load operation 2002 and check operation 2004 may be executed in reverse order. If a read of the SEGCHG bit 1960 indicates a segmentation state change 2006, a branch configured to flush the pipeline corresponding to the transfer operation is initiated 2008. On the other hand, if no segmentation state change is detected 2006, the current state is determined 2014. If segmentation is enabled 2014, then a limit check is performed 2016. Alternatively, if segmentation is disabled, a canonical check is performed 2018.
FIG. 21 is a block diagram of one embodiment of a carrier medium 1090. Other embodiments are possible and contemplated. In the embodiment of FIG. 21, carrier medium 1090 stores an interpreter program 1092, a translator program 1094, a long mode (LM) enter routine 1096, and an LM leave routine 1098.
Generally speaking, a carrier medium may include storage media such as magnetic or optical media, e.g., disk or CD-ROM, volatile or non-volatile memory media such as RAM (e.g. SDRAM, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. Carrier medium 1090 may thus be coupled to a computer system including processor 1042, may be part of a computer system including processor 1042, or may be a communication medium on which the computer system is capable of communicating. Computer systems including processor 1042 may be of any construction. For example, computer systems similar to those shown in FIGS. 22 and 23 may be suitable.
Interpreter program 1092 may operate according to the flowchart of FIG. 12.
Translator program 1094 may operate according to the flowchart of FIG. 13. Generally, interpreter program 1092 and translator program 1094 may each comprise code sequences including native instructions.
LM enter routine 1096 may comprise native or non-native instructions which, when executed, perform the operations of FIG. 10. LM leave routine 1098 may comprise native or non-native instructions which, when executed, perform the operations of FIG. 11.
Computer Systems
Turning now to FIG. 22, a block diagram of one embodiment of a computer system 200 including processor 10 coupled to a variety of system components through a bus bridge 202 is shown. Other embodiments are possible and contemplated. In the depicted system, a main memory 204 is coupled to bus bridge 202 through a memory bus 206, and a graphics controller 208 is coupled to bus bridge 202 through an AGP bus 210. Finally, a plurality of PCI devices 212A-212B are coupled to bus bridge 202 through a PCI bus 214. A secondary bus bridge 216 may further be provided to accommodate an electrical interface to one or more EISA or ISA devices 218 through an EISA/ISA bus 220. Processor 10 is coupled to bus bridge 202 through a CPU bus 224 and to an optional L2 cache 228. Together, CPU bus 224 and the interface to L2 cache 228 may comprise an external interface to which external interface unit 18 may couple.
Bus bridge 202 provides an interface between processor 10, main memory 204, graphics controller 208, and devices attached to PCI bus 214. When an operation is received from one of the devices connected to bus bridge 202, bus bridge 202 identifies the target of the operation (e.g. a particular device or, in the case of PCI bus 214, that the target is on PCI bus 214). Bus bridge 202 routes the operation to the targeted device.
Bus bridge 202 generally translates an operation from the protocol used by the source device or bus to the protocol used by the target device or bus.
In addition to providing an interface to an ISA/EISA bus for PCI bus 214, secondary bus bridge 216 may further incorporate additional functionality, as desired. An input/output controller (not shown), either external from or integrated with secondary bus bridge 216, may also be included within computer system 200 to provide operational support for a keyboard and mouse 222 and for various serial and parallel ports, as desired. An external cache unit (not shown) may further be coupled to CPU bus 224 between processor 10 and bus bridge 202 in other embodiments. Alternatively, the external cache may be coupled to bus bridge 202 and cache control logic for the external cache may be integrated into bus bridge 202. L2 cache 228 is further shown in a backside configuration to processor 10. It is noted that L2 cache 228 may be separate from processor 10, integrated into a cartridge (e.g. slot 1 or slot A) with processor 10, or even integrated onto a semiconductor substrate with processor 10.
Main memory 204 is a memory in which application programs are stored and from which processor 10 primarily executes. A suitable main memory 204 comprises DRAM (Dynamic Random Access Memory). For example, a plurality of banks of SDRAM (Synchronous DRAM) or Rambus DRAM (RDRAM) may be suitable.
PCI devices 212A-212B are illustrative of a variety of peripheral devices such as, for example, network interface cards, video accelerators, audio cards, hard or floppy disk drives or drive controllers, SCSI (Small Computer Systems Interface) adapters and telephony cards.
Similarly, ISA device 218 is illustrative of various types of peripheral devices, such as a modem, a sound card, and a variety of data acquisition cards such as GPIB or field bus interface cards.
Graphics controller 208 is provided to control the rendering of text and images on a display 226. Graphics controller 208 may embody a typical graphics accelerator generally known in the art to render three-dimensional data structures which can be effectively shifted into and from main memory 204. Graphics controller 208 may therefore be a master of AGP bus 210 in that it can request and receive access to a target interface within bus bridge 202 to thereby obtain access to main memory 204. A dedicated graphics bus accommodates rapid retrieval of data from main memory 204. For certain operations, graphics controller 208 may further be configured to generate PCI protocol transactions on AGP bus 210. The AGP interface of bus bridge 202 may thus include functionality to support both AGP protocol transactions as well as PCI protocol target and initiator transactions. Display 226 is any electronic display upon which an image or text can be presented. A suitable display 226 includes a cathode ray tube ("CRT"), a liquid crystal display ("LCD"), etc.
It is noted that, while the AGP, PCI, and ISA or EISA buses have been used as examples in the above description, any bus architectures may be substituted as desired. It is further noted that computer system 200 may be a multiprocessing computer system including additional processors (e.g. processor 10a shown as an optional component of computer system 200). Processor 10a may be similar to processor 10. More particularly, processor 10a may be an identical copy of processor 10. Processor 10a may be connected to bus bridge 202 via an independent bus (as shown in FIG. 22) or may share CPU bus 224 with processor 10. Furthermore, processor 10a may be coupled to an optional L2 cache 228a similar to L2 cache 228.
Turning now to FIG.
23, another embodiment of a computer system 300 is shown. Other embodiments are possible and contemplated. In the embodiment of FIG. 23, computer system 300 includes several processing nodes 312A, 312B, 312C, and 312D. Each processing node is coupled to a respective memory 314A-314D via a memory controller 316A-316D included within each respective processing node 312A-312D. Additionally, processing nodes 312A-312D include interface logic used to communicate between the processing nodes 312A-312D. For example, processing node 312A includes interface logic 318A for communicating with processing node 312B, interface logic 318B for communicating with processing node 312C, and a third interface logic 318C for communicating with yet another processing node (not shown). Similarly, processing node 312B includes interface logic 318D, 318E, and 318F; processing node 312C includes interface logic 318G, 318H, and 318I; and processing node 312D includes interface logic 318J, 318K, and 318L. Processing node 312D is coupled to communicate with a plurality of input/output devices (e.g. devices 320A-320B in a daisy chain configuration) via interface logic 318L. Other processing nodes may communicate with other I/O devices in a similar fashion.
Processing nodes 312A-312D implement a packet-based link for inter-processing node communication. In the present embodiment, the link is implemented as sets of unidirectional lines (e.g. lines 324A are used to transmit packets from processing node 312A to processing node 312B and lines 324B are used to transmit packets from processing node 312B to processing node 312A). Other sets of lines 324C-324H are used to transmit packets between other processing nodes as illustrated in FIG. 23. Generally, each set of lines 324 may include one or more data lines, one or more clock lines corresponding to the data lines, and one or more control lines indicating the type of packet being conveyed.
The link may be operated in a cache coherent fashion for communication between processing nodes or in a noncoherent fashion for communication between a processing node and an I/O device (or a bus bridge to an I/O bus of conventional construction such as the PCI bus or ISA bus). Furthermore, the link may be operated in a non-coherent fashion using a daisy-chain structure between I/O devices as shown. It is noted that a packet to be transmitted from one processing node to another may pass through one or more intermediate nodes. For example, a packet transmitted by processing node 312A to processing node 312D may pass through either processing node 312B or processing node 312C as shown in FIG. 23. Any suitable routing algorithm may be used. Other embodiments of computer system 300 may include more or fewer processing nodes than the embodiment shown in FIG. 23.

Generally, the packets may be transmitted as one or more bit times on the lines 324 between nodes. A bit time may be the rising or falling edge of the clock signal on the corresponding clock lines. The packets may include command packets for initiating transactions, probe packets for maintaining cache coherency, and response packets for responding to probes and commands.

Processing nodes 312A-312D, in addition to a memory controller and interface logic, may include one or more processors. Broadly speaking, a processing node comprises at least one processor and may optionally include a memory controller for communicating with a memory and other logic as desired. More particularly, each processing node 312A-312D may comprise one or more copies of processor 10. External interface unit 18 may include the interface logic 318 within the node, as well as the memory controller 316.

Memories 314A-314D may comprise any suitable memory devices. For example, a memory 314A-314D may comprise one or more RAMBUS DRAMs (RDRAMs), synchronous DRAMs (SDRAMs), static RAM, etc.
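The pass-through routing just described (a packet from node 312A to 312D traversing 312B or 312C) can be sketched as a static next-hop lookup. The topology, table entries, and names below are editorial illustrations, not taken from the disclosure; as the text notes, any suitable routing algorithm may be used.

```python
# Illustrative next-hop routing over a 4-node mesh resembling FIG. 23.
# Assumed links: A-B, A-C, B-D, C-D, so a packet from A to D must pass
# through an intermediate node (B is chosen here arbitrarily).
NEXT_HOP = {
    ("A", "D"): "B",  # could equally route via "C"
    ("A", "B"): "B",
    ("A", "C"): "C",
    ("B", "D"): "D",
    ("C", "D"): "D",
}

def route(src, dst):
    """Return the sequence of nodes a packet visits from src to dst."""
    path = [src]
    node = src
    while node != dst:
        node = NEXT_HOP[(node, dst)]  # forward one hop toward dst
        path.append(node)
    return path
```

Calling `route("A", "D")` yields the three-node path through the intermediate node, mirroring the example in the text.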
The address space of computer system 300 is divided among memories 314A-314D. Each processing node 312A-312D may include a memory map used to determine which addresses are mapped to which memories 314A-314D, and hence to which processing node 312A-312D a memory request for a particular address should be routed. In one embodiment, the coherency point for an address within computer system 300 is the memory controller 316A-316D coupled to the memory storing bytes corresponding to the address. In other words, the memory controller 316A-316D is responsible for ensuring that each memory access to the corresponding memory 314A-314D occurs in a cache coherent fashion. Memory controllers 316A-316D may comprise control circuitry for interfacing to memories 314A-314D. Additionally, memory controllers 316A-316D may include request queues for queuing memory requests.

Generally, interface logic 318A-318L may comprise a variety of buffers for receiving packets from the link and for buffering packets to be transmitted upon the link. Computer system 300 may employ any suitable flow control mechanism for transmitting packets. For example, in one embodiment, each interface logic 318 stores a count of the number of each type of buffer within the receiver at the other end of the link to which that interface logic is connected. The interface logic does not transmit a packet unless the receiving interface logic has a free buffer to store the packet. As a receiving buffer is freed by routing a packet onward, the receiving interface logic transmits a message to the sending interface logic to indicate that the buffer has been freed. Such a mechanism may be referred to as a "coupon-based" system.

I/O devices 320A-320B may be any suitable I/O devices.
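The "coupon-based" scheme above is what is now commonly called credit-based flow control: the sender keeps a per-buffer-type count of free receiver buffers, decrements it on each send, and increments it when a buffer-freed message comes back. A minimal sketch follows; the class and method names are illustrative, not from any actual interface-logic design.

```python
class CouponLink:
    """Credit ("coupon") flow control per packet/buffer type.

    The sender holds a count of free buffers at the receiver for each
    packet type and refuses to transmit when the count reaches zero,
    as described for interface logic 318 in the text.
    """

    def __init__(self, credits):
        # e.g. {"command": 2, "probe": 2, "response": 4} -- hypothetical sizes
        self.credits = dict(credits)

    def can_send(self, ptype):
        return self.credits[ptype] > 0

    def send(self, ptype):
        if not self.can_send(ptype):
            raise RuntimeError("no free receiver buffer for " + ptype)
        self.credits[ptype] -= 1  # one receiver buffer is now occupied

    def buffer_freed(self, ptype):
        # Receiver routed the packet onward and returned a coupon.
        self.credits[ptype] += 1
```

The invariant is simple: the sender's count never exceeds the receiver's real free-buffer count, so the receiver can never be overrun.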
For example, I/O devices 320A-320B may include network interface cards, video accelerators, audio cards, hard or floppy disk drives or drive controllers, SCSI (Small Computer Systems Interface) adapters and telephony cards, modems, sound cards, and a variety of data acquisition cards such as GPIB or field bus interface cards.

Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
A semiconductor structure includes an electrode, a ferroelectric material adjacent the electrode, the ferroelectric material comprising an oxide of at least one of hafnium and zirconium, the ferroelectric material doped with bismuth, and another electrode adjacent the ferroelectric material on an opposite side thereof from the first electrode. Related semiconductor structures, memory cells, semiconductor devices, electronic systems, and related methods are disclosed. |
CLAIMS

What is claimed is:

1. A semiconductor structure, comprising: an electrode; another electrode; and a ferroelectric material comprising an oxide of at least one of hafnium and zirconium, between the electrode and the another electrode, the ferroelectric material further comprising bismuth.

2. The semiconductor structure of claim 1, wherein the ferroelectric material comprises hafnium bismuth oxide.

3. The semiconductor structure of claim 1, wherein the ferroelectric material comprises bismuth at between about 0.1 atomic percent and about 10.0 atomic percent of the ferroelectric material based on non-oxygen atoms of the ferroelectric material.

4. The semiconductor structure of claim 1, wherein the ferroelectric material comprises bismuth at between about 0.3 atomic percent and about 1.0 atomic percent of the ferroelectric material based on non-oxygen atoms of the ferroelectric material.

5. The semiconductor structure of claim 1, wherein the ferroelectric material further comprises at least one of magnesium, yttrium, strontium, niobium, tantalum, lanthanum, gadolinium, vanadium, phosphorus, potassium, scandium, ruthenium, selenium, calcium, barium, aluminum, arsenic, indium, and silicon.

6. The semiconductor structure of claim 1, wherein the ferroelectric material further comprises magnesium.

7. The semiconductor structure of claim 6, wherein the ferroelectric material comprises between about 0.3 part and about 10.0 parts of bismuth and magnesium for every about 100 parts of hafnium and zirconium.

8. The semiconductor structure of claim 1, wherein the ferroelectric material comprises a uniform concentration of bismuth throughout a thickness thereof.

9. The semiconductor structure of claim 1, wherein the ferroelectric material has an orthorhombic crystal structure.

10. The semiconductor structure of any one of claims 1 through 9, wherein the oxide of at least one of hafnium and zirconium comprises hafnium zirconate (HfZrO4), the hafnium zirconate doped with bismuth.

11.
The semiconductor structure of any one of claims 1 or 3 through 9, wherein the ferroelectric material comprises zirconium bismuth oxide.

12. The semiconductor structure of any one of claims 1 through 7 or 9, wherein the ferroelectric material comprises a different atomic percent of bismuth proximate the electrode than at a location distal from the electrode.

13. The semiconductor structure of any one of claims 1 through 9, wherein the ferroelectric material has a thickness between about 10 Å and about 200 Å.

14. The semiconductor structure of any one of claims 1 through 9, wherein the ferroelectric material comprises hafnium zirconium bismuth oxide.

15. The semiconductor structure of any one of claims 1 through 9, wherein the ferroelectric material comprises bismuth at between about 0.1 atomic percent and about 0.3 atomic percent of the ferroelectric material based on non-oxygen atoms of the ferroelectric material.

16. The semiconductor structure of any one of claims 1 through 9, wherein the ferroelectric material further comprises at least one of aluminum and magnesium.

17. A method of forming a semiconductor structure, the method comprising: forming an electrode; forming a ferroelectric material comprising bismuth and at least one of hafnium oxide and zirconium oxide over the electrode; and forming another electrode over the ferroelectric material.

18. The method of claim 17, wherein forming a ferroelectric material comprises forming hafnium bismuth oxide.

19. The method of claim 17 or claim 18, wherein forming a ferroelectric material comprises forming the ferroelectric material to comprise between about 0.1 atomic percent and about 10.0 atomic percent bismuth based on non-oxygen atoms of the ferroelectric material.

20.
An electronic system comprising the semiconductor structure of claim 1, the electronic system comprising: a processor; a memory array operably coupled to the processor, the memory array comprising memory cells, each memory cell of the array of memory cells comprising a capacitor operably coupled to a conductive material in contact with a source region or a drain region, the capacitor comprising the semiconductor structure of claim 1; and a power supply in operable communication with the processor.
SEMICONDUCTOR STRUCTURES, MEMORY CELLS AND DEVICES, SYSTEMS INCLUDING SAME, AND RELATED METHODS

PRIORITY CLAIM

This application claims the benefit of the filing date of United States Patent Application Serial No. 15/590,863, filed May 9, 2017, for "SEMICONDUCTOR STRUCTURES, MEMORY CELLS AND DEVICES, SYSTEMS INCLUDING SAME, AND RELATED METHODS."

TECHNICAL FIELD

Embodiments disclosed herein relate to semiconductor structures including one or more ferroelectric materials, to related memory cells, to methods of forming such semiconductor structures and memory cells, and to memory devices and systems including such devices. More particularly, embodiments of the disclosure relate to ferroelectric semiconductor structures and memory cells including ferroelectric materials including doped hafnium oxide materials, to methods of forming such semiconductor structures and memory cells, to memory devices including such cells, and to systems including such devices.

BACKGROUND

Non-volatile memory devices are an important element of electronic systems due to their ability to maintain data absent a power supply. Ferroelectric random-access memory (FeRAM, FRAM) cells have been considered for use in non-volatile memory devices. Some non-volatile memory cells include ferroelectric materials exhibiting a switchable polarization responsive to application of an electric field (e.g., a bias voltage). Ferroelectric materials may include at least two polarization states, which polarization states may be switched by the application of the electric field. The polarization state of the ferroelectric material in a FeRAM cell may be used to determine a logic state (e.g., a 1 or a 0) of the FeRAM cell. After removal of the electric field, the polarization state of the ferroelectric material may remain stable for at least some period of time.
Accordingly, the ferroelectric material may be suitable for use in a non-volatile memory device, eliminating the need to refresh the cell periodically.

Perovskite materials, such as lead zirconate titanate (PZT), have commonly been used as ferroelectric materials in FeRAM cells. However, such conventional ferroelectric materials often fall short in terms of bit density and scalability because perovskite materials exhibit low remnant polarization (Pr), a strength of which may correlate to a readout signal for the associated memory cell. For FeRAM cells, the thickness of the ferroelectric PZT film must be as much as about 200 nm to achieve suitable properties, since PZT loses its ferroelectric properties at lower thicknesses. Thus, the use of conventional ferroelectric materials for memory devices having a feature size of 20 nm or less has been limited. In addition, conventional ferroelectric materials, such as PZT, possess limited compatibility with standard semiconductor processing techniques.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a cross-sectional view of a capacitor including a ferroelectric material, according to embodiments of the disclosure;

FIG. 2 is a cross-sectional view of a memory cell including the capacitor, according to embodiments of the disclosure;

FIG. 3 is a simplified block diagram of a system implemented according to one or more embodiments of the disclosure;

FIG. 4 is a schematic of a system including FeRAM cells having a capacitor, according to embodiments of the disclosure;

FIG. 5A is a graph comparing a hysteresis curve of a bismuth-doped ferroelectric material, according to embodiments of the disclosure, compared to a hysteresis curve of a conventional undoped ferroelectric material;

FIG. 5B is a graph of a signal strength vs. cycle number of a memory cell including the bismuth-doped ferroelectric material, according to embodiments of the disclosure, compared to a signal strength vs.
cycle number of a conventional memory cell including the conventional ferroelectric material; and

FIG. 5C is a graph illustrating a crystal phase of the ferroelectric material including the bismuth compared to the conventional ferroelectric material.

MODE(S) FOR CARRYING OUT THE INVENTION

The illustrations included herewith are not meant to be actual views of any particular systems or semiconductor structures, but are merely idealized representations that are employed to describe embodiments herein. Elements and features common between figures may retain the same numerical designation except that, for ease of following the description, for the most part, reference numerals begin with the number of the drawing on which the elements are introduced or most fully described. The following description provides specific details, such as material types, material thicknesses, and processing conditions in order to provide a thorough description of embodiments described herein. However, a person of ordinary skill in the art will understand that the embodiments disclosed herein may be practiced without employing these specific details. Indeed, the embodiments may be practiced in conjunction with conventional fabrication techniques employed in the semiconductor industry. In addition, the description provided herein does not form a complete description of a semiconductor structure or a memory cell comprising a ferroelectric material, or a complete description of a process flow for manufacturing such semiconductor structures or memory cells. The structures described below do not form a complete semiconductor structure or a complete memory cell. Only those process acts and structures necessary to understand the embodiments described herein are described in detail below.
Additional acts to form a complete semiconductor structure or memory cell including the structures described herein may be performed by conventional techniques.

According to embodiments disclosed herein, a ferroelectric material may include a metal oxide doped with bismuth. In some embodiments, the metal oxide includes hafnium oxide (HfO2, also referred to in the art as "hafnia"), zirconium oxide (ZrO2, also referred to in the art as "zirconia"), or a combination thereof. The metal oxide may be crystallized to form the ferroelectric material. The ferroelectric material may be doped with between about 0.1 atomic percent (at. %) bismuth and about 10.0 atomic percent bismuth, based on the metal atoms (e.g., non-oxygen atoms) in the ferroelectric material. The ferroelectric material may include hafnium bismuth oxide (HfBiOx), hafnium zirconium bismuth oxide (HfZrBiOx), zirconium bismuth oxide (ZrBiOx), hafnium zirconate (HfZrO4), another hafnium-containing material, another zirconium-containing material, other ferroelectric materials doped with bismuth, or combinations thereof. The bismuth may be uniformly distributed throughout a thickness of the ferroelectric material. In other embodiments, the ferroelectric material may exhibit a varying concentration of bismuth throughout a thickness thereof. In some embodiments, the ferroelectric material may include at least another dopant, such as at least one of magnesium, yttrium, strontium, niobium, tantalum, lanthanum, gadolinium, vanadium, phosphorus, potassium, scandium, ruthenium, selenium, calcium, barium, aluminum, arsenic, and indium. The ferroelectric material may exhibit an orthorhombic crystal phase. In some embodiments, the ferroelectric material may be formed on a substrate exhibiting a crystal phase other than orthorhombic (e.g., tetragonal, cubic, hexagonal, rhombohedral).
In other embodiments, the ferroelectric material may overlie (e.g., be formed on) a material exhibiting an amorphous phase and may exhibit a substantially uniform and orthorhombic crystal phase. The ferroelectric material may be used in one or more of a ferroelectric semiconductor structure, a ferroelectric memory cell, a ferroelectric field effect transistor (FeFET), a ferroelectric tunnel junction (FTJ), or another ferroelectric device.

In some embodiments, the ferroelectric material may exhibit improved ferroelectric properties compared to conventional ferroelectric materials. In some embodiments, the ferroelectric material may exhibit up to a twenty-five percent increase in a magnitude of a remnant polarization and a value of 2Pr, which is equal to a difference between the positive remnant polarization and the negative remnant polarization of the ferroelectric material after removal of an external electric field. The increase in the remnant polarization may correspond to an increased readout signal during use and operation of an associated memory cell. Accordingly, a memory cell including the ferroelectric material may exhibit an improved memory readout signal compared to a memory cell including a conventional ferroelectric material. In some embodiments, a memory cell including the ferroelectric material of the present disclosure may have a useful life that is longer than a useful life of a memory cell including a conventional ferroelectric material (e.g., may be cycled more times prior to exhibiting reduced ferroelectric properties).

As used herein, the term "doped" means and includes a material that includes an impurity that may alter or influence a crystal lattice of the material to modify electric properties (e.g., electric conductivity, ferroelectricity, etc.) of the material. In some instances, the dopant may inhabit lattice sites in the crystal lattice of the material.

FIG. 1 illustrates a capacitor 100 including a ferroelectric material 104.
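The 2Pr figure of merit defined above is simply the difference between the two remnant polarizations read off the hysteresis loop. A one-line helper makes the bookkeeping explicit; the numeric values in the comment are hypothetical, not measurements from this disclosure.

```python
def two_pr(pr_positive, pr_negative):
    """2Pr: positive remnant polarization minus negative remnant polarization,
    both taken at zero applied field after removal of the external field."""
    return pr_positive - pr_negative

# Hypothetical example in uC/cm^2: a symmetric loop with Pr+ = 20 and
# Pr- = -20 gives 2Pr = 40; a 25% increase in remnant polarization
# magnitude scales 2Pr by the same factor.
```

Because a readout signal may correlate with remnant polarization strength, a larger 2Pr implies a larger sensing window for the cell.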
The capacitor 100 may form a part of a memory cell according to embodiments of the disclosure and may include a bottom electrode 102, the ferroelectric material 104 over the bottom electrode 102, and a top electrode 106 over the ferroelectric material 104. The capacitor 100 may be, for example, a metal-insulator-metal (MIM) capacitor. While the capacitor 100 is described and illustrated as being used in ferroelectric memory cells, the disclosure is not so limited and the capacitor 100 may be used in other non-volatile memory cells.

The bottom electrode 102 may include a conductive material. In some embodiments, the bottom electrode 102 includes titanium, titanium nitride (TiN), titanium aluminum nitride (TiAlN), tantalum nitride (TaN), tungsten, tungsten nitride (WN), ruthenium, iridium, platinum, a silicon-containing material (e.g., titanium silicon nitride (TiSiN), tungsten silicide (WSix)), a metal silicide, polysilicon, another conductive material, or combinations thereof. The bottom electrode 102 may be formed by sputtering, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), plasma enhanced chemical vapor deposition (PECVD), low pressure chemical vapor deposition (LPCVD), or other suitable process.

The ferroelectric material 104 may directly overlie and contact the bottom electrode 102. The ferroelectric material 104 may include a material that exhibits a polarization (e.g., a displacement of oppositely charged ions to create a dipole moment) that is switchable by an external electric field during use and operation of a memory cell including the ferroelectric material 104. In other words, the ferroelectric material 104 may be formed of a material formulated to exhibit a switchable polarization responsive to exposure to a switching voltage, typically in an opposite direction as the initially applied electric field.
In addition, the ferroelectric material 104 may be formulated to exhibit a remnant polarization (Pr) that may remain after removal of the external electric field. In other words, the ferroelectric material 104 may be formulated to exhibit a non-zero polarization after an external electric field is removed (e.g., when the ferroelectric material 104 is not exposed to an external electric field). A direction of such polarization may be dependent on the direction and the history of the electric field previously applied to the ferroelectric material 104. Accordingly, the ferroelectric material 104 may exhibit hysteresis. As a result, the polarization of the ferroelectric material 104 may be interpreted as a logic state (e.g., a 1 or a 0) of the associated memory cell.

The ferroelectric material 104 may include a metal oxide doped with bismuth. In some embodiments, the metal oxide may include one or more of hafnium oxide, zirconium oxide, hafnium zirconate (HfZrO4), another hafnium-containing material, another zirconium-containing material, and combinations thereof.

In some embodiments, the ferroelectric material 104 comprises hafnium oxide doped with bismuth. In some such embodiments, the ferroelectric material 104 may include a material including hafnium, bismuth, and oxygen atoms and may be referred to herein as hafnium bismuth oxide. For convenience, the composition of hafnium bismuth oxide may be abbreviated as "HfBiOx," which does not indicate the stoichiometry of the hafnium, bismuth, and oxygen atoms in the ferroelectric material 104. In other embodiments, the ferroelectric material 104 may include zirconium oxide doped with bismuth. In some such embodiments, the ferroelectric material 104 may include zirconium, bismuth, and oxygen atoms and may be referred to herein as zirconium bismuth oxide.
For convenience, the composition of zirconium bismuth oxide may be abbreviated as "ZrBiOx," which does not indicate the stoichiometry of the zirconium, bismuth, or oxygen atoms in the ferroelectric material. In yet other embodiments, the ferroelectric material 104 may include hafnium zirconium bismuth oxide, which may be abbreviated as "HfZrBiOx" and which does not indicate the stoichiometry of hafnium, zirconium, bismuth, and oxygen in the ferroelectric material 104.

The ferroelectric material 104 may include between about 0.1 atomic percent (at. %) and about 10.0 atomic percent bismuth, based on the metal atoms of the metal oxide and the dopant (i.e., based on non-oxygen atoms of the ferroelectric material 104). Stated another way, bismuth may constitute between about 0.1 atomic percent and about 10.0 atomic percent of the metals and the dopant material in the ferroelectric material 104 (e.g., based on the non-oxygen atoms in the ferroelectric material 104). By way of nonlimiting example, bismuth may constitute between about 0.1 atomic percent and about 0.3 atomic percent, between about 0.3 atomic percent and about 0.5 atomic percent, between about 0.5 atomic percent and about 1.0 atomic percent, between about 1.0 atomic percent and about 3.0 atomic percent, between about 3.0 atomic percent and about 5.0 atomic percent, or between about 5.0 atomic percent and about 10.0 atomic percent of the ferroelectric material 104, excluding the oxygen atoms in the ferroelectric material 104.
In some embodiments, bismuth constitutes about 0.3 atomic percent of the ferroelectric material 104, excluding the oxygen atoms. Since, in some embodiments, the ferroelectric material 104 may comprise about two oxygen atoms for every metal atom (for every atom of hafnium, zirconium, bismuth, or other dopant atom), bismuth may constitute between about 0.15 atomic percent and about 5.0 atomic percent of the ferroelectric material 104, including the oxygen atoms.

In some embodiments, the ferroelectric material 104 may include an oxide having the general formula HfxZr(1-x-y)BiyOz, wherein x is between about 0 and about 1.0, y is between about 0.01 and about 0.10, and z is between about 1.0 and about 3.0. In some embodiments, an atomic percent of hafnium may be greater than an atomic percent of zirconium. In some such embodiments, x may be between about 0.50 and about 0.99, such as between about 0.50 and about 0.60, between about 0.60 and about 0.70, between about 0.70 and about 0.80, between about 0.80 and about 0.90, or between about 0.90 and about 0.99, and y may be between about 0 and about 0.49, such as between about 0 and about 0.10, between about 0.10 and about 0.20, between about 0.20 and about 0.30, between about 0.30 and about 0.40, or between about 0.40 and about 0.49. A value of y may be between about 0.01 and about 0.10, such as between about 0.001 and about 0.003, between about 0.003 and about 0.005, between about 0.005 and about 0.01, between about 0.01 and about 0.03, between about 0.03 and about 0.05, or between about 0.05 and about 0.10. In some embodiments, z is equal to about 2.0.

The ferroelectric material 104 may exhibit a uniform concentration of bismuth throughout a thickness thereof (e.g., throughout a vertical direction illustrated in FIG. 1).
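The "atomic percent based on non-oxygen atoms" convention used throughout can be made concrete with a small helper. For the general formula HfxZr(1-x-y)BiyOz the cation fractions sum to 1, so on the non-oxygen basis the bismuth content is simply y expressed as a percentage. The function and the worked numbers below are editorial illustrations consistent with that definition, not values asserted by the disclosure.

```python
def bi_atomic_percent(n_hf, n_zr, n_bi):
    """Atomic percent of bismuth on the non-oxygen basis the disclosure
    uses: bismuth atoms divided by all metal-plus-dopant atoms, times 100.
    Oxygen atoms are deliberately excluded from the basis."""
    return 100.0 * n_bi / (n_hf + n_zr + n_bi)

# Illustrative composition Hf_0.60 Zr_0.397 Bi_0.003 O_z: the cation
# fractions sum to 1.0, so bismuth is 0.3 at. % on the non-oxygen basis,
# matching the y = 0.003 case of the general formula.
```

The same helper applies to any per-formula-unit atom counts, since only the ratio matters.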
In some such embodiments, the ferroelectric material 104 may exhibit substantially the same atomic percent of bismuth proximate the bottom electrode 102 as an atomic percent of bismuth proximate the top electrode 106. Similarly, the ferroelectric material 104 may exhibit substantially the same atomic percent of bismuth at a central portion thereof (e.g., at a location located about a same distance from the top electrode 106 as from the bottom electrode 102) as an atomic percent of bismuth proximate each of the bottom electrode 102 and the top electrode 106.

In other embodiments, the ferroelectric material 104 may exhibit a non-uniform atomic percent of bismuth across a thickness thereof. In some such embodiments, the ferroelectric material 104 may exhibit a gradient of bismuth. Accordingly, different portions of the ferroelectric material 104 may exhibit a different atomic percent of bismuth than other portions thereof. By way of nonlimiting example, some portions of the ferroelectric material 104 may be free of bismuth while other portions thereof may include differing atomic percentages of bismuth. In some embodiments, portions of the ferroelectric material 104 proximate the bottom electrode 102 and the top electrode 106 may include a greater atomic percent of bismuth than portions of the ferroelectric material 104 distal from the bottom electrode 102 and the top electrode 106. In other embodiments, portions of the ferroelectric material 104 proximate the bottom electrode 102 and the top electrode 106 may include a lesser atomic percent of bismuth than portions of the ferroelectric material 104 distal from the bottom electrode 102 and the top electrode 106.

The ferroelectric material 104 may further include at least another dopant in addition to bismuth.
The another dopant may be selected from the group consisting of magnesium, yttrium, strontium, niobium, tantalum, lanthanum, gadolinium, vanadium, phosphorus, potassium, scandium, ruthenium, selenium, calcium, barium, aluminum, arsenic, indium, and silicon. In some embodiments, the at least another dopant includes magnesium. In some such embodiments, the ferroelectric material 104 may include bismuth and magnesium. In embodiments where the ferroelectric material 104 includes hafnium oxide, the ferroelectric material 104 may comprise hafnium bismuth magnesium oxide (HfBiMgOx). In other embodiments, the ferroelectric material 104 may include hafnium zirconium bismuth magnesium oxide (HfZrBiMgOx). In yet other embodiments, the ferroelectric material 104 may include zirconium bismuth magnesium oxide (ZrBiMgOx). In other embodiments, the ferroelectric material 104 may include aluminum hafnium bismuth oxide (AlHfBiOx), aluminum zirconium bismuth oxide (AlZrBiOx), or aluminum hafnium zirconium bismuth oxide (AlHfZrBiOx).

An atomic percent of the another dopant in the ferroelectric material 104 may be between about 0.1 atomic percent and about 25.0 atomic percent, based on the metal atoms of the metal oxide, the bismuth, and the another dopant (i.e., based on non-oxygen atoms of the ferroelectric material 104). In some embodiments, an atomic percent of the another dopant and the bismuth may be between about 0.1 atomic percent and about 0.3 atomic percent, between about 0.3 atomic percent and about 0.5 atomic percent, between about 0.5 atomic percent and about 1.0 atomic percent, between about 1.0 atomic percent and about 3.0 atomic percent, between about 3.0 atomic percent and about 5.0 atomic percent, between about 5.0 atomic percent and about 10.0 atomic percent, or between about 10.0 atomic percent and about 25.0 atomic percent of the ferroelectric material 104, excluding oxygen.
In other embodiments, an atomic percent of each of bismuth and the another dopant may be between about 0.1 atomic percent and about 10.0 atomic percent. In other embodiments, the atomic percent of the another dopant in the ferroelectric material 104 may be between about 0.1 atomic percent and about 10.0 atomic percent, based on the metal atoms of the metal oxide, the bismuth, and the another dopant (i.e., based on non-oxygen atoms of the ferroelectric material 104). In some embodiments, such as where the ferroelectric material 104 is doped with silicon, the ferroelectric material 104 may comprise hafnium silicate (HfSiOx) doped with bismuth.

The ferroelectric material 104 may exhibit a uniform concentration of the another dopant throughout a thickness thereof. In some such embodiments, the ferroelectric material 104 may exhibit substantially the same atomic percent of the another dopant proximate the bottom electrode 102 as an atomic percent of the another dopant proximate the top electrode 106. Similarly, the ferroelectric material 104 may exhibit substantially the same atomic percent of the another dopant at a central portion thereof (e.g., at a location located about a same distance from the top electrode 106 as from the bottom electrode 102).

In other embodiments, the ferroelectric material 104 may exhibit a non-uniform atomic percent of the another dopant across a thickness thereof. In some such embodiments, the ferroelectric material 104 may exhibit a gradient of the another dopant. Accordingly, some portions of the ferroelectric material 104 may exhibit a different atomic percent of the another dopant than other portions thereof. By way of nonlimiting example, different portions of the ferroelectric material 104 may be free of the another dopant while other portions thereof may include a relatively greater atomic percent of the another dopant.
In some embodiments, portions of the ferroelectric material 104 proximate the bottom electrode 102 and the top electrode 106 may exhibit a greater atomic percent of the another dopant than portions of the ferroelectric material 104 distal from the bottom electrode 102 and the top electrode 106. In other embodiments, portions of the ferroelectric material 104 proximate the bottom electrode 102 and the top electrode 106 may exhibit a lesser atomic percent of the another dopant than portions of the ferroelectric material 104 distal from the bottom electrode 102 and the top electrode 106. In some embodiments, portions of the ferroelectric material 104 having a lower atomic percent of bismuth may exhibit a greater atomic percent of the another dopant. Similarly, portions of the ferroelectric material 104 having a greater atomic percent of bismuth may exhibit a lower atomic percent of the another dopant.

The ferroelectric material 104 may have a thickness between about 10 Å and about 200 Å, such as between about 10 Å and about 20 Å, between about 20 Å and about 30 Å, between about 30 Å and about 50 Å, between about 50 Å and about 100 Å, or between about 100 Å and about 200 Å. In some embodiments, the thickness of the ferroelectric material 104 is about 10 Å. In other embodiments, the thickness of the ferroelectric material 104 may be about 100 Å. The ferroelectric material 104 may have a lesser thickness than conventional ferroelectric materials while still exhibiting desired ferroelectric properties and without exhibiting current leakage therethrough. It is believed that the bismuth in the ferroelectric material 104 facilitates sufficient ferroelectric properties in the ferroelectric material 104, even at the lesser thicknesses relative to conventional ferroelectric materials.

Compared to conventional ferroelectric materials, the ferroelectric material 104 may exhibit a lower operation voltage.
Without wishing to be bound by any particular theory, it is believed that the bismuth of the ferroelectric material 104 facilitates desired ferroelectric properties at relatively lower thicknesses compared to conventional ferroelectric materials. The ferroelectric material 104 may be formed by one or more of ALD, CVD, PVD, PECVD, LPCVD, or other suitable process. In some embodiments, the ferroelectric material 104 is formed by sputtering. In some such embodiments, different components of the ferroelectric material 104 may be sputtered simultaneously. In some such embodiments, a deposition chamber (e.g., a sputtering chamber) may include a plurality of targets. The targets may include bismuth oxide (Bi2O3) and at least one metal oxide target. In some embodiments, the at least one metal oxide target may include at least one of hafnium oxide and zirconium oxide. In some embodiments, the deposition chamber may include at least a bismuth oxide target, at least a hafnium oxide target, and at least a zirconium oxide target. In some embodiments, the bismuth oxide and the at least one metal oxide target may be sputtered simultaneously to form the ferroelectric material 104 exhibiting a uniform atomic percent of bismuth therethrough and comprising at least one of hafnium oxide and zirconium oxide. One or more parameters (e.g., a power density applied to each target) may be adjusted to control a composition of the ferroelectric material formed by co-sputtering. In some embodiments, a power density applied to the at least one metal oxide target may be greater than a power density applied to the bismuth oxide target. In some embodiments, the deposition chamber may further include at least one target comprising a dopant material other than bismuth.
By way of nonlimiting example, the deposition chamber may include at least one target comprising the at least another dopant and configured to sputter at least one of magnesium, yttrium, strontium, niobium, tantalum, lanthanum, gadolinium, vanadium, phosphorus, potassium, scandium, ruthenium, selenium, calcium, barium, aluminum, arsenic, indium, and silicon. In some embodiments, the target of the at least another dopant may comprise an oxide of the at least another dopant. In other embodiments, the ferroelectric material 104 may be formed by atomic layer deposition or chemical vapor deposition. In some such embodiments, an atomic percent of the bismuth may not be uniform across a thickness of the ferroelectric material 104. Similarly, an atomic percent of one or more metal oxides may not be uniform across a thickness of the ferroelectric material 104. Atomic layer deposition and chemical vapor deposition techniques are known in the art and are, therefore, not described in detail herein. By way of nonlimiting example, a first portion of the ferroelectric material 104 may be formed by introducing at least one metal precursor into a deposition chamber. An oxidizer (e.g., oxygen, ozone, water, hydrogen peroxide, etc.) may be introduced into the deposition chamber to oxidize the at least one metal precursor and form a metal oxide on an exposed surface of a substrate or an electrode (e.g., the bottom electrode 102). One or more cycles may be performed to form a desired thickness of the at least one metal oxide. In some embodiments, one or more cycles of bismuth oxide deposition may be performed, such as by cycling a bismuth precursor followed by an oxygen source to form bismuth oxide on the surface of the material.
Accordingly, in some embodiments, the ferroelectric material 104 may include a first portion (e.g., a first layer) comprising or consisting essentially of the at least one metal oxide, a second portion (e.g., a second layer) comprising or consisting essentially of bismuth oxide over the first portion, and a third portion (e.g., a third layer) comprising or consisting essentially of the at least one metal oxide over the bismuth oxide. The ferroelectric material 104 may include a plurality of distinct portions of the bismuth oxide and a plurality of portions of the at least one metal oxide. In some embodiments, the ferroelectric material 104 may include one or more distinct portions of another dopant, as described above. The ferroelectric material 104 may exhibit a crystal phase such that the ferroelectric material 104 exhibits ferroelectric properties. In some embodiments, a crystal phase of the ferroelectric material 104 may be orthorhombic. The ferroelectric material 104 may exhibit a spontaneous electric polarization that may be reversed responsive to exposure to an external electric field and may exhibit a nonzero remnant polarization (Pr) after removal of the external electric field. In other words, the ferroelectric material 104 may exhibit a hysteresis. Surprisingly, forming the ferroelectric material 104 with bismuth improves ferroelectric properties of the ferroelectric material 104, even though bismuth is not a rare earth element, as are many other materials that may induce ferroelectric properties in metal oxide materials. Without wishing to be bound by any particular theory, it is believed that the composition of the ferroelectric material 104, and the method of formation thereof, facilitates formation of an orthorhombic crystal phase and ferroelectric properties. After forming the ferroelectric material 104, the ferroelectric material may be annealed to induce a desired crystal phase in the ferroelectric material 104.
In some embodiments, the ferroelectric material 104 may be annealed to form an orthorhombic crystal phase. In some embodiments, the ferroelectric material 104 is exposed to a temperature between about 400°C and about 800°C, such as between about 400°C and about 600°C, or between about 600°C and about 800°C, for a time between about 10 seconds and about 1 hour, such as between about 10 seconds and about 30 seconds, between about 30 seconds and about 1 minute, between about 1 minute and about 10 minutes, between about 10 minutes and about 30 minutes, or between about 30 minutes and about 1 hour. In some embodiments, the ferroelectric material 104 is exposed to a temperature of about 600°C for about 30 seconds. However, the disclosure is not so limited and the ferroelectric material 104 may be annealed at a different temperature or for a different length of time. In other embodiments, the ferroelectric material 104 may exhibit a desired crystalline phase (e.g., the orthorhombic phase) during formation thereof. By way of nonlimiting example, the ferroelectric material 104 may be formed in an orthorhombic phase when the ferroelectric material 104 is formed by ALD or CVD. The top electrode 106 may directly overlie and contact the ferroelectric material 104. The top electrode 106 may include a conductive material. In some embodiments, the top electrode 106 includes titanium, titanium nitride, titanium aluminum nitride, tantalum nitride, tungsten, tungsten nitride, ruthenium, iridium, platinum, a silicon-containing material (e.g., titanium silicon nitride, tungsten silicide), a metal silicide, polysilicon, another conductive material, or combinations thereof.
The top electrode 106 may be formed by sputtering, atomic layer deposition, chemical vapor deposition, physical vapor deposition, plasma enhanced chemical vapor deposition, low pressure chemical vapor deposition, or other suitable process. In some embodiments, the top electrode 106 may comprise the same material as the bottom electrode 102. In other embodiments, the top electrode 106 includes a material that is different than the bottom electrode 102. Accordingly, in one embodiment, a semiconductor structure comprises an electrode, another electrode, and a ferroelectric material comprising an oxide of at least one of hafnium and zirconium between the electrode and the another electrode, the ferroelectric material further comprising bismuth. Accordingly, in one embodiment, a method of forming a semiconductor structure comprises forming an electrode, forming a ferroelectric material comprising bismuth and at least one of hafnium oxide and zirconium oxide over the electrode, and forming another electrode over the ferroelectric material. Referring to FIG. 2, a memory cell 200 including the capacitor 100 is shown. The memory cell 200 includes a substrate 210 and a source region 214 and a drain region 212 formed within the substrate 210. The substrate 210 may be a semiconductor substrate, a base semiconductor material on a supporting substrate, a metal electrode, or a semiconductor substrate having one or more materials, structures, or regions formed thereon. The substrate 210 may be a conventional silicon substrate or other bulk substrate including semiconductor material.
As used herein, the term "bulk substrate" means and includes not only silicon wafers, but also silicon-on-insulator ("SOI") substrates, such as silicon-on-sapphire ("SOS") substrates or silicon-on-glass ("SOG") substrates, epitaxial layers of silicon on a base semiconductor foundation, or other semiconductor or optoelectronic materials, such as silicon-germanium (Si1-xGex, where x is, for example, a mole fraction between 0.2 and 0.8), germanium (Ge), gallium arsenide (GaAs), gallium nitride (GaN), or indium phosphide (InP), among others. Furthermore, when reference is made to a "substrate" in the following description, previous process stages may have been utilized to form material, regions, or junctions in the base semiconductor structure or foundation. The memory cell 200 may include an access transistor including a gate oxide material 216 and a gate electrode 218. The capacitor 100 may be connected to the drain region 212 of the transistor via a conductive contact (e.g., a conductive plug) 220. The conductive contact 220 may overlie the drain region 212 and may directly contact the bottom electrode 102 of the capacitor 100. The conductive contact 220 may include a conductive material, such as, for example, tungsten, titanium, aluminum, copper, polysilicon, or other suitable conductive material. The gate oxide material 216 may include a suitable dielectric material. In some embodiments, the gate oxide material 216 includes silicon dioxide, or a high-k dielectric material such as zirconium oxide, hafnium oxide, aluminum oxide (Al2O3), yttrium oxide (Y2O3), or other high-k dielectrics known in the art.
The source region 214 and the drain region 212 may be located on opposing sides of the gate oxide material 216. The gate electrode 218 may include a conductive material, such as, for example, titanium, tantalum, tungsten, ruthenium, nitrides thereof, polysilicon, or other suitable conductive gate electrode material. Sidewall spacers 222 may be disposed on a side of the gate oxide material 216 and the gate electrode 218. The sidewall spacers 222 may comprise a dielectric material, such as silicon dioxide or silicon nitride. An access line 224 (e.g., a digit line, a bit line, etc.) may be coupled to the source region 214 and configured to apply a voltage to the source region 214. The access line 224 may include a conductive material such as, for example, tungsten, titanium, tantalum, palladium, platinum, silicides thereof, polysilicon, or other suitable conductive material. Although the memory cell 200 has been described as comprising the capacitor 100 including the ferroelectric material 104, the disclosure is not so limited. In other embodiments, the memory cell 200 may comprise a ferroelectric field effect transistor (FeFET). In some such embodiments, the gate oxide material 216 may comprise the ferroelectric material 104. Stated another way, the gate oxide material 216 may include a ferroelectric material comprising at least one of hafnium oxide and zirconium oxide doped with bismuth, as described above with reference to the ferroelectric material 104. In some embodiments, the ferroelectric material 104 may overlie and directly contact the substrate 210. In some embodiments, the memory cell 200 may include the ferroelectric material 104 both as the gate oxide material 216 and in the capacitor 100. In other embodiments, the ferroelectric material 104 (FIG. 1) may be incorporated in a ferroelectric tunnel junction (FTJ) or another ferroelectric device.
In some such embodiments, the ferroelectric material 104 may be disposed between two metal electrodes comprising, for example, tungsten, titanium, copper, platinum, silver, gold, polysilicon, other electrode materials, and combinations thereof. The ferroelectric material 104 may have a thickness between about 5 Å and about 50 Å, such as between about 5 Å and about 10 Å, between about 10 Å and about 20 Å, between about 20 Å and about 30 Å, or between about 30 Å and about 50 Å. Accordingly, in one embodiment, a memory cell comprises a capacitor overlying a conductive material in contact with a source region or a drain region. The capacitor comprises a first electrode over a substrate, a ferroelectric material comprising hafnium oxide, zirconium oxide, or a combination thereof and bismuth over the first electrode, and a second electrode over the ferroelectric material. With reference to FIG. 3, depicted is a processor-based system 300. The processor-based system 300 may include various electronic devices manufactured in accordance with embodiments of the present disclosure. The processor-based system 300 may be any of a variety of types such as a computer, camera, pager, cellular phone, wireless device, display, chip set, set-top box, personal organizer, control circuit, or other electronic device. The processor-based system 300 may include one or more processors 302, such as a microprocessor, to control the processing of system functions and requests in the processor-based system 300. The processor 302 and other subcomponents of the processor-based system 300 may include or be coupled to memory cells, memory arrays, and semiconductor devices including the ferroelectric material comprising at least one of hafnium oxide and zirconium oxide doped with bismuth in accordance with embodiments of the present disclosure. The processor-based system 300 may include a power supply 304 in operable communication with the processor 302.
For example, if the processor-based system 300 is a portable system, the power supply 304 may include one or more of a fuel cell, a power scavenging device, permanent batteries, replaceable batteries, and rechargeable batteries. The power supply 304 may also include an AC adapter; therefore, the processor-based system 300 may be plugged into a wall outlet, for example. The power supply 304 may also include a DC adapter such that the processor-based system 300 may be plugged into a vehicle cigarette lighter receptacle or a vehicle power port, for example. Various other devices may be coupled to the processor 302 depending on the functions that the processor-based system 300 performs. For example, a user interface 306 may be coupled to the processor 302. The user interface 306 may include input devices such as buttons, switches, a keyboard, a light pen, a mouse, a digitizer and stylus, a touch screen, a voice recognition system, a microphone, or a combination thereof. A display 308 may also be coupled to the processor 302. The display 308 may include a liquid crystal display (LCD), a surface-conduction electron-emitter display (SED), a cathode ray tube (CRT) display, a digital light processing (DLP) display, a plasma display, an organic light-emitting diode (OLED) display, a light emitting diode (LED) display, a three-dimensional projection, an audio display, or a combination thereof. Furthermore, an RF sub-system/baseband processor 310 may also be coupled to the processor 302. The RF sub-system/baseband processor 310 may include an antenna that is coupled to an RF receiver and to an RF transmitter (not shown). A communication port 312, or more than one communication port 312, may also be coupled to the processor 302.
The communication port 312 may be adapted to be coupled to one or more peripheral devices 314, such as a modem, a printer, a computer, a scanner, or a camera, or to a network, such as a local area network, remote area network, intranet, or the Internet, for example. The processor 302 may control the processor-based system 300 by implementing software programs stored in the memory. The software programs may include an operating system, database software, drafting software, word processing software, media editing software, or media playing software, for example. The memory is operably coupled to the processor 302 to store and facilitate execution of various programs. For example, the processor 302 may be coupled to system memory 316, which may include one or more types of volatile memory, such as dynamic random access memory (DRAM). The system memory 316 may further include other types of volatile memory, non-volatile memory, or a combination thereof. In some embodiments, the system memory 316 may include semiconductor devices, such as the semiconductor devices including memory cells and memory arrays including the ferroelectric materials described above. The processor 302 may also be coupled to non-volatile memory 318. The non-volatile memory 318 may include one or more of STT-MRAM, MRAM, read-only memory (ROM) such as an EPROM, resistive read-only memory (RROM), and Flash memory to be used in conjunction with the system memory 316. The size of the non-volatile memory 318 is typically selected to be just large enough to store any necessary operating system, application programs, and fixed data. Additionally, the non-volatile memory 318 may include a high capacity memory such as disk drive memory, such as a hybrid-drive including resistive memory or other types of non-volatile solid-state memory, for example.

FIG.
4 is a system 400, which may also be characterized as a semiconductor device or incorporated in a semiconductor device including FeRAM cells having a capacitor, according to embodiments of the disclosure. The system 400 may include peripheral devices 412 in operable communication with a FeRAM cell 414, a grouping of which may be fabricated to form an array of memory cells in a grid pattern including a number of rows and a number of columns, or in various other arrangements, depending on the system requirements and fabrication technology. The FeRAM cell 414 may include a cell core including the capacitor 100, an access transistor 403, a conductive material that may function as a data/sense line 404 (e.g., a bit line), a conductive material that may function as an access line 405 (e.g., a word line), and a conductive material that may function as a source line 406. The peripheral devices 412 of the system 400 may include read/write circuitry 407, a bit line reference 408, and a sense amplifier 409. The capacitor 100 may be substantially the same as the capacitor described above with reference to FIG. 1. In use and operation, when a FeRAM cell 414 is selected to be programmed, a programming voltage may be applied to the FeRAM cell 414 to change a polarization state of the ferroelectric material of the capacitor 100. When the programming voltage is removed, the ferroelectric material may exhibit a polarization, as described above with reference to the ferroelectric material 104 in FIG. 1. In a read operation of the FeRAM cell 414, a voltage is used to detect a state of the ferroelectric material 104. To initiate programming of the FeRAM cell 414, the read/write circuitry 407 may generate a programming voltage to the data/sense line 404 and the source line 406. The polarity of the voltage between the data/sense line 404 and the source line 406 may determine the polarization direction of the ferroelectric material in the capacitor 100.
The programmed logic state of the FeRAM cell 414 may be a function of the direction of polarization of the ferroelectric material of the capacitor 100. To read the FeRAM cell 414, the read/write circuitry 407 may generate a read voltage to the data/sense line 404 and the source line 406 through the capacitor 100 and the access transistor 403. The programmed state of the FeRAM cell 414 may be related to a direction of the polarization of the ferroelectric material in the capacitor 100. Accordingly, in one embodiment, a semiconductor device comprises an array of memory cells, each memory cell of the array of memory cells comprising a capacitor coupled to a conductive material in contact with a source region or a drain region. The capacitor comprises a first electrode and a second electrode, and a ferroelectric material comprising hafnium oxide, zirconium oxide, or a combination thereof and bismuth between the first electrode and the second electrode. Accordingly, in other embodiments, an electronic system comprises a processor, a memory array operably coupled to the processor, the memory array comprising memory cells, each memory cell of the array of memory cells comprising a capacitor operably coupled to a conductive material in contact with a source region or a drain region. The capacitor comprises a first electrode, a ferroelectric material comprising hafnium oxide, zirconium oxide, or a combination thereof and bismuth adjacent the first electrode, and a second electrode adjacent the ferroelectric material on an opposite side thereof from the first electrode. The electronic system further comprises a power supply in operable communication with the processor.

EXAMPLE

FIG. 5A is a graph comparing a hysteresis curve of a bismuth-doped ferroelectric material, according to embodiments of the disclosure, to a hysteresis curve of a conventional undoped ferroelectric material.
The ferroelectric material was substantially the same as the ferroelectric material 104 described above with reference to FIG. 1. The ferroelectric material included hafnium oxide doped with bismuth and comprised hafnium bismuth oxide. The conventional ferroelectric material included undoped hafnium oxide having a crystalline phase such that it exhibited ferroelectric properties. The ferroelectric materials were disposed between a pair of electrodes (e.g., a top electrode and a bottom electrode). The ferroelectric material including the bismuth exhibited a hysteresis curve 502 having a greater remnant polarization (Pr) compared to a hysteresis curve 504 of the conventional ferroelectric material. The ferroelectric material including bismuth exhibited both a positive remnant polarization and a negative remnant polarization having a greater magnitude than a respective positive remnant polarization and a negative remnant polarization of the conventional ferroelectric material.

FIG. 5B is a graph of signal strength vs. cycle number of a memory cell including the ferroelectric material including bismuth, according to embodiments of the disclosure, compared to a signal strength vs. cycle number of a conventional memory cell including the conventional ferroelectric material comprising hafnium dioxide. FIG. 5B illustrates that the memory cell including the ferroelectric material including bismuth exhibited an about twenty-five percent (25%) greater 2Pr value than that of the conventional ferroelectric material.

FIG. 5C is a graph illustrating a crystal phase of the ferroelectric material including the bismuth compared to the conventional ferroelectric material. The ferroelectric material including the bismuth exhibited a greater peak at an angle of about 30.5° compared to the conventional ferroelectric material, indicating a more crystalline film with more grains oriented in the orthorhombic phase.
Accordingly, the ferroelectric material comprising hafnium bismuth oxide exhibited a greater crystallinity than the conventional ferroelectric material.

Additional nonlimiting example embodiments of the disclosure are set forth below.

Embodiment 1: A semiconductor structure, comprising: an electrode; another electrode; and a ferroelectric material comprising an oxide of at least one of hafnium and zirconium, between the electrode and the another electrode, the ferroelectric material further comprising bismuth.

Embodiment 2: The semiconductor structure of Embodiment 1, wherein the ferroelectric material comprises hafnium bismuth oxide.

Embodiment 3: The semiconductor structure of Embodiment 1 or Embodiment 2, wherein the ferroelectric material comprises bismuth at between about 0.1 atomic percent and about 10.0 atomic percent of the ferroelectric material based on non-oxygen atoms of the ferroelectric material.

Embodiment 4: The semiconductor structure of any one of Embodiments 1 through 3, wherein the ferroelectric material comprises bismuth at between about 0.3 atomic percent and about 1.0 atomic percent of the ferroelectric material based on non-oxygen atoms of the ferroelectric material.

Embodiment 5: The semiconductor structure of any one of Embodiments 1 through 4, wherein the ferroelectric material further comprises at least one of magnesium, yttrium, strontium, niobium, tantalum, lanthanum, gadolinium, vanadium, phosphorus, potassium, scandium, ruthenium, selenium, calcium, barium, aluminum, arsenic, indium, and silicon.

Embodiment 6: The semiconductor structure of any one of Embodiments 1 through 5, wherein the ferroelectric material further comprises magnesium.

Embodiment 7: The semiconductor structure of any one of Embodiments 1 through 6, wherein the ferroelectric material comprises between about 0.3 part and about 10.0 parts of bismuth and magnesium for every about 100 parts of hafnium and zirconium.

Embodiment 8: The semiconductor structure of any one of
Embodiments 1 through 7, wherein the ferroelectric material comprises a uniform concentration of bismuth throughout a thickness thereof.

Embodiment 9: The semiconductor structure of any one of Embodiments 1 through 8, wherein the ferroelectric material has an orthorhombic crystal structure.

Embodiment 10: The semiconductor structure of any one of Embodiments 1 through 9, wherein the oxide of at least one of hafnium and zirconium comprises hafnium zirconate (HfZrOx), the hafnium zirconate doped with bismuth.

Embodiment 11: The semiconductor structure of any one of Embodiments 1 through 10, wherein the ferroelectric material comprises zirconium bismuth oxide.

Embodiment 12: The semiconductor structure of any one of Embodiments 1 through 7 or Embodiments 9 through 11, wherein the ferroelectric material comprises a different atomic percent of bismuth proximate the electrode than at a location distal from the electrode.

Embodiment 13: The semiconductor structure of any one of Embodiments 1 through 12, wherein the ferroelectric material has a thickness between about 10 Å and about 200 Å.

Embodiment 14: The semiconductor structure of any one of Embodiments 1 through 13, wherein the ferroelectric material comprises hafnium zirconium bismuth oxide.

Embodiment 15: A semiconductor device, comprising: an array of memory cells, each memory cell of the array of memory cells comprising a capacitor coupled to a conductive material in contact with a source region or a drain region, the capacitor comprising: a first electrode and a second electrode; and a ferroelectric material comprising hafnium oxide, zirconium oxide, or a combination thereof and bismuth between the first electrode and the second electrode.

Embodiment 16: The semiconductor device of Embodiment 15, wherein the first electrode overlies a substrate and further comprising a ferroelectric material comprising hafnium oxide, zirconium oxide, or a combination thereof over the substrate, the ferroelectric material doped with
bismuth over the substrate.

Embodiment 17: The semiconductor device of Embodiment 15 or Embodiment 16, further comprising a gate electrode overlying the ferroelectric material over the substrate.

Embodiment 18: The semiconductor device of any one of Embodiments 15 through 17, wherein the ferroelectric material comprises bismuth at between about 0.1 atomic percent and about 0.3 atomic percent of the ferroelectric material based on non-oxygen atoms of the ferroelectric material.

Embodiment 19: The semiconductor device of any one of Embodiments 15 through 18, wherein the ferroelectric material further comprises at least one of aluminum and magnesium.

Embodiment 20: A method of forming a semiconductor structure, the method comprising: forming an electrode; forming a ferroelectric material comprising bismuth and at least one of hafnium oxide and zirconium oxide over the electrode; and forming another electrode over the ferroelectric material.

Embodiment 21: The method of Embodiment 20, wherein forming a ferroelectric material comprises forming hafnium bismuth oxide.

Embodiment 22: The method of Embodiment 20 or Embodiment 21, wherein forming a ferroelectric material comprises forming the ferroelectric material to comprise between about 0.1 atomic percent and about 10.0 atomic percent bismuth based on non-oxygen atoms of the ferroelectric material.

Embodiment 23: The method of any one of Embodiments 20 through 22, wherein forming a ferroelectric material comprises forming the ferroelectric material to exhibit an orthorhombic crystal structure.

Embodiment 24: An electronic system, comprising: a processor; a memory array operably coupled to the processor, the memory array comprising memory cells, each memory cell of the array of memory cells comprising a capacitor operably coupled to a conductive material in contact with a source region or a drain region, the capacitor comprising: a first electrode; a ferroelectric material comprising hafnium oxide, zirconium oxide, or a combination
thereof and bismuth adjacent the first electrode; and a second electrode adjacent the ferroelectric material on an opposite side thereof from the first electrode; and a power supply in operable communication with the processor.

While certain illustrative embodiments have been described in connection with the figures, those of ordinary skill in the art will recognize and appreciate that embodiments encompassed by the disclosure are not limited to those embodiments explicitly shown and described herein. Rather, many additions, deletions, and modifications to the embodiments described herein may be made without departing from the scope of embodiments encompassed by the disclosure, such as those hereinafter claimed, including legal equivalents. In addition, features from one disclosed embodiment may be combined with features of another disclosed embodiment while still being encompassed within the scope of the disclosure.
Some embodiments include apparatus and methods using a first ring oscillator, a second ring oscillator, and a circuit coupled to the first and second ring oscillators. The first ring oscillator includes a first memory cell and a first plurality of stages coupled to the first memory cell. The second ring oscillator includes a second memory cell and a second plurality of stages coupled to the second memory cell. The circuit includes a first input node coupled to an output node of the first ring oscillator and a second input node coupled to an output node of the second ring oscillator. In one such embodiment, the circuit can operate to generate identification information to authenticate the apparatus.
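The abstract above describes a circuit that derives identification information from two ring-oscillator outputs. As an illustrative software sketch only (the gate window, frequencies, and ideal noise-free counter model below are hypothetical assumptions, not taken from the disclosure), comparing cycle counts from paired oscillators yields one identification bit per pair:

```python
def puf_bit(count_a: int, count_b: int) -> int:
    """One identification bit from two ring-oscillator cycle counts.

    Counting cycles of each oscillator over the same gate window and
    comparing the counts yields a bit determined by which oscillator
    (e.g., as skewed by its memory-cell element) runs faster.
    """
    return 1 if count_a > count_b else 0

def puf_word(freq_pairs, window_s=1e-6):
    """Build an ID word by gating each oscillator pair for window_s seconds."""
    bits = []
    for fa, fb in freq_pairs:
        # counter value = frequency * gate window (idealized, noise-free)
        bits.append(puf_bit(round(fa * window_s), round(fb * window_s)))
    return bits

# Hypothetical per-device frequencies (Hz) for four oscillator pairs.
pairs = [(101e6, 99e6), (98e6, 100e6), (103e6, 97e6), (96e6, 104e6)]
print(puf_word(pairs))  # -> [1, 0, 1, 0]
```

Because the frequency ordering within each pair depends on device-specific variation, the resulting bit pattern can serve as identification information; a hardware realization would gate real counters through a selector, as the claims below recite.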
CLAIMS

What is claimed is:

1. An electronic apparatus comprising:
a first ring oscillator including a first memory cell and a first plurality of stages coupled to the first memory cell;
a second ring oscillator including a second memory cell and a second plurality of stages coupled to the second memory cell; and
a circuit including a first input node coupled to an output node of the first ring oscillator and a second input node coupled to an output node of the second ring oscillator.

2. The apparatus of claim 1, wherein the circuit is to receive a first signal from the first ring oscillator and a second signal from the second ring oscillator and to generate information based on frequencies of the first and second signals.

3. The apparatus of claim 1, wherein:
the first memory cell includes a first terminal and a second terminal, the first terminal coupled to an output node of a first stage of the first plurality of stages, the second terminal coupled to an input node of a second stage of the first plurality of stages; and
the second memory cell includes a first terminal and a second terminal, the first terminal of the second memory cell coupled to an output node of a first stage of the second plurality of stages, the second terminal of the second memory cell coupled to an input node of a second stage of the second plurality of stages.

4. The apparatus of claim 3, wherein the first memory cell includes a first memory element coupled to the first and second terminals of the first memory cell, and the second memory cell includes a second memory element coupled to the first and second terminals of the second memory cell.

5.
The apparatus of claim 4, wherein the first memory cell includes a first additional memory element coupled in series with the first memory element between the first and second terminals of the first memory cell, and the second memory cell includes a second additional memory element coupled in series with the second memory element between the first and second terminals of the second memory cell.6. The apparatus of any of claims 1-5, wherein the circuit includes a selector, the selector including a first input node coupled to the output node of the first ring oscillator and a second input node coupled to the output node of the second ring oscillator.7. The apparatus of claim 6, wherein the circuit includes a counter coupled to an output node of the selector.8. The apparatus of claim 1 or 2, wherein the first memory cell includes a memory element coupled to a terminal of the first memory cell, and a transistor including a source and a drain coupled to respective terminals of the memory element.9. The apparatus of claim 1 or 2, wherein the first memory cell includes a memory element coupled to a terminal of the first memory cell, and a capacitor and a transistor coupled in series with the memory element.10. The apparatus of claim 1 or 2, wherein:a stage among the first plurality of stages includes a logic gate, the logic gate including a first input node coupled to an input node of the first ring oscillator, and a second input node coupled to an enable node; anda stage among the second plurality of stages includes a logic gate, the logic gate including a first input node coupled to an input node of the second ring oscillator, and a second input node coupled to the enable node.11.
An electronic apparatus comprising:a first ring oscillator including a first inverter, a second inverter, and a first resistive memory element, the first resistive memory element coupled between an output node of the first inverter and an input node of the second inverter;a second ring oscillator including a third inverter, a fourth inverter, and a second resistive memory element, the second resistive memory element coupled between an output node of the third inverter and an input node of the fourth inverter; anda circuit including a multiplexor coupled to the first and second ring oscillators.12. The apparatus of claim 11, wherein the apparatus comprises a device, and the circuit is to generate identification information to authenticate the device.13. The apparatus of claim 11 or 12, wherein the first ring oscillator includes an additional first resistive memory element coupled in series with the first resistive memory element between the output node of the first inverter and the input node of the second inverter, and the second ring oscillator includes an additional second resistive memory element coupled in series with the second resistive memory element between the output node of the third inverter and the input node of the fourth inverter.14. The apparatus of claim 11 or 12, wherein each of the first and second resistive memory elements includes a dielectric portion and a conductive path in the dielectric portion.15. The apparatus of claim 11 or 12, wherein each of the first and second resistive memory elements includes electrodes and a dielectric portion between the electrodes, and the electrodes and the dielectric portion are arranged among each other in a direction perpendicular to a semiconductor substrate.16. The apparatus of claim 11 or 12, wherein the circuit includes a counter coupled to the multiplexor.17. The apparatus of claim 16, wherein the circuit includes a comparator coupled to the counter.18.
An electronic apparatus comprising:a dynamic random access memory (DRAM) device; anda processor coupled to the DRAM device, the processor including:a first ring oscillator including a first memory cell and a first plurality of stages coupled to the first memory cell;a second ring oscillator including a second memory cell and a second plurality of stages coupled to the second memory cell; anda circuit including a first input node coupled to an output node of the first ring oscillator and a second input node coupled to an output node of the second ring oscillator.19. The apparatus of claim 18, further comprising a semiconductor substrate, wherein the DRAM is located at a first location of the semiconductor substrate, and the processor is located at a second location of the semiconductor substrate.20. The apparatus of claim 18 or 19, further comprising a connector coupled to the processor, the connector conforming with one of Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Thunderbolt, and Peripheral Component Interconnect Express (PCIe).21. A method of operating an electronic apparatus, the method comprising: generating counts having count values based on frequencies of signals from ring oscillators, each of the ring oscillators including inverter stages and at least one memory cell coupled to the inverter stages; andgenerating information based on the count values.22. The method of claim 21, wherein generating the information includes: comparing a first count value included in the count values with a second count value included in the count values; andgenerating part of the information, the part of the information having a value based on whether the first count value is greater than the second count value.23.
The method of claim 22, wherein generating the part of the information includes generating a bit, the bit having a first value if the first count value is greater than the second count value and a second value if the first count value is not greater than the second count value.24. The method of claim 21, wherein generating the information includes: comparing a first count value included in the count values with a second count value included in the count values, andgenerating part of the information, the part of the information having a value based on a difference between the first count value and the second count value.25. The method of claim 21, wherein generating the counts includes:generating a first count of the counts based on a frequency of a first signal among the signals, the first count having a first count value;generating a second count of the counts based on a frequency of a second signal among the signals, the second count having a second count value; and generating part of the information based on the first and second count values, wherein the first signal is generated by a first ring oscillator of the ring oscillators, the second signal is generated by a second ring oscillator of the ring oscillators, and the first and second ring oscillators are located immediately next to each other. |
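The method recited in claims 21-23 (generating counts from ring-oscillator frequencies and deriving information bits by comparing pairs of count values) can be sketched as a behavioral model; the frequencies and counting window below are invented for illustration and are not part of the claims:

```python
# Behavioral sketch of claims 21-23: counts are derived from ring-oscillator
# frequencies, and each ID bit is produced by comparing a pair of count
# values. Integer arithmetic keeps the period counts exact.

def count_periods(freq_hz, window_us):
    """Count the number of signal periods observed during a time window."""
    return freq_hz * window_us // 1_000_000

def id_bits_from_counts(counts):
    """Per claim 23: emit 1 if the first count of a pair is greater, else 0."""
    return [1 if a > b else 0 for a, b in zip(counts[0::2], counts[1::2])]

# Example: four oscillators with slightly different frequencies (process
# variation), each counted over a hypothetical 1000 us window.
freqs_hz = [100_300_000, 99_800_000, 101_100_000, 101_400_000]
counts = [count_periods(f, 1000) for f in freqs_hz]
print(id_bits_from_counts(counts))  # pairs (OSC1, OSC2) and (OSC3, OSC4): [1, 0]
```

Claim 24's variant would instead derive the bit (or a multi-bit value) from the difference `a - b` rather than only its sign.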
PHYSICALLY UNCLONABLE FUNCTION CIRCUIT INCLUDING MEMORY ELEMENTS CLAIM OF PRIORITY[0001] This patent application claims the benefit of priority to U.S. Application Serial No. 15/198,124, filed June 30, 2016, which is incorporated by reference herein in its entirety. TECHNICAL FIELD[0002] Embodiments described herein pertain to generation of unique identification for electronic items. Some embodiments relate to circuitry embedded in integrated circuit devices for authentication purposes. BACKGROUND[0003] Many integrated circuit (IC) manufacturers have techniques to authenticate their ICs. For example, some manufacturers may build special circuitry in their ICs for authentication purposes. In some situations, such circuitry may be replicated (e.g., cloned) by reverse engineering. However, variations in fabrication processes usually cause the structure of the replicated circuitry to be slightly different from the original circuitry. Therefore, the function of the replicated circuitry would not be the same as the function of the original circuitry. Thus, most circuitry used for authentication purposes has an inherent physically unclonable function (PUF). Based on this PUF feature, using PUF circuits for product authentication is favorable for many manufacturers. However, some of these conventional PUF circuits may have constraints that make them unsuitable to be built in some products (e.g., ICs). Such constraints may include large circuit area, excessive fabrication process overhead, high power consumption, and high cost. BRIEF DESCRIPTION OF THE DRAWINGS[0004] FIG. 1 shows an apparatus in the form of a device including functional units, and an authentication unit, according to some embodiments described herein.[0005] FIG. 2A shows a block diagram of the authentication unit of FIG. 1 including ring oscillators having memory cells, and an ID generator circuit coupled to the ring oscillators, according to some embodiments described herein.[0006] FIG.
2B shows a structure of a memory cell, which can be included as each of the memory cells of the ring oscillators of the authentication unit of FIG. 2A, according to some embodiments described herein.[0007] FIG. 2C shows resistance value ranges for different states that can be stored in the memory cell of FIG. 2B, according to some embodiments described herein.[0008] FIG. 2D shows a structure of part of the device of FIG. 1, including a substrate and the authentication unit formed over the substrate, according to some embodiments described herein.[0009] FIG. 3 shows a block diagram of the authentication unit of FIG. 2A including an ID generator circuit having a selector, a counter, and output circuitry, according to some embodiments described herein.[0010] FIG. 4 shows a block diagram of the authentication unit of FIG. 2A, including an ID generator circuit having a selector, multiple counters, and output circuitry having a comparator, according to some embodiments described herein.[0011] FIG. 5 shows a variation of the authentication unit of FIG. 4 where the ID generator circuit includes a calculator (e.g., logic calculating circuit), according to some embodiments described herein.[0012] FIG. 6 shows a block diagram of an authentication unit including ring oscillators and multiple memory cells in each of the ring oscillators, according to some embodiments described herein.[0013] FIG. 7 shows a block diagram of an authentication unit including ring oscillators having an enable input node in each of the ring oscillators of the authentication unit, according to some embodiments described herein. [0014] FIG. 8 shows a block diagram of a ring oscillator including stages and a memory cell having multiple memory elements, according to some embodiments described herein.[0015] FIG. 9 shows a block diagram of a ring oscillator including stages and a memory cell having memory elements and associated transistors, according to some embodiments described herein.[0016] FIG. 
10 shows a block diagram of a ring oscillator including stages and a memory cell having memory elements and associated capacitors and transistors, according to some embodiments described herein.[0017] FIG. 11 shows an apparatus in the form of a system (e.g., electronic system), according to some embodiments described herein.[0018] FIG. 12 is a flowchart showing a method of operating an electronic apparatus, according to some embodiments described herein. DETAILED DESCRIPTION[0019] The techniques described herein include an authentication unit embedded in a device (e.g., an IC), a system on a chip (SoC), or other electronic items. The described authentication unit includes ring oscillators and an identification (ID) generator circuit. Each of the ring oscillators includes stages (e.g., inverter stages) and at least one memory cell coupled to the stages. The ring oscillators can have the same structure. Each of the ring oscillators can generate a signal at its output node. However, variations in fabrication processes can cause the frequency of the signal from one ring oscillator to be different from the frequency of the signal from another ring oscillator. The ID generator circuit of the described authentication unit generates unique ID information (e.g., a code) for the device based on the differences in frequencies among the signals generated by the ring oscillators. The ID information can be used to authenticate the device.[0020] As is known to those skilled in the art, generation of random and unique one-time encryption keys can be very difficult, especially if the keys are locally generated and not distributed by a master key management system.Similarly, generating unique IDs for products (e.g., semiconductor devices or systems) for authentication purposes can pose a challenge for the manufacturing industry. As mentioned above, PUF circuits are used by many manufacturers to authenticate their products. However, PUF circuits often face two challenges. 
For example, a correct product (e.g., IC) may be identified as a rogue product (false rejection rate, FRR). In another example, a rogue product may be identified as a correct product (false acceptance rate, FAR).[0021] As mentioned above, the techniques described herein include an authentication unit embedded in a device or a system. Some of the improvements and benefits of the described authentication unit over some conventional PUF circuits include improved FAR (e.g., lower FAR), improved FRR (e.g., lower FRR), smaller circuit area, lower power consumption, lower cost, and other improvements and benefits, as described in more detail below.[0022] FIG. 1 shows an apparatus in the form of a device 100 including functional units 101 and 102, and authentication unit 103, according to some embodiments described herein. Device 100 can include an IC (e.g., a semiconductor chip). An example of device 100 includes a processor (e.g., a general-purpose processor, an application-specific integrated circuit (ASIC), or other types of processors), a memory device (e.g., dynamic random access memory (DRAM), a flash memory device, and other memory devices), a system on chip (SoC), or other types of integrated circuits.[0023] Device 100 of FIG. 1 can be included in an electronic device or system, such as a computer (e.g., server, desktop, laptop, or notebook), a solid state drive (SSD), a network device (e.g., Ethernet adapter, Ethernet controller, and other network devices), a tablet, a cellular phone, a wireless communication router, a digital television, an electronic wearable item (e.g., a smart watch or other wearable devices), other electronic devices or systems, and other Internet of Things (IoT) devices or systems.[0024] In FIG. 1, functional units 101 and 102 can include any combination of logic circuitry (e.g., a processing core of a processor or a control unit of a memory device), memory cells, and other components.
A person skilled in the art would recognize that device 100 can include other components (e.g., components of a processor, a memory device, or other types of ICs). Such components are omitted from FIG. 1 so as to not obscure the description herein. [0025] Authentication unit 103 of device 100 can generate unique ID information (e.g., an ID code) for authenticating device 100. For example, before installing device 100 in a product or before shipping device 100 to a customer, an operation can be performed using authentication unit 103 to generate ID information. The ID information can be logged (e.g., recorded) in a record keeper. For example, the ID information of device 100 can be electronically saved in a database in a computer or can be written on paper. Then, the ID information of device 100 can be compared with ID information obtained from a target device (e.g., target IC). The result of the comparison can indicate whether or not the target device is actually (or most likely to be) device 100. For example, the target device may be considered to be device 100 if the ID information from the target device matches the ID information from the record keeper. The target device may not be considered to be device 100 (e.g., may be considered a rogue device) if the ID information obtained from the target device does not match the ID information of device 100 from the record keeper.[0026] FIG. 2A shows a block diagram of authentication unit 103 of FIG. 1 including ring oscillators 201, 202, 203, and 204 having memory cells 211’, 212’, 213’, and 214’, and an ID generator circuit 220, according to some embodiments described herein. Each of ring oscillators 201, 202, 203, and 204 can include series-connected stages (stages coupled in series with each other) between an input node and an output node of a respective ring oscillator. For example, ring oscillator 201 includes stages 211 coupled between an input node 201’ and an output node 201’’.
Ring oscillator 202 includes stages 212 coupled between an input node 202’ and an output node 202’’. Ring oscillator 203 includes stages 213 coupled between an input node 203’ and an output node 203’’. Ring oscillator 204 includes stages 214 coupled between an input node 204’ and an output node 204’’.[0027] As shown in FIG.2A, each of the stages of ring oscillators 201, 202, 203, and 204 can include an inverter INV. Inverters in the same ring oscillator or in different ring oscillators can have the same structure. For example, inverter INV in each of the stages of ring oscillators 201, 202, 203, and 204 can include a complementary metal-oxide semiconductor (CMOS) inverter. [0028] The stages of each of ring oscillators 201, 202, 203, and 204 can be arranged such that within the same ring oscillator, the output node (e.g., the output node of inverter INV) of a preceding stage is coupled to an input node (e.g., the input node of inverter INV) of a succeeding stage, and such that the output node of the last stage (the stage closest to ID generator circuit 220) is coupled (e.g., fed back) to the input node of the first stage (the stage farthest from ID generator circuit 220). The ring arrangement shown in FIG. 2A allows ring oscillators 201, 202, 203, and 204 to generate signals OSC1, OSC2, OSC3, and OSC4, respectively, such that each of signals OSC1, OSC2, OSC3, and OSC4 can be an oscillating signal (e.g., a self-oscillating signal).[0029] ID generator circuit 220 can include input nodes coupled to respective output nodes 201’’, 202’’, 203’’, and 204’’ of ring oscillators 201, 202, 203, and 204 to receive signals OSC1, OSC2, OSC3, and OSC4. ID generator circuit 220 can operate to generate ID information (ID INFO) 221 for device 100. ID information 221 can be obtained based on a signal OUT at an output node 225 of authentication unit 103. Signal OUT can be a digital signal. 
For example, signal OUT can carry digital information (e.g., bits) that represents the value of ID information 221. A record keeper 227 can be used to save ID information 221 for authentication of device 100.[0030] As shown in FIG. 2A, ring oscillators 201, 202, 203, and 204 can have the same components (e.g., same number of inverter stages and memory cells) and the same arrangements (e.g., same connections among the components). However, due to variations in fabrication processes, signals OSC1, OSC2, OSC3, and OSC4 generated by ring oscillators 201, 202, 203, and 204, respectively, may have different frequencies. ID generator circuit 220 can generate ID information 221 based on the differences in the frequencies among signals OSC1, OSC2, OSC3, and OSC4. Different examples of ID generator circuit 220 and its operations are shown and described in detail with reference to FIG. 3, FIG. 4, and FIG. 5.[0031] As shown in FIG. 2A, ring oscillators 201, 202, 203, and 204 can include memory cells 211’, 212’, 213’, and 214’, respectively. Each of memory cells 211’, 212’, 213’, and 214’ can be coupled in series with respective stages (e.g., inverter stages) within the same ring oscillator. For example, in ring oscillator 201, memory cell 211’ is coupled in series with stages 211 between input node 201’ and output node 201’’. In ring oscillator 202, memory cell 212’ is coupled in series with stages 212 between input node 202’ and output node 202’’. In ring oscillator 203, memory cell 213’ is coupled in series with stages 213 between input node 203’ and output node 203’’. In ring oscillator 204, memory cell 214’ is coupled in series with stages 214 between input node 204’ and output node 204’’.
Thus, in a particular ring oscillator, the memory cell can include a terminal coupled to an output node of a preceding stage of the particular ring oscillator and another terminal coupled to an input node of a succeeding stage of the particular ring oscillator.[0032] Memory cells 211’, 212’, 213’, and 214’ can have the same structure. For example, memory cells 211’, 212’, 213’, and 214’ can include memory elements 211’a, 212’a, 213’a, and 214’a, respectively, that can have the same structure. Each of memory elements 211’a, 212’a, 213’a, and 214’a can store a state. The value of the state stored in one memory element can be the same as or different from the value of the state stored in another memory element. However, storing different states in different memory elements may further increase differences in the frequencies of signals OSC1, OSC2, OSC3, and OSC4. This may further improve the strength of the value of ID information 221 generated based on differences in the frequencies of signals OSC1, OSC2, OSC3, and OSC4.[0033] Each of memory cells 211’, 212’, 213’, and 214’ can include a resistive memory element (e.g., a resistive random access memory (ReRAM) element). The value of the state stored in the memory element of a particular memory cell (among memory cells 211’, 212’, 213’, and 214’) can be based on the resistance value of the memory element in that particular memory cell.[0034] FIG. 2A shows an example where each of memory cells 211’, 212’, 213’, and 214’ includes a ReRAM element (shown as a resistor symbol). However, memory cells 211’, 212’, 213’, and 214’ can include other types of memory cells as long as a state (e.g., information) can be stored (e.g., programmed) in memory cells 211’, 212’, 213’, and 214’. For purposes of authentication of device 100, the state stored (e.g., stored in the memory element) in each of memory cells 211’, 212’, 213’, and 214’ can be permanent. This means that the stored state can be unchangeable after it is stored.
For example, a one-time programmable process can be used to store a state in each of memory cells 211’, 212’, 213’, and 214’.[0035] FIG. 2A shows an example where the memory cell in a respective oscillator is located at a certain location in the respective oscillator. However, the memory cell in a respective oscillator can be located anywhere within the respective ring oscillator. For example, the memory cell in a respective oscillator can be located at a location (e.g., immediately next to ID generator circuit 220) such that the memory cell can be directly coupled to the output node of the respective ring oscillator. As an example, instead of the arrangement shown in FIG. 2A, memory cell 211’ in ring oscillator 201 can be directly coupled between inverter stages 211 and output node 201’’ of ring oscillator 201. A similar arrangement can be applied to memory cells 212’, 213’, and 214’ of ring oscillators 202, 203, and 204, respectively.[0036] FIG. 2A shows an example where authentication unit 103 includes one memory cell in each of ring oscillators 201, 202, 203, and 204. However, authentication unit 103 can include multiple memory cells in each of ring oscillators 201, 202, 203, and 204. FIG. 2A shows authentication unit 103 including an odd number of three inverter stages in each of ring oscillators 201, 202, 203, and 204 as an example. However, authentication unit 103 can include any odd number (any odd number greater than one) of inverter stages in each of ring oscillators 201, 202, 203, and 204. Moreover, FIG. 2A shows authentication unit 103 including four ring oscillators 201, 202, 203, and 204 as an example. However, authentication unit 103 can include a different number of ring oscillators.
Thus, in authentication unit 103, one or more of the number of ring oscillators, the number of stages (e.g., CMOS inverter stages) in each ring oscillator, and the number of memory cells (e.g., ReRAM cells) in each ring oscillator can be different from those shown in FIG. 2A.[0037] FIG. 2B shows a structure of a memory cell 210’, which can be included in authentication unit 103 as each of memory cells 211’, 212’, 213’, and 214’ of FIG. 2A, according to some embodiments described herein. As shown in FIG. 2B, memory cell 210’ includes electrodes 271 and 272, and a dielectric portion 273 sandwiched between (e.g., directly contacting) electrodes 271 and 272. An equivalent symbol for memory cell 210’ is also shown in FIG. 2B where electrodes 271 and 272 can correspond to terminals (e.g., two terminals) of memory cell 210’ and memory element 210’a can correspond to the resistor of memory cell 210’.[0038] Electrodes 271 and 272 and the dielectric portion 273 can have materials such that memory cell 210’ can be a ReRAM cell. As an example, each of electrodes 271 and 272 can include conductive material (e.g., a layer of conductive material), such as metal (e.g., platinum (Pt) or other metals). Dielectric portion 273 can include oxide material (e.g., a layer of oxide material) or a combination of oxide materials. As an example, dielectric portion 273 can include hafnium oxide (HfO2), titanium oxide (TiOX (e.g., TiO2)), or tantalum pentoxide (Ta2O5), or any combination of these materials (e.g., only one of HfO2, TiOX, and Ta2O5, only two of HfO2, TiOX, and Ta2O5, or all of HfO2, TiOX, and Ta2O5) or other dielectric material. The materials for electrodes 271 and 272, and dielectric portion 273, can be selected such that a conductive path (e.g., conductive filament) 274 can be formed in dielectric portion 273.
Forming conductive path 274 can include applying voltages of different values to electrodes 271 and 272.[0039] Dielectric portion 273 (or part of dielectric portion 273) can form memory element 210’a of memory cell 210’. Memory cell 210’ can store a state in memory element 210’a. The value of the state stored in memory cell 210’ can be based on the resistance value (e.g., resistance of conductive path 274) of dielectric portion 273 between electrodes 271 and 272.[0040] FIG. 2C shows resistance value ranges for different states that can be stored in memory cell 210’ of FIG. 2B, according to some embodiments described herein. As shown in FIG. 2C, memory cell 210’ can store two different states, such as state 0 and state 1. For a number of memory cells similar to memory cell 210’, state 0 can be within a resistance value range 280, which can include resistance values from 0.20 megaohms (MΩ) to 0.28 MΩ; state 1 can be within a resistance value range 281, which can include resistance values from 1.0 MΩ upward. Specific resistance values are used in FIG. 2C as an example. Each of resistance value ranges 280 and 281 can include resistance values different from those shown in FIG. 2C. As shown in FIG. 2C, resistance value ranges 280 and 281 do not overlap in resistance values, to allow a distinction between different states (e.g., state 0 and state 1) stored in memory cell 210’. Thus, based on the example of FIG. 2C, the resistance value (e.g., R1) corresponding to state 1 can be at least one and a half times the resistance value (e.g., R0) corresponding to state 0 (R1 ≥ 1.5R0). For example, as shown in FIG. 2C, the minimum resistance value corresponding to state 1 is 1.0 MΩ, which is at least one and a half times (1.5 × 0.28 MΩ = 0.42 MΩ) the maximum resistance value corresponding to state 0. The relationship R1 ≥ 1.5R0 is used as an example.
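The non-overlapping ranges of FIG. 2C can be checked with a short sketch; the classification helper and guard-band handling below are hypothetical, and the range bounds follow the figure's example values:

```python
# Hypothetical sketch of reading back a state from a ReRAM-style memory
# cell using the non-overlapping resistance ranges of FIG. 2C
# (0.20-0.28 megaohm for state 0, at least 1.0 megaohm for state 1).

STATE0_RANGE = (0.20, 0.28)   # megaohms, range 280
STATE1_MIN = 1.0              # megaohms, lower edge of range 281

def classify(resistance_mohm):
    """Map a measured resistance to a stored state, or None if ambiguous."""
    lo, hi = STATE0_RANGE
    if lo <= resistance_mohm <= hi:
        return 0
    if resistance_mohm >= STATE1_MIN:
        return 1
    return None  # falls in the guard band between the two ranges

# The separation margin of the example: the minimum state-1 resistance is
# at least 1.5x the maximum state-0 resistance (1.0 >= 1.5 * 0.28 = 0.42).
assert STATE1_MIN >= 1.5 * STATE0_RANGE[1]

print(classify(0.25), classify(1.8), classify(0.5))  # prints: 0 1 None
```

The explicit guard band between the two ranges is what lets a single threshold-free comparison distinguish the states reliably despite cell-to-cell variation.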
Resistance values R0 and R1 may have a different relation (e.g., R1 = nR0, where n can be any number greater than 1).[0041] As described above, memory cell 210' of FIG. 2B can be included in authentication unit 103 as each of memory cells 211', 212', 213', and 214' of FIG. 2A. Thus, each of memory cells 211', 212', 213', and 214' of FIG. 2A can store a state, such as state 0 or state 1 (FIG. 2C). The states stored in memory cells 211', 212', 213', and 214' can be the same or can be different.[0042] FIG. 2D shows a structure of part of device 100 including a substrate 105 and authentication unit 103 having memory cells formed over substrate 105, according to some embodiments described herein. Substrate 105 can include a semiconductor substrate (e.g., a silicon die) where functional unit 101 (which can include logic circuitry) and functional unit 102 are formed. For simplicity, only outlines of the structures of functional units 101 and 102 and authentication unit 103 of device 100 are shown in FIG. 2D.[0043] As described above (FIG. 2A and FIG. 2B), authentication unit 103 can include ring oscillators 201, 202, 203, and 204 that have inverter stages and memory cells, in which each of the memory cells can include memory cell 210' (FIG. 2B). As shown in FIG. 2D, authentication unit 103 can include memory cells 210' formed over functional unit 101 and over substrate 105. Each memory cell 210' can be formed such that electrodes 271 and 272 and dielectric portion 273 can be arranged in a direction perpendicular to substrate 105 (e.g., a vertical direction with respect to substrate 105). For simplicity, some parts (e.g., inverter stages and other parts) of authentication unit 103 are omitted from FIG. 2D. Moreover, FIG. 2D shows the entire authentication unit 103 located (e.g., formed) at a location over functional unit 101 and over substrate 105 as an example.
However, some parts of authentication unit 103 or the entire authentication unit 103 can be located (e.g., formed) at another location. For example, only the memory cells of authentication unit 103 (e.g., memory cells 210') can be located over functional unit 101; other parts (e.g., inverter stages of the ring oscillators) of authentication unit 103 can be located at (e.g., formed in) substrate 105. Locating (e.g., forming) the memory cells of authentication unit 103 at one location (e.g., over substrate 105) and locating (e.g., forming) other parts (e.g., inverter stages of the ring oscillators) of authentication unit 103 at another location (e.g., in substrate 105) may avoid some fabrication process overhead associated with forming authentication unit 103.[0044] Inclusion of memory cells 211', 212', 213', and 214' may further improve authentication unit 103 over some conventional techniques. For example, to generate unique ID information for authentication purposes, some conventional techniques use variations in transistor threshold voltages (e.g., Vt), variations in transistor switching speeds, or variations in the operating speed of inverters in oscillators. In these conventional techniques, current variations (caused by threshold voltage variations) and threshold voltage variations have a quadratic/linear relationship (e.g., quadratic/linear dependence). Besides other improvements, the techniques described herein may further improve the relationships between currents and other parameters in authentication unit 103.[0045] For example, in the techniques described herein, using memory cells 211', 212', 213', and 214' in ring oscillators 201, 202, 203, and 204 (FIG. 2A) may further cause a higher degree of variation in the currents among ring oscillators 201, 202, 203, and 204. This higher degree of current variation may in turn cause a higher degree of variation in the frequencies of signals OSC1, OSC2, OSC3, and OSC4.
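The exponential current-thickness dependence described in this context can be illustrated with a toy numerical model; the decay constant, the thickness values, and the proportionality of frequency to current below are assumptions for illustration only:

```python
import math

# Toy model: cell current falls off exponentially with dielectric thickness
# (I ~ exp(-d/d0)), and oscillator frequency is taken as proportional to
# current. The decay length d0 and the thickness spread are invented.

D0_NM = 0.5  # hypothetical characteristic decay length, nm

def relative_current(thickness_nm):
    return math.exp(-thickness_nm / D0_NM)

def spread(values):
    """Ratio of max to min: a simple measure of variation."""
    return max(values) / min(values)

# A +/-2% thickness variation around a nominal 5 nm...
thicknesses = [4.9, 5.0, 5.1]
currents = [relative_current(t) for t in thicknesses]

# ...produces a far larger relative spread in current (and hence in
# oscillator frequency) than the underlying geometric spread itself.
print(round(spread(currents), 2), round(spread(thicknesses), 2))
```

Here the roughly 4% thickness spread becomes an approximately 49% current spread, which is the sense in which an exponential dependence amplifies process variation into frequency variation.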
A higher degree of variation in the frequencies of signals OSC1, OSC2, OSC3, and OSC4 may improve the reliability and the value of ID information 221 (FIG. 2A).[0046] As an example, using memory cell 210' (e.g., a ReRAM memory cell) as each of memory cells 211', 212', 213', and 214', current variations in ring oscillators 201, 202, 203, and 204 can also be dependent on the thickness of dielectric portion 273 (e.g., the thickness measured in the direction (e.g., vertical direction) between electrodes 271 and 272). Such current variations and the thickness of dielectric portion 273 can have an exponential relationship (e.g., exponential dependence). The frequencies of signals OSC1, OSC2, OSC3, and OSC4 depend on the currents in ring oscillators 201, 202, 203, and 204. Therefore, in comparison with some conventional techniques, authentication unit 103 (FIG. 2A) may have a higher variation in the frequencies of signals OSC1, OSC2, OSC3, and OSC4 (e.g., due to the exponential relationship between variations in current and the thickness of dielectric portion 273). Higher variations in the frequencies of signals OSC1, OSC2, OSC3, and OSC4 may improve the reliability and value of ID information 221 of authentication unit 103 over some conventional techniques.[0047] Moreover, in comparison with some conventional techniques, authentication unit 103 may occupy a smaller area (e.g., fewer inverter stages), may consume less power, and may have a lower cost. Further, memory cells 211', 212', 213', and 214' can be programmed on the fly (e.g., to store a state (e.g., state 0 or state 1)) in order to generate different challenge-response pairs (CRPs). This may increase the number of CRPs that can be used over the lifetime of device 100. Using authentication unit 103 in device 100 may also result in a lower false acceptance rate (FAR) and a lower false rejection rate (FRR) for device 100 and devices similar to device 100.
This may improve yield for devices that include authentication unit 103.[0048] FIG. 3 shows a block diagram of authentication unit 103 of FIG. 2A including ID generator circuit 220 having a selector 330, a counter 340, and output circuitry 350, according to some embodiments described herein. FIG. 3 also shows ring oscillators 201, 202, 203, and 204 of authentication unit 103. However, for simplicity, detailed description of ring oscillators 201, 202, 203, and 204 is not repeated.[0049] Selector 330 (which can include a multi-input single-output multiplexor) includes input nodes coupled to respective output nodes 201’’, 202’’, 203’’, and 204’’ of ring oscillators 201, 202, 203, and 204, respectively. Selector 330 can receive select information SEL1 to select one of signals OSC1, OSC2, OSC3, and OSC4 to be a signal OSC at an output node 331 of selector 330 (e.g., output node of the multiplexor of selector 330). Select information SEL1 can be in the form of a select signal (or select signals) that can include bits having different values to select different signals among signals OSC1, OSC2, OSC3, and OSC4. As described above with reference to FIG. 2A, signals OSC1, OSC2, OSC3, and OSC4 can have different frequencies. Thus, signal OSC can have different frequencies at different times, depending on which of signals OSC1, OSC2, OSC3, and OSC4 is selected by selector 330.[0050] Counter 340 can include an input node coupled to output node 331 of selector 330 to receive signal OSC. Counter 340 can operate to generate a count that has a value based on the frequency (e.g., the number of periods) of signal OSC. For example, counter 340 can count the periods (cycles) of signal OSC during a particular time interval (e.g., a predetermined time interval) and generate a count (e.g., a digital number). The value of the count can be proportional to the number of periods of signal OSC during that particular time interval. For example, a pulse 345 can be provided to counter 340. 
The width of pulse 345 can be used as a time interval (duration) for the counting operation. Counter 340 can start counting the periods of signal OSC at the rising edge of pulse 345 and stop the counting at the falling edge of pulse 345.[0051] The value of the count (count value) generated by counter 340 can include a number of bits, which can be based on the number of bits that can be handled by counter 340. For example, if counter 340 is an 8-bit counter, then the count value can include 8 bits.[0052] Output circuitry 350 can generate signal OUT at output node 225 of authentication unit 103. The value of information carried by signal OUT can be based on the count value (which is based on the frequency of signal OSC). For example, if the count value is 10101010 (8 bits), then the value of information carried by signal OUT can be 10101010 (the same 8 bits). In this example, part of ID information 221 can include the value of a set of eight bits 10101010, which is based on the frequency of signal OSC (which is one of signals OSC1, OSC2, OSC3, and OSC4). Eight bits are used here as an example. The number of bits can vary. As described above with reference to FIG. 2A, ID information 221 can be obtained based on signal OUT. [0053] The following description describes an example operation of authentication unit 103 where ID generator circuit 220 generates ID information 221 based on the frequencies of signals OSC1, OSC2, OSC3, and OSC4. In operation, in response to the value (e.g., binary value 00) of select information SEL1, selector 330 selects signal OSC1 and passes it to output node 331 as signal OSC. Counter 340 generates a count based on the frequency (e.g., the number of periods) of signal OSC (which is signal OSC1 selected by selector 330). For example, counter 340 may start to count (e.g., count up from an initial value (e.g., zero)) at the rising edge of pulse 345 and stop counting at the falling edge of pulse 345. 
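The selector-counter flow described above can be illustrated with a short sketch. This sketch is not part of the patent; the pulse width, oscillator frequencies, and function names are illustrative assumptions, showing how each selected signal's period count becomes a set of bits of ID information.

```python
# Illustrative sketch (not the patent's implementation): a selector picks
# one ring-oscillator signal at a time, a counter counts its periods during
# the width of a pulse, and each count value becomes a fixed-width set of
# bits of ID information. Frequencies and pulse width are assumed values.

PULSE_WIDTH_S = 1e-6  # counting interval set by the pulse (assumed)

def count_periods(freq_hz, interval_s=PULSE_WIDTH_S):
    """Counter role: whole periods of the selected signal in the interval."""
    return int(freq_hz * interval_s)

def generate_id(osc_freqs, bits=8):
    """Selector role: walk through the oscillator signals in turn;
    output-circuitry role: emit each count as a set of bits."""
    return "".join(
        format(count_periods(f) % (1 << bits), f"0{bits}b") for f in osc_freqs
    )

# Four oscillators whose frequencies differ due to process variation:
freqs_hz = [101.3e6, 98.7e6, 103.9e6, 99.4e6]
id_info = generate_id(freqs_hz)
print(id_info)  # 32 bits: four 8-bit sets, one per oscillator
```

Because the frequencies differ from die to die, the concatenated count bits also differ from die to die, which is what makes the value usable as ID information.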
Based on the count value generated by counter 340, output circuitry 350 generates signal OUT that carries a number of bits (e.g., 8 bits). The value of the bits is provided as part (e.g., a set of bits) of ID information 221.[0054] After part of ID information 221 is obtained based on the selection of signal OSC1, ID generator circuit 220 can repeat the operation described above for each of signals OSC2, OSC3, and OSC4. For example, values 01, 10, and 11 may be provided as select information SEL1 at different times, in order to select signals OSC2, OSC3, and OSC4, respectively. Thus, in this example, ID generator circuit 220 can perform four counting operations and generate four corresponding sets of bits. Since signals OSC1, OSC2, OSC3, and OSC4 have different frequencies, the four corresponding sets of bits can have different values. These four sets of bits can be used as the value (e.g., unique ID) for ID information 221. Thus, in this example, ID information 221 can include a number of sets of bits (e.g., four sets) that can be based on (e.g., equal to) the number of ring oscillators (e.g., 201, 202, 203, and 204) of authentication unit 103.[0055] FIG. 4 shows a block diagram of authentication unit 103 of FIG. 2A including ID generator circuit 220 having a selector 430, multiple counters 441 and 442, and output circuitry 450 having a comparator 451, according to some embodiments described herein. FIG. 4 also shows ring oscillators 201, 202, 203, and 204 of authentication unit 103. However, for simplicity, detailed description of ring oscillators 201, 202, 203, and 204 is not repeated.[0056] ID generator circuit 220 of FIG. 4 can be a variation of ID generator circuit 220 of FIG. 3. As described above with reference to FIG. 3, ID generator circuit 220 can generate ID information 221 based on an individual signal among signals OSC1, OSC2, OSC3, and OSC4. In FIG.
4, ID generator circuit 220 can generate ID information 221 based on comparisons between pairs of signals among signals OSC1, OSC2, OSC3, and OSC4.[0057] As shown in FIG. 4, selector 430 (which can include a multi-input multi-output multiplexor) includes input nodes coupled to respective output nodes 201’’, 202’’, 203’’, and 204’’ of ring oscillators 201, 202, 203, and 204, respectively. Selector 430 can receive select information (e.g., a select signal or select signals) SEL2 to select one of signals OSC1, OSC2, OSC3, and OSC4 to be a signal OSCi and another one of signals OSC1, OSC2, OSC3, and OSC4 to be a signal OSCj. Selector 430 can include output nodes (e.g., output nodes of the multiplexor of selector 430) 431 and 432 to provide signals OSCi and OSCj, respectively. Different values (e.g., digital values) can be provided to select information SEL2 to select different pairs of signals among signals OSC1, OSC2, OSC3, and OSC4.[0058] Each of counters 441 and 442 can include an input node coupled to one of output nodes 431 and 432 of selector 430 to receive either signal OSCi or OSCj. Each of counters 441 and 442 can operate in ways similar to counter 340 (FIG. 3). For example, counter 441 can operate to generate a count that has a value based on the frequency (e.g., the number of periods) of signal OSCi during a particular time interval. Counter 442 can operate to generate a count that has a value based on the frequency (e.g., the number of periods) of signal OSCj during a particular time interval. In operation, counters 441 and 442 can concurrently start (e.g., start at the same time) their respective counting operations and concurrently stop (e.g., stop at the same time) their respective counting operations. A pulse 445 can be provided to counters 441 and 442. The width of pulse 445 can be used as a time interval (duration) for the counting operations of counters 441 and 442.
For example, counters 441 and 442 can start their respective counting operations at the rising edge of pulse 445 and stop their respective counting operations at the falling edge of pulse 445. [0059] Output circuitry 450 can generate signal OUT at output node 225 of authentication unit 103. The value of information carried by signal OUT can be based on a comparison between count values generated by counters 441 and 442 within the same interval (e.g., the interval equal to the width of pulse 445). Comparator 451 can compare the count values generated by counters 441 and 442 and generate a comparison result based on the comparison. The comparison result can have a value represented by a single bit (or multiple bits). For example, the comparison result can have one value (e.g., binary "0") if the count value generated by counter 441 is greater than (or alternatively less than) the count value generated by counter 442 and another value (e.g., binary "1") if the count value generated by counter 441 is not greater than (or alternatively not less than) the count value generated by counter 442. Thus, in the example described here, for each comparison between the frequencies of signals OSCi and OSCj (two of signals OSC1, OSC2, OSC3, and OSC4), output circuitry 450 can generate a bit (e.g., "0" or "1") that can be provided as part of ID information 221. Therefore, in FIG. 4, ID information 221 can include a number of bits. Each of the bits can have a value (e.g., "0" or "1") based on comparison results from comparing the count values generated based on the frequencies of different pairs of signals among signals OSC1, OSC2, OSC3, and OSC4.[0060] In generation of ID information 221, ID generator circuit 220 can operate to select a pair of signals (signal pair) among signals OSC1, OSC2, OSC3, and OSC4 one at a time and perform the counting operations and count comparison based on the selected signal pair.
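The pairwise scheme can be sketched as follows. This is an illustrative model rather than the patent's own implementation; the frequencies, the counting interval, and the function names are assumptions, and the greater-than convention is one of the conventions described above.

```python
# Illustrative sketch of the pairwise scheme: two counters count a selected
# signal pair over the same interval; a comparator reduces the two count
# values to a single bit. Frequencies and interval are assumed values.

def compare_pair(freq_i_hz, freq_j_hz, interval_s=1e-6):
    """Emit '1' if the first count is greater, else '0' (one of the
    comparator conventions described in the text)."""
    count_i = int(freq_i_hz * interval_s)
    count_j = int(freq_j_hz * interval_s)
    return "1" if count_i > count_j else "0"

def id_from_pairs(osc_freqs, pairs):
    """Each selected signal pair contributes one bit of ID information;
    each pair is used only once to avoid correlation."""
    return "".join(compare_pair(osc_freqs[i], osc_freqs[j]) for i, j in pairs)

freqs_hz = [101.3e6, 98.7e6, 103.9e6, 99.4e6]
adjacent_pairs = [(0, 1), (1, 2), (2, 3)]  # OSC1-OSC2, OSC2-OSC3, OSC3-OSC4
print(id_from_pairs(freqs_hz, adjacent_pairs))  # prints "101"
```

Using only adjacent pairs (as in the text) yields one bit per pair, so four oscillators yield three bits; adding non-adjacent pairs would yield up to six bits.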
Part (e.g., a bit) of ID information 221 can include a comparison result from each signal pair. In order to generate a complete value (e.g., a number of bits) for ID information 221, ID generator circuit 220 can repeat the same counting operations and count comparison for different signal pairs among signals OSC1, OSC2, OSC3, and OSC4.[0061] The signal pairs used for generation of ID information 221 can include signal pairs of only adjacent signals. Adjacent signals are signals (e.g., neighboring signals) from two oscillators (e.g., neighboring oscillators) that are physically located immediately next to each other. Thus, ID generator circuit 220 can generate ID information 221 based on signal pairs OSC1-OSC2 (signals OSC1 and OSC2), OSC2-OSC3 (signals OSC2 and OSC3), and OSC3-OSC4 (signals OSC3 and OSC4), which are signal pairs from only adjacent signals. In order to avoid any correlation, a signal pair (each of signal pairs OSC1-OSC2, OSC2-OSC3, and OSC3-OSC4) may be selected only one time during generation of ID information 221.[0062] In an alternative configuration, ID generator circuit 220 can generate ID information 221 based on signal pairs of adjacent signals and signal pairs of non-adjacent signals. Non-adjacent signals are signals from two oscillators that are not physically located immediately next to each other. Thus, in the alternative configuration, ID generator circuit 220 can generate ID information 221 based on signal pairs (from adjacent signals, as mentioned above) OSC1-OSC2, OSC2-OSC3, and OSC3-OSC4, and signal pairs (from non-adjacent signals) OSC1-OSC3, OSC1-OSC4, and OSC2-OSC4. In order to avoid any correlation in the alternative configuration, a signal pair (each of signal pairs OSC1-OSC2, OSC2-OSC3, OSC3-OSC4, OSC1-OSC3, OSC1-OSC4, and OSC2-OSC4) may be selected only one time during generation of ID information 221.[0063] FIG. 5 shows a variation of authentication unit 103 of FIG.
4 where ID generator circuit 220 includes a calculator (e.g., logic calculating circuit) 551, according to some embodiments described herein. Authentication unit 103 of FIG. 5 can include components similar to or identical to the components of authentication unit 103 of FIG. 4, except for output circuitry 550 and calculator 551 in FIG. 5. For simplicity, detailed descriptions of similar or identical components are not repeated.[0064] Output circuitry 550 can operate to generate signal OUT based on the amount of difference (e.g., delta) in values between a signal pair. This operation is different from the operation of output circuitry 450 of FIG. 4. As described above, the value of ID information 221 in FIG. 4 can be based on whether one count value is greater than (or alternatively less than) another count value, without determining the difference between two count values. In FIG. 5, output circuitry 550 can operate to calculate the difference between two count values. Thus, the value of information carried by signal OUT in FIG. 5 can be based on a difference between two count values that are generated based on the frequencies of different pairs of signals among signals OSC1, OSC2, OSC3, and OSC4. [0065] Calculator 551 can include circuitry (e.g., logic circuits) that can calculate a difference between two count values and generate a resulting value. The resulting value can be represented by multiple bits. For example, if each of the count values generated by counters 441 and 442 has 8 bits, then the resulting value generated by calculator 551 can also have 8 bits (which indicates the difference in the two count values).[0066] Thus, in FIG. 5, ID information 221 can include a number of sets of bits generated by output circuitry 550. Each set of bits can have a value (e.g., 8-bit value) based on a difference in two count values generated from a respective signal pair. 
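The difference-based variant can be sketched in a few lines. This is an illustrative model with assumed frequencies and an assumed counting interval, not the patent's implementation; it replaces the one-bit comparison with the calculated magnitude of the difference, so each signal pair yields a multi-bit value.

```python
# Illustrative sketch of the difference-based variant: the calculator
# reports |count_i - count_j| for a signal pair as a fixed-width set of
# bits, instead of a single greater-than bit. Values are assumed.

def count_difference_bits(freq_i_hz, freq_j_hz, interval_s=1e-6, bits=8):
    """Calculator role: magnitude of the difference between the two
    count values, reported as a fixed-width set of bits."""
    count_i = int(freq_i_hz * interval_s)
    count_j = int(freq_j_hz * interval_s)
    return format(abs(count_i - count_j) % (1 << bits), f"0{bits}b")

print(count_difference_bits(101.3e6, 98.7e6))  # prints "00000011" (delta of 3)
```

Reporting the delta rather than its sign preserves more of the frequency variation per pair, which is why each set of bits here has the same width as the counters (e.g., 8 bits) instead of a single bit.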
The number of sets of bits can be based on (e.g., equal to) the number of signal pairs used by counters 441 and 442. The number of signal pairs can include signal pairs from only adjacent signals or alternatively signal pairs from adjacent signals and non-adjacent signals.[0067] FIG. 6 shows a block diagram of authentication unit 603’ including ring oscillators 601, 602, 603, and 604, and multiple memory cells in each of ring oscillators 601, 602, 603, and 604, according to some embodiments described herein. Authentication unit 603’ can be a variation of authentication unit 103 of FIG. 2A, FIG. 3, FIG. 4, and FIG. 5. Authentication unit 603’ of FIG. 6 can include components similar to or identical to the components of authentication unit 103. For example, ID generator circuit 220 of FIG. 6 can be ID generator circuit 220 of FIG. 2A, FIG. 3, FIG. 4, or FIG. 5. For simplicity, detailed descriptions of similar or identical components are not repeated.[0068] Differences between authentication unit 103 (FIG. 2A) and authentication unit 603’ of FIG. 6 include the number of memory cells in each of ring oscillators 601, 602, 603, and 604 in FIG. 6. In FIG. 2A, each of ring oscillators 201, 202, 203, and 204 includes a single memory cell (e.g., 211’, 212’, 213’, or 214’). In FIG. 6, each of ring oscillators 601, 602, 603, and 604 can include two memory cells coupled in series with and interleaved among the stages (inverter stages) of a respective ring oscillator. For example, as shown in FIG. 6, ring oscillator 601 can include two memory cells 211’ coupled in series with and interleaved among the stages 211. Ring oscillator 602 can include two memory cells 212’ coupled in series with and interleaved among the stages 212. Ring oscillator 603 can include two memory cells 213’ coupled in series with and interleaved among the stages 213. Ring oscillator 604 can include two memory cells 214’ coupled in series with and interleaved among the stages 214.[0069] FIG.
6 shows an example where the memory cells in a respective oscillator are located at certain locations in the respective ring oscillator. However, the memory cells in a respective oscillator can be located anywhere within the respective ring oscillator. For example, a particular memory cell in a respective oscillator can be located at a location (e.g., immediately next to ID generator circuit 220) such that the particular memory cell can be directly coupled to the output node of the respective ring oscillator. In another example, the memory cells in a respective oscillator can be located next to each other such that the memory cells can be directly coupled to each other, without an inverter stage being coupled between the memory cells.[0070] FIG. 6 shows an example where authentication unit 603’ can include a certain number (e.g., four) of ring oscillators, a certain number (e.g., three) of stages (e.g., inverter stages) in each ring oscillator, and a certain number (e.g., two) of memory cells (e.g., ReRAM cells) in each ring oscillator. However, authentication unit 603’ can include different combinations of the number of ring oscillators, the number of stages in each ring oscillator, and the number of memory cells in each ring oscillator. Authentication unit 603’ can include improvements over some conventional techniques, such as the improvements described above for authentication unit 103. Further, using multiple memory cells in each of ring oscillators 601, 602, 603, and 604 in FIG. 6 may allow more combinations of states to be stored in memory cells 211’, 212’, 213’, and 214’ of authentication unit 603’. This may further improve the reliability (e.g., lower FAR and FRR) and strength of the value of ID information 221 generated by authentication unit 603’.[0071] FIG.
7 shows a block diagram of authentication unit 703’ including ring oscillators 701, 702, 703, and 704 having an enable node 705 in each of ring oscillators 701, 702, 703, and 704, according to some embodiments described herein. Authentication unit 703’ can be a variation of authentication unit 603’ of FIG. 6. Authentication unit 703’ of FIG. 7 can include components similar to or identical to the components of authentication unit 603’. For simplicity, detailed descriptions of similar or identical components are not repeated.[0072] Differences between authentication unit 603’ (FIG. 6) and authentication unit 703’ of FIG. 7 include a logic gate (e.g., NAND gate) in each of ring oscillators 701, 702, 703, and 704 in FIG. 7. As shown in FIG. 7, one of stages 211 of ring oscillator 701 can include a logic gate (e.g., NAND gate) 711a having an input node coupled to input node 201’ of ring oscillator 701, another input node coupled to enable node 705, and an output node coupled to the input node of a succeeding stage among stages 211 through memory cell 211’. Similarly, one of stages 212 of ring oscillator 702 can include a logic gate 712a having an input node coupled to input node 202’, another input node coupled to enable node 705, and an output node coupled to the input node of a succeeding stage among stages 212 through memory cell 212’. One of stages 213 of ring oscillator 703 can include a logic gate 713a having an input node coupled to input node 203’, another input node coupled to enable node 705, and an output node coupled to the input node of a succeeding stage among stages 213 through memory cell 213’.
One of stages 214 of ring oscillator 704 can include a logic gate 714a having an input node coupled to input node 204’, another input node coupled to enable node 705, and an output node coupled to the input node of a succeeding stage among stages 214 through memory cell 214’.[0073] In operation, signal EN can be activated to enable (e.g., to start) the generation of signals OSC1, OSC2, OSC3, and OSC4 by ring oscillators 701, 702, 703, and 704, respectively. Signal EN can be deactivated to disable generation of signals OSC1, OSC2, OSC3, and OSC4. Including signal EN and logic gates 711a, 712a, 713a, and 714a in authentication unit 703’ may allow control of the activation (and deactivation) of signals OSC1, OSC2, OSC3, and OSC4 during generation of ID information 221. FIG. 7 shows an example where authentication unit 703’ can include two memory cells in each of ring oscillators 701, 702, 703, and 704. However, authentication unit 703’ can include only one memory cell in each of ring oscillators 701, 702, 703, and 704. Further, similar to authentication unit 603’ of FIG. 6, authentication unit 703’ of FIG. 7 can include different combinations of the number of ring oscillators, the number of stages in each ring oscillator, and the number of memory cells in each ring oscillator.[0074] FIG. 8 shows a block diagram of a ring oscillator 801 including stages 811 and a memory cell 811’ including memory elements 811’a and 811’b, according to some embodiments described herein. As shown in FIG. 8, memory elements 811’a and 811’b can be coupled in series with each other between terminals (e.g., two terminals) of memory cell 811’ and in series with stages (e.g., CMOS inverter (INV) stages) 811 of ring oscillator 801. Each of memory elements 811’a and 811’b can include a ReRAM element. For example, each of memory elements 811’a and 811’b can include a dielectric portion (e.g., 273 of FIG. 2B) coupled between two electrodes (e.g., 271 and 272 of FIG. 2B).
FIG. 8 shows an example where memory cell 811’ includes two memory elements 811’a and 811’b coupled in series. However, memory cell 811’ can include more than two memory elements coupled in series.[0075] Part of ring oscillator 801 or the entire ring oscillator 801 can be included in any of the authentication units described above, such as authentication unit 103 (FIG. 2A), authentication unit 603’ (FIG. 6), and authentication unit 703’ (FIG. 7). For example, memory cell 811’ of FIG. 8 can be substituted for each of memory cells 211’, 212’, 213’, and 214’ of authentication unit 103 (FIG. 2A, FIG. 3, FIG. 4, and FIG. 5), authentication unit 603’ (FIG. 6), and authentication unit 703’ (FIG. 7).[0076] FIG. 9 shows a block diagram of a ring oscillator 901 including stages 911 and a memory cell 911’ including memory elements 911’a, 911’b, 911’c, and associated transistors T0, T1, and T2, according to some embodiments described herein. As shown in FIG. 9, memory elements 911’a, 911’b, and 911’c can be coupled in series with each other and in series with stages (e.g., CMOS inverter (INV) stages) 911 of ring oscillator 901. Each of memory elements 911’a, 911’b, and 911’c can include a ReRAM element. For example, each of memory elements 911’a, 911’b, and 911’c can include a dielectric portion (e.g., 273 of FIG. 2B) coupled between two electrodes (e.g., 271 and 272 of FIG. 2B).[0077] As shown in FIG. 9, each of transistors T0, T1, and T2 includes transistor terminals (e.g., source and drain) coupled to respective terminals of an associated memory element. Transistors T0, T1, and T2 can be controlled (e.g., turned on or turned off) by signals (e.g., control signals) CTL0, CTL1, and CTL2, respectively. The value of the state that can be stored in memory cell 911’ can be based on the resistance value across the combination of series-connected memory elements 911’a, 911’b, and 911’c.
This resistance value can be adjusted (e.g., selected) by selectively turning on (or off) different numbers of transistors among transistors T0, T1, and T2. FIG. 9 shows an example where memory cell 911’ includes three memory elements 911’a, 911’b, and 911’c coupled in series and three associated transistors T0, T1, and T2. However, the number of memory elements and associated transistors can vary.[0078] Part of ring oscillator 901 or the entire ring oscillator 901 can be included in any of the authentication units described above, such as authentication unit 103 (FIG. 2A), authentication unit 603’ (FIG. 6), and authentication unit 703’ (FIG. 7). For example, memory cell 911’ of FIG. 9 can be substituted for each of memory cells 211’, 212’, 213’, and 214’ of authentication unit 103 (FIG. 2A), authentication unit 603’ (FIG. 6), and authentication unit 703’ (FIG. 7).[0079] FIG. 10 shows a block diagram of a ring oscillator 1001 including stages 1011 and a memory cell 1011’ including memory elements 1011’a, 1011’b, 1011’c, and associated capacitors C and transistors T0, T1, and T2, according to some embodiments described herein. As shown in FIG. 10, memory elements 1011’a, 1011’b, and 1011’c can be coupled in series with each other and in series with stages (e.g., CMOS inverter (INV) stages) 1011 of ring oscillator 1001. Each of memory elements 1011’a, 1011’b, and 1011’c can include a ReRAM element. For example, each of memory elements 1011’a, 1011’b, and 1011’c can include a dielectric portion (e.g., 273 of FIG. 2B) coupled between two electrodes (e.g., 271 and 272 of FIG. 2B).[0080] As shown in FIG. 10, each of memory elements 1011’a, 1011’b, and 1011’c can be coupled in series with an associated capacitor C and an associated transistor (one of transistors T0, T1, and T2) with respect to a ground connection. Transistors T0, T1, and T2 can be controlled (e.g., turned on or turned off) by signals (e.g., control signals) CTL0, CTL1, and CTL2, respectively.
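The effect of the transistors on the stored state can be modeled with a short sketch. This is an illustrative model only; the resistance values are assumptions (the patent gives no numbers). A transistor that is turned on effectively bypasses (shorts) its memory element, so the cell's series resistance is the sum of the elements that are not bypassed.

```python
# Illustrative model of a memory cell with three series ReRAM elements,
# each with a transistor across its terminals (as in FIG. 9). A transistor
# that is on bypasses its element, so the cell resistance is the sum of
# the non-bypassed elements. Ohm values are assumed for illustration.

def cell_resistance(element_ohms, ctl_on):
    """Sum the resistances of elements whose bypass transistor is off."""
    return sum(r for r, on in zip(element_ohms, ctl_on) if not on)

elements = [10e3, 20e3, 40e3]  # 911'a, 911'b, 911'c (assumed values)

# CTL0, CTL1, and CTL2 select which elements are shorted out:
print(cell_resistance(elements, (False, False, False)))  # 70000.0, all in series
print(cell_resistance(elements, (True, False, False)))   # 60000.0, first bypassed
print(cell_resistance(elements, (True, True, False)))    # 40000.0, only last left
```

Selecting different combinations of CTL0, CTL1, and CTL2 therefore selects different resistance values, which shifts the oscillator current and hence the frequency of the oscillator containing the cell.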
The value of the state that can be stored in memory cell 1011’ can be based on the resistance value across the combination of series-connected memory elements 1011’a, 1011’b, and 1011’c. This resistance value can be adjusted (e.g., selected) by selectively turning on (or off) different numbers of transistors among transistors T0, T1, and T2. FIG. 10 shows an example where memory cell 1011’ includes three memory elements 1011’a, 1011’b, and 1011’c coupled in series and three associated capacitors C and transistors T0, T1, and T2. However, the number of memory elements and associated capacitors and transistors can vary.[0081] Part of ring oscillator 1001 or the entire ring oscillator 1001 can be included in any of the authentication units described above, such as authentication unit 103 (FIG. 2A), authentication unit 603’ (FIG. 6), and authentication unit 703’ (FIG. 7). For example, memory cell 1011’ of FIG. 10 can be substituted for each of memory cells 211’, 212’, 213’, and 214’ of authentication unit 103 (FIG. 2A), authentication unit 603’ (FIG. 6), and authentication unit 703’ (FIG. 7).[0082] FIG. 11 shows an apparatus in the form of a system (e.g., electronic system) 1100, according to some embodiments described herein. System 1100 can include or be included in a computer, a tablet, or other electronic systems. As shown in FIG. 11, system 1100 can include components such as a processor 1115, a memory device 1120, a memory controller 1130, a graphics controller 1140, an input and output (I/O) controller 1150, a display 1152, a keyboard 1154, a pointing device 1156, at least one antenna 1158, a connector 1159, and a bus 1160. Bus 1160 can include conductive lines (e.g., metal-based traces on a circuit board where the components of system 1100 are located).[0083] In some arrangements, system 1100 does not have to include a display. Thus, display 1152 can be omitted from system 1100. In some arrangements, system 1100 does not have to include any antenna.
Thus, antenna 1158 can be omitted from system 1100.[0084] Processor 1115 can include a general-purpose processor or an application specific integrated circuit (ASIC). Processor 1115 can include a central processing unit (CPU). [0085] Memory device 1120 can include a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, a phase change memory device, a combination of these memory devices, or other types of memory. FIG. 11 shows an example where memory device 1120 is a stand-alone memory device separated from processor 1115. In an alternative arrangement, memory device 1120 and processor 1115 can be located on the same die. In such an alternative arrangement, memory device 1120 is an embedded memory in processor 1115, such as embedded DRAM (eDRAM), embedded SRAM (eSRAM), embedded flash memory, or another type of embedded memory.[0086] Display 1152 can include a liquid crystal display (LCD), a touchscreen (e.g., capacitive or resistive touchscreen), or another type of display. Pointing device 1156 can include a mouse, a stylus, or another type of pointing device.[0087] I/O controller 1150 can include a communication module for wired or wireless communication (e.g., communication through one or more antennas 1158).
Such wireless communication may include communication in accordance with a WiFi communication technique, a Long Term Evolution-Advanced (LTE-A) communication technique, or other communication techniques.[0088] I/O controller 1150 can also include a module to allow system 1100 to communicate with other devices or systems in accordance with one or more standards or specifications (e.g., I/O standards or specifications), including Universal Serial Bus (USB), DisplayPort (DP), High-Definition Multimedia Interface (HDMI), Thunderbolt, Peripheral Component Interconnect Express (PCIe), and other specifications.[0089] Connector 1159 can be arranged (e.g., can include terminals, such as pins) to allow system 1100 to be coupled to an external device (or system). This may allow system 1100 to communicate (e.g., exchange information) with the external device (or system) through connector 1159. Connector 1159 includes components (e.g., pins and conductive lines) such that it can conform with at least one of the USB, DP, HDMI, Thunderbolt, PCIe, and other specifications. [0090] As shown in FIG. 11, each of processor 1115, memory device 1120, memory controller 1130, graphics controller 1140, and I/O controller 1150 can include an authentication unit 1103. Authentication unit 1103 can include any of the authentication units described above with reference to FIG. 1 through FIG. 10.[0091] FIG. 11 shows an example where each of processor 1115, memory device 1120, memory controller 1130, graphics controller 1140, and I/O controller 1150 includes authentication unit 1103. However, in some arrangements, some of processor 1115, memory device 1120, memory controller 1130, graphics controller 1140, and I/O controller 1150 may not include authentication unit 1103.[0092] FIG. 11 shows the components of system 1100 arranged separately from each other as an example.
For example, each of processor 1115, memory device 1120, memory controller 1130, graphics controller 1140, and I/O controller 1150 can be located on a separate IC (e.g., a separate semiconductor die). In some arrangements, two or more components (e.g., processor 1115, memory device 1120, graphics controller 1140, and I/O controller 1150) of system 1100 can be located on the same die (e.g., same IC) that forms a system-on-chip, or located in the same IC package that forms a system-on-package (SoP) or system-in-package (SiP).[0093] FIG. 12 is a flowchart showing a method 1200 of operating an electronic apparatus, according to some embodiments described herein. The electronic apparatus used in method 1200 can include the apparatuses described above with reference to FIG. 1 through FIG. 11, such as device 100 and system 1100 that can include authentication units 103 and 1103. Some of the activities in method 1200 may be performed by hardware, software, firmware, or any combination of hardware, software, and firmware.[0094] As shown in FIG. 12, activity 1210 of method 1200 can include generating counts having count values based on frequencies of signals from ring oscillators. Each of the ring oscillators can include inverter stages and at least one memory cell coupled to the inverter stages. The ring oscillators and the memory cell (or memory cells) can be part of an authentication unit, such as authentication units 103 and 1103. Activity 1220 of method 1200 can include generating information based on the count values generated in activity 1210. The information generated by activity 1220 can include ID information (e.g., ID information 221 in FIG. 2A) that can be used to authenticate a device (e.g., device 100) or a system (e.g., system 1100) that contains the authentication unit.[0095] Method 1200 can include fewer or more activities relative to activities 1210 and 1220 in FIG. 12.
For example, method 1200 can include the activities and operations of authentication units 103 and 1103, including the activities and operations of ID generator circuit 220 described above with reference to FIG. 2A through FIG. 7.[0096] The illustrations of the apparatuses (e.g., device 100 and system 1100 that can include authentication units 103 and 1103) and methods (e.g., method 1200 and operations of device 100 and system 1100 that can include operations of authentication units 103 and 1103) described above are intended to provide a general understanding of the structure of different embodiments and are not intended to provide a complete description of all the elements and features of an apparatus that might make use of the structures described herein. The apparatuses and methods described above can include or be included in high-speed computers, communication and signal processing circuitry, single-processor modules or multi-processor modules, single embedded processors or multiple embedded processors, multi-core processors, message information switches, and application-specific modules including multilayer or multi-chip modules. Such apparatuses may further be included as sub-components within a variety of other apparatuses (e.g., electronic systems), such as televisions, cellular telephones, personal computers (e.g., laptop computers, desktop computers, handheld computers, etc.), tablets (e.g., tablet computers), wearable electronic devices (e.g., smart watches), workstations, radios, video players, audio players (e.g., MP3 (Moving Picture Experts Group, Audio Layer 3) players), vehicles, medical devices (e.g., heart monitors, blood pressure monitors, etc.), set top boxes, and others.
Additional Notes and Examples[0097] Example 1 includes subject matter (such as a device, an electronic apparatus (e.g., circuit, electronic system, or both), or a machine) including a first ring oscillator including a first memory cell and a first plurality of stages coupled to the first memory cell, a second ring oscillator including a second memory cell and a second plurality of stages coupled to the second memory cell, and a circuit including a first input node coupled to an output node of the first ring oscillator and a second input node coupled to an output node of the second ring oscillator.[0098] In Example 2, the subject matter of Example 1 may optionally include, wherein the circuit is to receive a first signal from the first ring oscillator and a second signal from the second ring oscillator and to generate information based on frequencies of the first and second signals.[0099] In Example 3, the subject matter of Example 1 may optionally include, wherein the first memory cell includes a first terminal and a second terminal, the first terminal coupled to an output node of a first stage of the first plurality of stages, the second terminal coupled to an input node of a second stage of the first plurality of stages, and the second memory cell includes a first terminal and a second terminal, the first terminal of the second memory cell coupled to an output node of a first stage of the second plurality of stages, the second terminal of the second memory cell coupled to an input node of a second stage of the second plurality of stages.[00100] In Example 4, the subject matter of Example 3 may optionally include, wherein the first memory cell includes a first memory element coupled to the first and second terminals of the first memory cell, and the second memory cell includes a second memory element coupled to the first and second terminals of the second memory cell.[00101] In Example 5, the subject matter of Example 4 may optionally include, wherein the first 
memory cell includes a first additional memory element coupled in series with the first memory element between the first and second terminals of the first memory cell, and the second memory cell includes a second additional memory element coupled in series with the second memory element between the first and second terminals of the second memory cell.[00102] In Example 6, the subject matter of any of Examples 1-5 may optionally include, wherein the circuit includes a selector, the selector including a first input node coupled to the output node of the first ring oscillator and a second input node coupled to the output node of the second ring oscillator.[00103] In Example 7, the subject matter of Example 6 may optionally include, wherein the circuit includes a counter coupled to an output node of the selector.[00104] In Example 8, the subject matter of Example 1 or 2 may optionally include, wherein the first memory cell includes a memory element coupled to a terminal of the first memory cell, and a transistor including source and drain coupled to respective terminals of the memory element.[00105] In Example 9, the subject matter of Example 1 or 2 may optionally include, wherein the first memory cell includes a memory element coupled to a terminal of the first memory cell, and a capacitor and a transistor coupled in series with the memory element.[00106] In Example 10, the subject matter of Example 1 or 2 may optionally include, wherein a stage among the first plurality of stages includes a logic gate, the logic gate including a first input node coupled to an input node of the first ring oscillator, and a second input node coupled to an enable node, and a stage among the second plurality of stages includes a logic gate, the logic gate including a first input node coupled to an input node of the second ring oscillator, and a second input node coupled to the enable node.[00107] Example 11 includes subject matter (such as a device, an electronic apparatus (e.g., circuit,
electronic system, or both), or a machine) including a first ring oscillator including a first inverter, a second inverter, and a first resistive memory element, the first resistive memory element coupled between an output node of the first inverter and an input node of the second inverter, a second ring oscillator including a third inverter, a fourth inverter, and a second resistive memory element, the second resistive memory element coupled between an output node of the third inverter and an input node of the fourth inverter, and a circuit including a multiplexor coupled to the first and second ring oscillators.[00108] In Example 12, the subject matter of Example 11 may optionally include, wherein the apparatus comprises a device, and the circuit is to generate identification information to authenticate the device.[00109] In Example 13, the subject matter of Example 11 or 12 may optionally include, wherein the first ring oscillator includes an additional first resistive memory element coupled in series with the first resistive memory element between the output node of the first inverter and the input node of the second inverter, and the second ring oscillator includes an additional second resistive memory element coupled in series with the second resistive memory element between the output node of the third inverter and the input node of the fourth inverter.[00110] In Example 14, the subject matter of Example 11 or 12 may optionally include, wherein each of the first and second resistive memory elements includes a dielectric portion and a conductive path in the dielectric portion.[00111] In Example 15, the subject matter of Example 11 or 12 may optionally include, wherein each of the first and second resistive memory elements includes electrodes and a dielectric portion between the electrodes, and the electrodes and the dielectric portion are arranged among each other in a direction perpendicular to a semiconductor substrate.[00112] In Example 16, the subject matter
of Example 11 or 12 may optionally include, wherein the circuit includes a counter coupled to the multiplexor.[00113] In Example 17, the subject matter of Example 16 may optionally include, wherein the circuit includes a comparator coupled to the counter.[00114] Example 18 includes subject matter (such as a device, an electronic apparatus (e.g., circuit, electronic system, or both), or a machine) including a dynamic random access memory (DRAM) device, and a processor coupled to the DRAM device, the processor including a first ring oscillator including a first memory cell and a first plurality of stages coupled to the first memory cell, a second ring oscillator including a second memory cell and a second plurality of stages coupled to the second memory cell, and a circuit including a first input node coupled to an output node of the first ring oscillator and a second input node coupled to an output node of the second ring oscillator.[00115] In Example 19, the subject matter of Example 18 may optionally include, further comprising a semiconductor substrate, wherein the DRAM is located at a first location of the semiconductor substrate, and the processor is located at a second location of the semiconductor substrate.[00116] In Example 20, the subject matter of Example 18 or 19 may optionally include, further comprising a connector coupled to the processor, the connector conforming with one of Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Thunderbolt, and Peripheral Component Interconnect Express (PCIe).[00117] Example 21 includes subject matter (such as a method of operating a device, an electronic apparatus (e.g., circuit, electronic system, or both), or a machine) including generating counts having count values based on frequencies of signals from ring oscillators, each of the ring oscillators including inverter stages and at least one memory cell coupled to the inverter stages, and generating information based on the count values.[00118] In
Example 22, the subject matter of Example 21 may optionally include, wherein generating the information includes comparing a first count value included in the count values with a second count value included in the count values, and generating part of the information, the part of the information having a value based on whether the first count value is greater than the second count value.[00119] In Example 23, the subject matter of Example 22 may optionally include, wherein generating the part of the information includes generating a bit, the bit having a first value if the first count value is greater than the second count value and a second value if the first count value is not greater than the second count value.[00120] In Example 24, the subject matter of Example 21 may optionally include, wherein generating the information includes comparing a first count value included in the count values with a second count value included in the count values, and generating part of the information, the part of the information having a value based on a difference between the first count value and the second count value.[00121] In Example 25, the subject matter of Example 21 may optionally include, wherein generating the counts includes generating a first count of the counts based on a frequency of a first signal among the signals, the first count having a first count value, generating a second count of the counts based on a frequency of a second signal among the signals, the second count having a second count value, and generating part of the information based on the first and second count values, wherein the first signal is generated by a first ring oscillator of the ring oscillators, the second signal is generated by a second ring oscillator of the ring oscillators, and the first and second ring oscillators are located immediately next to each other.[00122] Example 26 includes subject matter (such as a device, an electronic apparatus (e.g., circuit, electronic system, or
both), or machine) including means for performing any of the methods of claims 21-25.[00123] The subject matter of Example 1 through Example 26 may be combined in any combination.[00124] The above description and the drawings illustrate some embodiments to enable those skilled in the art to practice the embodiments of the invention. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Examples merely typify possible variations. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. Therefore, the scope of various embodiments is determined by the appended claims, along with the full range of equivalents to which such claims are entitled.[00125] The Abstract is provided to allow the reader to ascertain the nature and gist of the technical disclosure. It is submitted with the understanding that it will not be used to limit or interpret the scope or meaning of the claims. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment.
A method changes operating states of a PHY interface that includes a plurality of blocks. Changing the operating states includes: receiving parameters indicating desired feature settings of the plurality of blocks for changing the operating state of the PHY interface; and enabling the desired feature settings in a sequence, the sequence based on dependencies between the feature settings, the dependencies being stored in a dependency table.
CLAIMS What is claimed is: 1. A method, comprising: changing operating states of a PHY interface which includes a plurality of blocks, said changing operating states of a PHY interface comprises: receiving parameters indicating desired feature settings of the plurality of blocks for changing the operating state of the PHY interface; and enabling the desired feature settings in a sequence, the sequence based on dependencies between the feature settings, the dependencies being stored in a dependency table. 2. The method of claim 1, wherein a feature wakeup finite state machine (FSM) is configured to perform the changing of operating states of a PHY interface. 3. The method of claim 2, wherein the feature wakeup FSM is configured in hardware. 4. The method of claim 1, wherein the dependency table is implemented in a set of registers and entries of the dependency table are software programmable. 5. The method of claim 1, further comprising forming groups of blocks with features that can be turned on in parallel based on the dependencies. 6. The method of claim 5, further comprising setting delays between the groups of blocks based on wakeup times of blocks of each group. 7. The method of claim 1, further comprising comparing current feature settings to the desired feature settings of the plurality of blocks to determine which feature settings of at least one block from the plurality of blocks to change. 8. The method of claim 7, further comprising: enabling the desired feature settings for the at least one block which do not require traffic stall; and waiting for the traffic stall and enabling the feature settings of the at least one block that needs to occur after the traffic stall. 9. The method of claim 1, wherein the operating states of the PHY interface comprise performance states and power states. 10. The method of claim 9, further comprising calculating wakeup times of each of the power states and storing the wakeup times in a wakeup time lookup table. 11.
The method of claim 9, further comprising performing procedures for changing the performance states of the PHY interface. 12. The method of claim 11, wherein a frequency switch FSM is configured to perform the procedures for changing the performance state of the PHY interface. 13. The method of claim 9, further comprising performing procedures for changing the power states of the PHY interface. 14. The method of claim 13, wherein a low power switch FSM is configured to perform the procedures for changing the power states of the PHY interface. 15. The method of claim 1, further comprising: receiving a desired period from a memory controller; and selecting a performance state based on the desired period received and using performance state tables. 16. The method of claim 15, further comprising determining and sending the parameters indicating desired feature settings for the selected performance state. 17. The method of claim 1, further comprising: receiving a requested wakeup time from a memory controller; and selecting a lowest power state that meets requirements of the wakeup times. 18. The method of claim 17, further comprising: determining the desired feature settings based on the selected lowest power state; and sending the desired feature settings to a feature wakeup FSM to turn off the at least one block. 19. The method of claim 18, further comprising: determining the desired feature settings of when to turn the at least one block back on based on the wakeup times; and sending the desired feature settings to the feature wakeup FSM to turn the at least one block back on. 20. A state machine apparatus for changing an operating state of a PHY interface, the state machine apparatus configured to: receive parameters indicating feature settings of a plurality of blocks for changing the operating state of the PHY interface; and enable the feature settings in a sequence, the sequence based on dependencies between the feature settings, the dependencies being stored in a dependency table. 21.
The apparatus of claim 20, further comprising a feature wakeup unit configured to form groups of blocks with features that can be turned on in parallel based on the dependencies, and to set delays between the groups of blocks based on wakeup times of blocks of each group. 22. An apparatus for changing an operating state of a PHY interface, the apparatus comprising: means for receiving parameters indicating desired feature settings of a plurality of blocks for changing the operating state of the PHY interface; and means for enabling the desired feature settings in a sequence, the sequence based on dependencies between the feature settings, the dependencies being stored in a dependency table. 23. The apparatus of claim 22, further comprising means for comparing current feature settings to the desired feature settings of the plurality of blocks to determine which feature settings of at least one block from the plurality of blocks to change. 24. The apparatus of claim 23, further comprising: means for enabling the feature settings for the at least one block which do not require traffic stall; and means for waiting for the traffic stall and enabling the feature settings of the at least one block that needs to occur after the traffic stall. 25. A frequency and power managing system, comprising: a plurality of tables having a representation of features and properties of a plurality of blocks including frequency threshold, wakeup time requirements, interdependencies, and power requirements; and a sequencing unit configured to switch feature settings of the plurality of blocks to an operating state of a plurality of operating states of a PHY interface, including performance states and power states, based on a request from a memory controller. 26.
The system of claim 25, wherein the sequencing unit is hardware configured in a state machine to step through a process of enabling the feature settings used in the performance states or power states and disabling the feature settings not used in the performance states or power states. 27. The system of claim 25, wherein the sequencing unit includes a feature wakeup unit configured to form groups of blocks with feature settings that can be turned on in parallel based on the interdependencies, and to set delays between the groups of blocks based on wakeup time requirements of blocks of each group. 28. The system of claim 25, wherein the sequencing unit includes a wake time calculation unit configured to calculate wakeup times of each of the power states and to store the wakeup times in a wakeup time lookup table. 29. The system of claim 25, wherein the sequencing unit includes a frequency switch unit configured to perform procedures for changing the performance states of the PHY interface. 30. The system of claim 25, wherein the sequencing unit includes a low power switch unit configured to perform procedures for changing the power states of the PHY interface.
Frequency and Power Management BACKGROUND Field [0001] This invention relates to memory control sequencers, and more specifically, to memory control sequencing for frequency and power changes. Background [0002] Frequency and power management in a double data rate (DDR) physical (PHY) interface module is becoming increasingly complicated because the DDR-PHY has a high pin count that can result in high dynamic power. The DDR-PHY also has high frequency requirements and must transmit and receive data across a wide frequency range to support low-power DDR (LPDDR) specs. High frequency data communication is facilitated by additional high performance circuitry, on-die termination, variable voltage output high (VOH), etc. However, many of the features required to transmit and receive data at high frequency are not needed at lower frequency. Therefore, feature scaling is critical to maintaining a competitive power usage profile across frequencies. In other words, some features used to enable high frequency data communication are not necessary for low frequency data communication. In previous generations of a DDR PHY interface, different types of ad-hoc logic blocks were used to control switching between frequencies and power modes. However, adding ad-hoc control logic for frequency and power control of blocks becomes very difficult to handle due to increased complexity of the PHY interface module. The number of PHY features, and inter-dependencies between them, has grown to the extent that an easily expandable architecture is highly desirable to control these features. SUMMARY [0003] The present invention provides for changing an operating state of a PHY interface which includes a plurality of blocks. [0004] In one embodiment, a method is disclosed.
The method includes changing operating states of a PHY interface which includes a plurality of blocks, wherein changing the operating states includes: receiving parameters indicating desired feature settings of the plurality of blocks for changing the operating state of the PHY interface; and enabling the desired feature settings in a sequence, the sequence based on dependencies between the feature settings, the dependencies being stored in a dependency table.[0005] In another embodiment, a state machine apparatus for changing an operating state of a PHY interface is disclosed. The state machine apparatus is configured to: receive parameters indicating feature settings of a plurality of blocks for changing the operating state of the PHY interface; and enable the feature settings in a sequence, the sequence based on dependencies between the feature settings, the dependencies being stored in a dependency table.[0006] In another embodiment, an apparatus for changing an operating state of a PHY interface is disclosed. The apparatus includes: means for receiving parameters indicating desired feature settings of a plurality of blocks for changing the operating state of the PHY interface; and means for enabling the desired feature settings in a sequence, the sequence based on dependencies between the feature settings, the dependencies being stored in a dependency table.[0007] In yet another embodiment, a frequency and power managing system is disclosed.
The system includes: a plurality of software-programmable tables having a representation of features and properties of a plurality of blocks including frequency threshold, wakeup time requirements, interdependencies, and power requirements; and a sequencing unit configured to switch feature settings of the plurality of blocks to an operating state of a plurality of operating states of a PHY interface, including performance states and power states, based on a request from a memory controller.[0008] Other features and advantages of the present invention should be apparent from the present description which illustrates, by way of example, aspects of the invention. BRIEF DESCRIPTION OF THE DRAWINGS [0009] The details of the present invention, both as to its structure and operation, may be gleaned in part by study of the appended drawings, in which like reference numerals refer to like parts, and in which: [0010] FIG. 1 is a functional block diagram of a PHY interface residing within a system-on-chip (SoC) in accordance with one embodiment of the present invention; [0011] FIG. 2 is a detailed functional block diagram of the frequency and power manager in accordance with one embodiment of the present invention; [0012] FIG. 3A is a sample list of features of blocks that need to be controlled for a change in the operating state including corresponding properties such as wakeup times, dependencies, and stall times in accordance with one embodiment of the present invention; [0013] FIG. 3B is an example Performance State (PRFS)/Power State (PWRS) table derived from the list of features of blocks shown in FIG. 3A; [0014] FIG. 4A shows a plurality of performance states, each performance state including a plurality of power states, and each power state having a dedicated feature enable register that defines the status (enable/disable) of the features; [0015] FIG. 4B is an example feature enable register (FER) that defines the status of 13 features; [0016] FIG.
5 is an example feature wake time table (FWT) in accordance with one embodiment of the present invention; [0017] FIG. 6 is an example performance lookup table in accordance with one embodiment of the present invention; [0018] FIG. 7A is a functional flow diagram illustrating a sequencing logic based on a memory controller request for changes to the frequency and/or power state in accordance with one embodiment of the present invention; [0019] FIG. 7B is a functional flow diagram illustrating a sequencing logic for initializing all performance and power state tables in accordance with one embodiment of the present invention; [0020] FIG. 7C is a functional flow diagram illustrating a sequencing logic for performing procedures for changing the performance states in accordance with one embodiment of the present invention; [0021] FIG. 7D is a functional flow diagram illustrating a sequencing logic for performing procedures for changing the power states in accordance with one embodiment of the present invention; and [0022] FIG. 7E is a functional flow diagram illustrating a sequencing logic for preparing settings of blocks and changing the operating states in accordance with one embodiment of the present invention. DETAILED DESCRIPTION [0023] As stated above, a DDR-PHY interface module is becoming increasingly complicated because the DDR-PHY interface module has a high pin count that can result in high dynamic power and must work across a wide frequency range. For example, the PHY interface needs to work in several different communication modes to save power wherein the data rate is different in each communication mode. To do this, a system-on-chip (SoC) which includes the PHY interface assesses the amount of data to be transferred to or from the memory and chooses the communication mode with the lowest data rate or lowest power consumption that can accomplish the task on time.
For example, if there is a lot of time sensitive data that needs to be transmitted to the memory, the SoC chooses a high data rate and/or high power communication mode. Otherwise, if there is a small amount of data that needs to be transmitted to the memory, the SoC chooses a low data rate and/or low power communication mode to conserve power. In previous generations of a DDR PHY interface, different types of ad-hoc logic blocks were used to control switching between modes of the frequency and/or power. However, adding ad-hoc control logic for frequency and/or power control of blocks becomes very difficult to handle due to increased complexity of the PHY interface module. The number of PHY features, and inter-dependencies between them, has grown to the extent that an easily expandable architecture is highly desirable to control these features. The term "frequency" is used to refer to the data rate.[0024] Several embodiments as described herein provide for dynamically controlling features of an interface module, wherein properties of the features of a plurality of blocks in the interface module are defined and controlled in a table-based structure. In one embodiment, the features include frequency threshold, wakeup time requirements, interdependencies, power requirements, and other related features. Further, in one embodiment, the blocks are mixed-signal blocks. After reading this description it will become apparent how to implement the invention in various implementations and applications. Although various implementations of the present invention will be described herein, it is understood that these implementations are presented by way of example only, and not limitation. As such, this detailed description of various implementations should not be construed to limit the scope or breadth of the present invention. [0025] FIG. 
1 is a functional block diagram of a PHY interface 100 residing within a system-on-chip (SoC) 130 in accordance with one embodiment of the present invention. In one embodiment, the PHY interface 100 is a DDR-PHY interface which communicates with DDR dynamic random-access memory (DRAM) 140. The SoC 130 may also include processor(s) 134 and a memory controller 132.[0026] In the illustrated embodiment of FIG. 1, the PHY interface 100 includes a frequency and power manager 110 and a plurality of blocks 120. In one embodiment, the blocks include, but are not limited to, input receivers, multiplexers, a reference voltage generator, a bias current generator, a phase-locked loop (PLL), a current-to-voltage converter, a calibrated delay circuit (CDC), a low-dropout (LDO) regulator, and other similarly-configured blocks. In one embodiment, the frequency and power manager 110 is implemented in hardware.[0027] The illustrated embodiment of FIG. 1 also shows the frequency and power manager 110 including tables 112 and a sequencer 114. In one embodiment, the tables 112 are control/status register (CSR) tables which define features, performance states, and power states. The tables 112 may be software programmable and may include representations of features and their properties, such as frequency threshold, wakeup time requirements, interdependencies, power requirements, and other related features, for the plurality of blocks 120. The tables 112 may be programmed at boot time, and the values may be re-calculated thereafter. The sequencer 114 may include sequencing logic for switching DDR-PHY settings to a new operating state including a new performance state, a new power state, or both based on a request from the memory controller 132 for changes to frequency and/or power. In particular, sequencer 114 applies new settings to the blocks 120 to change the operation of the blocks 120 based on the request from the memory controller 132.
For example, current feature settings of the blocks 120 may cause the blocks 120 to operate in a high performance state and high power state. Frequency and power manager 110 may receive a request from memory controller 132 to operate in a lower performance state and low power state. Consequently, sequencer 114 may provide different properties (parameters) to the features of blocks 120 that cause blocks 120 to operate in the low performance and low power state. Frequency and power manager 110 may determine the properties applied to the features from tables 112. The sequencing logic steps through a process of enabling features used in the new performance state or power state and disabling features not used in the new performance state or power state, while considering the timing and dependencies between the features. In one embodiment, the implementations of the tables 112 and the sequencer 114 run in the background so that frequency and power switching will not affect the blocks and the data being transmitted and received to/from DRAM 140 in the foreground until after the traffic is stalled and the switched operating states take effect. In one embodiment, an apparatus (e.g., a sequencer in this case) may enable a block (e.g., one of the blocks 120) by turning on or waking up the block, and may disable the block by turning off the block. In another embodiment, an apparatus may enable the block by changing a state of the block from a lower-power standby state to a higher-power operational state, and may disable the block by changing a state of the block from a higher-power operational state to a lower-power standby state. The sequencer 114 also minimizes the traffic stall time by handling non-stall features in a separate phase. In other words, the sequencer 114 can change the state of some blocks that are not currently involved in communicating with DRAM 140 ahead of time since changing the state of these blocks does not affect traffic communicated to/from DRAM 140.
Thus, the sequencer generates control signals for a large number of features for the blocks 120. Accordingly, the control signals have different settings depending on the operating state of the device.[0028] FIG. 2 is a detailed functional block diagram of the frequency and power manager 110 in accordance with one embodiment of the present invention. As described above with respect to FIG. 1, the frequency and power manager 110 includes tables 112 and a sequencer 114. In the illustrated embodiment of FIG. 2, the tables 112 include: a Performance State (PRFS)/Power State (PWRS) table 200, a feature waketime/dependency table 202, and a performance lookup table 204; and sequencer 114 includes a wake time calculation finite state machine (FSM) 210, a frequency switch FSM 212, a low power switch FSM 214, a timer 216, and a feature wakeup FSM 218. Although the illustrated embodiments and claims use the term table or lookup table, any type of data structure that defines the sequencing for the operating state can be used in place of the table or lookup table. In some embodiments, any set of information related to the sequencing for operating state can be stored and used to make a decision. In other embodiments, real-time system variables (e.g., variables that are not stored in any data structure) can be used to provide sequencing for frequency switch or lower power switch. [0029] In the illustrated embodiment of FIG. 2, the finite state machines (FSMs) in the frequency and power manager 110 implement sequential logic circuits (e.g., sequencer 114) and are configured in hardware. An FSM can be in one of a finite number of states. However, the FSM is in only one current state at a time. It can change from one state to another when initiated by a triggering event or condition called a transition. A particular FSM is defined by a list of its states, and the triggering condition for each transition. 
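The FSM behavior described above (one current state at a time, transitions fired only by defined triggering events) can be sketched in software as a transition table keyed by (current state, trigger). The state and trigger names below are illustrative assumptions, not taken from the disclosure; the hypothetical flow mirrors the two-phase idea of applying non-stall settings before waiting for the traffic stall.

```python
class SimpleFSM:
    """Minimal finite state machine: one current state at a time; a
    transition fires only for a defined (state, trigger) pair."""

    def __init__(self, transitions, initial):
        self.transitions = transitions  # {(state, trigger): next_state}
        self.state = initial

    def fire(self, trigger):
        # An undefined (state, trigger) pair leaves the state unchanged.
        self.state = self.transitions.get((self.state, trigger), self.state)
        return self.state

# Hypothetical feature-wakeup flow: apply non-stall feature settings first,
# then wait for the traffic stall before applying the remaining settings.
wakeup = SimpleFSM(
    {
        ("IDLE", "request"): "APPLY_NON_STALL",
        ("APPLY_NON_STALL", "done"): "WAIT_STALL",
        ("WAIT_STALL", "stalled"): "APPLY_STALL",
        ("APPLY_STALL", "done"): "IDLE",
    },
    "IDLE",
)
```

A hardware FSM of this kind is just the same table realized in registers and combinational logic, which is why it can present its output signals with cycle-accurate timing.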
When changing the features or parameters of the blocks 120, the signals sent from the feature wakeup FSM 218 to the blocks 120 may be very time sensitive. In some cases, digital values may be sent from the feature wakeup FSM 218 to the blocks 120, but in other cases, analog or digital signals (such as enable/wakeup signals sent on enable lines) may be sent from the feature wakeup FSM 218 to the blocks 120. These signals may be very time sensitive. Accordingly, the frequency and power manager 110 is implemented in hardware as a state machine and is physically located adjacent to the blocks 120. In contrast, if the signals were generated by a software-implemented frequency and power manager, the signals might not be presented at the right times relative to each other, or would require longer wait times than a finite state machine implementation, making changes in the performance state or power state very slow and inefficient.[0030] Regarding the tables 112, the PRFS/PWRS table 200 defines the enable/disable status of features of blocks 120 for each performance state and power state. A performance state defines a frequency (e.g., a data rate used to transmit data from SoC 130 to DRAM 140 or a data rate at which SoC 130 receives data from DRAM 140 or both) state to which the PHY interface can switch, while a power state defines a state to which the PHY interface can switch based on a low power request. Pre-defined (e.g., at boot time) values of the features of blocks 120 for each performance and power state are identified through a characterization process of the features of blocks and are not modified during product operation. The feature wake time/dependency table 202 keeps the wakeup time and dependency between features. The wakeup time is the time required by a block to be ready for operation. For example, certain features require some time to stabilize or adjust, which is referred to as a wakeup time. 
A dependency is when some features depend on other features. For example, a delay-locked loop (DLL) may need to be turned on after a particular current source is turned on rather than prior to turning on the current source or simultaneously turning on the current source and the DLL. These values may also be determined during product characterization and are typically not changed during normal chip operation. The performance lookup table 204 defines the mapping between performance states and frequency ranges. The frequency ranges are a set of different frequencies or rates at which data is communicated between the SoC and the memory (e.g., a DRAM) or between the memory and the SoC. For example, in a high performance state, there are high rates and in a low performance state there are low rates. This table keeps the clock periods corresponding to each of the defined performance states.[0031] Regarding the sequencer 114, the wake time calculation FSM 210 is responsible for calculating the wakeup times of each of the power states, i.e. the amount of time it takes to switch the PHY interface (including blocks 120) from one power state to another power state. In one embodiment, a wakeup time is a time it takes to switch the PHY interface from a non-functional power state to a fully-functional state. This calculation needs to be performed for each power state of each performance state at boot time. Results of this calculation are kept in a register lookup table (i.e., power state wakeup time table) and are referred to during low power request transitions.[0032] The frequency switch (FSW) FSM 212 supports the DDR-PHY interface handshake and is based on a received request from the memory controller 132. The FSW FSM 212 also selects a new performance state. The FSW FSM 212 further initiates requests to the feature wakeup FSM 218 so that the required features for the new performance state are enabled. 
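A minimal software mock-up of the three tables may help fix ideas. The real tables 112 are hardware registers (CSRs); the feature names, wakeup times, and values below are illustrative assumptions only, not values from the specification.

```python
# Hypothetical mock-up of the tables 112 described above.

# PRFS/PWRS table: enable (1) / disable (0) status of each feature per
# (performance state, power state) pair.
prfs_pwrs_table = {
    ("PRFS5", "PWRS1"): {"dll": 1, "cdc": 1, "odt": 1},  # high perf, functional
    ("PRFS0", "PWRS0"): {"dll": 0, "cdc": 0, "odt": 0},  # lowest power
}

# Feature wake time/dependency table: per-feature wakeup time and the
# features that must already be on before this one is enabled.
feature_wake_table = {
    "current_source": {"wakeup_ns": 100, "depends_on": []},
    "dll":            {"wakeup_ns": 500, "depends_on": ["current_source"]},
}

# Performance lookup table: clock period (ns) per performance state.
performance_table = {"PRFS0": 3.0, "PRFS2": 1.5, "PRFS5": 0.625}

def feature_enabled(prfs: str, pwrs: str, feature: str) -> bool:
    """Look up whether a feature is enabled in a given operating state."""
    return bool(prfs_pwrs_table[(prfs, pwrs)][feature])
```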
The frequency switch interface signals are transmitted and received between the FSW FSM 212 and the memory controller 132. These signals include at least init_start, init_complete, and fpm_period, which are described below in detail.[0033] The low power (LP) switch FSM 214 interfaces with the LP interface signals. Upon receipt of power state change requests from the memory controller 132, the LP switch FSM 214 looks up the time required for waking up from each power state within the current performance state. If a wakeup time requested by memory controller 132 is longer than the wakeup time of one of the power states, LP switch FSM 214 initiates a request to the feature wakeup FSM 218 to transition from the current power state to a selected low power state. The LP switch FSM 214 also sets (e.g., using signal pwr_dwn_time) the timer 216 so that once the timer expires (e.g., using signal pwr_dwn_expire), the wakeup process initiates in time to bring the PHY back to a fully functional state prior to the wakeup time requested by memory controller 132. The LP interface signals are defined as a request/acknowledge (i.e., req/ack) pair of signals that are transmitted and received between the LP switch FSM 214 and the memory controller 132. These signals include at least lp_req, lp_ack, and lp_wakeup, which are described below in detail.[0034] The feature wakeup FSM 218 receives requests from the FSW FSM 212 and LP switch FSM 214 to turn on/off features of blocks 120 according to the feature enable register (FER) 220. The feature wakeup FSM 218 performs a turn-on sequence based on the dependencies between different features and enables features that are not dependent in parallel and then enables the next set of features. The feature wakeup FSM 218 considers the wakeup time requirements of features and properly sets the timer 216 based on these requirements. The feature wakeup FSM 218 also considers the stall requirements of the features that require traffic stall to be turned on. 
The feature wakeup FSM 218 then generates another set of enable signals that are based on stall time requirements of those features. These enable signals are triggered once the init_start signal coming from the memory controller 132 goes low indicating a traffic stall period. After enabling features, the feature wakeup FSM 218 will send the init_complete signal to the memory controller 132 indicating the end of the frequency switch or low power transition.[0035] As described above, the lower power interface is defined as a request/acknowledge (i.e., req/ack) pair of signals. Thus, signals being transmitted and received between the FSMs 212, 214 and the memory controller 132 include at least init_start, init_complete, fpm_period, lp_req, lp_wakeup, and lp_ack. Signal init_start refers to the PHY initialization start. When this signal is asserted, the memory controller is requesting a DDR clock frequency change or a frequency ratio change. Signals freq_ratio, legacy_mode, and fpm_period need to be set up prior to the assertion of init_start. Signal init_complete refers to the PHY initialization complete. The init_complete signal indicates that the PHY is able to respond to any proper stimulus on the PHY interface. All PHY interface signals that communicate commands or status are held at their default values until the init_complete signal is asserted. During a PHY re-initialization request (e.g., a frequency change), this signal will be de-asserted. For a frequency change request, the de-assertion of the init_complete signal acknowledges the frequency change protocol. Signal fpm_period indicates the next target frequency and is provided by memory controller 132. Signal lp_req is a low power opportunity request. This signal is used by the memory controller 132 to inform the PHY of an opportunity to switch to a low power mode. Signal lp_wakeup, provided by memory controller 132, refers to a low power wakeup time. 
This signal indicates which one of the 16 wakeup times the memory controller 132 is requesting for the PHY. Signal lp_ack refers to a low power acknowledgement. This signal is asserted to acknowledge the memory controller low power opportunity request. The PHY is not required to acknowledge this request.[0036] FIG. 3A is a sample list 300 of features of blocks 120 that may be controlled for a change in the operating state including corresponding properties such as wakeup times, dependencies, and stall times in accordance with one embodiment of the present invention. In one embodiment, these features and the corresponding properties are programmed in a set of tables 112. As described above, in one embodiment, the tables are CSR tables. The pre-defined values of the features for each performance and power state are identified through a characterization process of features of blocks and are not modified during product operation. The feature wake time/dependency table keeps the wakeup time and dependency between features. These values should also be determined during product characterization and are not typically changed during normal chip operation.[0037] FIG. 3B is an example PRFS/PWRS table 200 derived from the list of features of blocks shown in FIG. 3A. Columns labeled 0-56 (columns 5-51 have been omitted to allow the example table to be shown) in the table shown in FIG. 3B are the features of blocks as indexed in FIG. 3A. In the illustrated embodiment of the PRFS/PWRS table 200 in FIG. 3B, each performance state (PRFS0, PRFS1, PRFS2, PRFS3, PRFS4, or PRFS5) is assigned two distinct power states (PWRS0, PWRS1). In one embodiment, the power state PWRS1 is a fully functional state of the PHY interface where all required features for the particular PRFS are turned on and the traffic can be sent and/or received from the PHY interface. In this state, the power consumption is dictated by the traffic switching rate and will be significantly lower when traffic is idle. 
Further, the power state PWRS0 is a low power state of the PHY interface that is non-functional (i.e. the traffic cannot be sent through the PHY interface in this state). The power state PWRS0 has a significantly lower power consumption compared to the power state PWRS1, because no traffic is going through the PHY interface and most features of the blocks are turned off to save power. In the power state PWRS0, the blocks with longer wakeup times may be kept enabled so the power state PWRS0 has a relatively low wakeup time requirement.[0038] Each power state may be allocated a dedicated feature enable register (FER). Each FER defines the features that are enabled and/or disabled for a particular performance-power state. Thus, using FERs, software has the flexibility to define performance-power characteristics of the DDR PHY.[0039] FIG. 4A shows a plurality of performance states 400, each performance state including a plurality of power states 410, and each power state having a dedicated feature enable register 420 that defines the status (enable/disable) of the features. In the illustrated embodiment of FIG. 4A, the feature enable register 420 uses n bits to define the status of n features. FIG. 4B is an example FER 430 that defines the status of 13 features.[0040] FIG. 5 is an example row of a feature wake time table (FWT) 202 in accordance with one embodiment of the present invention. The FWT 202 is a set of registers each defining the wakeup time requirements of features, as well as dependencies between features that need to be honored in terms of sequence of turning on features. The table 202 also includes fields for stall-time requirements of the features and whether feature enablement needs to happen during a traffic stall period. FWT 202 may include one row for each of the features of blocks 120.[0041] FIG. 6 is an example performance lookup table 204 in accordance with one embodiment of the present invention. 
The performance lookup table 204 defines the mapping between performance states and frequency ranges. The table 204 also keeps the clock periods corresponding to each of the defined performance states. In the table 204, performance state frequency threshold settings can be set based on the supported frequency range of a specific product. In one embodiment, the following performance state frequency threshold settings can be defined: (1) PRFS0: This performance state defines the lowest frequency (e.g., <333 MHz) and lowest power device configuration. This performance state assumes that all high performance features are disabled and are placed into a low leakage state; (2) PRFS1: This performance state defines a low frequency (e.g., 400 MHz) and low power device configuration. This performance state assumes that some higher performance features are enabled, but a majority of others are disabled and placed into a low leakage state; (3) PRFS2: This performance state defines a modest frequency (e.g., 533/667 MHz) and modest power device configuration. This performance state assumes that some higher performance features are enabled, while others are disabled and placed into a low leakage state; (4) PRFS3: This performance state defines a higher frequency (e.g., 800 MHz) and power range and assumes that required features are enabled; (5) PRFS4: This performance state defines a higher frequency (e.g., 1066/1333 MHz) and power range and assumes that required features are enabled; and (6) PRFS5: This performance state defines the highest frequency (e.g., 1600 MHz) and highest power device configuration. This performance state assumes that all higher performance features are enabled. Performance states can be mapped to device tiers and are software configurable through boot time configuration.[0042] For each performance state, multiple power states are defined to describe states to which the PHY can switch based on DDR PHY interface low power requests. 
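The frequency-threshold mapping just described can be sketched as a simple lookup: choose the lowest performance state whose frequency ceiling covers the requested frequency. The thresholds below follow the example ranges in the text; actual thresholds are product-specific, and the function name is an assumption.

```python
# Sketch of the performance lookup: map a requested frequency to a
# performance state using example thresholds (MHz), lowest state first.

PRFS_MAX_FREQ_MHZ = [
    ("PRFS0", 333), ("PRFS1", 400), ("PRFS2", 667),
    ("PRFS3", 800), ("PRFS4", 1333), ("PRFS5", 1600),
]

def select_prfs(freq_mhz: float) -> str:
    """Return the lowest performance state supporting freq_mhz."""
    for state, max_freq in PRFS_MAX_FREQ_MHZ:
        if freq_mhz <= max_freq:
            return state
    raise ValueError("frequency above supported range")
```

In the hardware flow, the input would be derived from the fpm_period signal (a clock period rather than a frequency), but the threshold comparison is the same idea.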
The features of blocks that are enabled and/or disabled for different power states are selected based on the requested wake-up times and the settling times of those features. Thus, in one embodiment, the following power states can be defined: (1) PWRS2: This power state is the fully functional state of the PHY where all required features for the particular PRFS are turned on and traffic can be sent and/or received from the PHY. In this state the power consumption is dictated by the traffic switching rate and will be significantly lower when traffic is idle; (2) PWRS1: This power state is a low power state of the PHY that is non-functional, i.e. traffic cannot be sent through the PHY in this state. PWRS1 has a significantly lower power consumption compared to PWRS2, because no traffic is going through the PHY and most features of blocks are turned off to save power. In PWRS1, blocks with longer wakeup times are kept enabled so PWRS1 has a relatively low wakeup time requirement. The assumption is that PWRS1 will have lower power consumption, compared to PWRS2 idle power consumption, due to disabling additional features; and (3) PWRS0: This power state is the lowest power state of the PHY and is non-functional, i.e. traffic cannot be sent through the PHY in this state. PWRS0 has no dynamic or static power consumption. Leakage power is the primary power consumption in this state. Accordingly, if the requested wakeup time is long, all PHY features including clocking features of a custom macro such as the master calibrated delay cell (CDC) and other blocks that have higher wakeup times can be turned off and turned back on in time to have the PHY fully enabled within the given wakeup time request. This will result in the lowest power state of the PHY. [0043] FIG. 7A is a functional flow diagram illustrating a sequencing logic 700 based on a memory controller request for changes to the frequency and/or power state in accordance with one embodiment of the present invention. 
As stated above, the sequencing logic 700 steps through the process of enabling required features and disabling non-required features, while considering the timing and dependencies between the features. In the illustrated embodiment of FIG. 7A, the tables are initialized, at step 710 (see FIG. 7B). In one embodiment, the wake time calculation FSM 210 performs the sequencing logic of initializing the tables. A determination is then made, at step 720, whether a request for changes to the performance and/or power states is received from memory controller 132. If memory controller 132 requests a change to the performance state, at step 720, procedures are performed by the frequency switch FSM 212, at step 730 (see FIG. 7C), to change the performance state. If memory controller 132 requests a change to the power state, at step 720, procedures are performed by the low power switch FSM 214, at step 750 (see FIG. 7D), to change the power state. The settings of the blocks 120 are prepared and the operating states (e.g., performance and/or power states) are changed, at step 770 (see FIG. 7E), by the feature wakeup FSM 218.[0044] FIG. 7B is a functional flow diagram illustrating a sequencing logic 710 for initializing all performance and power state tables in accordance with one embodiment of the present invention. In the illustrated embodiment of FIG. 7B, the wakeup times of each of the power states (i.e. the amount of time it takes to switch the PHY from a non-functional power state to a fully-functional state) are calculated, at step 712. This calculation is performed for each power state of each performance state at boot time. Thus, if it is determined, at step 714, that the wakeup times of all power states of all performance states have been calculated, the results of this calculation are stored in a power state wakeup time table 202, at step 716, and are referred to during low power request transitions. Otherwise, additional wakeup times are calculated, at step 712. 
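The boot-time calculation of step 712 can be sketched as a critical-path computation over the feature dependency chains, on the assumption that independent features wake in parallel so the power state's wakeup time is the longest chain. The feature names, times, and function names below are hypothetical, not from the specification.

```python
# Sketch of a boot-time wakeup-time calculation: the wakeup time of a
# feature is its own settling time plus the longest chain of features it
# depends on; the wakeup time of a power state is the maximum over the
# features that must be re-enabled.

def wakeup_time(feature, table, memo=None):
    memo = {} if memo is None else memo
    if feature not in memo:
        deps = table[feature]["depends_on"]
        memo[feature] = table[feature]["wakeup_ns"] + max(
            (wakeup_time(d, table, memo) for d in deps), default=0)
    return memo[feature]

def power_state_wakeup(features_to_wake, table):
    return max((wakeup_time(f, table) for f in features_to_wake), default=0)

table = {
    "current_source": {"wakeup_ns": 100, "depends_on": []},
    "dll":            {"wakeup_ns": 500, "depends_on": ["current_source"]},
    "odt":            {"wakeup_ns": 50,  "depends_on": []},
}
# Waking {dll, odt}: the dll chain is 100 + 500 ns, odt is 50 ns in parallel.
```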
Then, at step 718, all performance and power state tables are initialized.[0045] FIG. 7C is a functional flow diagram illustrating a sequencing logic 730 for performing procedures for changing the performance states in accordance with one embodiment of the present invention. In one embodiment, the frequency switch FSM 212 performs this sequencing logic 730, which receives a request for a new performance state from memory controller 132, at step 732. The desired period of the communication rate between SoC 130 and DRAM 140 (e.g., fpm_period in FIG. 2) is also received, at step 734, from the memory controller 132. Based on the desired period received, a performance state is then selected, at step 736, using the performance state tables 204. FSM 212 determines parameters of blocks 120 needed to realize the selected performance state, at step 738, and sends the determined parameters, at step 740, to the feature wakeup FSM 218.[0046] FIG. 7D is a functional flow diagram illustrating a sequencing logic 750 for performing procedures for changing the power states in accordance with one embodiment of the present invention. In one embodiment, the low power switch FSM 214 performs this sequencing logic 750, which receives a request for a reduced power state from memory controller 132, at step 752. FSM 214 receives a requested wakeup time (e.g., lp_wakeup in FIG. 2), at step 754, from the memory controller. The low power switch FSM 214 selects one of the plurality of power states that satisfy the requested wakeup time provided by memory controller 132 as the selected lowest power state, at step 756. A determination is made, at step 758, regarding which block(s) to turn off based on the selected lowest power state. A signal is then sent, at step 760, to the feature wakeup FSM 218 to turn the block(s) off. The low power switch FSM 214 also determines when to start turning the block(s) back on by the required wakeup time using the timer 216. 
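Steps 754-760 and the timer setup can be sketched as follows: pick the lowest-power state whose boot-time-calculated wakeup time still fits the requested wakeup time, and arm the timer so wakeup starts early enough. The power states, times, and function names are illustrative assumptions, not the hardware implementation.

```python
# Sketch of low power state selection and timer setup (hypothetical values).

# Power states ordered from lowest power to highest, with wakeup times (ns)
# as computed at boot time.
POWER_STATES = [("PWRS0", 5000), ("PWRS1", 600)]

def select_power_state(requested_wakeup_ns: int):
    """Lowest-power state whose wakeup time fits the request, else None."""
    for state, wakeup_ns in POWER_STATES:   # lowest power first
        if wakeup_ns <= requested_wakeup_ns:
            return state
    return None  # no low power state fits; stay fully functional

def power_down_time(opportunity_ns: int, wakeup_ns: int) -> int:
    """Timer value so the wakeup starts early enough to finish in time."""
    return max(opportunity_ns - wakeup_ns, 0)
```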
In one embodiment, the low power switch FSM 214 sets the timer 216 with a power down time (e.g., pwr_dwn_time in FIG. 2) so that once the power down time has elapsed, the timer 216 issues a power expiration signal (e.g., pwr_dwn_expire in FIG. 2), and the wakeup process initiates in time to bring the PHY interface back to a fully functional state. Parameters for the power state prior to turning the block(s) off are sent, at step 764, to the feature wakeup FSM to turn the block(s) back on.[0047] FIG. 7E is a functional flow diagram illustrating a sequencing logic 770 for preparing settings of blocks and changing the operating states in accordance with one embodiment of the present invention. In one embodiment, the feature wakeup FSM 218 performs this sequencing logic 770. Initially, a request for changes to the performance and/or power states is monitored, at step 772. Thus, if the request is received, at step 772, the current feature settings of the block(s) are compared, at step 774, to the desired feature settings indicated by the parameters sent from the frequency switch FSM 212 or low power switch FSM 214 to determine which feature settings of the block(s) to change. The dependencies of the features are then determined, at step 776, using a dependency table 202. Based on the determined dependencies, groups of blocks with features that can be turned on in parallel are formed, at step 778. The turn-on sequence is performed based on the dependencies between different features and enables features that are not dependent in parallel and then enables the next set of features. Further, based on the wakeup times, delays between the groups of blocks are set, at step 780. The wakeup time requirements of features are considered and the timer is properly set based on these requirements. 
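The grouping of steps 776-778 amounts to partitioning features into dependency-ordered waves: a feature joins a wave only once all of its prerequisites are in earlier waves, and each wave can be enabled in parallel. A sketch with hypothetical feature names (the hardware performs this with the FWT registers, not a Python loop):

```python
# Sketch of dependency-ordered grouping for parallel turn-on (steps 776-778).

def enable_waves(features, deps):
    """Group features into waves; deps maps feature -> set of prerequisites."""
    remaining, done, waves = set(features), set(), []
    while remaining:
        # A feature is ready when all of its prerequisites are already on.
        wave = {f for f in remaining if deps.get(f, set()) <= done}
        if not wave:
            raise ValueError("dependency cycle")
        waves.append(sorted(wave))
        done |= wave
        remaining -= wave
    return waves

deps = {"dll": {"current_source"}, "odt": set(), "current_source": set()}
# Wave 1: current_source and odt in parallel; wave 2: dll.
waves = enable_waves({"dll", "odt", "current_source"}, deps)
```

The inter-wave delays of step 780 would then be set from the longest wakeup time in the preceding wave.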
The groups of blocks with features which do not require stall are enabled, at step 782.[0048] The stall requirements of the features that require traffic stall to be turned on are also considered. A set of enable signals that are based on stall time requirements of those features are then generated. These enable signals are triggered, at step 786, once the init_start signal coming from the memory controller goes low indicating a traffic stall period. At step 788, the groups of blocks with features which require stall are enabled. After enabling features, the init_complete signal is sent to the memory controller indicating the end of the frequency switch or low power transition.[0049] Accordingly, embodiments of the frequency and power manager of the present invention are based on an architecture that supports frequency and power scaling. Further, the architecture of the frequency and power manager is independent of the feature. The architecture is also independent of DDR PHY implementation with respect to the type of circuit features that are supported. The architecture is feature expandable for higher performance and feature collapsible for lower performance designs. The architecture also supports a table-based structure for keeping information of each of the features (e.g., wakeup time requirement and dependencies on other features) and is software programmable with respect to features, wake times, and dependencies. The architecture further controls timing and sequence of enabling or disabling a large number of analog/IO features and maintaining the sequence and requirements of each of those features.[0050] Although several embodiments of the invention are described above, many variations of the invention are possible. 
For example, although the illustrated embodiments describe the frequency and power management with respect to a DDR- PHY application, the frequency and power management described in this application can be used in other memory control request applications. Further, features of the various embodiments may be combined in combinations that differ from those described above. Moreover, for clear and brief description, many descriptions of the systems and methods have been simplified. Many descriptions use terminology and structures of specific standards. However, the disclosed systems and methods are more broadly applicable.[0051] Those of skill will appreciate that the various illustrative blocks and modules described in connection with the embodiments disclosed herein can be implemented in various forms. Some blocks and modules have been described above generally in terms of their functionality. How such functionality is implemented depends upon the design constraints imposed on an overall system. Skilled persons can implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the invention. In addition, the grouping of functions within a module, block, or step is for ease of description. Specific functions or steps can be moved from one module or block without departing from the invention.[0052] The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles described herein can be applied to other embodiments without departing from the spirit or scope of the invention. 
Thus, it is to be understood that the description and drawings presented herein represent presently preferred embodiments of the invention and are therefore representative of the subject matter which is broadly contemplated by the present invention. It is further understood that the scope of the present invention fully encompasses other embodiments that may become obvious to those skilled in the art and that the scope of the present invention is accordingly limited by nothing other than the appended claims. |
A method and apparatus for managing processor functionality includes receiving, by the processor, data relating to one or more environmental conditions. The processor compares the data to pre-existing parameters to determine whether or not the environmental conditions are within the pre-existing parameters for normal operation. If the data are within the pre-existing parameters for normal operation, the processor is operated in a normal operation mode. If the data are outside the pre-existing parameters for normal operation, the processor operates in a second operation mode which is dynamically determined and calibrated during power-on, boot and operation. |
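As a rough, non-authoritative illustration of the method summarized in the abstract, the comparison of sensor data against pre-existing parameters and the resulting mode selection might look like the following. The parameter names and ranges are invented for the example; the specification does not define them.

```python
# Hypothetical sketch of the claimed method: compare environmental sensor
# readings against pre-existing normal-operation ranges, then pick the mode.

NORMAL_RANGES = {
    "temperature_c": (0, 95),
    "humidity_pct":  (10, 90),
    "pressure_kpa":  (80, 106),
}

def select_mode(readings: dict) -> str:
    """Return 'normal' if all readings fall in range, else 'second'."""
    within = all(lo <= readings[k] <= hi
                 for k, (lo, hi) in NORMAL_RANGES.items())
    # The second mode could mean reduced voltage, frequency, or functionality.
    return "normal" if within else "second"
```

In the second mode, the processor would then apply the dynamically determined reduced-operation settings (e.g., lower voltage or frequency) described in the claims.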
CLAIMS
What is claimed is:
1. A method for managing processor functionality, comprising: receiving, by the processor, data relating to one or more environmental conditions; comparing the data to pre-existing parameters to determine whether or not the environmental conditions are within the pre-existing parameters for normal operation; operating the processor in a normal operation mode if the data are within the pre-existing parameters for normal operation; and operating the processor in a second operation mode if the data are outside the pre-existing parameters for normal operation.2. The method of claim 1, further comprising the processor receiving the one or more environmental conditions from one or more sensors.3. The method of claim 2, wherein the environmental conditions include one or more of temperature, humidity, and air pressure conditions.4. The method of claim 1, wherein the second operation mode includes operating the processor at a reduced voltage level relative to the normal operation mode voltage level.5. The method of claim 1, wherein the second operation mode includes operating the processor at a reduced frequency.6. The method of claim 1, wherein the second operation mode includes the processor operating with reduced functionality.7. The method of claim 6, wherein reduced functionality operation includes the processor powering down one or more cores.8. The method of claim 1, wherein the second operation mode includes the processor modifying a phase lock loop (PLL) setting to operate at reduced functional levels.9. The method of claim 1, wherein the second operation mode includes the processor modifying input/output (I/O) terminations to operate at reduced functional levels.10. The method of claim 9, wherein the second operation mode includes disabling one or more I/O terminations.11. The method of claim 1, wherein the receiving of the data is performed upon a powerup or bootup of the processor.12. 
The method of claim 1, wherein the receiving of the data is dynamically performed upon a changing of the one or more environmental conditions during operation of the processor.13. An apparatus for managing processor functionality, comprising: at least one sensor; and a processor communicatively coupled to the at least one sensor, wherein the sensor detects one or more environmental conditions and sends data regarding the one or more environmental conditions to the processor, wherein the processor: compares the data to pre-existing parameters to determine whether or not the environmental conditions are within the pre-existing parameters for normal operation; operates in a normal operation mode if the data are within the pre-existing parameters for normal operation; and operates in a second operation mode if the data are outside the pre-existing parameters for normal operation.14. The apparatus of claim 13, wherein the environmental conditions include one or more of temperature, humidity, and air pressure conditions.15. The apparatus of claim 13, wherein the second operation mode includes operating the processor at a reduced voltage level relative to the normal operation mode voltage level.16. The apparatus of claim 13, wherein the second operation mode includes operating the processor at a reduced frequency.17. The apparatus of claim 13, wherein the second operation mode includes the processor operating with reduced functionality.18. The apparatus of claim 17, wherein reduced functionality operation includes the processor powering down one or more cores.19. The apparatus of claim 13, wherein the second operation mode includes the processor modifying a phase lock loop (PLL) setting to operate at reduced functional levels.20. The apparatus of claim 13, wherein the second operation mode includes the processor modifying input/output (I/O) terminations to operate at reduced functional levels.21. 
The apparatus of claim 20, wherein the second operation mode includes the processor disabling one or more I/O terminations.22. The apparatus of claim 13, wherein the processor receives the data upon a powerup or bootup of the processor.23. The apparatus of claim 13, wherein the processor dynamically receives the data upon a changing of the one or more environmental conditions during operation of the processor.24. A non-transitory computer-readable medium for managing processor functionality in a computer system, the non-transitory computer-readable medium having instructions recorded thereon, that when executed by the processor, cause the processor to perform operations including: receiving, by the processor, data relating to one or more environmental conditions; comparing the data to pre-existing parameters to determine whether or not the environmental conditions are within the pre-existing parameters for normal operation; operating the processor in a normal operation mode if the data are within the pre-existing parameters for normal operation; and operating the processor in a second operation mode if the data are outside the pre-existing parameters for normal operation. |
METHOD AND APPARATUS FOR MANAGING PROCESSOR FUNCTIONALITY
CROSS REFERENCE TO RELATED APPLICATIONS [0001] This application claims the benefit of U.S. Non-Provisional Patent Application No. 16/711,875 filed December 12, 2019, the contents of which are hereby incorporated by reference herein.BACKGROUND[0002] Central Processing Units (CPUs) are typically designed to work under certain environmental conditions, such as certain temperature, humidity, and air pressure, for example. However, if some of these conditions go beyond predefined limits, the CPU can fail to operate. If the CPU is unable to execute basic code and allow changes in BIOS, then the CPU will be nonoperational unless the environmental condition is altered to be within the typical limits.BRIEF DESCRIPTION OF THE DRAWINGS [0003] A more detailed understanding can be had from the following description, given by way of example in conjunction with the accompanying drawings wherein:[0004] Figure 1 is a block diagram of an example device in which one or more features of the disclosure can be implemented; and[0005] Figure 2 is a flow diagram of an example method of managing processor functionality.DETAILED DESCRIPTION[0006] Although the method and apparatus will be expanded upon in further detail below, briefly a method for detecting environmental conditions and allowing for operation outside those conditions is described herein.[0007] A method for managing processor functionality includes receiving, by the processor, data relating to one or more environmental conditions. The processor compares the data to pre-existing parameters to determine whether or
not the environmental conditions are within the pre-existing parameters for normal operation. The processor is operated in a normal operation mode if the data are within the pre-existing parameters for normal operation, and in a second operation mode if the data are outside the pre-existing parameters for normal operation.[0008] An apparatus for managing processor functionality includes at least one sensor, and a processor communicatively coupled to the at least one sensor. The sensor detects one or more environmental conditions and sends data regarding the one or more environmental conditions to the processor. The processor compares the data to pre-existing parameters to determine whether or not the environmental conditions are within the pre-existing parameters for normal operation. The processor operates in a normal operation mode if the data are within the pre-existing parameters for normal operation, and operates in a second operation mode if the data are outside the pre-existing parameters for normal operation.[0009] A non-transitory computer-readable medium for managing processor functionality in a computer system has instructions recorded thereon, that when executed by the processor, cause the processor to perform operations. The operations include receiving, by the processor, data relating to one or more environmental conditions, comparing the data to pre-existing parameters to determine whether or not the environmental conditions are within the pre-existing parameters for normal operation, operating the processor in a normal operation mode if the data are within the pre-existing parameters for normal operation, and operating the processor in a second operation mode if the data are outside the pre-existing parameters for normal operation.[0010] Figure 1 is a block diagram of an example device 100 in which one or more features of the disclosure can be implemented.
The device 100 can include, for example, a computer, a gaming device, a handheld device, a set-top box, a television, a mobile phone, or a tablet computer. The device 100 includes a processor 102, a memory 104, a storage 106, one or more input devices 108, and one or more output devices 110. The device 100 can also optionally include an input driver 112 and an output driver 114. Additionally, the device 100 includes a
memory controller 115 that communicates with the processor 102 and the memory 104, and also can communicate with an external memory 116. It is understood that the device 100 can include additional components not shown in Figure 1. [0011] In various alternatives, the processor 102 includes a central processing unit (CPU), a graphics processing unit (GPU), a CPU and GPU located on the same die, or one or more processor cores, wherein each processor core can be a CPU or a GPU. In various alternatives, the memory 104 is located on the same die as the processor 102, or is located separately from the processor 102. The memory 104 includes a volatile or non-volatile memory, for example, random access memory (RAM), dynamic RAM, or a cache.[0012] The storage 106 includes a fixed or removable storage, for example, a hard disk drive, a solid state drive, an optical disk, or a flash drive. The input devices 108 include, without limitation, a keyboard, a keypad, a touch screen, a touch pad, a detector, a microphone, an accelerometer, a gyroscope, a biometric scanner, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals). The output devices 110 include, without limitation, a display, a speaker, a printer, a haptic feedback device, one or more lights, an antenna, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals).[0013] The input driver 112 communicates with the processor 102 and the input devices 108, and permits the processor 102 to receive input from the input devices 108. The output driver 114 communicates with the processor 102 and the output devices 110, and permits the processor 102 to send output to the output devices 110. 
It is noted that the input driver 112 and the output driver 114 are optional components, and that the device 100 will operate in the same manner if the input driver 112 and the output driver 114 are not present.[0014] The external memory 116 may be similar to the memory 104, and may reside in the form of off-chip memory. Additionally, the external memory may be memory resident in a server where the memory controller 115 communicates over a network interface to access the memory 116.
[0015] Figure 2 is a flow diagram of an example method 200 for managing processor functionality. In step 210, the processor (e.g., processor 102 of Figure 1) enters a startup mode. The processor receives data from one or more sensors regarding environmental conditions (step 220). For example, the processor 102 in system 100 may receive data from sensors such as input devices 108 via an input driver 112. The data may include temperature, humidity, and air pressure, for example.[0016] Once the processor has acquired the environmental data, the data is compared to pre-existing parameters to determine if the environmental parameters are outside of the pre-existing parameters for normal operation (step 230). If the environmental data meets the parameters for normal operation (step 230), then the processor enters normal startup mode (step 240). For example, if the environmental temperature is within a threshold for normal operation, then when the processor receives the temperature data and compares it to the temperature parameter in the pre-existing parameters, it determines that a normal startup is possible and effects a normal startup.[0017] If the environmental parameters are outside of the normal operating ranges (step 230), then the processor modifies the pre-existing parameters to continue with an out-of-normal startup (step 250). For example, if the temperature received from the sensor is outside the normal temperature operating range for normal startup, the processor starts up in an out-of-normal startup condition. This may include operating at reduced voltages, frequencies, and/or operating at a reduced functional level (e.g., powering down one or more cores).
The out-of-normal operation may also include modifying phase locked loop (PLL) settings to operate at reduced functionality, and input/output (I/O) terminations to operate at reduced functionality, for example.[0018] In this manner, the processor can continue to operate across a broader set of environmental conditions than would be possible with a static set of initially programmed pre-existing parameters that do not take environmental conditions into consideration. Accordingly, the processor does not cease to operate on account of the environmental conditions being outside a normal operating condition as defined by the pre-existing parameters.
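As an illustrative sketch, the startup flow of Figure 2 (steps 210-250) might look like the following Python pseudocode. All parameter names, threshold values, and reduced-mode settings here are hypothetical assumptions, since the disclosure does not specify concrete values or interfaces:

```python
# Hypothetical sketch of method 200 (Figure 2). NORMAL_PARAMS, the sensor
# reading keys, and the reduced-mode settings are illustrative assumptions,
# not values from the disclosure.

NORMAL_PARAMS = {
    "temp_c": (-40, 85),        # temperature range for normal operation
    "humidity_pct": (10, 90),   # relative humidity range
    "pressure_kpa": (80, 110),  # air pressure range
}

def within_normal(readings, params=NORMAL_PARAMS):
    """Step 230: compare sensor data against the pre-existing parameters."""
    return all(lo <= readings[key] <= hi for key, (lo, hi) in params.items())

def startup(readings):
    """Steps 220-250: choose normal startup or an out-of-normal startup."""
    if within_normal(readings):
        return {"mode": "normal"}  # step 240: normal startup mode
    # Step 250: out-of-normal startup at reduced functionality, e.g. lower
    # voltages/frequencies, fewer cores, modified PLL and I/O settings.
    return {"mode": "reduced", "voltage_scale": 0.9,
            "freq_scale": 0.8, "cores_enabled": 1}

print(startup({"temp_c": 25, "humidity_pct": 40, "pressure_kpa": 101})["mode"])
print(startup({"temp_c": 95, "humidity_pct": 40, "pressure_kpa": 101})["mode"])
```

The same comparison could run periodically after boot, so that the processor moves back into normal mode as conditions return within range, in the spirit of the continual monitoring described in paragraph [0019].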
[0019] System components, including but not limited to a main processor and/or a System Management Unit (SMU) and/or Platform Security Processor (PSP) and/or other state machines, embedded controllers, and logic (both hardware and software), may adjust their own local operating parameters and/or the operating parameters of any other components of the system, including third party devices. This may occur early in a powerup or reboot sequence, and conditions may be continually monitored with further adjustments possible during operation as sensor-read parameters change. Additionally, if deemed necessary from sensor feedback, certain features (hardware or software) or mechanisms can be disabled, enabled, or ignored. Additionally, telemetry or feedback from external/third party sensors, devices, and the like can be utilized by this mechanism (e.g., a motherboard may incorporate sensors, embedded controllers, or specialized logic, for example, which may interface with the mechanism to influence or tune parameters, making changes specialized to suit a particular design, such as unique memory or bus PCB trace design and layout choices).[0020] As an example use case, many processors contain circuitry that is designed and tested to operate linearly across a set range of conditions. For instance, a processor may be designed and tested to operate to temperature specifications of -40°C to 85°C and is well characterized to operate linearly for voltage and other settings across those temperature conditions. Lab testing outside of this designed-for range determines what is required to maintain functionality at differing conditions outside of that range (e.g., a different and non-linear set of voltages, timings, and settings to ensure continuous operation for these conditions).
In the field, conditions are monitored and the necessary settings for voltage, timings, and the like are continually adjusted such that as external temperature conditions change out of and back into the linear operating region, functionality is maintained.[0021] The methods provided can be implemented in a general purpose computer, a processor, or a processor core. Suitable processors include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a
microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine. Such processors can be manufactured by configuring a manufacturing process using the results of processed hardware description language (HDL) instructions and other intermediary data including netlists (such instructions capable of being stored on a computer readable media). The results of such processing can be maskworks that are then used in a semiconductor manufacturing process to manufacture a processor which implements features of the disclosure. Further, although the methods and apparatus described above are described in the context of controlling and configuring PCIe links and ports, the methods and apparatus may be utilized in any interconnect protocol where link width is negotiated.[0022] The methods or flow charts provided herein can be implemented in a computer program, software, or firmware incorporated in a non-transitory computer-readable storage medium for execution by a general purpose computer or a processor. Examples of non-transitory computer-readable storage mediums include a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). For example, the methods described above may be implemented in the processor 102 or on any other processor in the computer system 100. |
Examples include techniques for implementing read and write operations between a memory controller and a memory device. In an embodiment, the memory controller is configured to receive data bits to write to the memory device, to determine, using a memory controller ECC component and the data bits, a plurality of memory controller ECC check bits and one or more parity bits, to append the memory controller ECC check bits and the one or more parity bits to the data bits, and to send the data bits, the memory controller ECC check bits, and the one or more parity bits to the memory device during a write operation. In an embodiment, the memory controller is configured to receive the data bits and the memory controller ECC check bits from the memory device, to check the data bits against the memory controller ECC check bits and correct errors detected, and to return the data bits during a read operation. |
1. An apparatus coupled to a memory device, comprising: a memory controller error correction code (ECC) component configured to receive data bits to write to the memory device, the memory device including an on-die ECC component; to determine, using the data bits, a plurality of memory controller ECC check bits and one or more parity bits; to append the memory controller ECC check bits and the one or more parity bits to the data bits; to send the data bits, the memory controller ECC check bits, and the one or more parity bits to the memory device during a write operation; and to receive the data bits and the memory controller ECC check bits from the memory device, check the data bits against the memory controller ECC check bits, correct detected errors, and return the data bits during a read operation.
2. The apparatus of claim 1, wherein the memory controller ECC component being configured to correct detected errors comprises the memory controller ECC component being configured to eliminate on-die ECC component miscorrection of multi-bit errors.
3. The apparatus of claim 1, wherein the memory controller ECC component being configured to correct detected errors comprises the memory controller ECC component being configured to eliminate errors confined to a single I/O data line.
4. A method of writing data to and reading data from a memory device, comprising: performing a write operation by receiving data bits to write to the memory device, the memory device including an on-die ECC component, determining, using the data bits, a plurality of memory controller ECC check bits and one or more parity bits, appending the memory controller ECC check bits and the one or more parity bits to the data bits, and sending the data bits, the memory controller ECC check bits, and the one or more parity bits to the memory device; and performing a read operation by receiving the data bits and the memory controller ECC check bits
from the memory device, checking the data bits against the memory controller ECC check bits, correcting detected errors, and returning the data bits.
5. The method of claim 4, wherein correcting detected errors comprises eliminating on-die ECC component miscorrection of multi-bit errors.
6. The method of claim 4, wherein correcting detected errors comprises eliminating errors that are confined to a single I/O data line.
7. A system comprising: a memory controller including a memory controller error correction code (ECC) component; and a memory device including an on-die ECC component and a memory; wherein the memory controller is configured to receive data bits to write to the memory device, to determine, using the memory controller ECC component and the data bits, a plurality of memory controller ECC check bits and one or more parity bits, to append the memory controller ECC check bits and the one or more parity bits to the data bits, and to send the data bits, the memory controller ECC check bits, and the one or more parity bits to the memory device; and wherein the memory device is configured to receive the data bits, the memory controller ECC check bits, and the one or more parity bits, to determine a plurality of on-die ECC check bits using the on-die ECC component and the data bits, and to store the data bits, the memory controller ECC check bits, and the on-die ECC check bits into the memory.
8. The system of claim 7, wherein the memory controller ECC component is configured to eliminate on-die ECC component miscorrection of multi-bit errors.
9. The system of claim 7, wherein the memory controller ECC component is configured to eliminate errors that are confined to a single I/O data line.
10. At least one machine readable medium comprising a plurality of instructions that, in response to being executed by a system, cause the system to perform the method of any one of claims 4 to 6.
11. An apparatus comprising means for performing the method of any one of claims 4 to 6. |
SHARED PARITY FOR CORRECTING MEMORY ERRORS

TECHNICAL FIELD

The examples described herein generally relate to techniques for correcting errors in memory.

BACKGROUND

An error correction code (ECC) can be used to detect errors in data. In some newer memory types, such as the fifth generation double data rate synchronous dynamic random access memory (DRAM) known as DDR5 and the third generation high bandwidth memory known as HBM3, an "in-DRAM" ECC circuit, called an on-die ECC component, can be included to increase yield by correcting single cell defects or weak bits. The on-die ECC component uses a single error correction (SEC) code and may miscorrect or confuse bits when there are two or more errors. In DDR-type dual in-line memory modules (DIMMs), errors are limited to a single device that supplies only a portion of the entire cache line, and can therefore be corrected or detected by many memory controller ECC schemes even if the on-die ECC component miscorrects. However, in HBM devices, the entire cache line is read from a single device, so errors may be distributed throughout the cache line. If there are multiple bit errors, the on-die ECC component may miscorrect a bit anywhere in the affected on-die ECC region.
For example, a multi-bit error limited to a column (a possible error mode for a column-select failure) will become the column error plus a random bit error miscorrected by the on-die ECC component in each on-die ECC region affected by the column-select failure, and a single cell failure (the most common type of DRAM failure) that aligns with a soft error will become a random three-bit error scattered across the cache line. When there are multiple bit errors, there is no known solution to on-die ECC miscorrection in memories where the entire cache line is retrieved from a single device (e.g., an HBM device) or from a single memory device.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows an example memory controller and memory device arrangement.
FIG. 2 shows an example data flow for a read operation.
FIG. 3 shows an example first diagram of a data layout.
FIG. 4 shows an example second diagram of the data layout.
FIG. 5 shows an example third diagram of the data layout.
FIG. 6 shows an example fourth diagram of the data layout.
FIG. 7 shows an example of a logic flow for a write operation of a memory controller.
FIG. 8 shows an example of a logic flow for a write operation of a memory device.
FIG. 9 shows an example of a logic flow for a read operation of a memory device.
FIG. 10 shows an example of a logic flow for a read operation of a memory controller.
FIG. 11 shows an example computing platform.

DETAILED DESCRIPTION

One approach to handling miscorrection is to have the memory controller ECC circuit attempt to correct or detect any additional errors caused by the on-die ECC component while attempting to detect or correct the original multi-bit error. This approach significantly and negatively affects the reliability of HBM3 or similar devices and makes the memory controller ECC unreliable for any error detection and correction.
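The miscorrection behavior that this discussion turns on can be reproduced with a toy Hamming(7,4) single-error-correcting code. This is a generic textbook code, not the specific on-die code used by HBM3 or DDR5; it is shown only to illustrate how a SEC decoder turns a double-bit error into a triple-bit error:

```python
# Toy Hamming(7,4) SEC decoder illustrating on-die ECC miscorrection.
# Positions are 1..7; parity bits sit at positions 1, 2, and 4. This is a
# textbook code for illustration, not the actual HBM3/DDR5 on-die code.

def syndrome(bits):
    """XOR of the (1-based) positions of all set bits; 0 means no error seen."""
    s = 0
    for pos, b in enumerate(bits, start=1):
        if b:
            s ^= pos
    return s

def encode(data4):
    """Place 4 data bits at positions 3, 5, 6, 7 and set parity bits
    (positions 1, 2, 4) so that the syndrome of the codeword is zero."""
    bits = [0] * 7
    for pos, b in zip((3, 5, 6, 7), data4):
        bits[pos - 1] = b
    s = syndrome(bits)
    for p in (1, 2, 4):          # each parity position cancels one syndrome bit
        if s & p:
            bits[p - 1] = 1
    return bits

def sec_decode(bits):
    """Single-error correction: flip the bit the syndrome points at, if any."""
    bits = list(bits)
    s = syndrome(bits)
    if s:
        bits[s - 1] ^= 1
    return bits

cw = encode([1, 0, 1, 1])
double = list(cw)
double[2] ^= 1                   # error at position 3
double[4] ^= 1                   # second error at position 5
decoded = sec_decode(double)     # syndrome = 3 ^ 5 = 6 -> wrongly flips position 6
wrong = sum(a != b for a, b in zip(cw, decoded))
print(wrong)  # 3: the SEC decoder turned a 2-bit error into a 3-bit error
```

A single-bit error is always corrected exactly; it is only with two or more errors that the decoder is fooled, which is precisely the failure mode the shared-parity scheme guards against.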
Since the on-die ECC component corrects single bit errors, the memory controller ECC is primarily exposed to multi-bit errors, the error types that the on-die ECC component may miscorrect. Furthermore, memory that exhibits a greater number of single cell failures and weak bits increases the likelihood of double bit errors caused by soft errors aligning with single cell failures or weak bits. The memory controller ECC needs to provide random three-bit error correction to completely protect against such soft errors. Random three-bit correction incurs additional latency during correction and requires two additional correction symbols compared to random double-bit correction. To protect against other types of errors (e.g., column-select errors, channel errors, etc.), the memory controller ECC must support correction/detection of the original multi-bit error mode plus the additional random single-bit errors caused by the on-die ECC. Each additional error caused by the on-die ECC component increases the correction delay and requires at least two additional ECC symbols, thereby increasing circuit complexity.

ECC protected memory devices may employ a symbol based ECC that computes bitwise parity over the cache line. These solutions can be implemented in a variety of ways, but impose certain conditions on the data, metadata, and ECC bits written to the cache line. When the memory checks the data using the on-die ECC component, the memory can also check the parity condition to ensure that correction is only performed in the presence of a single bit error, preventing miscorrection by the on-die ECC component.
Thus, improved error detection and correction capability can be provided by sharing a portion of the computing platform's external ECC bits ("external" meaning external to the memory device) with the internal ECC correction scheme used by the memory device.

Embodiments of the present invention use on-die ECC and similar parity structures to improve the reliability of HBM3 and other memory devices. Miscorrection by the on-die ECC component severely impairs the ability of the memory controller ECC component to recover from multi-bit errors, or significantly increases the number of ECC bits required for correction and the correction latency. Embodiments of the present invention can be used to increase the reliability of HBM3 or similar memory devices and reduce the cost and latency of handling miscorrections.

FIG. 1 shows an example memory controller and memory device arrangement 100. In some examples, as shown in FIG. 1, arrangement 100 includes a memory device 102 that includes an on-die ECC component 110 and a memory 112. Memory device 102 is communicatively coupled to memory controller 104.

In some examples, memory 112 may include volatile types of memory including, but not limited to, RAM, DRAM, DDR SDRAM, SRAM, T-RAM, or Z-RAM. One example of a volatile memory is DRAM, or a variant such as SDRAM. The memory described herein can be compatible with a number of memory technologies, such as HBM (High Bandwidth Memory DRAM, JESD235, originally published by the JEDEC Solid State Technology Association in October 2013) and DDR5 (DDR version 5, currently under discussion by JEDEC), and/or others, and technologies based on derivatives, revisions, versions, or extensions of such specifications.

The on-die ECC component 110 includes logic for detecting and correcting errors in data in the memory 112.
The memory controller 104 can be arranged to control access to data at least temporarily stored at the memory device 102. Although only one memory device is shown in the example of FIG. 1, it should be understood that in other examples, multiple memory devices may be controlled by memory controller 104. Memory controller 104 may include a memory controller ECC component 114 to detect and correct errors in data obtained from memory 112.

Embodiments of the present invention provide a method and apparatus for avoiding on-die ECC miscorrection by providing parity conditions to the on-die ECC component 110, wherein the parity conditions can be checked to determine whether there are multiple bit errors, and on-die ECC correction is stopped when there are multiple bit errors.

FIG. 2 shows an example data flow for a read operation. Data and on-die ECC check bits 202 can be read from memory 112 via the on-die ECC component 110. The data and on-die ECC check bits 202 can include one or more parity bits. The on-die ECC component 110 can detect and correct errors in the data based on analyzing the on-die ECC check bits, producing single error correction (SEC) data 204. However, miscorrections can be introduced by the on-die ECC component 110. Memory controller ECC component 114 detects and corrects these miscorrections and produces corrected data 206.

The parity bits used in the ECC scheme impose conditions on the data and external ECC check bits written to the memory 112. For example, bitwise parity over each burst of data signaled from memory 112 over the I/O data lines may be appended to the data as part of the external ECC code. The effect of this parity is the condition that the XOR of all bits in the burst is zero.

In an embodiment of the invention, memory controller 104 may generate memory controller ECC check bits (including parity bits) when a write occurs, and these bits are stored in memory device 102 along with the data.
When a read occurs, memory device 102 retrieves the data and the on-die ECC check bits and processes them through the on-die ECC component 110. All bits are then sent to the memory controller 104 as SEC-corrected data 204, where the memory controller ECC component 114 checks for inconsistencies between the data and the memory controller ECC check bits and, if necessary, performs correction. The memory controller receives the data words (typically with no restrictions on the data pattern), generates memory controller ECC check bits based on the encoding scheme, and appends them to the data bits, forming a codeword. An invalid codeword (i.e., data and check bits that are inconsistent with each other) indicates that the data, the check bits, or both have errors. The memory controller ECC component attempts to find the valid codeword that is closest to the received invalid word. If the error can be corrected by the memory controller ECC component, the original codeword can be found and the data restored. If the error is uncorrectable, one of two things happens: the invalid codeword is too far from any valid codeword and is not correctable (e.g., a detectable uncorrectable error (DUE)), or the invalid codeword is too close to another valid codeword and is miscorrected (e.g., silent data corruption (SDC)). The distance between codewords can be defined differently depending on the code. For example, the SEC code uses the Hamming distance and will attempt a correction if the invalid codeword is at Hamming distance 1 from a valid codeword (in this case, the error is corrected when the syndrome is equal to one column of the H matrix). The memory controller ECC component can use a burst error correction code; this type of code uses some information about the type of expected error to inform how the distance metric is determined.
For example, one of these codes may expect the error to be limited to a 16-bit block rather than randomly spread over the codeword. For burst error correction codes, finding the codeword closest to the received codeword may be more complicated than for a Hamming code.

FIG. 3 shows an example first diagram of a data layout. In this example, FIG. 3 shows the HBM3 1/2 cache line bit layout with bitwise parity over each burst. In an embodiment, the data layout can be used for data and external ECC check, parity, or metadata bits. In this example, a set of bursts 300 includes eight bursts BL0, BL1, BL2, BL3, BL4, BL5, BL6, and BL7, labeled 302, 304, 306, 308, 310, 312, 314, and 316, respectively. Each burst comprises data signaled on the I/O data lines (e.g., a cache line) between memory device 102 and memory controller 104. Each I/O data line can be referred to as DQ, and DQ0 through DQ39 are used for the transmission of 40 bits of information. In this example, burst BL0 302 includes 32 data bits 318, denoted bit B0 through bit B31; external ECC check, additional parity, or metadata bits 320 comprising seven bits, denoted bit E0 through bit E6; and a parity bit 322 comprising one bit, denoted bit P0. The parity bit of burst BL0 302 may be a bitwise XOR of all bits in the burst from DQ0 to DQ38 (bits B0 to B31 and E0 to E6). For example, this can be specified as:

P0 = B0 + B1 + B2 + B3 + ... + B31 + E0 + E1 + E2 + ... + E6

where B denotes a data bit 318 in burst BL0 302, E denotes an external ECC check, additional parity, or metadata bit 320 in burst BL0 302, and "+" denotes an XOR operation.
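The P0 equation above can be illustrated with a short sketch; the bit values below are arbitrary illustration data, not a layout mandated by any specification:

```python
# Sketch of the Figure 3 parity computation: P0 is the XOR (bitwise parity)
# of the 32 data bits B0..B31 and the 7 check/metadata bits E0..E6 in one
# 40-bit burst. The random bit values are arbitrary illustration data.
from functools import reduce
from operator import xor
import random

random.seed(1)
data_bits = [random.randint(0, 1) for _ in range(32)]  # B0..B31 on DQ0..DQ31
ecc_bits = [random.randint(0, 1) for _ in range(7)]    # E0..E6 on DQ32..DQ38

p0 = reduce(xor, data_bits + ecc_bits)  # P0 = B0 + B1 + ... + E6 (XOR)

# Parity condition: the XOR of all 40 bits in the burst (including P0 on
# DQ39) is zero, which is what the on-die ECC component can later verify.
burst = data_bits + ecc_bits + [p0]
print(reduce(xor, burst))  # 0
```

Because P0 is itself the XOR of the other 39 bits, XOR-ing it back in always cancels to zero; this is the "parity condition" the text distinguishes from the parity bit itself.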
For example, burst BL1 304 includes 32 data bits, denoted B32 through B63; ECC check bits comprising 7 bits, denoted E7 through E13; and a parity bit comprising one bit, denoted P1; and so on.

Similarly, parity for data transfers (as is common for symbol correction codes) can also be calculated over multi-bit blocks. For cache line layouts like HBM3, a block width of 2 or 4 may be a common choice.

FIG. 4 shows an example second diagram of the data layout. For this example, calculating the parity over two-bit-wide blocks in the cache line results in this layout. In this example, FIG. 4 shows an HBM3 1/2 cache line bit layout with bitwise parity over the even and odd DQs of each burst. In an embodiment, the data layout can be used for data and external ECC check, additional parity, or metadata bits. In this example, a set of bursts 400 includes eight bursts BL0, BL1, BL2, BL3, BL4, BL5, BL6, and BL7, labeled 402, 404, 406, 408, 410, 412, 414, and 416, respectively. In this example, burst BL0 402 includes 32 data bits 418, denoted bit B0 through bit B31; external ECC check, additional parity, or metadata bits 420 comprising six bits, denoted bit E0 through bit E5; and parity bits 422 comprising two bits, denoted bits P0 and P1. The parity bits for burst BL0 402 in this case may be the bitwise XOR of all bits in the burst from the even DQs or the odd DQs (i.e., even-numbered DQs DQ0 to DQ36, carrying even-numbered bits B0 to B30 and E0 to E4, or odd-numbered DQs DQ1 to DQ37, carrying odd-numbered bits B1 to B31 and E1 to E5). For example, this can be specified as:

P0 = B0 + B2 + B4 + B6 + ... + B30 + E0 + E2 + E4
P1 = B1 + B3 + B5 + B7 + ... + B31 + E1 + E3 + E5

where B denotes a data bit 418 in burst BL0 402, E denotes an external ECC check, additional parity, or metadata bit 420, and "+" denotes an XOR operation.
The remaining bursts in set 400 can be defined in a similar manner.

The effect of this parity is that the XOR of all bits from the even DQs in the burst is zero, and the XOR of all bits from the odd DQs in the burst is zero. These conditions also imply that the XOR of all bits in the burst is zero.

A similar scheme can be used for parity over blocks of width 4 bits. In that example, each parity bit would be calculated from every fourth bit in the burst. The effect of this parity is that the XOR of the bits from every fourth DQ in the burst is zero. This implies that the XOR of the bits from all even/odd DQs in the burst is also zero, and that the XOR of all bits in the burst is also zero.

These parity conditions can be communicated to the on-die ECC component 110 based on the memory controller ECC component 114 and the desired level of protection against miscorrection by the on-die ECC component 110. In an embodiment, the in-DRAM ECC engine has some prior knowledge of the parity conditions being used, or can check the parity conditions. The parity conditions are distinct from the parity bits, but they are determined by the parity bits in the burst. For example, the full-burst equation above defines P0 for burst zero. If P0 is appended to burst zero, the XOR of all bits in burst zero is:

P0 + B0 + B1 + ... + E0 + E1 + ... + E6 = (B0 + B1 + ... + E0 + E1 + ... + E6) + B0 + B1 + ... + E0 + E1 + ... + E6 = (B0 + B0) + (B1 + B1) + ... + (E0 + E0) + (E1 + E1) + ... + (E6 + E6) = 0 + 0 + ... + 0 + 0 + ... + 0 = 0

The parity bit is simply P0, and it is calculated using the parity equation. The parity condition is the property that the sum of certain portions of the bits within the burst will be equal to zero.
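The even/odd parity of Figure 4 and the conditions it implies can likewise be sketched; the bit values here are again arbitrary illustration data:

```python
# Sketch of the Figure 4 layout: P0 covers the even-numbered DQs and P1 the
# odd-numbered DQs within one 40-bit burst. Bit values are arbitrary.
from functools import reduce
from operator import xor
import random

random.seed(2)
bits = [random.randint(0, 1) for _ in range(38)]  # B0..B31 and E0..E5 on DQ0..DQ37

p0 = reduce(xor, bits[0::2])  # XOR over even DQs: B0 + B2 + ... + E4
p1 = reduce(xor, bits[1::2])  # XOR over odd  DQs: B1 + B3 + ... + E5
burst = bits + [p0, p1]       # P0 lands on DQ38 (even), P1 on DQ39 (odd)

# Parity conditions: even-DQ XOR is zero, odd-DQ XOR is zero, and therefore
# the XOR of all bits in the burst is also zero.
print(reduce(xor, burst[0::2]), reduce(xor, burst[1::2]), reduce(xor, burst))  # 0 0 0
```

The block-width-4 variant mentioned in the text follows the same pattern with four interleaved parity groups (`bits[0::4]` through `bits[3::4]`), and each finer condition implies the coarser ones.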
Which subsets of bits sum to zero within a burst depends on which parity equations, and how many parity bits, are used for that burst.

In an embodiment, the level of parity condition communicated will be the minimum parity condition that eliminates a large number of miscorrections by the on-die ECC component 110 when it is presented with a multi-bit error that is correctable by the memory controller ECC component 114. For example, if the memory controller ECC component 114 is only capable of correcting faults limited to a single DQ and random double-bit errors, then the parity condition given to the on-die ECC component 110 should be that the XOR of all bits in the burst is zero. This will eliminate all erroneous on-die ECC miscorrections for double-bit errors and for errors on a single DQ, but it will not eliminate erroneous miscorrections for three-bit or larger errors that are not limited to a single DQ; in any case, the memory controller ECC component 114 cannot reliably correct or detect three or more bit errors that are not limited to a single DQ.

The on-die ECC component 110 can also calculate the parity values for any parity conditions present in the data in parallel with the SEC code calculation.
If there is a single-bit error in the bits stored in memory 112, two conditions will hold in the result of the parity calculation performed by the on-die ECC element 110: (1) the parity calculation will show exactly one burst that does not satisfy the parity condition; and (2) the bad bit identified by the on-die ECC element 110 will lie within that burst and, if applicable, within the portion of the burst that does not satisfy the parity condition (e.g., the odd or even portion of the burst), so that recalculating the parity after correction will show all bursts satisfying the parity condition. If either of these conditions is not met, the on-die ECC component 110 should abandon the correction, because there are multiple bit errors and the on-die ECC correction would introduce additional errors into the data. These two conditions are sufficient to eliminate all miscorrections by the on-die ECC element 110 in the case of double-bit errors, and all or most miscorrections for larger-granularity errors. Two example scenarios for double-bit errors are shown below. FIG. 5 shows an example third diagram of a data layout 500. FIG. 5 shows two single-bit errors 502, 504 in the cache line, which are in separate bursts but in the same on-die ECC correction region. Error 502 can be detected by the on-die ECC element 110, but is miscorrected at bit 505. The resulting parity 506 shows two violations, violating the first single-bit error condition ((1) above), and the corrected parity 508 is not all zeros, violating the second single-bit error condition ((2) above). FIG. 6 shows an example fourth diagram of a data layout 600. FIG. 6 shows two single-bit errors 602, 604 in the same burst, where the on-die ECC element 110 erroneously detects an error at 605. The resulting parity 606 shows all zeros.
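By way of illustration, the two single-bit-error conditions above reduce to a small gating predicate; the burst-index bookkeeping below is an assumed simplification of the layouts in FIGS. 5 and 6.

```python
def should_apply_correction(parity_violations, flagged_burst):
    """
    Decide whether the on-die SEC correction may be applied, per the two
    conditions above (illustrative sketch; names are assumptions).

    parity_violations: set of burst indices whose parity condition failed
    flagged_burst:     burst index containing the bit the SEC code flagged
    """
    # Condition (1): exactly one burst fails its parity condition.
    if len(parity_violations) != 1:
        return False
    # Condition (2): the flagged bit lies in that same burst.
    return flagged_burst in parity_violations

# FIG. 5 scenario: errors in two separate bursts -> two violations -> abandon.
assert should_apply_correction({2, 5}, 5) is False
# FIG. 6 scenario: two errors in one burst cancel in the parity -> no
# violation is observed -> abandon.
assert should_apply_correction(set(), 3) is False
# Genuine single-bit error: one violating burst containing the flagged bit.
assert should_apply_correction({4}, 4) is True
```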
Thus the first single-bit error condition is violated, and the corrected parity 608 is again all zeros, violating the second single-bit error condition. In an embodiment of the invention, such miscorrections can be detected and avoided. FIG. 7 shows an example of a logic flow 700 for a write operation of the memory controller 104. The logic flow can be implemented in software, firmware, and/or hardware. In software and firmware embodiments, the logic flow may be implemented by computer-executable instructions stored on at least one non-transitory computer-readable medium or machine-readable medium (e.g., optical, magnetic, or semiconductor memory). At block 702, the memory controller 104 receives the data bits of a cache line to be written to the memory device 102, as is known in the computing arts. In embodiments of the invention, the data bits may be received from a processor, a hard drive, or another component within a computing platform. In an embodiment, the number of received data bits may be 512, but other quantities, such as 8, 16, 32, 64, 128, 256, 1024, and so forth, may be received in other computing platforms. At block 704, the memory controller 104, using the memory controller ECC component 114, determines a plurality of memory controller ECC check bits and one or more parity bits for the received data bits. At block 706, the memory controller ECC check bits and parity bits may be appended to the data bits (as shown in the examples of FIGS. 3 and 4). At block 708, the memory controller transmits the data bits, the memory controller ECC check bits, and the parity bits to the memory device 102. FIG. 8 shows an example of a logic flow 800 for a write operation of the memory device 102. At block 802, the memory device 102 receives the data bits, the memory controller ECC check bits, and the one or more parity bits from the memory controller 104. At block 804, the memory device 102 determines the on-die ECC check bits for the received data bits using the on-die ECC component 110.
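By way of illustration, determining check bits with XOR trees (block 804) amounts to XOR-reducing selected subsets of the received bits; the 8-bit data word and the selection masks below are assumptions for brevity, not the actual (128-bit + 16-bit)/8-bit code.

```python
def xor_tree(bits):
    """XOR-reduce bits sequentially, as a hardware XOR tree would in parallel."""
    acc = 0
    for b in bits:
        acc ^= b
    return acc

def check_bits(data, masks):
    """One check bit per mask; each mask lists the data positions feeding
    that check bit's XOR tree (the masks here are assumed patterns)."""
    return [xor_tree([data[i] for i in mask]) for mask in masks]

data = [1, 0, 1, 1, 0, 1, 0, 0]          # toy 8-bit data word
masks = [[0, 2, 4, 6], [1, 3, 5, 7]]     # assumed selection patterns
print(check_bits(data, masks))            # prints [0, 0]
```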
In an embodiment, an XOR tree can be used to determine the on-die ECC check bits. In an embodiment, the on-die ECC component 110 can calculate eight on-die ECC check bits for every 128 data bits and 16 memory controller ECC check bits received from the memory controller 104. In an embodiment, the on-die ECC can be a (128-bit + 16-bit)/8-bit SEC code. In another embodiment, the on-die ECC component 110 can calculate 16 on-die ECC check bits for every 256 data bits and 32 memory controller ECC check bits received from the memory controller 104. In an embodiment, the on-die ECC can be a (256-bit + 32-bit)/16-bit SEC code. In an embodiment, the SEC code can be applied a sufficient number of times to process all of the data bits received from the memory controller. At block 806, the memory device 102 can optionally check the parity conditions for the data bits in each data burst. At block 808, the memory device 102 stores the data bits, the memory controller ECC check bits, and the on-die ECC check bits in the memory 112. In an embodiment, the memory controller ECC check bits may be stored together with the data bits in memory, the memory controller ECC check bits simply being treated as data by the memory device for purposes of storage and on-die ECC correction. FIG. 9 shows an example of a logic flow 900 for a read operation of the memory device 102. At block 902, upon a request to read data from the memory device, the memory device 102 obtains the data bits, the memory controller ECC check bits, and the on-die ECC check bits from the memory 112. At block 904, the memory device uses the on-die ECC component 110 with the on-die ECC check bits to check the data bits for a single-bit error, and a correction can be applied as needed. In an embodiment, the H matrix can be multiplied by the received codeword to obtain an on-die ECC error syndrome.
The on-die ECC error syndrome is typically defined as the H matrix multiplied by the received codeword (the received data together with its error correction code check bits). In an embodiment, the on-die ECC error syndrome may be a vector whose length equals the number of check bits. In an embodiment, the on-die ECC error syndrome may be generated by an SEC decoder. The on-die ECC error syndrome can then be compared to the columns of the H matrix, and if it is equal to a column of the H matrix, the corresponding bit is flipped. In an embodiment, if the on-die ECC error syndrome is zero, the memory device does not change the data bits. If the on-die ECC error syndrome is not zero and indicates a detectable uncorrectable error (DUE), the memory device does not change the data bits. A DUE occurs when the error correction code identifies an error (i.e., the error syndrome is not zero) but the on-die ECC element does not recognize the error location (the error syndrome does not correspond to a correctable error pattern). For SEC codes like those used in the on-die ECC, this can occur for approximately 50% of multi-bit errors, because there are approximately twice as many possible error syndromes as bits in the codeword. In another embodiment, the on-die ECC element can check whether the parity syndrome has a weight of one, and if so, the on-die ECC element corrects the error without further checking the parity condition. If the on-die ECC error syndrome indicates a single-bit error, the memory device checks the parity calculation. The on-die ECC syndrome will indicate a single-bit error if it is equal to a column of the on-die ECC code's H matrix (i.e., if the received invalid codeword is at a Hamming distance of 1 from a valid codeword). If the parity syndrome has a weight of one, the memory device can correct the error in the data bits.
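By way of illustration, the syndrome decode just described can be sketched with a toy (7,4) Hamming code; the H matrix and code size are assumptions (the actual on-die SEC code is far wider), but the decision rules (zero syndrome, column match, otherwise DUE) follow the description above.

```python
# Toy (7,4) Hamming code. H is given column-wise: H_COLS[i] is the
# 3-bit column of the H matrix corresponding to codeword bit i.
H_COLS = [[1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1],
          [1, 0, 1], [0, 1, 1], [1, 1, 1]]

def syndrome(received):
    """H * r (mod 2): XOR of the H columns at positions where r has a 1."""
    s = [0, 0, 0]
    for i, bit in enumerate(received):
        if bit:
            s = [a ^ b for a, b in zip(s, H_COLS[i])]
    return s

def decode(received):
    """Return (codeword, action): 'no error', 'corrected', or 'DUE'."""
    s = syndrome(received)
    if s == [0, 0, 0]:
        return list(received), "no error"
    if s in H_COLS:                      # syndrome matches a column of H
        fixed = list(received)
        fixed[H_COLS.index(s)] ^= 1      # flip the indicated bit
        return fixed, "corrected"
    return list(received), "DUE"         # detectable, uncorrectable error

valid = [0, 0, 0, 0, 0, 0, 0]            # the all-zero valid codeword
bad = valid.copy(); bad[4] ^= 1          # inject a single-bit error
fixed, action = decode(bad)
assert action == "corrected" and fixed == valid
```

In this toy code every nonzero syndrome matches some column of H, so the DUE branch never fires; in the longer on-die codes, roughly half of the multi-bit error syndromes match no column, yielding a DUE as described above.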
In an embodiment, the weight of the parity syndrome is its Hamming weight, where the weight of a binary vector is defined as the number of ones in the vector. If the weight of the parity syndrome is not one, the memory device does not change the data bits. In an embodiment, the parity syndrome may be the error syndrome generated by checking the parity conditions: the parity syndrome contains one bit for each segment covered by a parity condition (the burst, the even half of the burst, the odd half of the burst, etc.). At block 906, the memory device 102 checks the parity condition for each burst. In an embodiment, block 904 and block 906 may be processed in parallel. At block 908, the memory device 102 can optionally recalculate the parity condition after the correction and check to ensure that the parity syndrome is now zero. If the parity syndrome is not zero at block 908, the memory device 102 reverses the correction. That is, if the parity condition is not met, the on-die ECC element 110 should abandon the correction, because there are multiple bit errors and the on-die ECC correction would introduce additional errors into the data bits. In an embodiment, a check may also be performed at block 904, where the memory device may check whether the parity violation is in the same burst as the erroneous bit indicated by the syndrome. At block 910, the memory device transmits the data bits and the memory controller ECC check bits to the memory controller 104. In one embodiment, additional metadata bits can also be sent. FIG. 10 shows an example of a logic flow 1000 for a read operation of the memory controller 104. At block 1002, the memory controller 104 receives the data bits and the memory controller ECC check bits from the memory device 102. In one embodiment, additional metadata bits may also be received.
At block 1004, the memory controller uses the memory controller ECC component 114 to check the received data bits against the received memory controller ECC check bits, and performs corrections as needed. At block 1006, the memory controller returns the data bits to the computing platform component requesting the data. FIG. 11 shows an example computing platform 1100. In some examples, as shown in FIG. 11, computing platform 1100 can include circuitry 1106 that includes a memory controller 104, a memory device 102, processing elements 1108, other platform elements 1110, and a communication interface 1112. According to some examples, processing component 1108 can execute processing operations or logic. Processing component 1108 can include various hardware elements, software elements, or a combination of both. Examples of hardware elements can include devices, logic devices, components, processors, microprocessors, graphics chips, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, programmable logic devices (PLDs), digital signal processors (DSPs), FPGAs/programmable logic, memory units, logic gates, registers, semiconductor devices, chips, microchips, chipsets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, device drivers, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.
The determination of whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as the desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given example. In some examples, other platform elements 1110 may include common computing elements or circuits, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, interfaces, oscillators, timing devices, power supplies, hard disk drives (HDDs), and so forth. Examples of memory 112 may include, without limitation, various types of computer-readable and/or machine-readable storage media in the form of one or more higher-speed memory units, such as read-only memory (ROM), RAM, DRAM, DDR DRAM, synchronous DRAM (SDRAM), DDR SDRAM, DDR5, HBM3, SRAM, programmable ROM (PROM), EPROM, EEPROM, flash memory, ferroelectric memory, SONOS memory, polymer memory such as ferroelectric polymer memory, nanowire, FeTRAM or FeRAM, ovonic memory, phase change memory, memristors, STT-MRAM, magnetic or optical cards, 3D XPoint™, and any other type of storage media suitable for storing information. In some examples, communication interface 1112 can include logic and/or features to support a communication interface. For these examples, communication interface 1112 can include one or more communication interfaces that operate according to various communication protocols or standards to communicate over direct or network communication links. Direct communications can occur via use of communication protocols such as SMBus, PCIe, NVMe, QPI, SATA, SAS, or USB communication protocols.
Network communications can occur via use of communication protocols such as Ethernet, InfiniBand, SATA, or SAS communication protocols. The components and features of computing platform 1100 can be implemented using any combination of discrete circuitry, ASICs, logic gates, and/or single-chip architectures. Further, the features of computing platform 1100 can be implemented using microcontrollers, programmable logic arrays, and/or microprocessors, or any suitable combination of the foregoing. It is noted that hardware, firmware, and/or software elements may be collectively or individually referred to herein as "logic" or "circuitry." It should be appreciated that the example computing platform 1100 shown in the block diagram of FIG. 11 may represent one functionally descriptive example of many potential implementations. Accordingly, the division, omission, or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software, and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments. The computing platform 1100 can be part of a computing device, which can be, for example, user equipment, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet, a smart phone, an embedded electronic device, a game console, a server, a server array or server farm, a web server, a network server, an Internet server, a workstation, a minicomputer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, or a combination thereof.
Accordingly, the functions and/or specific configurations of computing platform 1100 described herein may be included or omitted in various embodiments of computing platform 1100, as suitably desired. One or more aspects of at least one example can be implemented by representative instructions stored on at least one machine-readable medium, which represent various logic within a processor, and which, when read by a machine, computing device, or system, cause the machine, computing device, or system to fabricate logic to perform the techniques described herein. Such representations may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements can include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor devices, graphics chips, chips, microchips, chipsets, and so forth. In some examples, software elements can include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.
The determination of whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as the desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given implementation. Some examples may include an article of manufacture or at least one computer-readable medium. A computer-readable medium can include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium can include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writable or rewritable memory, and so forth. In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. According to some examples, a computer-readable medium may comprise a non-transitory storage medium to store or maintain instructions that, when executed by a machine, computing device, or system, cause the machine, computing device, or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like.
The instructions may be implemented according to a predefined computer language, manner, or syntax, for instructing a machine, computing device, or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled, and/or interpreted programming language. Some examples may be described using the expression "in one example" or "an example" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the example is included in at least one example; the appearances of the phrase "in one example" in various places in the specification are not necessarily all referring to the same example. Some examples may be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms "connected" and/or "coupled" may indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, yet still co-operate or interact with each other. It is emphasized that the Abstract of the Disclosure is provided to comply with 37 C.F.R. Section 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single example for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed example.
Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate example. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Moreover, the terms "first," "second," "third," and so forth are used merely as labels and are not intended to impose numerical requirements on their objects. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. |
PROBLEM TO BE SOLVED: To provide a method of successively depositing atomic layers, each one monolayer thick, by bringing a plurality of precursors into contact with the surface of a substrate at respectively different temperatures. SOLUTION: The deposition method has a step of chemisorbing a first layer of at least one monolayer thickness on the substrate by bringing the substrate into contact with a first precursor at a first temperature. The first layer is brought into contact with a second precursor at a second temperature different from the first temperature, whereby a second layer of at least one monolayer thickness is chemisorbed on the first layer. The temperature can be changed by adding or removing heat with a thermoelectric heat pump, changing the substrate temperature from the first temperature to the second temperature. The second layer can be reacted with the first layer by heating it to a third temperature higher than the second temperature. The deposition method can also have a step of depositing an atomic layer of a first species on the substrate at a temperature nearly optimum for deposition of the first species. An atomic layer of a second species may then be deposited on the first species at a temperature nearly optimum for deposition of the second species, different from the optimum temperature of the first species.
1. A deposition method, comprising: contacting a substrate at a first temperature with a first precursor to chemisorb a first layer of at least one monolayer thickness on the substrate; and contacting the first layer with a second precursor at a second temperature different from the first temperature to chemisorb a second layer of at least one monolayer thickness on the first layer.
2. The method of claim 1, further comprising heating the first and second layers to a third temperature that is higher than the second temperature.
3. The method of claim 1, further comprising the step of reacting the second layer with the first layer.
4. The method of claim 1, further comprising the step of changing temperature by adding or removing heat using a thermoelectric heat pump to achieve the second temperature.
5. The method of claim 4, further comprising the step of thermally connecting the thermoelectric heat pump to the substrate.
6. The method of claim 1, wherein the second temperature is achieved prior to contacting the first layer by initiating a flow of the second precursor.
7. The method of claim 1, wherein the second temperature is not achieved until during the contacting of the first layer, by providing a flow of the second precursor.
8. The method of claim 1, wherein the first temperature is higher than the second temperature.
9. The method of claim 1, wherein the first temperature differs from the second temperature by at least about 5 °C.
10. The method of claim 1, wherein the first temperature differs from the second temperature by at least about 50 °C.
11. The method of claim 1, wherein the first and second temperatures are temperatures of at least a portion of the substrate.
12. The method of claim 1, wherein said first and second temperatures are temperatures of an outermost surface of said substrate.
13. The method of claim 1, wherein said first and second temperatures are temperatures of said precursors.
14. The method of claim 1, further comprising the step of providing background heat.
15. The method of claim 14, wherein said background heat is provided at a fourth temperature between said first and second temperatures.
16. The method of claim 14, wherein said background heat is generated primarily from a heat source comprising a heat lamp array or a wafer chuck heater.
17. The method of claim 1, wherein said substrate comprises a bulk semiconductor.
18. The method of claim 1, wherein said first precursor is different from said second precursor.
19. The method of claim 1, wherein said first and second layers each consist essentially of a monolayer film.
20. The method of claim 1, wherein at least one of said first and second precursors comprises a plurality of different precursor species.
21. The method of claim 1, wherein said first and second precursors each consist essentially of a single precursor species.
22. The method of claim 21, wherein said single precursor species exhibits only one chemical structure.
23. The method of claim 1, further comprising the step of removing said first precursor before contacting said first layer with said second precursor.
24. A deposition method, comprising: atomic layer depositing a first species on a substrate at a temperature substantially optimal for deposition of said first species; and atomic layer depositing a second species on said first species at a temperature that is different from the optimal temperature of the first species and is approximately optimal for deposition of the second species.
25. The method of claim 24, further comprising the step of removing said first species before depositing said second species on said first species.
26. The method of claim 24, further comprising the step of reacting said second and first species at a temperature optimal for the reaction and different from the optimal temperature of said second species.
27. The method of claim 24, wherein the first and second chemisorption products consist essentially of a monolayer film of a deposition material.
28. The method of claim 24, wherein the first species is different from the second species.
29. The method of claim 24, further comprising atomic layer depositing at least one additional species together with at least one of said first species deposition and said second species deposition.
30. The method of claim 24, wherein the change from the optimal temperature of the first species to the optimal temperature of the second species is performed by adding or removing heat with a thermoelectric heat pump.
31. The method of claim 30, wherein said thermoelectric heat pump is thermally connected to said substrate.
32. The method of claim 24, wherein the first and second optimal temperatures are temperatures of at least a portion of the substrate.
33. The method of claim 24, further comprising the step of removing said first species before depositing said second species on said first species.
34. A deposition method, comprising the steps of: chemisorbing a first monolayer film of a first composition on a substrate while maintaining the substrate at a first temperature by a heater; adding or removing heat with an apparatus to set the substrate to a second temperature that is at least about 1 °C.
different from the first temperature; chemisorbing a monolayer film of a second composition on the first monolayer film of the first composition at the second substrate temperature; adding or removing heat to return the substrate to substantially the first temperature; and chemisorbing a second monolayer film of the first composition.
35. The method of claim 34, wherein said apparatus exhibits a thermoelectric effect.
36. The method of claim 35, wherein the adding or removing of heat to bring the substrate to about the first temperature comprises adding or removing heat with the apparatus.
37. The method of claim 34, wherein said apparatus provides a flow of cooling gas.
38. The method of claim 37, wherein said cooling gas comprises a substance inert to reaction with said first composition.
39. The method of claim 34, wherein the second temperature is achieved prior to chemisorption of the monolayer film of the second composition.
40. The method of claim 34, wherein the second temperature is not reached until during the chemisorption of the monolayer film of the second composition.
41. The method of claim 34, wherein said first temperature is higher than said second temperature.
42. The method of claim 34, further comprising the step of removing any first composition that has not been chemisorbed before the second composition is chemisorbed.
43. The method of claim 34, wherein at least one of said first and second compositions is formed from a plurality of different precursor species.
44. A deposition method, comprising the steps of: chemisorbing a first monolayer film of a first composition on a substrate while maintaining the substrate at a first temperature by a heater; removing heat by providing a flow of cooling gas, the removing of heat bringing the substrate to a second temperature that is at least about 1 °C below the first temperature;
chemisorbing a monolayer film of a second composition on the first monolayer film of the first composition at the second substrate temperature; applying heat to bring the substrate to approximately the first temperature; and chemisorbing a second monolayer film of the first composition on the monolayer film of the second composition.
45. A deposition method, comprising the steps of: chemisorbing a first monolayer film of a first composition on a substrate while maintaining the substrate at a first temperature by a heater; adding or removing heat with an apparatus to bring the substrate to a second temperature that is at least about 1 °C different from the first temperature; chemisorbing a monolayer film of a second composition on the first monolayer film of the first composition at the second substrate temperature; applying heat to bring the substrate to a third temperature higher than the second temperature to react the chemisorbed second composition with the chemisorbed first composition; adding or removing heat to bring the substrate to approximately the first temperature; and chemisorbing a second monolayer film of the first composition on the reaction layer of the first and second compositions.
46. The method of claim 45, wherein said apparatus exhibits a thermoelectric effect.
47. The method of claim 46, wherein the step of adding or removing heat to bring the substrate to approximately the first temperature comprises adding or removing heat with the apparatus.
48. The method of claim 45, wherein said apparatus provides a flow of cooling gas.
49. The method of claim 45, wherein the third temperature is achieved after chemisorption of the monolayer film of the second composition is completed.
50. The method of claim 45, wherein said third temperature is achieved during chemisorption of said monolayer film of said second composition.
51. The method of claim 45, wherein said first temperature is higher than said second temperature.
52. The method of claim 45, further comprising the step of removing any non-chemisorbed first composition prior to chemisorption of said second composition.
53. The method of claim 45, wherein at least one of said first and second compositions is formed from a plurality of different precursor species. |
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to variable-temperature deposition methods, including atomic layer deposition and other deposition methods, and to integrated circuits formed by such methods. 2. Description of the Related Art Atomic layer deposition (ALD) is recognized as a deposition technique for forming high-quality materials with minimal defects and fine process control. At the same time, however, it has also been recognized that ALD has limited applications. Under certain circumstances, the theoretically expected quality of an ALD layer may not be achieved. [0003] It has been found that one requirement of ALD methods is to form a layer without introducing unacceptable defects into the material. SUMMARY OF THE INVENTION In one aspect of the present invention, a deposition method includes contacting a substrate with a first precursor at a first temperature to chemisorb a first layer of at least one monolayer thickness on the substrate. At a second temperature different from the first temperature, the first layer is contacted with a second precursor, and a second layer of at least one monolayer thickness is chemisorbed on the first layer. As an example, the method can further include heating the first and second layers to a third temperature that is higher than the second temperature. The method can include changing the temperature by adding or removing heat using a thermoelectric heat pump to achieve the second temperature. The thermoelectric heat pump may be thermally connected to the substrate. The first temperature may differ from the second temperature by at least about 5 °C. [0005] In another aspect of the invention, a deposition method includes the step of atomic layer depositing a first species on a substrate at a temperature that is substantially optimal for the first species.
A second species is then deposited on the first species at a temperature approximately optimal for depositing the second species, which differs from the optimal temperature for the first species. As one example, the chemisorption products of the first and second species consist essentially of a monolayer film of the deposited material. In accordance with another aspect of the present invention, a method includes chemisorbing a first monolayer film of a first composition onto a substrate while maintaining the substrate at a first temperature by a heater thermally coupled to the substrate. Heat can then be added or removed by an apparatus, different from the heater, that exhibits a thermoelectric effect. With the apparatus, the temperature of the substrate can be set to a second temperature that differs from the first temperature by at least about 1 °C. A monolayer film of a second composition may be chemisorbed onto the first monolayer film of the first composition at the second substrate temperature. Heat may then be added or removed by the apparatus to return the substrate to approximately the first temperature, and a second monolayer film of the first composition may be chemisorbed onto the monolayer film of the second composition. [0007] As an alternative to the method described above, heat can be added or removed by the apparatus to bring the substrate to a third temperature higher than the second temperature, so that the chemisorbed second composition reacts with the chemisorbed first composition. Heat can then be added or removed using the apparatus to bring the substrate to approximately the first temperature. 
The second monolayer film of the first composition can then be chemisorbed onto the reaction layer of the first and second compositions. DETAILED DESCRIPTION OF THE INVENTION This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8). [0009] Atomic layer deposition (ALD) methods involve forming successive atomic layers on a substrate. Such layers may comprise epitaxial, polycrystalline, amorphous, or other materials. ALD is also referred to as atomic layer epitaxy, atomic layer processing, and the like. In addition, the present invention encompasses other deposition methods not conventionally referred to as ALD, for example chemical vapor deposition (CVD), that include the method steps described herein. The deposition methods herein are described in the context of forming material on a semiconductor wafer; however, the invention encompasses deposition on a variety of substrates other than semiconductor substrates. The terms "semiconductor substrate" and "semiconductive substrate" are defined herein to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon) and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. [0011] Simply stated, ALD involves exposing an initial substrate to a first chemical species to accomplish chemisorption of the species onto the substrate. Theoretically, the chemisorption forms a monolayer that is uniformly one atom or molecule thick over the entire exposed initial substrate. 
In other words, a saturated monolayer is formed. In practice, as discussed further below, chemisorption need not occur on all portions of the substrate; nevertheless, such an imperfect monolayer is still a monolayer as used herein. For many applications, a merely substantially saturated monolayer may be suitable. A substantially saturated monolayer is one that yields a deposited layer exhibiting the quality and/or properties desired. The first species is then removed from the substrate and a second chemical species is provided to chemisorb onto the first monolayer of the first species. The second species is then removed, and the process is repeated by exposing the monolayer of the second species to the first species. In some cases, the two monolayers may be of the same species. Also, a third species or more may be successively chemisorbed and removed, in the same manner as the first and second species described above. Various techniques can be used for the removal, that is, the purging, step. These include, but are not limited to, contacting the substrate and/or monolayer with a carrier gas, lowering the concentration of the species contacting the substrate and/or the chemisorbed species, and lowering the pressure below the deposition pressure. Examples of carrier gases include N2, Ar, and He. Alternatively, removal may involve allowing chemisorption by-products to desorb, and reducing the concentration of the contacting species as well as of any by-products, before contacting the substrate and/or monolayer with another species. 
The contacting species may be reduced to a suitable concentration or partial pressure known to those skilled in the art based on the characteristics of a particular deposition process. ALD is often described as a self-assembling or self-limiting process, in that there are a finite number of sites on the substrate with which the first species can form chemical bonds. The second species can bond only to the first species, and is therefore also self-limiting. Once all of the finite number of sites on the substrate are bonded with the first species, the first species will not bond to other molecules of the first species that are already bonded to the substrate. However, processing conditions can be varied in ALD to promote such bonding and render ALD not self-limiting. Accordingly, ALD as used herein also includes methods that form more than one monolayer at a time by stacking a species, producing a layer more than one atom or molecule thick. The various aspects of the invention described herein are applicable to any circumstance where ALD is desired. Examples of materials deposited by ALD include silicon nitride, zirconium oxide, tantalum oxide, aluminum oxide, and the like. ALD is normally performed within commonly used ranges of temperature and pressure, although ALD conditions can vary greatly depending on the particular precursors, layer composition, deposition equipment, and other factors. Maintaining the conventional conditions of temperature, pressure, and purging minimizes unwanted reactions that would otherwise impact monolayer formation and the quality of the resulting overall ALD layer; operation outside the normal temperature and pressure ranges therefore risks forming defective monolayers. The general technology of chemical vapor deposition (CVD) includes a variety of more specific processes, including, but not limited to, plasma enhanced CVD. 
CVD is commonly used to non-selectively form a complete, deposited material on a substrate. One characteristic of CVD is the simultaneous presence of multiple reacting species in the deposition chamber to form the deposited material. Such conditions contrast with ALD, in which the substrate is contacted with a single deposition species that chemisorbs to the substrate or to a previously deposited species. An ALD process may provide multiple species simultaneously, but in a manner or under conditions such that ALD chemisorption, rather than a CVD reaction, occurs. Instead of reacting with one another, the multiple species may chemisorb to the substrate or to a previously deposited species, providing a surface onto which subsequent species may next chemisorb to form a complete layer of desired material. Under most CVD conditions, deposition occurs largely independently of the composition or surface properties of the underlying substrate. By contrast, chemisorption rates in ALD are influenced by the composition, crystalline structure, and other properties of the substrate or chemisorbed species; other process conditions, such as pressure and temperature, also affect the chemisorption rate. Observation indicates that ALD is sensitive to temperature: changes of from about 1 °C to 10 °C, or even 50 °C, can significantly affect chemisorption rates and can potentially stop perceptible chemisorption altogether. Observation further indicates that some deposition species chemisorb best at a first temperature, while a second deposition species chemisorbs best at a different, optimal temperature. 
If two such species are used as complements of a deposition pair, chemisorption cannot be optimal for both species of the pair at a single temperature. Thus, in accordance with one aspect of the present invention, a deposition method comprises contacting a substrate with a first precursor at a first temperature and chemisorbing a first layer of at least one monolayer thickness onto the substrate. At a second temperature different from the first temperature, the first layer may be contacted with a second precursor, and a second layer of at least one monolayer thickness may be chemisorbed onto the first layer. Such a method can be implemented in various ways and in various environments; preferably, however, the substrate is a bulk semiconductor wafer. Various precursors and precursor pairs can also be selected, but preferably the first precursor is different from the second precursor, and the difference may be sufficient that the optimal chemisorption temperature of the first precursor differs from the optimal chemisorption temperature of the second precursor. Preferably, each of the first and second layers consists essentially of a monolayer film. Changing the temperature can be accomplished by various means and by various methods. For example, a method according to the present invention can comprise changing the temperature by adding or removing heat using a thermoelectric heat pump (THP). A THP operates according to the known principles of the thermoelectric effect, based on one or more of the Peltier effect, the Seebeck effect, the Thomson effect, and other effects, and provides both thermoelectric cooling and thermoelectric heating. In thermoelectric cooling, when current is applied through the "cold" junction of two dissimilar conductors, heat is absorbed and transferred to the "hot" junction, where it is dissipated, for example, by a heat sink. 
In thermoelectric heating, reversing the direction of current flow reverses the flow of heat, such that heat is transferred from the hot junction and the temperature at the formerly cold junction increases. A THP may be formed from dissimilar metal conductors, but often has a semiconductor construction using p-type and n-type semiconductors. As used herein, a THP includes any type of heating and/or cooling device that operates by the thermoelectric effect. A THP is used to selectively heat or cool the substrate such that the first precursor is chemisorbed at approximately its optimal chemisorption temperature and the second precursor is likewise chemisorbed at approximately its optimal chemisorption temperature. Accordingly, the THP is desirably thermally coupled to the substrate. For example, in the case of a bulk semiconductor wafer, the wafer may be placed on a wafer chuck in a deposition chamber; a thermal interface between the THP and the wafer chuck is sufficient to thermally couple the substrate to the THP so as to change the temperature of at least part of the substrate. A background heating source is also often provided in a deposition chamber. Such background heat may come from a variety of sources, such as a reactant gas and/or carrier gas heater, a heat lamp array associated with the deposition chamber, and/or a wafer chuck separate from the THP. The background heat may thus be provided at a fourth temperature that is different from, or the same as, one of the first and second temperatures. Various types of heating and cooling methods are conceivable. A THP is particularly useful for processing a single wafer through a series of steps as described herein; however, the various aspects described herein are also applicable to processes that do not use a THP. For example, batch processing of wafers without a THP can achieve the advantages described herein while retaining many of the processing parameters described herein. 
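The THP behavior described above can be sketched with a simple lumped thermal model in which reversing the drive current reverses the direction of heat flow between substrate and heat sink. All names, coefficients, and the time step below are illustrative assumptions, not values from this disclosure.

```python
# Minimal sketch of thermoelectric heat pump (THP) control, assuming a
# lumped model: positive current pumps heat into the substrate side of
# the junction (heating); negative current pumps heat out (cooling).

def thp_heat_flow(current_a, peltier_coeff_w_per_a=2.0):
    """Heat pumped into the substrate in watts (sign follows current)."""
    return peltier_coeff_w_per_a * current_a

def step_substrate_temp(temp_c, current_a, dt_s, thermal_mass_j_per_c=50.0):
    """Advance the substrate temperature by one time step."""
    return temp_c + thp_heat_flow(current_a) * dt_s / thermal_mass_j_per_c

temp = 300.0                       # start at a first temperature (deg C)
for _ in range(100):               # drive toward a second temperature
    temp = step_substrate_temp(temp, current_a=-5.0, dt_s=0.1)
print(round(temp, 1))              # cooled substrate temperature: 298.0
```

Reversing the sign of `current_a` in the same loop would heat the substrate instead, mirroring the current-reversal behavior of a Peltier junction.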
Providing a flow of cooling gas is one alternative to cooling with a THP. The cooling gas can be a substance that is inert to reaction with the first composition; for example, it can be an inert gas commonly used as a carrier gas in the process. The first and second temperatures are preferably temperatures of at least part of the substrate. Alternatively, the first and second temperatures may be temperatures of, for example, the outermost surface of the substrate, a precursor, or the deposition chamber. Because the first and second temperatures need not be substrate temperatures, a change in substrate temperature from the first temperature to the second temperature may, but need not necessarily, occur. The first temperature may differ from the second temperature by at least 1 °C, although the difference depends on, but is not limited to, such conditions as the characteristics of the precursors, the pressure, and the composition of the substrate. Preferably, the first temperature differs from the second temperature by at least 5 °C, more preferably by at least 10 °C. Further, the second temperature may differ from the third temperature by at least 50 °C. The deposition pressure varies with the precursor species and can be atmospheric pressure or some level of vacuum. After chemisorption of the first layer, changing the temperature of at least part of the substrate so that the second layer chemisorbs onto the chemisorbed first layer must take into account that the first layer can be affected. If the first layer is formed at a lower temperature and the second layer follows at a higher temperature, some of the first layer may desorb as the temperature increases. The first temperature is therefore preferably higher than the second temperature. 
That is, the first layer may be formed at the higher temperature and the second layer at the lower temperature, reducing the risk of desorption of the first layer. It should further be noted that chemisorption of the second precursor onto the first precursor is not necessarily equivalent to reacting the first and second precursors, and the optimal reaction temperature of the first and second precursors may differ from their optimal chemisorption temperatures. Thus, under some circumstances, the physical properties and/or composition of the chemisorbed first and second layers may differ from the desired properties and/or composition that would result from reacting the chemisorbed materials of the first and second layers. In keeping with one feature of the invention, the deposition method can therefore include heating the first and second layers to a third temperature that is higher than the second temperature at which the second layer was chemisorbed. The deposition method according to the present invention may thus further include a step of reacting the second layer with the first layer; heating to a third temperature as described above is one example of such a step, and other methods of reacting the first and second layers are also conceivable. The timing of the temperature changes in the method of the present invention must also be considered. For example, the second temperature may be achieved before contacting the first layer with the second precursor to chemisorb it thereon; stated another way, the second temperature is not reached while the second precursor is being chemisorbed onto the first layer. Further, for example, the change to the third temperature may be achieved after chemisorption of the second precursor onto the first layer is completed. 
Alternatively, the change to the third temperature may occur during chemisorption of the second precursor. Changing to the third temperature during chemisorption of the second precursor allows second precursor that is already chemisorbed to react with the first layer, and enhances reaction of the second precursor with any portion of the first layer onto which it has not yet chemisorbed. In practice, the most efficient chemisorption can be achieved by selecting a particular treatment regime: both the chemisorption rate achieved at any given temperature and the time required to effect a temperature change may be weighed, along with other factors. The THPs described herein are particularly useful in this regard because they can change the temperature of a substrate rapidly. In accordance with another aspect of the present invention, a deposition method comprises atomic layer depositing a first species on a substrate at a temperature substantially optimal for deposition of the first species. A second species may then be deposited on the first species at a temperature approximately optimal for deposition of the second species, which differs from the optimal temperature of the first species. The chemisorption products of the first and second species consist essentially of a monolayer film of the deposited material. As described above, the first species may be different from the second species; however, the deposition method described above is advantageous even when the first species is the same as the second species. Atomic layer deposition of an initial species to form a first layer of chemisorption product may sometimes best occur at a temperature different from that of subsequent layers. For example, a substantially saturated monolayer film on the substrate may be achieved at a higher temperature, after which the first and subsequent layers of the deposition species are formed at a lower temperature. 
Doing so can reduce the potential difficulty of initiating formation of the ALD material. In accordance with yet another aspect of the invention, a deposition method includes chemisorbing a first monolayer film of a first composition onto a substrate while maintaining the temperature of the substrate at a first temperature with a heater thermally coupled to the substrate. Heat may be added or removed by an apparatus different from the heater; the apparatus may be any device that exhibits the thermoelectric effect, and it need only be capable of setting the temperature of the substrate to a second temperature that differs from the first temperature by at least 5 °C. [0028] The deposition method of the present invention may further include chemisorbing a monolayer film of a second composition onto the first monolayer film of the first composition at the second substrate temperature. Heat may then be added or removed by the apparatus to bring the temperature of the substrate to approximately the first temperature, and a second monolayer film of the first composition may be chemisorbed onto the monolayer film of the second composition. The ALD deposition method of the present invention will now be described with reference to the figures. FIGS. 1 to 3 show the periodic contacting of the substrate with precursor 1 (P1) and precursor 2 (P2) and the intervening purges or removals. The substrate is first contacted with precursor P1 from time 0 (T0) to time 1 (T1). Non-chemisorbed P1 is removed and purged from time T1 to time T2, and the chemisorbed P1 is then contacted with precursor P2 from time T2 to T3. After excess P2 is removed from time T3 to time T4, the cycle begins again from time T4 to time T5 by contacting the chemisorbed P2 with P1. The period from time T0 to time T3 therefore forms at least one monolayer film that is a chemisorption product of P1 and P2. 
The purge from time T3 to time T4 prepares the chemisorption product of P1 and P2 for the new cycle that starts at time T4. The intervals from T0 to T1, T1 to T2, and so on are shown as equal, but this is merely for drawing convenience; in practice, such times may be determined individually, based on the knowledge of those skilled in the art, in view of the features and advantages of the invention described herein. FIG. 4 shows the accompanying change in temperature, preferably the substrate temperature, as part of the described method. Temperature 2 (Temp2) is maintained from time T0 to T1 while precursor P1 is in contact. The temperature is then lowered to Temp1 during the purge from time T1 to T2 and maintained at Temp1 during contact with precursor P2 from time T2 to T3. During the purge from time T3 to time T4, when the new cycle starts, the temperature is raised back to Temp2. Other ways of contacting the precursors and changing the temperature are conceivable while retaining the various features of the present invention, some of which are specifically described below. One variation is shown in FIG. 5. Temp2 is maintained from T0 to T1 while in contact with precursor P1. The temperature is then lowered to Temp1 during the purge from T1 to T2, and Temp1 is maintained while contacting precursor P2 from T2 to T3. The temperature is raised to temperature 3 (Temp3) during the purge from T3 to T4 to promote the reaction of precursors P2 and P1, and is then lowered to Temp2 in preparation for the start of a new cycle at T4. With reference to FIGS. 6-9, another ALD processing method within the scope of the present invention will be described. FIGS. 6 to 8 show the periodic contacting of the substrate with precursors P1 and P2 and the intervening purges/removals. The substrate is first contacted with precursor P1 from T0 to T1. 
Non-chemisorbed P1 is purged from T1 to T2, and the chemisorbed P1 is contacted with precursor P2 from T2 to T4. After excess P2 is purged from T4 to T5, a new cycle begins by contacting the chemisorbed P2 with precursor P1 from T5 to T6. FIG. 9 illustrates the change in temperature as part of this method. Temp2 is maintained from T0 to T1 while in contact with precursor P1. The temperature is then lowered to Temp1 during the purge from T1 to T2, and Temp1 is maintained for at least part of the contact with precursor P2, from T2 to T3. The temperature is raised to Temp3 during at least part of the contact with precursor P2, from T3 to T4, and is lowered to Temp2 during the purge from T4 to T5 in preparation for the start of a new cycle. As a result of achieving the optimal deposition temperatures while retaining the various features and advantages of the invention described herein, material is deposited at a higher rate and/or with improved quality. Accordingly, devices formed using such methods have structures formed of improved-quality materials and/or of reduced dimensions; that is, the thickness of materials such as barrier materials and insulating materials can be efficiently reduced when the high quality associated with ALD and the deposition methods described herein is achieved. As one example of the methods described herein, Si3N4 can be formed. Dichlorosilane (DCS) is chemisorbed at a first temperature, and ammonia is then chemisorbed at a second temperature lower than the first temperature. Si3N4 can be formed from the chemisorbed components by reacting the DCS with the ammonia at a third temperature higher than the first and second temperatures. The present invention has thus been described above with respect to specific embodiments of structure and method. 
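The Si3N4 example above can be expressed as a small recipe table. The temperature values below are placeholders chosen only to satisfy the stated ordering (reaction temperature above the DCS temperature, which is above the NH3 temperature); they are not process values from this disclosure.

```python
# Sketch of the Si3N4 example: chemisorb DCS at a first temperature,
# chemisorb NH3 at a lower second temperature, react at a higher third
# temperature. Temperatures (deg C) are illustrative placeholders.

SI3N4_RECIPE = [
    ("chemisorb", "SiH2Cl2 (DCS)", 550),  # first temperature
    ("chemisorb", "NH3",           450),  # second temperature, lower
    ("react",     "DCS + NH3",     650),  # third temperature, highest
]

temps = [t for _, _, t in SI3N4_RECIPE]
assert temps[2] > temps[0] > temps[1]    # ordering required by the text
print(temps[2] - temps[1])               # span of the temperature window: 200
```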
However, the features of the present invention are not limited to the embodiments described and illustrated as preferred; various modifications are possible. The scope of the present invention should therefore be construed according to the claims, under the doctrine of equivalents. BRIEF DESCRIPTION OF THE FIGURES FIG. 1 is a diagram showing the timing of contacting the substrate with precursor 1 in an atomic layer deposition process. FIG. 2 is a diagram showing the timing of contacting the substrate with the purge gas in the atomic layer deposition process. FIG. 3 is a diagram showing the timing of contacting the substrate with precursor 2 in the atomic layer deposition process. FIG. 4 is a diagram showing the timing of temperature changes during the contacting described with reference to FIGS. 1 to 3. FIG. 5 is a diagram showing the timing of temperature changes according to another embodiment during the contacting described with reference to FIGS. 1 to 3. FIG. 6 is a diagram showing the timing of contacting the substrate with precursor 1 in an atomic layer deposition process. FIG. 7 is a diagram showing the timing of contacting the substrate with the purge gas in the atomic layer deposition process. FIG. 8 is a diagram showing the timing of contacting the substrate with precursor 2 in the atomic layer deposition process. FIG. 9 is a diagram showing the timing of temperature changes during the contacting described with reference to FIGS. 6 to 8. |
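The pulse/purge and temperature timing summarized in the figure descriptions above can be sketched as a simple schedule. The phase names and temperature values below are illustrative assumptions (only the ordering Temp1 < Temp2 < Temp3 is taken from the text), not values from this disclosure.

```python
# Sketch of one ALD cycle per the described timing: pulse P1 at Temp2,
# purge while cooling to Temp1, pulse P2 at Temp1, then purge while
# either returning to Temp2 (FIG. 4) or spiking to Temp3 to drive the
# P1+P2 reaction before returning to Temp2 (FIG. 5).

TEMP1, TEMP2, TEMP3 = 250, 300, 400  # deg C, assumed values

def ald_cycle(react_spike=False):
    """Return the (phase, temperature) schedule for one ALD cycle."""
    schedule = [
        ("pulse P1", TEMP2),   # T0-T1: chemisorb precursor 1
        ("purge",    TEMP1),   # T1-T2: remove excess P1, cool
        ("pulse P2", TEMP1),   # T2-T3: chemisorb precursor 2
    ]
    # T3-T4 purge: optionally raise to Temp3 to drive the P1+P2 reaction
    schedule.append(("purge", TEMP3 if react_spike else TEMP2))
    if react_spike:
        schedule.append(("cool", TEMP2))  # return to Temp2 for next cycle
    return schedule

print(len(ald_cycle()))                 # 4 phases without the spike
print(ald_cycle(react_spike=True)[-1])  # final return to Temp2
```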
Disclosed are methods and apparatuses for preventing memory violations. In one aspect, a fetch unit accesses, from a branch predictor of a processor, a disambiguation indicator associated with a block of instructions of a program to be executed by the processor, and fetches the block of instructions from an instruction cache. The processor executes load instructions and/or store instructions in the block of instructions based on the disambiguation indicator, which indicates whether the load instructions and/or the store instructions in the block of instructions can bypass other instructions of the program or be bypassed by other instructions of the program. |
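The mechanism described in the abstract can be sketched as a simple gating predicate: the branch predictor supplies a per-block indicator, and the processor decides whether a load from the block may issue past unresolved older stores. The two-bit encoding and all names below are illustrative assumptions, not taken from this disclosure.

```python
# Minimal sketch of disambiguation-indicator-gated load issue. An
# assumed two-bit indicator encoding controls speculative bypass.

BYPASS_OK    = 0b00   # loads may bypass unresolved older stores
BLOCK_LOADS  = 0b01   # hold loads until unknown older stores resolve
BLOCK_STORES = 0b10   # mark unknown stores non-bypassable (unused here,
                      # listed to mirror the non-bypassable-store case)

def may_issue_load(indicator, unresolved_older_stores):
    """Return True if a load from the block may issue speculatively."""
    if indicator & BLOCK_LOADS:
        return unresolved_older_stores == 0
    return True  # speculative bypass permitted

print(may_issue_load(BYPASS_OK, unresolved_older_stores=2))    # True
print(may_issue_load(BLOCK_LOADS, unresolved_older_stores=2))  # False
```

On a memory violation, the indicator for the offending block would be updated (per claim 12) so that future fetches of the block take the conservative path.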
1. A method for preventing memory violations, comprising: accessing, by a fetch unit, from a branch predictor of a processor, a disambiguation indicator associated with an instruction block of a program to be executed by the processor; fetching, by the fetch unit of the processor, the instruction block from an instruction cache; and executing, by the processor, a load instruction and/or a store instruction in the instruction block based on the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program. 2. The method of claim 1, wherein the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program comprises the disambiguation indicator indicating that all of the load instructions in the instruction block should be blocked until an unknown store instruction has been resolved, and the executing comprises preventing all of the load instructions in the instruction block from executing until the unknown store instruction has been resolved. 3. The method of claim 2, wherein the unknown store instruction comprises a store instruction whose target memory address is unknown until the unknown store instruction is resolved, and wherein, in program execution order, the unknown store instruction precedes any load instruction in the instruction block. 4. The method of claim 1, wherein the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program comprises the disambiguation indicator indicating that all of the load instructions in the 
instruction block should be blocked until an unknown load instruction has been resolved, and the executing comprises preventing all of the load instructions in the instruction block from executing until the unknown load instruction has been resolved. 5. The method of claim 4, wherein the unknown load instruction comprises a load instruction whose target memory address is unknown until the unknown load instruction is resolved, and wherein, in program execution order, the unknown load instruction precedes any load instruction in the instruction block. 6. The method of claim 1, wherein the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program comprises the disambiguation indicator indicating that all of the load instructions in the instruction block should be blocked until an unknown store instruction and an unknown load instruction have been resolved, and the executing comprises: blocking all of the load instructions in the instruction block from executing until the unknown store instruction has been resolved, and blocking all of the load instructions in the instruction block from executing until the unknown load instruction has been resolved. 7. The method of claim 1, wherein the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program comprises the disambiguation indicator indicating that all unknown store instructions in the instruction block should be marked as non-bypassable, and the executing comprises waiting to execute the other instructions of the program until all of the unknown store instructions in the instruction block have been resolved. 8. The method of 
claim 1, wherein the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program comprises the disambiguation indicator indicating that all unknown load instructions in the instruction block should be marked as non-bypassable, and the executing comprises waiting to execute the other instructions of the program until all unknown load instructions in the instruction block have been resolved. 9. The method of claim 1, wherein the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program comprises the disambiguation indicator indicating that all unknown store instructions and all unknown load instructions in the instruction block should be marked as non-bypassable, and the executing comprises: waiting to execute other instructions of the program until all of the unknown store instructions in the instruction block have been resolved, and waiting to execute other instructions of the program until all of the unknown load instructions in the instruction block have been resolved. 10. The method of claim 1, wherein the disambiguation indicator is a multi-bit field associated with each instruction block of the program being executed. 11. The method of claim 1, further comprising: setting the disambiguation indicator to a default value before the instruction block is executed for the first time. 12. The method of claim 1, further comprising: updating the disambiguation indicator based on a load instruction or a store instruction in the instruction block causing a memory violation during execution of the instruction block. 13. The method of claim 1, wherein the instruction block is associated with a plurality of entries in the 
branch predictor, each entry of the plurality of entries in the branch predictor corresponding to the instruction block a branch in, andEach of the plurality of entries in the branch predictor has a corresponding disambiguation indicator indicating how the load instruction and the store instruction in the instruction block should be executed for the branch in the instruction block.14.A device for preventing memory violations, comprising:processor;An acquisition unit configured to acquire an instruction block of a program to be executed by the processor from an instruction cache;a branch predictor configured to provide the processor with a disambiguation indicator associated with the instruction block,Wherein the processor is configured to, based on the disambiguation indicator, indicate that a load instruction and/or a store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program, executing The load instruction and/or the store instruction in the instruction block.15.The apparatus of claim 14 wherein said indication of said load instruction in said block of instructions and/or said other instructions of said store instruction being bypassable by said program are still bypassed by other instructions of said program The disambiguation indicator includes the disambiguation indicator indicating that all of the load instructions in the block of instructions should be blocked until an unknown store instruction has been resolved, andThe processor, wherein configured to execute, includes the processor configured to prevent execution of all of the load instructions in the instruction block until the unknown store instruction has been resolved.16.The apparatus of claim 15 wherein said unknown store instruction comprises a store instruction, wherein a target memory address of said unknown store instruction is unknown until said unknown store instruction is resolved, andWherein in the program 
execution order, the unknown store instruction precedes any load instruction in the instruction block.17.The apparatus of claim 14 wherein said indication of said load instruction in said block of instructions and/or said other instructions of said store instruction being bypassable by said program are still bypassed by other instructions of said program The disambiguation indicator includes the disambiguation indicator indicating that all of the load instructions in the instruction block should be blocked until an unknown load instruction has been resolved, andThe processor, wherein configured to execute, includes the processor configured to prevent execution of all load instructions in the instruction block until the unknown load instruction has been resolved.18.The apparatus of claim 17, wherein the unknown load instruction comprises a load instruction, wherein a target memory address of the unknown load instruction is unknown until the unknown load instruction is resolved, andWherein in the program execution order, the unknown load instruction precedes any load instruction in the instruction block.19.The apparatus of claim 14 wherein said instruction to indicate said load instruction in said block of instructions and/or said store instruction is bypassable by said other program of said program or bypassed by other instructions of said program The disambiguation indicator includes the disambiguation indicator indicating the following:All of the load instructions in the instruction block should be blocked from execution until an unknown store instruction has been resolved, orAll of the load instructions in the instruction block should be blocked from execution until an unknown load instruction has been resolved, andThe processor configured to execute includes the processor configured to perform the following operations:Blocking all load instructions in the instruction block until the unknown store instruction has been resolved, or preventing all load instructions 
in the instruction block from executing until the unknown load instruction has been resolved.20.The apparatus of claim 14 wherein said indication of said load instruction in said block of instructions and/or said other instructions of said store instruction being bypassable by said program are still bypassed by other instructions of said program The disambiguation indicator includes the disambiguation indicator indicating that all unknown storage instructions in the instruction block should be marked as non-routable, andThe processor, wherein configured to execute, includes the processor configured to wait for execution of the program other instructions until all of the unknown storage instructions in the block of instructions have been resolved.21.The apparatus of claim 14 wherein said indication of said load instruction in said block of instructions and/or said other instructions of said store instruction being bypassable by said program are still bypassed by other instructions of said program The disambiguation indicator includes the disambiguation indicator indicating that all unknown load instructions in the instruction block should be marked as non-routable, andThe processor, wherein configured to execute, includes the processor configured to wait for execution of other instructions of the program until all unknown load instructions in the block of instructions have been resolved.22.The apparatus of claim 14 wherein said instruction to indicate said load instruction in said block of instructions and/or said store instruction is bypassable by said other program of said program or bypassed by other instructions of said program The disambiguation indicator includes the disambiguation indicator indicating the following:All unknown storage instructions in the instruction block shall be marked as non-passable, orAll unknown load instructions in the instruction block should be marked as non-passable, andThe processor configured to execute includes the processor 
configured to perform the following operations:Waiting for execution of other instructions of the program until all of the unknown stored instructions in the block of instructions have been resolved, orWaiting for execution of other instructions of the program until all of the unknown load instructions in the block of instructions have been resolved.23.The apparatus of claim 14, wherein the disambiguation indicator is a multi-bit field associated with each instruction block of the program being executed.24.The device of claim 14, wherein the processor is further configured to:The disambiguation indicator is set to a default value prior to the execution of the instruction block for the first time.25.The device of claim 14, wherein the processor is further configured to:The disambiguation indicator is updated based on a load instruction or a store instruction in the instruction block that caused a memory violation during execution of the instruction block.26.The apparatus of claim 14, wherein the instruction block is associated with a plurality of entries in the branch predictor, each entry of the plurality of entries in the branch predictor corresponding to the instruction block a branch in, andEach of the plurality of entries in the branch predictor has a corresponding disambiguation indicator indicating how the load instruction and the store instruction in the instruction block should be executed for the branch in the instruction block.27.A device for preventing memory violations, comprising:Device for processing;Means for obtaining, configured to acquire, from an instruction cache, an instruction block of a program to be executed by the processor;Means for branch prediction configured to provide the processor with a disambiguation indicator associated with the instruction block,Wherein the means for processing is configured to be based on the disambiguation indicating whether a load instruction and/or a store instruction in the instruction block can bypass other 
instructions of the program or be bypassed by other instructions of the program An indicator that executes the load instruction and/or the store instruction in the instruction block.28.The apparatus of claim 27 wherein said means for processing is further configured to:The disambiguation indicator is set to a default value prior to the execution of the instruction block for the first time.29.A non-transitory computer readable medium storing computer executable code for preventing memory violations, the computer executable code comprising:Having the acquisition unit of the processor acquire at least one instruction of the instruction block of the program to be executed by the processor from the instruction cache;At least one instruction that causes the acquisition unit to access a disambiguation indicator associated with the instruction block from a branch predictor of the processor;Causing the processor to perform the said based on the disambiguation indicator indicating that a load instruction and/or a store instruction in the instruction block can bypass the program or other instructions bypassed by other instructions of the program The load instruction and/or the at least one instruction of the store instruction in the instruction block.30.The method of claim 1 wherein the computer executable code further comprises:The acquisition unit is caused to set the disambiguation indicator to at least one instruction of a default value before the instruction block is executed for the first time. |
MEMORY VIOLATION PREDICTION

TECHNICAL FIELD

The present invention relates to memory violation prediction.

BACKGROUND

Processors provide load and store instructions to access information located in the processor caches (e.g., L1, L2, etc.) and/or main memory. A load instruction can include a memory address (provided directly in the load instruction or provided using an address register) and identify a target register. When the load instruction is executed, the data stored at the memory address can be retrieved (e.g., from a cache, from main memory, or from another storage mechanism) and placed in the identified target register. Similarly, a store instruction can include a memory address and an identifier of a source register. When the store instruction is executed, data from the source register can be written to the memory address. Load and store instructions can utilize data cached in the L1 cache.

Processors can use instruction-level parallelism (ILP) to improve application performance. Out-of-order execution is a frequently used technique for exploiting ILP. In out-of-order execution, instructions that are ready to execute are identified and executed, usually in an order different from the one specified by the von Neumann programming model. This can result in memory operations, such as loads and stores, being performed out of order. For example, an "older" store instruction may not be ready to execute until after a "younger" load instruction has already been executed, because of latency in computing the store's data and address earlier in the program. An "older" instruction is an instruction that appears earlier in program order than a "younger" instruction.

Younger instructions may depend on older instructions. For example, two instructions can access the same memory address, or a younger instruction can require the result of an older instruction.
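The ordering hazard described above can be illustrated with a small sketch. This is a hypothetical simplified model, not from the patent; the addresses, values, and function names are invented for illustration:

```python
# Hypothetical sketch of the hazard described above: a younger load that
# executes before an older store to the same address observes stale data.
memory = {0x100: 0}            # simplified memory: address -> value

def store(addr, value):        # models a store instruction
    memory[addr] = value

def load(addr):                # models a load instruction
    return memory[addr]

# Program order: store 42 to 0x100, then load from 0x100.
# Out-of-order execution runs the younger load before the older store:
stale = load(0x100)            # younger load executes early, sees 0
store(0x100, 42)               # older store resolves late
fresh = load(0x100)            # a re-executed load would see 42
```

In this model the early load returns the stale value 0; only a load re-executed after the store completes observes 42, which is why the erroneous load must be flushed and replayed.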
Thus, continuing with the above example, the younger load instruction may depend on the older store instruction being executed first, but due to latency earlier in program execution, the older store instruction is not executed before the younger load instruction, resulting in an error.

To resolve this error, the executed load instruction and the subsequently issued instructions are flushed from the pipeline, and each of the flushed instructions is reissued and re-executed. While the load instruction and the subsequently issued instructions are being invalidated and reissued, the L1 cache can be updated with the data stored by the store instruction. When the reissued load instruction is executed a second time, it can then receive the correctly updated data from the L1 cache.

After a load-store conflict, executing the load instruction and the subsequently issued instructions, invalidating them, and reissuing and re-executing the load instruction may take many processor cycles. Because the initial results of the load and the subsequently issued instructions are invalid, the time taken to execute them is essentially wasted. Therefore, load-store conflicts can make the processor inefficient.

SUMMARY

A simplified summary related to one or more aspects disclosed herein is presented below.
Accordingly, the following summary should not be considered an extensive overview of all contemplated aspects, nor should it be regarded as identifying key or critical elements of all aspects. Its sole purpose is to present certain aspects in a simplified form as a prelude to the more detailed description presented below.

In one aspect, a method for preventing a memory violation includes: accessing, by a fetch unit, from a branch predictor of a processor, a disambiguation indicator associated with an instruction block of a program to be executed by the processor; fetching, by the fetch unit of the processor, the instruction block from an instruction cache; and executing, by the processor, a load instruction and/or a store instruction in the instruction block based on the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program.

In an aspect, an apparatus for preventing a memory violation includes: a processor; a fetch unit configured to fetch, from an instruction cache, an instruction block of a program to be executed by the processor; and a branch predictor configured to provide, to the processor, a disambiguation indicator associated with the instruction block, wherein the processor is configured to execute a load instruction and/or a store instruction in the instruction block based on the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program.

In one aspect, an apparatus for preventing a memory violation includes: means for processing; means for fetching, configured to fetch, from an instruction cache, an instruction block of a program to be executed by the processor; and means for branch prediction, configured to provide a disambiguation indicator associated with the instruction block to the processor, wherein the means for processing is configured to execute the load instruction and/or the store instruction in the instruction block based on the disambiguation indicator indicating whether a load instruction and/or a store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program.

In one aspect, a non-transitory computer-readable medium storing computer-executable code for preventing memory violations includes: at least one instruction to cause a fetch unit of a processor to fetch, from an instruction cache, an instruction block of a program to be executed by the processor; at least one instruction to cause the fetch unit to access, from a branch predictor of the processor, a disambiguation indicator associated with the instruction block; and at least one instruction to cause the processor to execute a load instruction and/or a store instruction in the instruction block based on the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program.

Other objects and advantages associated with the aspects disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

The aspects of the present disclosure will be better understood from the following detailed description read in conjunction with the accompanying drawings, in which:

FIG. 1 is a block diagram depicting a system in accordance with at least one aspect of the present disclosure.

FIG. 2 is a block diagram depicting an exemplary computer processor in accordance with at least one aspect of the present disclosure.

FIG. 3 illustrates an exemplary system for branch prediction and memory disambiguation prediction.

FIG. 4 illustrates an exemplary system for memory violation prediction in accordance with at least one aspect of the present disclosure.

FIG. 5 illustrates an exemplary flow for preventing memory violations in accordance with at least one aspect of the present disclosure.

DETAILED DESCRIPTION

Methods and apparatus for preventing memory violations are disclosed. In one aspect, a fetch unit accesses, from a branch predictor of a processor, a disambiguation indicator associated with an instruction block of a program to be executed by the processor, and fetches the instruction block from an instruction cache. The processor executes a load instruction and/or a store instruction in the instruction block based on the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program.

These and other aspects of the disclosure are disclosed in the following description of specific aspects of the disclosure and the related drawings. Alternative aspects can be devised without departing from the scope of the present disclosure. In addition, well-known elements of the present disclosure will not be described in detail, or will be omitted, so as not to obscure the relevant details of the present disclosure.

The words "exemplary" and/or "example" are used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" and/or an "example" is not necessarily to be construed as preferred or advantageous over other aspects. Likewise, the term "aspects of the disclosure" does not require that all aspects of the disclosure include the discussed feature, advantage, or mode of operation.

In addition, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device.
It will be appreciated that the various actions described herein can be performed by specific circuits, such as an application-specific integrated circuit (ASIC), by program instructions being executed by one or more processors, or by a combination of both. Additionally, the sequences of actions described herein can be considered to be embodied entirely within any form of computer-readable storage medium having stored therein a corresponding set of computer instructions that, upon execution, would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which are contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspect may be described herein as, for example, "logic configured to" perform the described action.

FIG. 1 is a block diagram depicting a system 100 in accordance with at least one aspect of the present disclosure. System 100 can be any computing device, such as a cellular telephone, a personal digital assistant (PDA), a pager, a laptop computer, a tablet, a desktop computer, a server computer, a compact flash device, an external or internal modem, a wireless or wired phone, and so on.

System 100 can include: a system memory 102 for storing instructions and data; a graphics processing unit 104 for graphics processing; an input/output (I/O) interface for communicating with external devices; a storage device 108 for long-term storage of instructions and data; and a processor 110 for processing instructions and data. Processor 110 may have an L2 cache 112 and a plurality of L1 caches 116, with each L1 cache 116 being utilized by one of a plurality of processor cores 114.

FIG. 2 is a block diagram depicting processor 110 of FIG. 1 in greater detail. For simplicity, FIG. 2 depicts, and is described with respect to, a single core 114 of processor 110.

L2 cache 112 may contain a portion of the instructions and data being used by processor 110. As shown in FIG. 2, the L1 cache 116 can be divided into two parts: an L1 instruction cache 222 (I-cache 222) for storing I-lines, and an L1 data cache 224 (D-cache 224) for storing D-lines. The L2 cache access circuit 210 can retrieve sets of instructions from the L2 cache 112. I-lines retrieved from the L2 cache 112 can be processed by the predecoder and scheduler 220 and placed into the I-cache 222. To further improve the performance of the processor 110, instructions are typically predecoded when the I-line is retrieved, for example, from the L2 cache 112 (or higher). This predecoding may include various functions, such as address generation, branch prediction, and scheduling (determining the order in which the instructions should be issued), which is captured as dispatch information (a set of flags) for controlling instruction execution.

Instruction fetch circuitry 236 can be used to fetch instructions for core 114. For example, instruction fetch circuitry 236 can include a program counter that tracks the current instruction being executed in core 114. A branch unit (not shown) within core 114 can be used to change the program counter when a branch instruction is encountered. The I-line buffer 232 can be used to store instructions fetched from the L1 I-cache 222. The issue and dispatch circuitry 234 can be used to group the instructions in the I-line buffer 232 into instruction groups that are then issued in parallel to the core 114. In some cases, the issue and dispatch circuitry 234 can use information provided by the predecoder and scheduler 220 to form appropriate instruction groups.

In addition to receiving instructions from the issue and dispatch circuitry 234, the core 114 can receive data from a variety of locations.
In the event that core 114 requires data from a data register, register file 240 can be used to obtain the data. In the event that core 114 requires data from a memory location, cache load and store circuitry 250 can be used to load the data from the D-cache 224. Where such a load is performed, a request for the required data can be issued to the D-cache 224. If the D-cache 224 does not contain the desired data, a request for the desired data can be issued to the L2 cache 112 (e.g., using the L2 access circuit 210).

In some cases, data can be modified in core 114. The modified data can be written to the register file 240 or stored in memory. Write-back circuit 238 can write data back to register file 240, or can utilize the cache load and store circuitry 250 to write data back to the D-cache 224. Optionally, core 114 may directly access the cache load and store circuitry 250 to perform stores. In some cases, write-back circuit 238 can also be used to write instructions back to the I-cache 222.

Processor 110 may utilize instruction-level parallelism (ILP) to improve application performance. Out-of-order execution is a frequently used technique for exploiting ILP. In out-of-order execution, instructions that are ready to execute are identified and executed, usually in an order different from the program order specified by the von Neumann programming model. This can result in memory operations, such as loads and stores, being performed out of order.

For example, a store instruction that stores data to a particular memory address can be executed, but due to latency among the different groups of instructions being executed out of program order, the stored data is not immediately available to a "younger" dependent load instruction. Thus, if a younger load instruction that loads data from the same memory address is executed shortly after the "older" store instruction, the younger load instruction can receive data from the L1 cache 116 before the L1 cache 116 has been updated with the result of the older store instruction, resulting in a memory violation. A similar problem occurs with store-store and load-load instruction ordering.

Table 1 illustrates an example of two common instruction ordering violations.

Table 1

A memory violation creates a functional failure that needs to be resolved. In general, a younger load instruction should begin execution only after the addresses of the preceding memory operations have been resolved. Because it did not wait, this load instruction and all of its dependent instructions must be re-executed to maintain functionality. The younger load instruction that caused the error is treated as a precise fault, where the machine state is restored at the boundary of the younger load and the processor 110 resumes fetching instructions from the younger load, which will be re-executed. As with any precise fault (such as a branch misprediction), there is a high performance and power penalty associated with such memory violations.

To address such memory violations, many processors utilize load and store queues that track in-flight loads and stores. A load instruction checks the store queue to identify the youngest older store with an address overlap. If there is an overlap, the store instruction forwards its data to the load instruction to ensure functionality. If there is no overlap, the load instruction proceeds to load the data from the data cache (e.g., L1 cache 116, L2 cache 112, etc.).

If there are any older store instructions whose destination addresses have not been resolved, then the load instruction must determine whether it depends on any of these unresolved store instructions. This is often referred to as memory disambiguation.
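The store-queue check described above can be sketched as follows. This is a simplified illustrative model, not the patent's implementation; the data structures and function name are assumptions:

```python
# Hypothetical sketch: a load scans older in-flight stores, youngest
# first, for an address overlap; an unresolved store address (None)
# forces a memory-disambiguation decision.
def execute_load(addr, store_queue, data_cache):
    """store_queue: list of (store_addr, value) pairs, ordered oldest
    to youngest; store_addr is None while the store is unresolved."""
    for st_addr, st_value in reversed(store_queue):  # youngest older store first
        if st_addr is None:
            return None              # unresolved older store: must disambiguate
        if st_addr == addr:
            return st_value          # store-to-load forwarding
    return data_cache.get(addr)      # no overlap: read the data cache
```

Here `None` stands in for the point where a real pipeline would either block the load or speculatively bypass the unresolved store, which is exactly the choice the disambiguation policies below address.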
Current methods for performing memory disambiguation include:

1. Always block on unknown store addresses.

2. Always bypass, assuming that none of the unresolved store instructions will forward to the load instruction.

3. Based on its instruction address or other unique identifier, predict from the load instruction's history that the load instruction will not depend on any unresolved store instruction.

4. Based on its instruction address or other unique identifier, predict that a particular store instruction will never forward to any load instruction that executed earlier than the store instruction itself. Therefore, if its address is unknown, any younger load instruction can bypass this store instruction.

FIG. 3 illustrates an exemplary conventional system 300 for branch prediction and memory disambiguation prediction. System 300 includes a branch predictor 302, a front-end (FE) pipe 304, a back-end (BE) pipe 306, a load/store (SU) pipe 308, and a memory disambiguation (MD) predictor 310. Branch predictor 302 can be part of the "front end" and provide the next instruction address to the fetch unit (e.g., instruction fetch circuitry 236). Branch predictor 302, memory disambiguation predictor 310, front-end pipe 304, back-end pipe 306, and load/store pipe 308 may be components of core 114.

In system 300, branch predictor 302 sends the next program counter (PC) to the front-end pipe 304 in core 114. The memory disambiguation predictor 310 can send its predictions, as to whether the load and/or store instructions being executed depend on unresolved load and/or store instructions, to any or all of the front-end pipe 304, the back-end pipe 306, and the load/store pipe 308.

The present disclosure presents a method for memory disambiguation that integrates disambiguation prediction with branch prediction. For simplicity, this combined prediction method is referred to herein as "memory violation prediction."

FIG. 4 illustrates an exemplary system 400 for memory violation prediction in accordance with at least one aspect of the present disclosure. System 400 includes a branch and memory violation predictor (MVP) 402, a front-end (FE) pipe 304, a back-end (BE) pipe 306, and a load/store (SU) pipe 308. Branch and memory violation predictor 402 can be a component of the instruction fetch circuitry 236 in FIG. 2, while the front-end pipe 304, back-end pipe 306, and load/store pipe 308 can be components of core 114.

The branch and memory violation predictor 402 includes a branch predictor 404 and a memory violation predictor 406. Branch predictor 404 and memory violation predictor 406 store the PC and the memory violation predictor code (also referred to herein as the "disambiguation indicator"), respectively. The branch and memory violation predictor 402 sends the next PC from the branch predictor 404 and the memory violation predictor code from the memory violation predictor 406 to the front-end pipe 304 as an entry 408 (e.g., one or more bits representing the PC and the memory violation predictor code). Entry 408 is also passed to the back-end pipe 306 and the load/store pipe 308.

In the present disclosure, for simplicity, branch predictor 404 is assumed to be a decoupled branch predictor, but branch predictor 404 may instead be a coupled branch predictor without altering the memory violation prediction operations disclosed herein. A coupled branch predictor tends to have a request-response type relationship with the fetch pipeline, whereas with a decoupled branch predictor (e.g., branch predictor 404) the relationship is more of a producer-consumer type with some back-pressure mechanism. A decoupled branch predictor continuously generates possible next fetch group addresses based on the current fetch group address.
The fetch group address is the first address of a contiguous block of instructions and is pointed to by the PC. This can be part of a cache line of the instruction cache (e.g., in an ARM-class model) or an instruction block (e.g., in a block-based ISA, such as E2).

As shown in FIG. 4, memory violation predictor 406 augments entries from branch predictor 404 by adding a disambiguation indicator (i.e., the memory violation predictor code) to the PC sent to the front-end pipe 304 as entry 408. The disambiguation indicator in entry 408 is valid for all load and/or store instructions within the block of instructions fetched at that PC. The disambiguation indicator provides a historical context for the disambiguation prediction and avoids blocking load instructions when not needed. That is, if the load instructions in the instruction block do not behave as predicted, the memory violation predictor can change how the load instructions in the instruction block should be executed. This provides finer-grained disambiguation prediction for load instructions and improves performance without increasing load mispredictions.

This historical context can be used for disambiguation when a younger load instruction (which faults due to its early execution) is in a different control domain than the older store or load instruction.
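Applying a single block-level indicator to every memory instruction in the fetched block might be modeled as follows. The names, instruction tuples, and the indicator encoding are illustrative assumptions, not the patent's actual encoding:

```python
# Hypothetical sketch: the disambiguation indicator carried with the
# block (as part of entry 408) is applied to every load and store in
# it; non-memory instructions are left untagged.
BLOCK_UNTIL_STORES_RESOLVED = 1   # illustrative code, not Table 2's encoding

def tag_fetch_group(instructions, indicator):
    """instructions: list of (opcode, operand) tuples for one block."""
    tagged = []
    for opcode, operand in instructions:
        tag = indicator if opcode in ("LOAD", "STORE") else None
        tagged.append((opcode, operand, tag))
    return tagged

block = [("ADD", "r1"), ("LOAD", "0x100"), ("STORE", "0x200")]
tagged = tag_fetch_group(block, BLOCK_UNTIL_STORES_RESOLVED)
```

The point of the sketch is that the tag is available as soon as the block is fetched, before any individual load or store has been decoded.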
A variety of factors, such as the presence, location, and resolution time of older instructions, determine the likelihood of a violation, and the branching context (provided by branch predictor 404) helps narrow down the situations in which a load instruction passing an unresolved older memory operation actually faults.

The memory violation predictor 406 uses the current instruction fetch group address to access the branch predictor 404, and, along with the direction and/or target prediction provided by the branch predictor 404, the looked-up branch prediction provides information about the likelihood of a memory violation in the current block of instructions (whose address was used to look up the branch prediction). More specifically, a branch can have different outcomes depending on how one or more previous branches were executed/resolved. Thus, there may be multiple entries in the branch predictor 404 for the same instruction block, corresponding to how the branches in the current block of instructions relate to previous branch outcomes. The same instruction block can have multiple successors, depending on the branch instructions that preceded it. Similarly, for the same instruction block, there may also be multiple disambiguation indicators, where each disambiguation indicator corresponds to an entry in the branch predictor 404 for that instruction block. For example, for a given instruction block, if there are two entries in the branch predictor 404, there may be two corresponding disambiguation indicators in the memory violation predictor 406, one for each entry in the branch predictor 404. When the instruction block is decoded, any load and/or store instructions inside the instruction block will follow the memory disambiguation predictions provided for those load and/or store instructions by the looked-up branch predictor entry.
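One way to model the per-path entries described above is a table keyed by the block address plus a branch-history token. This is a sketch under that assumption; the class name, key shapes, and history strings are invented for illustration:

```python
# Hypothetical sketch: the same block PC can hold several predictor
# entries, distinguished by prior branch outcomes, each carrying its
# own disambiguation indicator.
class BranchAndMVP:
    def __init__(self, default_indicator=0):
        self.table = {}                 # (block_pc, history) -> indicator
        self.default = default_indicator

    def lookup(self, block_pc, history):
        # Unseen (PC, history) pairs fall back to the default indicator.
        return self.table.get((block_pc, history), self.default)

    def update(self, block_pc, history, indicator):
        self.table[(block_pc, history)] = indicator

mvp = BranchAndMVP()
mvp.update(0x4000, "TN", 2)   # reached via taken, then not-taken branches
mvp.update(0x4000, "NT", 0)   # same static block, different path
```

Depending on the path taken to reach `0x4000`, the lookup selects a different disambiguation code for the same static block, which mirrors the path-dependent prediction the text describes.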
Thus, the memory violation predictor of the present disclosure permits multiple different disambiguation predictions for the same instruction block (i.e., for the same static branch PC). Thus, depending on how program execution reaches a given branch PC, the memory violation predictor can select one disambiguation code instead of another. Note that in the discussion above, the branch predictor 404 is not indexed by the PC alone. Rather, as described above, it is indexed by a combination of the PC and the historical context. However, even without a historical context, branch predictor 404 will still provide predictions. Memory violation predictor 406 can provide one or more status bits to indicate the disambiguation prediction for the instruction block. These may be, but are not limited to, the disambiguation indicator/memory violation predictor codes illustrated in Table 2. The initial value of the disambiguation indicator can be any of these states, depending on how conservative the design is. Note that the codes in Table 2 are merely exemplary. More (or less) detail may be expressed in terms of the behavior of, and interaction between, memory instructions in an instruction block within each prediction.

Table 2

Code 0: No restriction; load and store instructions may bypass freely (default).
Code 1: Block all load instructions in the instruction block until unknown store instructions have been resolved.
Code 2: Block all load instructions in the instruction block until unknown load instructions have been resolved.
Code 3: Block all load instructions in the instruction block until unknown store and unknown load instructions have been resolved.
Code 4: Mark all unknown store instructions in the instruction block as non-bypassable.
Code 5: Mark all unknown load instructions in the instruction block as non-bypassable.
Code 6: Mark all unknown store and unknown load instructions in the instruction block as non-bypassable.

The disambiguation indicator in entry 408 applies to all load and store instructions in the instruction block. That is, all load and store instructions in the instruction block are marked with the disambiguation indicator of the instruction block. The disambiguation indicator indicates whether execution of any store and/or load instructions within the instruction block previously caused a memory violation. Thus, in this type of memory disambiguation, memory dependency behavior is described at the granularity of instruction blocks rather than for individual memory instructions (e.g., loads and stores).
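The block-granularity marking just described — every load and store in a decoded block inherits the block's single indicator — might be sketched as below. The type names and fields are invented for illustration:

```python
# Illustrative sketch: one block-level disambiguation indicator is stamped
# onto every load/store decoded from that block, before the individual
# memory instructions have been examined.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MemOp:
    kind: str                 # "load" or "store"
    disambiguation: int = 0   # filled in from the block's indicator

@dataclass
class InstructionBlock:
    pc: int
    mem_ops: List[MemOp] = field(default_factory=list)

def decode(block: InstructionBlock, indicator: int) -> InstructionBlock:
    # Every memory instruction inherits the block-wide prediction.
    for op in block.mem_ops:
        op.disambiguation = indicator
    return block

blk = InstructionBlock(0x2000, [MemOp("load"), MemOp("store"), MemOp("load")])
decode(blk, 3)  # e.g., code 3: block loads until unknown stores/loads resolve
assert all(op.disambiguation == 3 for op in blk.mem_ops)
```

Because the indicator is block-wide, it can steer scheduling very early in the pipeline, before decode has identified which instructions are memory operations.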
This allows the group behavior to be expressed very early in the pipeline (even before the loads and stores affected by the prediction in the instruction block have been decoded). Note that the older conflicting load and/or store instructions (which were unresolved when execution of load and/or store instructions in the current instruction block resulted in a memory violation) need not be in the same/current instruction block. In fact, the older conflicting load and/or store instructions may have executed in one or more previous instruction blocks. Additionally, the memory violation predictor 406 need not know whether the instruction block contains load and/or store instructions. In fact, when the instruction block is executed for the first time, the memory violation predictor 406 can simply assign an initial/default value to the disambiguation indicator of the instruction block. The disambiguation indicator can then be updated depending on whether execution of the instruction block caused a memory violation. The next time the instruction block is fetched, the updated disambiguation indicator will better reflect how the instruction block should be executed (e.g., as indicated by the disambiguation indicators in Table 2) to prevent memory violations. Additionally, each time an instruction block completes execution, the memory violation predictor 406 will be updated with the disambiguation state of the instruction block (e.g., code 0, 1, 2, or 3 of Table 2).
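The default-then-train flow described above can be sketched minimally as follows; the dictionary representation and the particular codes are illustrative assumptions:

```python
# Illustrative sketch of the train-on-completion flow: a block starts with a
# default indicator, and each completed execution writes back the observed
# disambiguation state (e.g., code 1 if a store flushed a load this time).

predictor = {}   # block_pc -> disambiguation code
DEFAULT = 0      # assumed "no restriction" initial value

def fetch(block_pc):
    # First fetch of a block installs the default indicator.
    return predictor.setdefault(block_pc, DEFAULT)

def complete(block_pc, observed_code):
    # On block completion, record the disambiguation state that execution
    # actually required, so the next fetch sees the updated indicator.
    predictor[block_pc] = observed_code

assert fetch(0x1000) == DEFAULT   # first execution: default value
complete(0x1000, 1)               # a store flushed a load during execution
assert fetch(0x1000) == 1         # next fetch reflects the violation
```

Note the predictor never needs to know in advance whether the block contains memory instructions; it only reacts to what execution reports back.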
Additional updates may also occur when a disambiguation flush occurs in the machine, to update the corresponding entry in the memory violation predictor 402. The disambiguation indicator can be managed as a sticky entry (e.g., once set, not changed for a particular block of instructions), managed with a linear threshold (e.g., exceeding the threshold disables disambiguation), managed with a hysteresis threshold (e.g., ramped quickly to disabled, ramped slowly to re-enable), or the like. How the instruction block resolves relative to its memory operations may result in selecting a different disambiguation indicator for the instruction block (e.g., as described in Table 2). Specifically, a branch queue entry corresponding to an instruction block can maintain the memory violation state of the instruction block. If there is a memory violation, the corresponding branch queue entry can be updated with an appropriate resolution. The disambiguation indicator (or memory violation predictor code) depends on the type of violation. For example, if a store instruction invalidates multiple load instructions, the disambiguation indicator of the block of instructions containing the store instruction may be set to memory violation predictor code 4 (from Table 2). However, if there is only one load instruction, the disambiguation indicator of the instruction block containing the store instruction is set to memory violation predictor code 1. The disambiguation indicator of an instruction block can also be updated to a different memory violation predictor code.
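The three update disciplines named above (sticky, linear threshold, hysteresis) can be sketched as small counter policies. The thresholds, step sizes, and limits below are invented for illustration, not values from the disclosure:

```python
# Illustrative counter-based update policies for a disambiguation indicator.
# Thresholds and step sizes are assumptions, not values from the disclosure.

def sticky(state, violation):
    # Once set by a violation, never cleared for this block.
    return True if violation else state

def linear_threshold(count, violation, threshold=4):
    # Count violations; disambiguation is disabled above the threshold.
    count = count + 1 if violation else max(count - 1, 0)
    return count, count > threshold

def hysteresis(count, violation, up=4, down=1, limit=8):
    # Ramp up quickly on a violation (fast to disable), decay slowly on
    # clean executions (slow to re-enable).
    count = min(count + up, limit) if violation else max(count - down, 0)
    return count, count > 0  # disabled while any charge remains

# A single violation immediately sticks:
assert sticky(False, True) and sticky(True, False)

# Hysteresis disables after one violation and needs several clean runs
# before re-enabling:
c, disabled = hysteresis(0, True)
assert disabled
for _ in range(4):
    c, disabled = hysteresis(c, False)
assert not disabled

# Linear threshold disables only after repeated violations:
c2 = 0
for _ in range(5):
    c2, dis = linear_threshold(c2, True)
assert dis
```

The hysteresis shape is what the text calls "ramped quickly to disabled, ramped slowly to re-enable": one violation charges the counter by 4, while each clean execution discharges it by only 1.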
For example, where a store instruction flushes multiple load instructions, and the disambiguation indicator of one of the instruction blocks containing one of the load instructions is set to memory violation predictor code 1, then the disambiguation indicator of the instruction block containing the store instruction may be set to memory violation predictor code 4, and the disambiguation indicator of the instruction block containing the load instruction may be cleared. Alternatively, if the disambiguation indicator of the instruction block is set to memory violation predictor code 4 and a memory violation is detected in which an older load instruction flushes a younger load instruction, the disambiguation indicator of the instruction block may be updated to the more restrictive memory violation predictor code 6. FIG. 5 illustrates an exemplary process 500 for preventing memory violations in accordance with at least one aspect of the present disclosure. At 502, a fetch unit (e.g., instruction fetch circuitry 236 in FIG. 2) accesses, from a branch predictor (e.g., branch and memory violation predictor 402 in FIG. 4), a disambiguation indicator associated with an instruction block of a program to be executed by a processor (e.g., core 114 in FIG. 2), such as the disambiguation indicator in entry 408 in FIG. 4. Alternatively, the branch predictor provides the disambiguation indicator associated with the instruction block to the processor. In an aspect, the disambiguation indicator can be a multi-bit field associated with each instruction block of the program being executed. At 504, the fetch unit retrieves the instruction block from an instruction cache (e.g., L1I cache 222 in FIG. 2). At 506, the processor executes a load instruction and/or a store instruction in the instruction block based on the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass other instructions of the program or be bypassed by other instructions of the program. In an aspect, the instruction block can be associated with a plurality of entries in the branch predictor, each of the plurality of entries in the branch predictor corresponding to a branch in the instruction block (thus providing the historical context of the instruction block). In such a case, each of the plurality of entries in the branch predictor may have a corresponding disambiguation indicator indicating how the load and store instructions in the instruction block should be executed for the branches in the instruction block. In one aspect, the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass, or be bypassed by, other instructions of the program can indicate that all load instructions in the instruction block should be prevented from executing until an unknown store instruction has been resolved, shown in Table 2 as disambiguation indicator "1". In that case, execution at 506 may include blocking execution of all load instructions in the instruction block until the unknown store instruction has been resolved. The "unknown" store instruction may be a store instruction whose target memory address is unknown until the unknown store instruction is resolved.
In the program execution order, the unknown store instruction can precede any load instruction in the instruction block. In one aspect, the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass, or be bypassed by, other instructions of the program can indicate that all load instructions in the instruction block should be prevented from executing until an unknown load instruction has been resolved, shown in Table 2 as disambiguation indicator "2". In that case, execution at 506 may include blocking execution of all load instructions in the instruction block until the unknown load instruction has been resolved. The "unknown" load instruction may be a load instruction whose target memory address is unknown until the unknown load instruction is resolved. In the program execution order, the unknown load instruction can precede any load instruction in the instruction block. In one aspect, the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass, or be bypassed by, other instructions of the program can indicate that all load instructions in the instruction block should be prevented from executing until an unknown store instruction and an unknown load instruction have been resolved, shown in Table 2 as disambiguation indicator "3".
In such a case, execution at 506 may include blocking execution of all load instructions in the instruction block until the unknown store instruction has been resolved, and blocking execution of all load instructions in the instruction block until the unknown load instruction has been resolved. In one aspect, the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass, or be bypassed by, other instructions of the program can indicate that all unknown store instructions in the instruction block should be marked as non-bypassable, shown in Table 2 as disambiguation indicator "4". In that case, execution at 506 may include other instructions of the program waiting to execute until all unknown store instructions in the instruction block have been resolved. In one aspect, the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass, or be bypassed by, other instructions of the program can indicate that all unknown load instructions in the instruction block should be marked as non-bypassable, shown in Table 2 as disambiguation indicator "5". In that case, execution at 506 may include other instructions of the program waiting to execute until all unknown load instructions in the instruction block have been resolved. In one aspect, the disambiguation indicator indicating whether the load instruction and/or the store instruction in the instruction block can bypass, or be bypassed by, other instructions of the program can indicate that all unknown store instructions and all unknown load instructions in the instruction block should be marked as non-bypassable, shown in Table 2 as disambiguation indicator "6".
In that case, execution at 506 may include other instructions of the program waiting to execute until all unknown store instructions in the instruction block and all unknown load instructions in the instruction block have been resolved. Although not illustrated in FIG. 5, the process 500 can further include setting the disambiguation indicator to a default value (e.g., by the branch predictor) prior to the first execution of the instruction block. In that case, the process 500 can further include updating the disambiguation indicator based on a load instruction or a store instruction in the instruction block that caused a memory violation during execution of the instruction block. Those skilled in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. In addition, the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The methods, sequences and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
In the alternative, the storage medium may be integral to the processor. In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. While the foregoing disclosure shows illustrative aspects of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the aspects of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
An integrated circuit (IC) package process is provided that includes forming a first via hole in a first substrate. Patterning signal lines on a first surface and a second surface of the first substrate. Attaching a second substrate to the first surface of the first substrate. Electronically connecting a portion of the signal lines of the first substrate and the second substrate. Attaching an electrical element to the first surface of the first substrate. Forming a via hole in a third substrate. Introducing conductive material over a first surface and a second surface of the third substrate. Forming a second circuit pattern on the first surface and the second surface of the third substrate. Additionally, attaching the third substrate to the first substrate with a second layer of adhesive. In an alternative embodiment, a process includes forming a via hole in a first substrate. Introducing conductive material over a first surface and a second surface of the first substrate, wherein the introducing conductive material over the first surface and the second surface of the first substrate fills the via hole to form a via and a through hole. Forming a first circuit pattern on the first surface and the second surface of the first substrate. Forming solder pads on the first circuit pattern. Attaching a second substrate to the first substrate. Attaching an electrical element to the first substrate. Forming a via hole in a second substrate. Introducing conductive material over a first surface and a second surface of the second substrate. Forming a second circuit pattern on the first surface and the second surface of the second substrate, and attaching the first substrate to the second substrate.
What is claimed is: 1. A method comprising:forming a first via hole in a first substrate; introducing metal on a first surface and a second surface of the first substrate, the metal introduced on the first surface of the first substrate is layered over the first via hole and comes in contact with the metal introduced on the second surface of the first substrate; patterning signal lines in the introduced metal on the first surface and the second surface of the first substrate, wherein the patterned signal lines in the introduced metal on the first surface and the second surface of the first substrate forms a first metal pattern; attaching a first dielectric to the first surface of the first substrate, wherein the dielectric is in contact with the first metal pattern and the first surface of the first substrate; electronically connecting a portion of the signal lines of the first substrate and the dielectric; attaching an electrical element to the second surface of the first substrate, wherein the electrical element is one of a passive and an active electrical element; removing a portion of the electrical element to reduce the thickness of the electrical element; attaching a second dielectric to the second surface of the first substrate, wherein the second dielectric is in contact with the active electrical element and the second surface of the first substrate and the second dielectric surrounds the electrical element and has a thickness equivalent to the remaining portion of the electrical element. 2. 
The method of claim 1, further comprising forming a second via hole in a second substrate;introducing metal on a first surface and a second surface of the second substrate, the metal introduced on the first surface of the second substrate is layered over the second via hole and comes in contact with the metal introduced on the second surface of the second substrate; patterning signal lines in the introduced metal on the first surface and the second surface of the second substrate, wherein the patterned signal lines in the introduced metal on the first surface and the second surface of the second substrate forms a second metal pattern; attaching a third dielectric to the first surface of the second substrate, wherein the third dielectric is in contact with the second metal pattern and the first surface of the second substrate; and attaching the second substrate to the first substrate, wherein the dielectric is in contact with the metal pattern and the first surface of the first substrate. 3. The method of claim 2, wherein the first via hole and the second via hole are layered with a conductive material forming a first new via hole and a second new via hole, wherein the depth of the first new via hole is less than the depth of the first via hole and the depth of the second new via hole is less than the depth of the second via hole.4. The method of claim 2, wherein the electrical element is attached to the first substrate with a first layer of adhesive.5. The method of claim 4, further comprising attaching the second substrate to the first substrate with a second layer of adhesive, wherein the active electrical element is covered by the adhesive.6. The method of claim 4, wherein the first substrate, the second substrate and the dielectric are a polyimide.7.
The method of claim 5, further comprising forming a plurality of via holes in the first substrate and the second substrate, wherein at least one of the plurality of via holes forms an opening from the first substrate to the second substrate;introducing conductive material over the plurality of via holes; wherein the introducing conductive material over the plurality of via holes forms a plurality of new holes and forms a plurality of bond pads, wherein the depths of the plurality of new via holes are less than the depths of the plurality of via holes. 8. The method of claim 7, further comprising attaching the plurality of new via holes with a contact pad of the electrical element.9. The method of claim 8, further comprising forming solder balls on the plurality of bond pads.10. The method of claim 1, wherein the forming of via holes is accomplished by one of mechanical drilling, laser drilling and etching.11. A method comprising:forming a first via hole and a second via hole in a first substrate; introducing conductive material over a first surface and a second surface of the first substrate; wherein introducing conductive material over the first surface and the second surface of the first substrate fills the first via hole to form a new via hole and a portion of the conductive material introduced over the second surface of the first substrate is removed from the second via hole forming a via through hole, wherein the depth of the new via hole is less than the depth of the first via hole and the width of the new via through hole is less than the width of the second via hole and the conductive material comes in contact with the metal introduced on the second surface of the first substrate through the new via hole; forming a first circuit pattern on the introduced conductive material on the first surface and the second surface of the first substrate; forming solder pads on the first circuit pattern; attaching a dielectric to the first substrate; attaching an active
electrical element to the first substrate with a first layer of adhesive; forming a hole in the first layer of adhesive to expose a contact pad of the electrical element; forming a via hole in a second substrate; introducing conductive material over a first surface and a second surface of the second substrate; forming a second circuit pattern on the first surface and the second surface of the second substrate; and attaching the first substrate with the second substrate, wherein the active electrical element is disposed between the first substrate and the second substrate and electrically coupled through the new via through hole. 12. The method of claim 11 further comprising forming solder pads on the second circuit pattern.13. The method of claim 12, wherein the first substrate is attached to the second substrate with a second layer of adhesive.14. The method of claim 12 wherein a conductive adhesive is attached to a via hole and a contact pad of the electrical element.15. The method of claim 12, wherein solder balls are attached to solder pads of the first substrate.16. The method of claim 13, wherein metallic solder ink is attached to a via hole and a contact pad of the electrical element.17. 
A method comprising:forming a first via hole in a first substrate; introducing metal on a first surface and a second surface of the first substrate; patterning signal lines on the introduced metal on a first surface and a second surface of the first substrate; attaching a dielectric to the first surface of the first substrate, wherein the dielectric is in contact with the signal lines and the first surface of the first substrate; electronically connecting a portion of the signal lines of the first substrate and the dielectric; attaching an active electrical element to the first surface of the first substrate with a first layer of adhesive; removing a portion of the active electrical element to reduce the thickness of the active electrical element; forming a second via hole in a second substrate; introducing metal on a first surface and a second surface of the second substrate; patterning signal lines on the introduced metal on the first surface and the second surface of the second substrate; and attaching the second substrate to the first substrate with a second layer of adhesive, wherein the active electrical element is covered by the adhesive. 18. The method of claim 17, wherein the first via hole and the second via hole are layered with a conductive material forming a first new via hole and a second new via hole, wherein the depth of the first new via hole is less than the depth of the first via hole and the depth of the second new via hole is less than the depth of the second via hole.19. The method of claim 18, wherein the first substrate, the second substrate and the dielectric are a polyimide.20. 
The method of claim 19, further comprising forming a plurality of via holes in the first substrate and the second substrate;introducing conductive material over the plurality of via holes; wherein the introducing conductive material over the plurality of via holes forms a plurality of new via holes and forms a plurality of bond pads, wherein the depths of the plurality of new via holes are less than the depths of the plurality of via holes. 21. The method of claim 20, further comprising attaching the plurality of new via holes with a contact pad of the electrical element.22. The method of claim 21, further comprising forming solder balls on the plurality of bond pads.23. The method of claim 17, wherein the electrical element is one of a passive device and an active device. |
This application is a continuation-in-part of U.S. patent application Ser. No. 09/225,418, filed Jan. 5, 1999, and U.S. patent application Ser. No. 09/538,327, filed Mar. 29, 2000, now U.S. Pat. No. 6,365,962.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The invention relates to a process for an integrated circuit package that contains a flexible circuit board.

2. Background of the Invention

Integrated circuits (ICs) are typically assembled into a package that is mounted to a printed circuit board. The printed circuit board may be, for example, the motherboard of a computer. The IC may be mounted to a substrate or interposer and encapsulated with a plastic or epoxy material. A process known to those skilled in the art as flip-chip technology may be used to attach an IC to a substrate with the IC's I/O (input/output) side facing the substrate. One method that may be used to attach the flip-chip to the substrate is known as C4 (controlled-collapse chip connection) attachment. With C4, solder bumps are placed on metal terminals on the chip and a matching area of solder terminals on the substrate. The chip is then aligned to the substrate, and all solder connections are made simultaneously by reflowing the solder. The substrate is typically a printed circuit board (PCB) that has a number of pins, known as a pin grid array (PGA), or solder balls, known as a ball grid array (BGA), that can be connected to a motherboard. A substrate such as a PCB typically contains a number of routing traces, vias and solder pads that electrically connect the integrated circuit to the motherboard. The routing traces and solder pads may be separated by one or more layers of dielectric material. The substrate/printed circuit board is fabricated before the integrated circuit is mounted to the substrate.
The substrate must be thick enough to provide enough structural integrity to support the integrated circuit during the mounting process. For CMOS (complementary metal oxide semiconductor) logic applications, the integration of an IC chip into a single package is typically accomplished through a multi-chip module using a two-dimensional array. This type of package, however, suffers from longer inter-chip connection lengths. Some of the problems arising from such a package are high propagation delay, high inductance, and cross-talk noise. In a case where a three-dimensional array integration package is used, chips are stacked on top of each other and the inter-chip interconnection is achieved through edge wire bonding. A problem with this type of package is that the total I/O is limited. In an array interconnect package, alignment and attachment are typically difficult to accomplish. For de-coupling needs, discrete de-coupling capacitors are typically mounted on the die-side or land-side of the package after die attachment. For die-side capacitors, a larger package is typically required, which increases cost. For land-side capacitors, a typical package has a large die-to-capacitor separation and a large current loop, which leads to large inductance and degraded system performance. Because of the limitations in making high performance and fine pitch wiring on an IC board, however, the power and signal wires on the IC board are not dense enough to connect directly to the contact bumps concentrated in a small chip area. A redistribution layer, i.e., an interposer layer, needs to be inserted between the chip and the PC board to provide pitch adjustment and connection routing. Such an interposer layer is used only to solve what is called an "escape problem" in flip-chip mounting. Therefore the interposer layer functions only in a passive mode. The only function of the passive interposer, therefore, is to provide more efficient and fast signal/clock routing and power distribution.
Presently, organic land grid array substrates or flexible circuitry substrates are used as a passive interposer layer which provides an interconnect function between the IC chip and the IC board.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 schematically illustrates a cross-sectional side view of an embodiment of two integrated circuits and a pin grid array (PGA). FIG. 2 schematically illustrates an embodiment of two integrated circuits and a solder ball grid array (BGA). FIG. 3 schematically illustrates an embodiment of a microprocessor and a decoupling capacitor with a pin grid array (PGA). FIG. 4 schematically illustrates an embodiment of a microprocessor A and either a microprocessor B or a memory with a pin grid array (PGA). FIG. 5 schematically illustrates an embodiment of a microprocessor and a converter with a pin grid array (PGA). FIG. 6 schematically illustrates an embodiment of a microprocessor comprising logic memory circuits and coupled with clock circuits. FIG. 7 schematically illustrates a top perspective view of an embodiment of an active interposer. FIG. 8 schematically illustrates a cross-sectional side view of a first substrate with a via hole formed according to an embodiment of the invention. FIG. 9 schematically illustrates the structure of FIG. 8 with metal introduced on the top and bottom surfaces. FIG. 10 schematically illustrates the structure of FIG. 9 having a circuit pattern formed thereon. FIG. 11 schematically illustrates a second substrate with similar processes performed as were performed on the first substrate illustrated in FIGS. 8-10. FIG. 12 schematically illustrates an adhesive attached to the first substrate. FIG. 13 schematically illustrates an electrical element attached to an adhesive layer. FIG. 14 schematically illustrates a third substrate attached to adhesive to surround an electrical element. FIG. 15 schematically illustrates a first substrate attached to a second substrate with adhesive. FIG.
16 schematically illustrates forming of vias in a attached substrate layers.FIG. 17 schematically illustrates introduction of metal form vias.FIG. 18 schematically illustrates solder balls formed on bond pads on a substrate.FIG. 19 schematically illustrates forming of vias on a first substrate.FIG. 20 schematically illustrates introducing a metal on surfaces of the first substrate.FIG. 21 schematically illustrates a pattern formed on first substrate.FIG. 22 schematically illustrates a second substrate with similar processes performed as was to the first substrate illustrated in FIGS. 19-21.FIG. 23 schematically illustrates an adhesive attached to the first substrate and forming of a hole.FIG. 24 schematically illustrates a third substrate attached to adhesive to surround an electrical element attached to the first substrate.FIG. 25 schematically illustrates a first substrate attached to a second substrate with adhesive.FIG. 26 schematically illustrates a conductive adhesive introduced to a via.FIG. 27 schematically illustrates solder balls formed on bond pads on a substrate.DETAILED DESCRIPTIONThe invention generally relates to an active interposer and a method of fabricating an active interposer. In one embodiment, a suitable active interposer according to the invention includes a multi-layer structure having contact nodes or points on opposing surfaces and signal lines therethrough. Embodiments of active interposers according to the invention further include structures having additional circuitry such as logic circuitry or electrical elements.Referring to the figures, exemplary embodiments of the invention will now be described. The exemplary embodiments are provided to illustrate the invention and should not be construed as limiting the scope of the invention.FIG. 1 shows an embodiment of integrated circuit (IC) package 10 of the present invention. Package 10 includes two electrical elements, element 12 and 40. 
The active interposer is formed by interposer layer 14 and electrical element 12 or electrical element 40. In one embodiment of the invention, electrical element 12 is a main system IC chip and electrical element 40 is an auxiliary chip which, with interposer layer 14, forms an active interposer. In another embodiment of the invention, the function of electrical element 12 and electrical element 40 is reversed, i.e., electrical element 12 is an auxiliary chip forming part of the active interposer and electrical element 40 is the main system IC chip that the active interposer supports. It should be mentioned that the auxiliary chip may also be a passive device, such as a de-coupling capacitor. Active interposer 14 includes a plurality of solder pads 16, routing traces 18, vias 20 and land pads 22 that connect top interposer surface 24 with bottom interposer surface 26 and electrical element 40. Top interposer surface 24 is separated from bottom surface 26 by one or more layers of dielectric. The dielectric may be a flexible (FLEX) material such as a polyimide. A polyimide is commonly used to construct flexible circuit boards. Although a flexible polyimide material is described, it is to be understood that other types of material may be employed including a more rigid material. Embedding IC 12 in the FLEX and connecting it through micro-via technology can reduce the connection pitch and allow more input/output (I/O).Electrical element 40 may be mounted to solder pads 16 of active interposer 14 with solder bumps 28 in a process commonly referred to as controlled collapsed chip connection (C4). The solder bumps 28 may be structurally reinforced with an underfill epoxy material. Integrated circuit 12 is encapsulated with encapsulant 32. Encapsulant 32 is, for example, a plastic or epoxy material. 
Encapsulant 32 may also be attached to the active interposer 14 in a manner that seals the integrated circuit 12.

Package 10 may include a plurality of electrical contacts that are attached to corresponding land pads of active interposer 14. Each contact may include a pin 36 that is attached to a corresponding land pad 22 with solder ball 38. Pins 36 can be soldered to solder pads or plated through-holes of a PCB (not shown), such as the motherboard of a computer. Alternatively, the PCB may be the substrate of an electronic cartridge such as a single edge contact cartridge (SECC) sold by Intel Corp., the assignee of the invention.

FIG. 2 shows an embodiment of IC package 10 where the contacts to a PCB are solder balls 21 that are reflowed onto the motherboard using known ball grid array (BGA) processes. Alternatively, active interposer 14 may be attached to a PCB with a plurality of solder bumps.

Referring back to FIG. 1 and FIG. 2, package 10 includes electrical element 40 mounted to second surface 26 of active interposer 14. Element 40 may be mounted to active interposer 14 using C4 flip-chip processes and under-fill protection. In one embodiment in which electrical element 40 is an auxiliary chip, electrical element 40 may be a passive or active device. By way of example, as illustrated in FIG. 3, integrated circuit 12 may be a microprocessor and electrical element 40 may be a de-coupling capacitor. Alternatively, as illustrated in FIG. 4, electrical element 40 may be a memory device or another microprocessor (Microprocessor B) as illustrated by element 50 that is directly connected to microprocessor 12 (Microprocessor A). The direct attachment of both microprocessor 12 and element 50 to the active interposer provides an assembly with a relatively short electrical path between the devices. The short path length reduces the inductance, which can be important for high-speed memory busses between the processor and memory.
With a memory device embedded on active interposer layer 14, a memory device can be distributed across the whole chip area and be closely coupled with a processing circuit coupled on top. This allows for the design hierarchy of the memory device as a whole, instead of fragmented units randomly distributed on the IC chip. The advantage becomes more significant with memory and processing circuits, such as embedded DRAM applications. In this case, the active interposer will provide high logic-memory communication bandwidth, save processing and testing costs, and improve yield. This is because memory and logic devices can be fabricated separately with separate optimization technology and then assembled with a memory chip as part of the active interposer. Alternatively, an electrical element 60 may be a power delivery circuit(s) that includes power management, regulator/converter, etc., as illustrated in FIG. 5.As illustrated in FIG. 6, integrated circuit 12 may be a microprocessor that contains logic and memory circuits 68. Active interposer 14 may contain driver circuits 62 that are connected to the output pads of the microprocessor. Driver circuits 62 can regenerate output signals that are generated by the logic/memory circuit 68 of the microprocessor. Moving driver circuits 62 onto active interposer 14 may reduce the amount of electrical noise on the power rail of the microprocessor created by circuits 62 switching states. Although driver circuits 62 are illustrated and described, it is to be understood that active interposer 14 may contain other circuitry such as buffer circuits (not shown) that are connected to the die pads of the integrated circuit 12.Active interposer 14 may also have clock circuit(s) 66 which provides a clock signal to logic/memory circuit 68. Moving clock circuit 66 to active interposer 14 allows clock 66 to be created with a fabrication process that is more robust than the process used to form the integrated circuit 12. 
That is, more layers of clock distribution networks can be implemented on the interposer layer instead of on the chip. More repeater circuits can be implemented with little die-size penalty. Since a clock distribution network in the interposer layer can adopt a more flexible wire pitch, routing and more frequent repeating/regeneration, less delay will occur. Therefore, clock skew will be alleviated and a faster clock network can be implemented. By introducing clock control logic into clock distribution, afforded by the active interposer technique, unique designs, for example, local synchronization and a gated clock for power management, can also be implemented on the active interposer layer. Active interposer 14 can be constructed with known integrated circuit fabrication processes to construct the transistors, etc., required to create driver circuits 62 and clock circuit 66.

FIG. 7 shows a top perspective view of an embodiment of active interposer 14. In this embodiment, active interposer 14 has internal power plane 79 and internal ground plane 78. Internal power plane 79 and internal ground plane 78 may be connected to corresponding power and ground planes (not shown) of the printed circuit board by, for example, solder balls 21 illustrated in FIG. 2.

In the embodiment of FIG. 7, active interposer 14 has a number of interconnected power busses 74 and a plurality of interconnected ground busses 72 located on external surface 77. The power and ground pins of driver circuits 62 (see FIG. 6), for example, can be connected to internal power plane 79 and internal ground plane 78, respectively. The power 74 and ground 72 busses may be connected to the power 79 and ground 78 planes by vias 71.

Power buss 74 and ground buss 72 may be connected to contact pads 22P and 22G that are dedicated to power and ground, respectively. Active interposer 14 may also have I/O contact pads 22I that are connected to corresponding I/O die pads of the integrated circuit.
I/O contact pads 22I may be coupled to the circuit board by vias 71 in active interposer 14. Power buss 74 and ground buss 72 may be formed in an alternating pattern so that ground busses 72 provide an electrical "shield" to noise created on power busses 74.

Internal ground plane 78 may be separated from internal power plane 79 and power busses 74 by dielectric material 76, which together form filtering capacitors. The capacitors filter noise in the power rail of active interposer 14. Forming the filtering capacitors within active interposer 14 eliminates the need to form the capacitors within integrated circuit 12 and thus reduces the complexity and increases the yield of mass producing integrated circuit 12. Additionally, internal ground plane 78 may be located between internal power plane 79 and integrated circuit 12 to provide a shield for noise generated within the power plane of active interposer 14.

FIGS. 8-18 show an embodiment for fabricating an active interposer, such as active interposer 14 described in the preceding embodiments. FIG. 8 shows a cross-section of a portion of a first substrate 82 having formed therein one or more via holes 80. First substrate 82 is, for example, a dielectric material such as polyimide material that is typically used in the fabrication of flexible PCBs (FLEX circuits). Via holes 80 may be formed by mechanical drilling, laser drilling, etching or other processes known in the art. As shown in FIG. 9, metal material 92 such as copper may be introduced onto top 84 and bottom 86 surfaces of the first substrate 82. Suitable introduction methods include deposition or plating. In one embodiment, metal 92 also fills via hole 80 to create a via.

As illustrated in FIG. 10, a circuit pattern is formed in metal 92 on both top 84 and bottom 86 surfaces of first substrate 82. The circuit pattern may be formed, for example, according to known photolithographic processes.
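The plane-dielectric-plane stack described above behaves as a parallel-plate capacitor, so a rough estimate of the filtering capacitance it provides can be sketched as follows. The dielectric constant, plate area, and separation used here are illustrative assumptions, not values from the patent:

```python
# Rough parallel-plate estimate of the filtering capacitance formed by
# internal power plane 79, dielectric 76, and internal ground plane 78.
# All numeric values below are illustrative assumptions.

EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def plane_capacitance(relative_permittivity, area_m2, separation_m):
    """Capacitance of a parallel-plate stack: C = eps0 * epsr * A / d."""
    return EPSILON_0 * relative_permittivity * area_m2 / separation_m

# Assumed polyimide-like dielectric (epsr ~ 3.5), 1 cm^2 of plane overlap,
# and a 10 micrometer dielectric thickness.
c = plane_capacitance(3.5, 1e-4, 10e-6)
print(f"estimated filtering capacitance: {c * 1e12:.0f} pF")
```

The estimate illustrates why the structure is useful: a thinner dielectric 76 or a larger plane overlap directly raises the capacitance, which is consistent with the text's point that on-interposer filtering capacitors can stand in for capacitors that would otherwise have to be formed within integrated circuit 12.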
Following patterning, dielectric 100 is introduced to top surface 84 of first substrate 82. Where desired, the dielectric introduction may be followed by a planarization to planarize a surface of the substrate.

As illustrated in FIG. 11, the process shown in FIGS. 8-10 may be repeated for second substrate 110. In one embodiment, second substrate 110 is a dielectric material such as polyimide material that is typically used in the fabrication of FLEX circuits and has a pattern of metal 116, vias 112 and a bottom layer of dielectric 114. In this embodiment, bottom dielectric 114 includes opening 118 to expose vias 112.

As illustrated in FIGS. 12-13, electrical element 130 is attached to the first substrate 82 with, in one embodiment, a layer of adhesive 132. Suitable material for adhesive 132 includes epoxy. In one embodiment of the invention, in which electrical element 130 is the auxiliary chip, electrical element 130 may be either a passive or active device. By way of example, electrical element 130 may be an integrated circuit that provides one or more of the following functions: power delivery network, I/O driver, clock generation/synchronization/repeater network, switching network and control logic for re-configurable and high performance interconnect, and embedded localized/distributed memory. Electrical element 130 may also include or contain active transistors, sensors, de-coupling capacitors, inductors and micro-cooling elements such as a Peltier element. Embedding these functions within the interposer reduces the overall size of the system. Additionally, electrical element 130 is in close physical proximity to electrical element 12 illustrated in FIG. 1. The distance is typically on the order of 25-200 μm. The close proximity reduces the line lengths and corresponding inductances between IC 12 and the devices within electrical element 130. Electrical element 130 may include a plurality of contact pads 134.

As illustrated in FIG.
14, third dielectric material 142 is introduced to first substrate 82 over adhesive 132 to surround electrical element 130. A portion of electrical element 130 may also be removed, for example through a planarization process, to reduce the thickness of the element 130. Suitable material for third dielectric 142 includes polyimide material that is typically used in the fabrication of FLEX circuits.Second substrate 110 is attached to first substrate 82 by introducing a layer of adhesive 150 over third dielectric 142 and electrical element 130. FIG. 15 illustrates the composite structure. A suitable material for adhesive 150 includes epoxy.As illustrated in FIG. 16, via holes 160 are formed in the composite structure. Metal is applied and removed to form vias 170 and corresponding bond pads 172 illustrated in FIG. 17, with known plating and photolithographic processes. Vias 170 are connected to contact pads 134 of the electrical element 130. As illustrated in FIG. 18, solder balls 180 are formed on bond pads 172 with processes known in the art to complete the fabrication of an embodiment of an active interposer 14 according to the invention.FIGS. 19-27 illustrate an alternate method for fabricating an active interposer 14' according to the invention. As illustrated in FIG. 19, via holes 190 are initially formed in a first substrate that is, for example, a polyimide material that is typically used in the fabrication of FLEX circuits. Metal material 200, such as copper, is introduced, for example by suitable introduction methods including deposition or plating, onto first substrate 192 and into via holes 190 as illustrated in FIG. 20. Metal material 200 may be plated in a manner to provide through hole 202 in one of via holes 190. The metal may be etched into a pattern as illustrated in FIG. 21. As illustrated in FIG. 
22, a second substrate 220, that is, for example, a polyimide material that is typically used in the fabrication of FLEX circuits, is drilled, plated and etched to create vias 222 and solder pads 224.Electrical element 130 can be attached to first substrate 192 by a layer of adhesive 230 as illustrated in FIGS. 23 and 24. Hole 232 is formed in the adhesive 230 to expose contact pad 134 of electrical element 130. The second substrate 220 is attached to the first substrate 192 with another layer of adhesive 250, as illustrated in FIG. 25. As illustrated in FIG. 26, conductive adhesive 260 is placed in via hole 190 to interconnect contact pad 134 of electrical element 130 with the via. Solder balls 270 are then formed onto the solder pads as illustrated in FIG. 27 with processes known in the art.While certain exemplary embodiments have been described and illustrated in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements illustrated and described, since various other modifications may occur to those ordinarily skilled in the art. |
Techniques for port selection are described herein. The techniques may include an apparatus having a transceiver including a plurality of ports. The apparatus includes a selector to select a port from among the plurality of ports. The port is selected to receive a repair operation to repair a basic input output system.
1. A device for port selection, comprising:
a transceiver including a plurality of ports; and
a selector to select a port from among the plurality of ports for receiving an operation to repair a basic input output system.
2. The apparatus of claim 1, wherein the transceiver is configured in an isolation mode during the repair operation.
3. The apparatus of claim 2, wherein the isolation mode includes restricting operation of a system on a chip until the repair operation is completed.
4. The apparatus of any combination of claims 2-3, wherein the isolation mode includes receiving the repair operation without a handshake operation on the selected port.
5. The apparatus of any combination of claims 1-3, wherein the selection of the selected port is based on detection of a signal at a voltage bus of the selected port, the signal indicating that the repair operation is provided at the port.
6. The apparatus of claim 5, wherein the selected port is a first port of an integral port, the integral port including a second port associated with an orientation different from the orientation of the first port.
7. The apparatus of claim 6, wherein the selection of the selected port is further based on detection of a signal at an orientation pin associated with the first port.
8. The apparatus of claim 6, wherein the integral port is one of a plurality of integral ports, and wherein the selected port is selected from among a plurality of first and second ports respectively associated with each of the plurality of integral ports.
9. The apparatus of any combination of claims 1-3, wherein the transceiver is configured to receive a clock signal associated with the repair operation that is independent of other operations on the port.
10. The apparatus of any combination of claims 1-3, wherein the means for selecting the port from among the plurality of ports includes logic of a physical layer of the apparatus, the logic at least partially including hardware logic.
11. A method for port selection, including:
selecting a port from among a plurality of ports of a transceiver to receive an operation configured to repair a basic input output system; and
receiving a download and run operation at the selected port.
12. The method of claim 11, further comprising placing the transceiver in an isolation mode during the repair operation.
13. The method of claim 12, wherein the isolation mode includes restricting operation of a system on a chip until the repair operation is completed.
14. The method of any combination of claims 12-13, wherein the isolation mode includes receiving the repair operation without a handshake operation on the selected port.
15. The method of any combination of claims 11-13, wherein the selection of the selected port is based on detection of a signal at a voltage bus of the selected port, the signal indicating that the repair operation is provided at the port.
16. The method of claim 15, wherein the selected port is a first port of an integral port, the integral port including a second port associated with an orientation different from that of the first port.
17. The method of claim 16, wherein the selection of the selected port is further based on detection of a signal at an orientation pin associated with the first port.
18. The method of claim 16, wherein the integral port is one of a plurality of integral ports, and wherein the selected port is selected from among a plurality of first and second ports respectively associated with each of the plurality of integral ports.
19. The method of any combination of claims 11-13, wherein receiving the download and run operation includes receiving a clock signal associated with the repair operation that is independent of other operations on the port.
20. The method of any combination of claims 11-13, wherein selecting the port is performed at a physical layer associated with the selected port.
21. A system for port selection, including:
a basic input output system;
a transceiver including a plurality of ports; and
a selector to select a port from among the plurality of ports for receiving an operation to repair the basic input output system.
22. The system of claim 21, wherein the transceiver is configured in an isolation mode during the repair operation, wherein the isolation mode includes restricting operation of a system on a chip until the repair operation is completed, and wherein the isolation mode includes receiving the repair operation without a handshake operation on the selected port.
23. The system of any combination of claims 21-22, wherein the selection of the selected port is based on detection of a signal at a voltage bus of the selected port, the signal indicating provision of the repair operation at the port.
24. The system of claim 23, wherein the selected port is a first port of an integral port, the integral port including a second port associated with an orientation different from that of the first port.
25. The system of claim 24, wherein the selection of the selected port is further based on detection of a signal at an orientation pin associated with the first port.
PORT SELECTION ON A COMPUTING DEVICE

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of the filing date of US Patent Application No. 14/752,042, invented by Amit Kumar Srivastava, filed June 26, 2011, which is incorporated herein by reference.

TECHNICAL FIELD

The present disclosure generally relates to techniques for port selection on a computer bus. In particular, the present disclosure relates to selecting a port for receiving an operation to repair a computing device.

BACKGROUND

The computing device may include a basic input output system (BIOS) that is activated during the boot process when the computing device is turned on. In some cases, the BIOS may become corrupted, causing the computing device to be at least partially inoperable. A download and run (DnX) operation can be used to download and repair the BIOS or a binary via a computer bus such as the Universal Serial Bus (USB). For example, when the BIOS of a computer tablet has become corrupted, the computer tablet may be connected to a host device such as a laptop computer. The DnX operation can include a new BIOS or a repair binary that can run after it is properly verified. During DnX, if the receiver's dual-role mode is available, the receiver's physical layer (PHY) can be configured in device mode.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating a peripheral computing device and a host computing device having multiple ports.
FIG. 2 is a block diagram illustrating a computing device configured to select one port from a plurality of ports.
FIG. 3 is a flowchart showing port selection for a download repair operation.
FIG. 4 is a flowchart of port selection for a download repair operation based on voltage detection.
FIG. 5 is a flow diagram of port selection in an integrated port for a voltage-based download repair operation.

The same numbers and components are used throughout the disclosure and drawings to refer to the same components and features.
The numbers in the 100 series refer to the features originally found in FIG. 1; the numbers in the 200 series refer to the features originally found in FIG. 2, and so on.

DETAILED DESCRIPTION

The present disclosure generally relates to techniques for port selection in repair operations on a computer bus. As discussed above, when the BIOS of the computing device is corrupted, the repair operation may include a download and run (DnX) operation for downloading a BIOS or binary on a computer bus such as the Universal Serial Bus (USB).

In some cases, a given computing device with a damaged BIOS may include more than one port. For example, a tablet computer may include multiple ports, of which only one is configured to receive repair operations such as DnX operations. In this case, only one port can be connected to the device controller that can handle the repair operation. Thus, if the computing device includes multiple ports, the user may need to check each port to determine whether a repair operation is supported at each port, or have prior knowledge of the port that supports the repair operation.

In the techniques described herein, a device such as a computing device may include a receiver with multiple ports. The logic may include a selector to select one of the ports for receiving a repair operation to repair the BIOS. The selection of the port can be based on detecting a voltage signal on a voltage line of the computer bus. An example of a computer bus is the Universal Serial Bus (USB), indicated in the specification standard "USB 3.1 Specification released on July 26, 2013 and ECNs approved through August 11, 2014," referred to herein as the "USB specification."

In some cases, the port may include an integral port.
The integrated port may provide a power interface, may be at least partially or fully reversible, and may include a generic data interface as well as additional data-specific interfaces such as a display interface, an audio interface, and the like. An example of an all-in-one port is the Universal Serial Bus (USB) "Type C" connector, indicated in the specification standard "USB Type-C Cable and Connector Specification Revision 1.0, August 11, 2014," referred to herein as the "USB Type-C Specification." As discussed in more detail below, the USB Type-C connector may include a reversible plug connector. Other integral ports may also be used with the techniques described herein. For simplicity, such a port may be referred to herein interchangeably as an integral port, an all-in-one port, or a USB Type-C connector.

The reversibility of an integral connector, such as a USB Type-C connector, can characterize two different ports: one port in a first orientation and a second port in an opposite orientation. In this case, the techniques described herein may select the port based on the orientation by detecting the signal at the orientation pins and detecting the voltage signal at the voltage bus associated with the port. In some cases, the computing device may include a plurality of integral ports, each integral port having a plurality of orientation-based ports. In this case, the techniques described herein may select the port by detecting the signal at an orientation pin from among the multiple orientation pins and the voltage signal detected at the port.

As discussed in more detail below, once the port is selected based at least on voltage detection, the techniques described herein enable a repair operation such as DnX to be run at the computing device. In some cases, the repair operation will repair the damaged BIOS.
In addition, the repair operation can be performed during an isolation mode. By enabling repair operations in isolation mode, booting can be initiated without having to provide a handshake with the host device. More specifically, the techniques described herein include performing the repair operation by providing a separate clock, such as a ring oscillator. The isolation mode, as referred to herein, includes running a repair operation while limiting at least some other operations of the computing device until the repair operation is completed. For example, components of a system on a chip (SOC), such as a device controller or a host controller, may be restricted during the repair operation. In addition, in some cases, the device in isolation mode may not require a reset. In other words, only the components of the physical layer of the computing device needed to run the repair operation may be enabled.

FIG. 1 is a block diagram illustrating a peripheral computing device and a host computing device having multiple ports. Computing system 100 may include a host computing device 102 having a host controller 104. Host computing device 102 may be connected to peripheral device 108 via computer bus 106. The peripheral device may include a receiver 110 having a selector 112 and a plurality of ports 114. As shown in FIG. 1, the peripheral device 108 may include a device controller 116. In some cases, receiver 110 is implemented as a transceiver configured to transmit and receive signals, including signals related to port selection in a repair operation on computer bus 106.

The selector 112 may include logic that at least partially includes hardware logic such as electronic circuitry. In some cases, selector 112 may be any combination of electronic circuit logic, firmware of a microcontroller, and the like.
As discussed above and in more detail below, the selector 112 can detect which of the ports 114 has a signal on the voltage bus indicating that the detected port is to be enabled for port repair operations. Upon detection of a voltage signal on a given one of the ports 114, the port is placed in isolation mode and the repair operation is initialized and completed.

In some cases, one or more of the ports 114 may include an integral port such as a USB Type-C port. In this case, if an all-in-one port is used to connect the host computing device 102 to the peripheral device 108, an orientation pin may be detected which indicates which orientation, and thus which orientation-based port of the integrated port, is being used, as described in more detail below. Once the orientation-based port is determined, the port will be selected if a voltage signal on the voltage bus is detected.

In some cases, controllers such as host controller 104 and device controller 116 may be dual-role controllers. In this situation, either controller may be configured as a host or a device. In the case of a repair operation, the host controller 104 is configured in a host mode, and the device controller 116 is configured in a device mode. In some cases, when the BIOS of the peripheral device 108 is damaged, the device controller 116 may default to the device mode.

FIG. 2 is a block diagram illustrating a computing device configured to select one port from a plurality of ports. In FIG. 2, a computing device, such as peripheral computing device 108 of FIG. 1, may include a selector, such as selector 112 of FIG. 1, as indicated by dashed box 112. In FIG. 2, selector 112 may include repair logic 202 and glue logic 204. Ports such as the plurality of ports 114 of FIG. 1 may be connected via a bus including a first positive data line (DP1), a first negative data line (DN1), a second positive data line (DP2), and a second negative data line (DN2).
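The selection behavior described for selector 112 — scan each port for a voltage signal on its voltage bus and pick the first port that shows one — can be sketched as follows. The data model and names here are illustrative assumptions, not part of the patent:

```python
# Illustrative sketch of selector 112's behavior: pick the port whose
# voltage bus (Vbus) carries a signal indicating a pending repair operation.
# The Port class and field names are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Port:
    name: str
    vbus_active: bool  # True if a voltage signal is detected on Vbus

def select_repair_port(ports):
    """Return the first port with Vbus asserted, or None if no port
    is signaling a pending repair operation."""
    for port in ports:
        if port.vbus_active:
            return port
    return None

ports = [Port("port0", False), Port("port1", True), Port("port2", False)]
selected = select_repair_port(ports)
print(selected.name)  # port1 is the only port with Vbus asserted
```

The point of the sketch is that the user never has to probe each connector by hand: the hardware logic resolves which port carries the repair operation before the rest of the SOC is enabled.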
The embedded controller 208 is connected to the connector 206. In FIG. 2, DP1 and DN1 form a first differential pair, and DP2 and DN2 form a second differential pair. In the case where the connector 206 is a USB Type-C connector, the additional busses may include a first configuration channel line (CC1), a second configuration channel line (CC2), a first sideband use channel (SBU1), and a second sideband use channel (SBU2). In either case, the embedded controller 208 is also connected to a voltage bus (Vbus).

During the initialization of device 108, an early bring-up phase may be achieved. As discussed in more detail below, the power management controller 210 enables a system on a chip (SOC) (not shown) based on the presence of a manual power-up or charging activity indicated on Vbus. The presence of a voltage on Vbus may indicate that a repair operation is available and pending. In some cases, the repair operation may be detected if the power button is held for a predetermined period of time. In any event, the security controller 212 may determine whether the repair operation is valid based on a key pair associated with the repair operation to be processed. If the pending repair operation is valid, the security controller 212 will signal the embedded controller to register the status change.

Embedded controller 208 may include a status register for each of ports 114. The repair logic 202 may be configured to detect which status register change occurs for the corresponding port. The port from among the ports 114 with the detected status register change can then be configured in device mode and in isolation mode while running the repair operation and until the repair operation is completed.
Upon completion of the repair operation, the security controller 212 may configure the physical layer associated with the detected port in host mode, and the SOC may be directed to complete the boot process.

As discussed above, the connector 206 may be an integral connector that is at least partially reversible. In other words, the connector 206 may receive a reversible plug (where the orientation may be detected). Each orientation can be considered as a separate port among the ports 114. In this case, the embedded controller 208 can detect which of the CC1 or CC2 pins for a given port 114 has a voltage signal before detecting the presence of a voltage signal on Vbus. In some cases, these CC1 and CC2 pins can be described as orientation pins. In some cases, orientation detection may be provided from the embedded controller 208 to the bus logic 214. The bus logic 214 may be configured to broadcast the orientation detection back to the receiver 110 via the bus interface 216. Once the orientation is detected, and thus the port has a voltage signal at the CC1 pin or CC2 pin, the process can continue as described above, where the presence of a voltage on Vbus can indicate that a repair operation is available and pending.

In either case, a port having a voltage on Vbus may be selected as the port for performing repair operations, such as downloading and running operations for a system such as the BIOS (not shown) of the device 108. As discussed above, the selection of one port from among the ports 114 and the running of the repair operation may be performed using the selected port in an isolation mode with a separate clock such as a ring oscillator. A separate clock may enable the repair operation to complete without enabling the device controller 116. In some cases, signaling between the glue logic 204 and the power management controller 210 may be provided to suspend operations of other components, such as those of the device controller 116, during repair operations.
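The orientation-detection step described above can be sketched as follows. This is a hedged illustration for a reversible (USB Type-C style) connector: a signal on CC1 maps to the first orientation-based port, a signal on CC2 to the second, and the port is only selected once Vbus is also present. Function names and the 1/2 port numbering are assumptions, not part of the described embodiments.

```python
# Illustrative sketch of orientation detection via the CC1/CC2 pins of a
# reversible connector, followed by the Vbus check described above.

def detect_orientation(cc1, cc2):
    """Map configuration-channel signals to an orientation-based port."""
    if cc1:
        return 1                 # first orientation of the integral port
    if cc2:
        return 2                 # second orientation of the integral port
    return None                  # no plug orientation detected


def select_oriented_port(cc1, cc2, vbus):
    """Select the oriented port only when Vbus indicates a pending repair."""
    port = detect_orientation(cc1, cc2)
    return port if (port is not None and vbus) else None
```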
In other words, even when the computing device 108 includes a damaged boot component such as a broken BIOS or a fault, the receiver 110 can select the correct port that is being used to deliver the repair operation.

FIG. 3 is a flowchart showing port selection for downloading a repair operation. As discussed above, at 302, the power management controller 210 enables a system-on-chip (SOC) based on the presence of charging activity indicated on Vbus or a manual power-on. At 304, the power rail associated with the CRO is enabled and the ring oscillator clock is enabled. At block 306, early-stage enablement is initiated with an embedded debug boot sequence at the embedded controller 208. During block 304, the device controller 116 may be inaccessible while in the isolated state. At block 308, the power management controller 210 and the security controller 212 are started. The presence of a voltage on Vbus of FIG. 2 may indicate that a repair operation is available and pending. Therefore, Vbus is checked at 310 to determine if there is a voltage signal. If there is no Vbus signal at 310, process 300 continues to enable the SOC and configure the physical layer as a host at block 312. However, if the Vbus signal is detected at 310, the repair operation is initiated and completed at 314, and the SOC may then be enabled (as indicated in FIG. 3).

FIG. 4 is a port selection flowchart for downloading a repair operation based on voltage detection. As discussed above, the techniques described herein include selecting a port based on the detection of the voltage at Vbus (e.g., Vbus of FIG. 2) for delivering a repair operation. In FIG. 4, a process 400 is shown for when the connector is not a reversible connector. At block 402, it is determined that the connector is not a reversible connector. The determination at block 402 may be based on the absence of CC1 or CC2 signals. Similar to block 310 of FIG. 3, at block 404, it is determined whether a signal is present on Vbus.
If the Vbus signal is not detected, then similar to block 312 of FIG. 3, the process 400 continues to enable the SOC of a subject computing device, such as the computing device 108 of FIGS. 1 and 2, as indicated at block 406.

If the Vbus signal is detected at 404, the port for which Vbus is detected is determined at 408. If the first port is detected as having a Vbus signal, the first port is enabled at block 410. In some cases, the physical layer of the computing device 108 is configured in device mode so that repair operations can be received. At 412, the first port is enabled in isolation mode, and similar to block 314 of FIG. 3, a repair operation is performed at 414. However, if the Vbus signal for the first port is not detected, but the Vbus signal for the second port is detected, the second port is enabled at block 416. At block 418, the second port is enabled in isolation mode, and a repair operation is run at 414. Once the repair operation is completed on the first or second port, the SOC is enabled at 406.

FIG. 5 is a port selection flow diagram for an integrated port for downloading a repair operation based on voltage detection. As discussed above, in some cases the connector can be configured to be reversible because the plug can be received in more than one orientation. In this case, the orientation may indicate the port to be detected. Therefore, at 502, detection of CC1 and/or CC2 pins is performed. If no CC1 or CC2 pin is detected, process 500 returns to block 402 of FIG. 4. If an orientation signal is present on the CC1 pin or the CC2 pin, the port associated with the signal is detected at 504. In this case, port 1 is considered as the first orientation and port 2 is considered as the second orientation associated with the integral port. In other words, each integral port may include multiple ports (each associated with a different supported orientation).

For example, if a signal is detected at CC1 of FIG.
2, the first port is enabled at 506, and the Vbus detection mode is awaited at 508. Once the Vbus detection mode is enabled, a determination is made as to whether there is a Vbus signal for the first port, as indicated at block 510. If no Vbus signal is detected at 510, the SOC is enabled at 512, similar to block 406 of FIG. 4 and block 312 of FIG. 3. However, if the Vbus signal is detected at block 510, port 1 is enabled in isolation mode at block 514. Then, at block 516, the repair operation is run and completed. Once the repair operation is completed at 516, the SOC is enabled (as indicated at 512). On the other hand, if a signal is detected at CC2, for example, the second port is enabled at 518, and the Vbus detection mode is awaited at block 520. Once the Vbus detection mode has been enabled, a determination is made as to whether a signal is present on Vbus, as indicated at 522. If no Vbus signal is detected, the SOC is enabled at 512. However, if the Vbus signal is detected at 522, the second port is enabled in isolation mode at 524, and the repair operation is run and completed at 516. Once the repair operation is completed at 516, the SOC is enabled at 512.

In some cases, a computing device such as the computing device 108 of FIGS. 1 and 2 may include a plurality of integral ports (each having two orientation-based ports). In this situation, detecting whether there is a signal on the orientation pin at 502 may include determining on which configuration channel of which integrated port the signal is occurring. In this case, the computing device 108 may include CC1_1 and CC2_1 channels indicating the configuration channels of the first integrated port, and CC1_2 and CC2_2 may indicate the configuration channels of the second integrated port. Therefore, port detection may be enabled at 504 to determine which orientation-based port has an orientation signal.

The embodiments are implementations or examples.
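The flows of FIGS. 4 and 5 described above can be combined into one short sketch: for a non-reversible connector the ports are checked for Vbus directly, while for a reversible connector the orientation pins are checked first and the matching orientation-based port then waits for Vbus. The function and parameter names are hypothetical, and the `run_repair_on` callback stands in for the isolate/download/run/complete steps.

```python
# Rough sketch, under assumed names, of the FIG. 4 (non-reversible) and
# FIG. 5 (reversible) port-selection flows described above.

def boot_flow(reversible, cc_pins, vbus_by_port, run_repair_on):
    """Return "repair:<port>" when a repair was run, else "normal-boot".

    reversible:   True for the FIG. 5 path (orientation pins checked first).
    cc_pins:      per-port booleans, True when that orientation pin is set.
    vbus_by_port: per-port booleans, True when Vbus is detected on the port.
    """
    if reversible:
        # FIG. 5 path: only ports whose orientation pin fired are candidates.
        candidates = [p for p, cc in enumerate(cc_pins) if cc]
    else:
        # FIG. 4 path: every port is a candidate; check Vbus directly.
        candidates = range(len(vbus_by_port))
    for port in candidates:
        if vbus_by_port[port]:
            run_repair_on(port)          # isolate, download, run, complete
            return "repair:%d" % port    # SOC enabled after completion
    return "normal-boot"                 # no Vbus: enable SOC immediately
```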
References throughout this specification to "an embodiment," "one embodiment," "some embodiments," "various embodiments," or "other embodiments" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least some embodiments of the present technology, but not necessarily all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" do not necessarily all refer to the same embodiment.

Example 1 is a device for port selection. In this example, the device may include a transceiver including a plurality of ports and a selector to select a port from among the plurality of ports for receiving an operation to repair a basic input output system.

Example 2 includes the device of Example 1. In this example, the transceiver is configured in isolation mode during the repair operation.

Example 3 includes the device of any combination of Examples 1-2. In this example, the isolation mode may include limiting the operation of a system on a chip until the repair operation is completed.

Example 4 includes the device of any combination of Examples 1-3. In this example, the isolation mode may include receiving the repair operation without a handshake operation on the selected port.

Example 5 includes the device of any combination of Examples 1-4. In this example, the selection of the selected port is based on the detection of a signal at the voltage bus of the selected port, which indicates that a repair operation is provided at the port.

Example 6 includes the device of any combination of Examples 1-5. In this example, the selected port is a first port of an integral port that also includes a second port associated with a different orientation than the first port.

Example 7 includes the device of any combination of Examples 1-6.
In this example, the selection of the selected port is also based on the detection of a signal at the orientation pin associated with the first port.

Example 8 includes the device of any combination of Examples 1-7. In this example, the integral port is one of a plurality of integral ports, and the selected port is selected from a plurality of first and second ports respectively associated with each of the plurality of integral ports.

Example 9 includes the device of any combination of Examples 1-8. In this example, the receiver is configured to receive a clock signal associated with a repair operation that is independent of other operations on the port.

Example 10 includes the device of any combination of Examples 1-9. In this example, the selector may include logic of the device's physical layer, the logic at least partially including hardware logic.

Example 11 is a method for port selection. In this example, the method may include selecting a port from among a plurality of ports of a transceiver to receive an operation configured to repair a basic input output system, and receiving a download and run operation at the selected port.

Example 12 includes the method of Example 11. This example includes putting the transceiver into isolation mode during the repair operation.

Example 13 includes the method of any combination of Examples 11-12. In this example, the isolation mode may include limiting the operation of a system on a chip until the repair operation is completed.

Example 14 includes the method of any combination of Examples 11-13. In this example, the isolation mode may include receiving the repair operation without a handshake operation on the selected port.

Example 15 includes the method of any combination of Examples 11-14.
In this example, the selection of the selected port is based on the detection of a signal at the voltage bus of the selected port, which indicates that a repair operation is provided at the port.

Example 16 includes the method of any combination of Examples 11-15. In this example, the selected port is a first port of an integral port that also includes a second port associated with a different orientation than the first port.

Example 17 includes the method of any combination of Examples 11-16. In this example, the selection of the selected port is also based on the detection of a signal at the orientation pin associated with the first port.

Example 18 includes the method of any combination of Examples 11-17. In this example, the integral port is one of a plurality of integral ports, and the selected port is selected from a plurality of first and second ports respectively associated with each of the plurality of integral ports.

Example 19 includes the method of any combination of Examples 11-18. In this example, receiving the download and run operation may include receiving a clock signal associated with a repair operation that is independent of other operations on the port.

Example 20 includes the method of any combination of Examples 11-19. In this example, selecting the port is performed at the physical layer associated with the selected port.

Example 21 is a system for port selection. In this example, the system may include a basic input output system; a transceiver including a plurality of ports; and a selector to select a port from among the plurality of ports for receiving an operation to repair the basic input output system.

Example 22 includes the system of Example 21. In this example, the transceiver is configured in isolation mode during the repair operation.

Example 23 includes the system of any combination of Examples 21-22.
In this example, the isolation mode may include limiting the operation of a system on a chip until the repair operation is completed.

Example 24 includes the system of any combination of Examples 21-23. In this example, the isolation mode may include receiving the repair operation without a handshake operation on the selected port.

Example 25 includes the system of any combination of Examples 21-24. In this example, the selection of the selected port is based on the detection of a signal at the voltage bus of the selected port, which indicates that a repair operation is provided at the port.

Example 26 includes the system of any combination of Examples 21-25. In this example, the selected port is a first port of an integral port that also includes a second port associated with a different orientation than the first port.

Example 27 includes the system of any combination of Examples 21-26. In this example, the selection of the selected port is also based on the detection of a signal at the orientation pin associated with the first port.

Example 28 includes the system of any combination of Examples 21-27. In this example, the integral port is one of a plurality of integral ports, and the selected port is selected from a plurality of first and second ports respectively associated with each of the plurality of integral ports.

Example 29 includes the system of any combination of Examples 21-28. In this example, the receiver is configured to receive a clock signal associated with a repair operation that is independent of other operations on the port.

Example 30 includes the system of any combination of Examples 21-29. In this example, the selector may include logic of the device's physical layer, the logic at least partially including hardware logic.

Example 31 is a device for port selection.
In this example, the device may include a transceiver including a plurality of ports, and a component to select a port from among the plurality of ports for receiving an operation to repair a basic input output system.

Example 32 includes the device of Example 31. In this example, the transceiver is configured in isolation mode during the repair operation.

Example 33 includes the device of any combination of Examples 31-32. In this example, the isolation mode may include limiting the operation of a system on a chip until the repair operation is completed.

Example 34 includes the device of any combination of Examples 31-33. In this example, the isolation mode may include receiving the repair operation without a handshake operation on the selected port.

Example 35 includes the device of any combination of Examples 31-34. In this example, the selection of the selected port is based on the detection of a signal at the voltage bus of the selected port, which indicates that a repair operation is provided at the port.

Example 36 includes the device of any combination of Examples 31-35. In this example, the selected port is a first port of an integral port that also includes a second port associated with a different orientation than the first port.

Example 37 includes the device of any combination of Examples 31-36. In this example, the selection of the selected port is also based on the detection of a signal at the orientation pin associated with the first port.

Example 38 includes the device of any combination of Examples 31-37. In this example, the integral port is one of a plurality of integral ports, and the selected port is selected from a plurality of first and second ports respectively associated with each of the plurality of integral ports.

Example 39 includes the device of any combination of Examples 31-38.
In this example, the receiver is configured to receive a clock signal associated with a repair operation that is independent of other operations on the port.

Example 40 includes the device of any combination of Examples 31-39. In this example, the component to select a port from among the plurality of ports may include logic of the device's physical layer, the logic at least partially including hardware logic.

Example 41 is a system for port selection. In this example, the system may include a basic input output system; a transceiver including a plurality of ports; and a component to select a port from among the plurality of ports for receiving an operation to repair the basic input output system.

Example 42 includes the system of Example 41. In this example, the transceiver is configured in isolation mode during the repair operation.

Example 43 includes the system of any combination of Examples 41-42. In this example, the isolation mode may include limiting the operation of a system on a chip until the repair operation is completed.

Example 44 includes the system of any combination of Examples 41-43. In this example, the isolation mode may include receiving the repair operation without a handshake operation on the selected port.

Example 45 includes the system of any combination of Examples 41-44. In this example, the selection of the selected port is based on the detection of a signal at the voltage bus of the selected port, which indicates that a repair operation is provided at the port.

Example 46 includes the system of any combination of Examples 41-45. In this example, the selected port is a first port of an integral port that also includes a second port associated with a different orientation than the first port.

Example 47 includes the system of any combination of Examples 41-46.
In this example, the selection of the selected port is also based on the detection of a signal at the orientation pin associated with the first port.

Example 48 includes the system of any combination of Examples 41-47. In this example, the integral port is one of a plurality of integral ports, and the selected port is selected from a plurality of first and second ports respectively associated with each of the plurality of integral ports.

Example 49 includes the system of any combination of Examples 41-48. In this example, the receiver is configured to receive a clock signal associated with a repair operation that is independent of other operations on the port.

Example 50 includes the system of any combination of Examples 41-49. In this example, the means for selecting a port from among the plurality of ports may include logic of the device's physical layer, the logic at least partially including hardware logic.

Not all components, features, structures, characteristics, etc. described and shown herein need to be included in a particular embodiment or embodiments. If the specification states that a component, feature, structure, or characteristic "may," "might," "can," or "could" be included, for example, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the elements. If the specification or claim refers to an "additional" element, that does not exclude the presence of more than one of the additional elements.

It is to be noted that, while some embodiments have been described with reference to particular implementations, other implementations are possible according to some embodiments. Furthermore, the arrangement and/or order of circuit elements or other features shown in the drawings and/or described herein need not be arranged in the particular manner shown and described.
According to some embodiments, many other arrangements are possible.

In each of the systems shown in the drawings, the elements may in some cases each have the same reference number or different reference numbers to indicate that the elements represented may be different and/or similar. However, an element may be flexible enough to have different implementations and work with some or all of the systems shown or described herein. The various elements shown in the figures may be the same or different. Which one is called the first element and which is called the second element is arbitrary.

It is to be understood that the details in the aforementioned examples may be used anywhere in one or more embodiments. For example, all optional features of the computing device described above may also be implemented with respect to any of the methods or computer-readable media described herein. In addition, although embodiments may have been described herein using flowcharts and/or state diagrams, the techniques are not limited to those diagrams or to the corresponding descriptions herein. For example, the flow need not move through each illustrated box or state, or move through them in exactly the same order as shown and described herein.

The present technology is not limited to the specific details listed herein. Indeed, those skilled in the art having the benefit of this disclosure will realize that many other variations from the foregoing description and drawings may be made within the scope of the technology. Accordingly, the following claims, including any amendments thereto, define the scope of the technology.
A multi-conductor interconnect for a microelectronic device incorporates multiple conductors and integrated shielding for the conductors. The multi-conductor interconnect includes first and second groups of conductors interleaved with one another within a dielectric structure. One of the groups of conductors may be coupled to a reference voltage node to provide shielding for the other group of conductors. The multi-conductor interconnect may further include a shield layer extending over some portion, or all, of the conductors of the first and second groups. |
Claims

1. An interconnect for a microelectronic device, comprising: a first group of multiple conductors extending in spaced relation to one another, with each conductor having contact surfaces on opposing ends; a second group of multiple conductors interleaved between the conductors of the first group of multiple conductors, the conductors of the second group of multiple conductors electrically coupled with one another; and a dielectric structure electrically isolating the conductors of the first group from the conductors of the second group, and retaining the first and second groups of multiple conductors in their respective orientations.

2. The interconnect of claim 1, further comprising a shield extending along a first side of the first group of multiple conductors, wherein at least a portion of the conductors of the second group of multiple conductors are electrically coupled to one another and to the shield.

3. The interconnect of claim 2, wherein at least some conductors of the second group of multiple conductors extend to the shield along at least a portion of their length.

4. The interconnect of claim 2, wherein the dielectric structure comprises: a first dielectric layer extending over an outer surface of the shield; and a second dielectric layer extending between the interleaved conductors of the first and second groups of multiple conductors.

5. The interconnect of claim 4, further comprising: a first shielding contact surface in electrical communication with at least one of a conductor of the second group of multiple conductors and the shield proximate a first end of the interconnect; and a second shielding contact surface in electrical communication with at least one of a conductor of the second group of multiple conductors and the shield proximate a second end of the interconnect.

6.
The interconnect of any of claims 1-5: wherein the conductors of the first group of multiple conductors extend generally parallel to one another; and wherein the conductors of the second group of multiple conductors extend generally parallel to one another.

7. A microelectronic device, comprising: a support structure having a first group of contacts; a first semiconductor die extending over at least a portion of the support structure, the first semiconductor die having a second group of contacts on a surface facing away from the support structure; and a first multi-conductor interconnect, including, a first group of multiple conductors extending in spaced relation to one another, with each conductor having first and second contact surfaces on first and second ends, the first contact surfaces coupled to respective contacts of the first group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the second group of contacts on the first semiconductor die, a second group of multiple conductors extending in spaced relation to the conductors of the first group, and interleaved between the conductors of the first group, and a dielectric structure electrically isolating the conductors of the first group from the conductors of the second group and retaining the conductors of the first and second groups in the interleaved orientations.

8. The microelectronic device of claim 7, wherein the support structure comprises a second semiconductor die.

9. The microelectronic device of claim 7, wherein the support structure comprises a substrate.

10. The microelectronic device of claim 7, wherein the dielectric structure of the multi-conductor interconnect comprises dielectric material formed around the first and second sets of conductors.

11.
The microelectronic device of claim 10, wherein the multi-conductor interconnect structure further comprises a shield structure, and wherein the dielectric structure further comprises dielectric material formed over the shield structure.

12. The microelectronic device of claim 7, wherein the dielectric structure is flexible.

13. The microelectronic device of any of claims 7-12: wherein the support structure further comprises a third group of contacts; wherein the first semiconductor die comprises a fourth group of contacts on a surface facing away from the support structure; and further comprising a second multi-conductor interconnect, the second multi-conductor interconnect comprising, a respective first group of multiple conductors extending in generally spaced relation to one another, with each conductor having first and second contact surfaces on opposing ends, the first contact surfaces coupled to respective contacts of the third group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the fourth group of contacts on the first semiconductor die, a respective second group of multiple conductors extending in spaced relation to the conductors of the first group of multiple conductors of the second interconnect, and interleaved between the conductors of such first group, and a respective dielectric structure electrically isolating the conductors of the first group of multiple conductors of the second interconnect from the conductors of the second group of multiple conductors of the second interconnect, and retaining such conductors in the described orientations.

14.
The microelectronic device of claim 13: wherein at least a portion of each of the first and third groups of contacts on the support structure extend in parallel to one another; wherein at least a portion of each of the second and fourth groups of contacts on the semiconductor die extend generally parallel to one another; and wherein the second multi-conductor interconnect extends above the first multi-conductor interconnect.

15. The microelectronic device of claim 13: wherein at least a portion of each of the first and third groups of contacts on the support structure extend generally linearly along lines generally perpendicular to one another; wherein at least a portion of each of the second and fourth groups of contacts on the semiconductor die extend generally linearly along different sides of the semiconductor die; and wherein the second multi-conductor interconnect extends generally perpendicularly to the first multi-conductor interconnect.

16. The microelectronic device of any of claims 7-12, wherein the support structure further comprises a third group of contacts, and further comprising: a second semiconductor die stacked over the first semiconductor die and including a fourth group of contacts on a surface facing away from the support structure; and a second multi-conductor interconnect, the second multi-conductor interconnect comprising, a respective first group of multiple conductors extending in generally spaced relation to one another, with each conductor having first and second contact surfaces on opposing ends, the first contact surfaces coupled to respective contacts of the third group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the fourth group of contacts on the second semiconductor die, a respective second group of multiple conductors extending in spaced relation to the conductors of the first group of multiple conductors of the second interconnect, and interleaved between the conductors of such
first group, and a respective dielectric structure electrically isolating the conductors of the first group of multiple conductors of the second interconnect from the conductors of the second group of multiple conductors of the second interconnect, and retaining such conductors in the described orientations.

17. A method of making an interconnect for a microelectronic device package, comprising: forming multiple conductors extending in spaced relation with one another on a surface; forming a dielectric material over the multiple conductors electrically isolating the multiple conductors from one another; and forming an electrical connection between selected spaced conductors of the multiple conductors.

18. The method of claim 17, wherein forming the multiple conductors comprises forming the multiple conductors to extend generally parallel to one another.

19. The method of claim 17, further comprising forming a shield layer extending above the multiple conductors, wherein the shield layer forms an electrical connection between selected spaced conductors of the multiple conductors.

20. The method of claim 17, wherein forming the multiple conductors comprises forming a first group of conductors having contact surfaces on each end.

21. The method of claim 17, wherein conductors of the second group of multiple conductors alternate with the conductors of the first group of multiple conductors.

22. The method of claim 17, wherein forming the dielectric layer comprises forming openings to the selected spaced conductors of the multiple conductors; and wherein forming an electrical connection between the selected spaced multiple conductors comprises forming a shield layer which extends through the openings to the selected spaced conductors.

23.
An electronic system, comprising: a microelectronic device, comprising, a support structure having a first group of contacts; a first semiconductor die coupled to the support structure, the first semiconductor die having a second group of contacts on a surface facing away from the support structure; and a first multi-conductor interconnect, including, a first group of multiple conductors extending in generally spaced relation to one another, with each conductor having first and second contact surfaces on first and second ends, the first contact surfaces coupled to respective contacts of the first group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the second group of contacts on the first semiconductor die, a second group of multiple conductors extending in spaced relation to the conductors of the first group, and interleaved between the conductors of the first group, and a dielectric structure electrically isolating the conductors of the first group from the conductors of the second group.

24.
The electronic system of claim 23, wherein the support structure further comprises a third group of contacts, and further comprising: a second semiconductor die stacked over the first semiconductor die and including a fourth group of contacts on a surface facing away from the support structure; and a second multi-conductor interconnect, the second multi-conductor interconnect comprising, a respective first group of multiple conductors extending in generally spaced relation to one another, with each conductor having first and second contact surfaces on opposing ends, the first contact surfaces coupled to respective contacts of the third group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the fourth group of contacts on the second semiconductor die, a respective second group of multiple conductors extending in spaced relation to the conductors of the first group of multiple conductors of the second interconnect, and interleaved between the conductors of such first group, and a respective dielectric structure electrically isolating the conductors of the first group of multiple conductors of the second interconnect from the conductors of the second group of multiple conductors of the second interconnect, and retaining such conductors in the described orientations.

25.
The electronic system of claim 23, wherein the support structure further comprises a third group of contacts; wherein the first semiconductor die comprises a fourth group of contacts on a surface facing away from the support structure; and further comprising a second multi-conductor interconnect, the second multi-conductor interconnect comprising, a respective first group of multiple conductors extending in generally spaced relation to one another, with each conductor having first and second contact surfaces on opposing ends, the first contact surfaces coupled to respective contacts of the third group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the fourth group of contacts on the first semiconductor die, a respective second group of multiple conductors extending in spaced relation to the conductors of the first group of multiple conductors of the second interconnect, and interleaved between the conductors of such first group, and a respective dielectric structure electrically isolating the conductors of the first group of multiple conductors of the second interconnect from the conductors of the second group of multiple conductors of the second interconnect, and retaining such conductors in the described orientations.
MULTI-CONDUCTOR INTERCONNECT STRUCTURE FOR A MICROELECTRONIC DEVICE

Priority Application

[0001] This application claims the benefit of priority to Malaysian Application Serial No. PI 2016704823, filed 27 December 2016, which is incorporated herein by reference in its entirety.

Technical Field

[0002] Embodiments described herein relate generally to methods and apparatus for providing interconnections in microelectronic devices; and more particularly relate to methods and apparatus for providing interconnections through an interconnect assembly including multiple conductors and integrated shielding for the conductors.

Background

[0003] Many forms of microelectronic devices, such as IC (integrated circuit) packages, include one or more semiconductor die which are coupled to another structure, such as another die or a supporting substrate, through various mechanisms. One example technique used in many microelectronic devices is wire bonding between respective contact pads on a die and contact pads on another die, substrate, etc. Such wire bonding requires placement of individual wires extending between the respective locations. In some systems, shielding wires coupled to a voltage reference may be interspersed between some number of signal-carrying wires to attempt to minimize crosstalk and other interference. However, because the wires are independent structures following similar but not identical paths, the provided shielding is less than optimal. Additionally, such wire bonding wires provide minimal shielding relative to external devices or structures and yield substandard channel impedance control due to non-ideal current return paths. As a result, the transmission line impedance of wire bonded interconnects is commonly significantly above preferred levels (currently believed to be in the range of 60 to 95 ohms, and preferably in the range of 80-85 ohms).
Such impedance mismatch or discontinuity further obstructs the enabling of high speed interconnects, for example in excess of about 2 Gbps, hence limiting device performance scaling.

[0004] Additionally, wire bonding increases the vertical height (Z-dimension) of the device, as extra height is needed above the die or other structure to accommodate the nail head of the wire bond, and the loop required for forming a radius extending down to a lower structure. Additionally, in microelectronic device packages including stacked die, the need to provide this loop clearance for wire bonds may require the use of spacers or interposers between die to provide the required space for the wire bond, which further increases the Z-dimension of the package. In many microelectronic devices there may be many interconnections required between one or more die and other structures. For example, a single die can require forming electrical connections with 200 contacts on the die, and electrical connections with 400 to 500, or even more, contacts on a die are common.
Forming each interconnection individually, as is required with conventional wire bonding techniques, can require substantial assembly throughput time.

Brief Description of the Drawings

[0005] Figures 1A-C are schematic representations of an example microelectronic device in accordance with the present description, in which Figure 1A depicts the device in a cross-sectional view; Figure 1B depicts the package from a top view; and Figure 1C depicts an enlarged section of the cross-sectional view of Figure 1A.

[0006] Figures 2A-D are schematic representations of an example multi-conductor interconnect generally in accordance with the interconnects depicted in the microelectronic device of Figures 1A-C, in which: Figure 2A depicts a lateral cross-sectional view of the interconnect; Figure 2B depicts a longitudinal cross-sectional view of the interconnect; Figure 2C depicts a bottom view of the interconnect; and Figure 2D depicts a cutaway view sequentially showing the layers forming the interconnect.

[0007] Figures 3A-B are each schematic cross-sectional representations of alternative examples of microelectronic device structures including multi-conductor interconnects as described herein.

[0008] Figure 4 is a flowchart of an example process for forming a microelectronic device incorporating at least one interconnect of the type described herein.

[0009] Figure 5 is a flowchart of an example process for forming a microelectronic device interconnect of the type described herein.

[0010] Figures 6A-F are schematic cross-sectional representations of representative stages in an example process for forming a microelectronic device interconnect of the type described herein.

[0011] FIG.
7 is a block diagram of an electronic system which may incorporate a microelectronic device including one or more multi-conductor interconnects as described herein.

Description of Embodiments

[0012] The following description and the drawings sufficiently illustrate specific embodiments to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. Example embodiments set forth in the claims encompass all available equivalents of those claims.

[0013] The present description addresses example embodiments of an interconnect structure incorporating multiple conductors and integrated shielding for the conductors, and further addresses microelectronic devices including that multi-conductor interconnect structure. In some examples as described herein, the multi-conductor interconnect structure may include a first group of multiple conductors extending in spaced relation to one another. The conductors of this first group are constructed to carry signals, and each conductor may include a contact surface of some configuration to facilitate electrical coupling to an electrical contact on a semiconductor die, substrate, or other structure. The interconnect structure may further include a second group of multiple conductors interleaved between the conductors of the first group. The multiple conductors of the second group are constructed to serve as shielding conductors. At least some portion of the conductors of the second group, and in some examples all or substantially all of such conductors of the second group, may be electrically coupled with one another.
As described herein, the multi-conductor interconnect structure may further include a dielectric structure extending between the first and second groups of multiple conductors to electrically isolate the conductors of the first and second groups from one another. In many examples, the multi-conductor interconnect structure may further include a reference shield layer extending over some portion, or all, of the conductors of the first and second groups. Though the term "shield layer" is used herein to identify this shield structure, which in many examples may have some portion extending laterally over at least some portion of the multiple conductors of the first and second groups, the shield structure may include additional features or components in addition to the laterally extending portion (or "layer") extending over such conductors. When present, the shield layer may be adapted to connect to reference nodes on different structures. In many examples, the interconnect may be flexible to facilitate coupling the interconnect as desired to establish the electrical connections between structures in the microelectronic device. To avoid unnecessary wordiness in the current description, the term "interconnect" will be used to describe the "multi-conductor interconnect" referred to above. Thus, the two terms should be considered synonymous in the context of the present description.

[0014] When an interconnect structure as described above is used in forming a microelectronic device, all electrical connections between a semiconductor die and another structure (another semiconductor die, a substrate, an interposer, or other structure, etc.) can be made through one or more such interconnects. In other example microelectronic devices, the interconnect may be used only for interconnecting signal paths susceptible to either causing or being disrupted by electromagnetic interference (EMI).
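As background for the channel rates discussed in this description, the fundamental transition (Nyquist) frequency of a simple NRZ channel is half its data rate. The following sketch is illustrative only and not part of the described embodiments; the 2.4-2.5 GHz wireless band boundaries are assumed for the example.

```python
# Illustrative only: relate an NRZ channel's data rate to its fundamental
# transition (Nyquist) frequency, and flag overlap with an assumed
# 2.4-2.5 GHz wireless band (e.g., as used by some IEEE 802.11 devices).

def nyquist_frequency_ghz(data_rate_gbps: float) -> float:
    """Fundamental toggle frequency of an NRZ signal, in GHz."""
    return data_rate_gbps / 2.0

def overlaps_2g4_band(data_rate_gbps: float,
                      band_ghz=(2.4, 2.5)) -> bool:
    """True if the fundamental falls inside the assumed wireless band."""
    f = nyquist_frequency_ghz(data_rate_gbps)
    return band_ghz[0] <= f <= band_ghz[1]

print(nyquist_frequency_ghz(5.8))   # 2.9
print(overlaps_2g4_band(4.8))       # True: fundamental at 2.4 GHz
print(overlaps_2g4_band(5.8))       # False for the fundamental itself
```

This is how, for example, a channel operating at 5.8 Gbps corresponds to signals transitioning at approximately 2.9 GHz; note that harmonics above the fundamental (not modeled here) can also cause interference.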
For example, signals in a channel operating approximately at or above 2 Gbps can be of concern in some situations. Particular concerns regarding interference may be present in a channel operating at 10 Gbps (5 GHz) or higher. As one example of interference concerns, memory input/output signals transitioning at approximately 2.9 GHz (i.e., a channel operating at 5.8 Gbps) have been found to cause electromagnetic interference with some forms of wireless communication devices, such as, for example, those operating in accordance with the IEEE 802.11 standard and its related standards family. The EMI effect can increase with an increase in frequency of the signals. Memory input/output signals transitioning at 3.2 GHz or above, such as are reflected in the DDR5/LPDDR5 (double data rate/low power double data rate) specifications for dynamic random-access memory (DRAM) devices from the JEDEC Solid State Technology Association, have also been found to cause problematic EMI with such wireless devices.

[0015] Referring now first to Figures 1A-C, the figures schematically depict an example microelectronic device 100, in which Figure 1A depicts the device 100 in a cross-sectional side view; Figure 1B depicts the package from a top view; and Figure 1C depicts an enlarged section of the cross-sectional side view of Figure 1A. The example microelectronic device 100 includes a support structure, here in the form of a substrate 102. A first die 104 is coupled to substrate 102 in a flip chip configuration, with contacts, indicated generally at 106, on first die 104 directly engaging corresponding contacts on substrate 102. Substrate 102 also includes a plurality of external contacts, here in the example form of contact balls, indicated generally at 134, to provide electrical connection with a printed circuit board, such as a motherboard, or another external device or structure.
In other examples, the support structure may be another semiconductor die, an interposer, a redistribution structure, or any other structure or device with which electrical connections are needed.

[0016] Microelectronic device 100 also includes a stacked die assembly, indicated generally at 108, in which a lower die 110 is secured to substrate 102, such as through a first adhesive layer 112. Additionally, an upper die 114 is secured to lower die 110, again potentially through use of a second adhesive layer 116. Each of lower and upper die 110, 114 is located with its active surface facing upwardly, away from substrate 102. As a result, first groups of contacts 118A, 118B extend along each side of lower die 110. Similarly, additional groups of contacts 120A, 120B extend along either side of upper die 114. In the depicted example, groups of contacts 122A, 122B, 124A, 124B (each arranged in a linear row) are formed on substrate 102 on opposite sides of lower die 110.

[0017] In the depicted example, respective contacts within contact group 118A on lower die 110 are coupled to respective contacts within contact group 124A on substrate 102 through a first interconnect structure 126; and respective contacts within contact group 120A on upper die 114 are coupled to respective contacts within contact group 122A on substrate 102 through a second interconnect structure 128. In a similar manner, respective contacts within contact group 118B on lower die 110 are coupled to respective contacts within contact group 124B on substrate 102 through a third interconnect structure 130; and respective contacts within contact group 120B on upper die 114 are coupled to respective contacts in contact group 122B on substrate 102 through a fourth interconnect structure 132. An example configuration for each interconnect structure will be discussed in more detail in reference to Figures 2A-D.
In the depicted example, the components of microelectronic device 100 are encased within an enclosure 136, such as a molded component.

[0018] Referring now specifically to Figure 1B, the schematically represented top view of microelectronic device 100 depicts an example embodiment wherein upper die 114 includes additional rows of contacts 120C, 120D on the remaining two sides of upper die 114 not visible in the cross-section of Figure 1A. Respective contacts within each of rows 120C, 120D are coupled to respective contacts in additional rows of contacts on the substrate, as indicated at 122C, 122D, through additional interconnect structures 140, 142. As a result, different groups of contacts, in this example extending along adjacent sides of the die, are connected through respective interconnects which extend generally perpendicularly to one another. Additionally, as can be seen in Figure 1A, interconnects extending to different groups of contacts can extend over one another (as will also be discussed in reference to Figure 3A). The interconnects 126, 130 extending between the lower die 110 and the substrate are not visible in the top view of Figure 1B, due to the presence of interconnect structures 128, 132 extending above them.
Lower die 110 may also have groups of contacts extending along all four edges, coupled through respective interconnects to additional rows of contacts on substrate 102, in a manner directly analogous to the connections between upper die 114 and substrate 102.

[0019] Referring now to Figures 2A-D, the figures schematically depict a representative portion of an example structure for an interconnect 200 generally in accordance with the interconnect structures depicted in the microelectronic device of Figures 1A-C, in which: Figure 2A depicts a lateral cross-sectional view of the interconnect; Figure 2B depicts a longitudinal cross-sectional view of the interconnect (along line 2B—2B in Figure 2A); Figure 2C depicts a bottom view of the interconnect; and Figure 2D depicts a cutaway view sequentially showing the layers forming the interconnect 200.

[0020] As shown in Figure 2A, interconnect 200 includes a first group of conductors, represented by conductors 202A-C. Conductors 202A-C may be signal carrier conductors in interconnect 200 (as opposed to shielding conductors). The example cross-section is shown for purposes of illustration only, and represents only a small segment of an interconnect that may be constructed in accordance with the present disclosure. In some example systems, a single interconnect may have as few as two conductors in the first group (i.e., signal conductors), though in many example systems, such as those in which the interconnect is used to provide larger scale chip-to-chip interconnection, the interconnect may include substantially more signal conductors. For example, interconnects for such applications may include 50 or more signal conductors, and in many cases may include at least 200 signal conductors, with 400 to 500, or more, signal conductors also being contemplated for interconnects in accordance with the present disclosure.

[0021] Interconnect 200 also includes a second group of conductors, represented by conductors 204A-D.
In the depicted example, the second group of conductors may be shielding conductors. In many examples, the second group of conductors may be interleaved with the first group of conductors. In the depicted example, the interleaving is in a 1-to-1 ratio, where every signal conductor 202A-C may represent a single-ended bus and is separated from another signal conductor by a respective second (shielding) conductor (i.e., the conductors of the first and second groups alternate along the cross-section of the interconnect). Other degrees of interleaving are contemplated. An example alternative interleaving arrangement may be applied, for example, in interconnects providing a differential bus utilizing a pair of conductors to carry a set of signals across the interface. In some examples of that structure, the interconnect may include signal conductors in a 2-to-1 ratio to shielding conductors. Other examples may include a 2-to-1 (or greater) ratio of signal conductors to shielding conductors in examples other than those establishing a differential bus. Additionally, the interleaving need not be identical across the width of the interconnect. For example, in portions of the interconnect carrying signals operating at a high transition rate, the interleaving may be in a 1-to-1 ratio, as depicted in Figure 2A, to provide maximum shielding; while in other regions across the width of the interconnect that are intended to carry lower transition rate signals (therefore presenting less risk of causing, or being impacted by, electrical interference), the interleaving might place a shielding conductor between every two or three signal conductors (therefore having signal conductors present in a 2-to-1, or 3-to-1, or other desired ratio, to shielding conductors) in that region of the interconnect.

[0022] In the example of interconnect 200, the first and second groups of conductors are electrically isolated from one another by a dielectric structure, indicated generally at 206.
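The interleaving ratios described above can be sketched in a few lines; the labels "S" (signal) and "G" (shielding) are hypothetical and purely illustrative, not part of the described embodiments.

```python
# Illustrative sketch of conductor interleaving: given a signal-to-shield
# ratio n, place one shielding conductor ("G") after every n signal
# conductors ("S0", "S1", ...) across the width of the interconnect.

def interleave(num_signals: int, signals_per_shield: int) -> list:
    out = []
    for i in range(num_signals):
        out.append(f"S{i}")
        if (i + 1) % signals_per_shield == 0:
            out.append("G")
    return out

# 1-to-1 interleaving, as in Figure 2A: signal and shielding conductors
# alternate along the cross-section.
print(interleave(3, 1))  # ['S0', 'G', 'S1', 'G', 'S2', 'G']
# 2-to-1 interleaving, e.g., a differential pair sharing one shield.
print(interleave(4, 2))  # ['S0', 'S1', 'G', 'S2', 'S3', 'G']
```

As the text notes, a real interconnect may mix ratios across its width, using denser shielding only in the regions carrying high transition rate signals.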
In the depicted example, the dielectric structure includes a first vertical layer, indicated generally at 208, which extends above and around each conductor of the first group of conductors 202A-C, and therefore extends between each conductor of the first group and the adjacent conductors of the second group, isolating those conductors of the first group from adjacent conductors of the second group 204A-D. The depicted example interconnect 200 also includes a conductive shield layer 210 which extends along the width of the interconnect and above each conductor of the first group (202A-C). In one desirable configuration, each conductor of the second group 204A-D extends to and engages shield layer 210, which extends as a generally planar structure (at least prior to any flexing of interconnect 200). In view of the presence of shield layer 210, dielectric structure 206 also includes a second vertical layer, indicated generally at 212, which extends over shield layer 210 (and which may extend around a portion of shield layer 210, such as, for example, on the terminal ends or sides of the interconnect 200).

[0023] As can best be seen in the bottom view of interconnect 200 of Figure 2C, each conductor of the first group 202A-C includes contact surfaces at each end of interconnect 200, as shown at 214A-C proximate a first end (indicated generally at 220) of interconnect 200, and at 216A-C proximate a second end (indicated generally at 222) of interconnect 200. Contact surfaces 214A-C and 216A-C are sized and configured to facilitate their bonding to respective contacts on a die, substrate or other structure (as discussed in reference to Figures 1A-C).

[0024] In many examples, not all conductors of the second group 204A-D will include contact surfaces to enable respective direct connections to other structures.
Because these conductors are shielding conductors, a first contact surface 218 at the first end 220, and a second contact surface 224 at the second end 222, may be sufficient in many examples to connect to respective reference voltage nodes on the mating structures (such as a first reference voltage node on a semiconductor die and a second reference voltage node on the supporting structure), and to provide electrical communication between those reference voltage nodes through conductors 204A-D and, in this example, through shield layer 210. For the depicted configuration of the example interconnect, placing the shielding contact surfaces 218, 224 diagonally opposite one another in interconnect 200 is believed to balance voltage and current flow across the shielding components. However, in other examples, which may include interconnects having a relatively larger number of signal conductors (as one example, 100 or more signal conductors), it may be desirable to include additional connections of one or more shielding conductors to the reference nodes across the width of the interconnect to provide desired referencing of the shielding structures.

[0025] Referring now to Figures 3A-B, each figure depicts a schematic cross-section of a respective microelectronic device structure including interconnects as described herein. Figure 3A depicts a representative portion of a microelectronic device 300 in which a substrate 302 supports a stacked die assembly, indicated generally at 304, including a lower die 306 and an upper die 308. Lower die 306 may be coupled to substrate 302 by an adhesive layer 332, and similarly, upper die 308 may be coupled to lower die 306 by a similar adhesive layer 334.

[0026] Lower die 306 includes two groups of contacts 310, 312, which in many example configurations may be arranged in generally parallel lines of repeating (and in many examples, equally spaced) contacts on the active surface and along the depicted edge 314 of lower die 306.
In the depicted example, upper die 308 includes only a single group of contacts 316, again preferably arranged in a row (extending perpendicular to the plane of the figure) on the upper surface of the die. Each group of contacts 310, 312, 316 is coupled by a respective interconnect 320, 322, 324 (which may each be of the configuration discussed above in reference to Figures 2A-D) to a respective group of contacts on substrate 302, as indicated at 326, 328, 330. For clarity of illustration, interconnects 320, 322, 324 are not depicted entirely in cross-section, except at the upper terminal end wherein the contact surfaces (214A-C in Figure 2) on the first group of conductors (202A-C in Figure 2) of each interconnect 320, 322, 324 are coupled to respective contacts 310, 312, 316 on lower die 306 and upper die 308. In the example of Figure 3A, the use of the described interconnects facilitates an improved input/output (I/O) density between lower die 306 and substrate 302 with less package substrate real estate consumption (i.e., through tighter spacing between respective groups of contacts on substrate 302, e.g. 326, 328, 330), and provides greater protection against short circuits than would be available using conventional wire bonding techniques. Notably, interconnect 322 can extend over interconnect 320 to engage contact group 312 because of the dielectric layer along the upper surface of interconnect 320 (as can be seen at 212 in Figures 2A-B and D).

[0027] Figure 3B depicts a representative portion of an alternative structure for a microelectronic device 340 in which a substrate 342 supports a stacked die assembly, indicated generally at 344, including a lower die 346 and an upper die 348. Lower die 346 may again be coupled to substrate 342 by an adhesive layer 362, and similarly, upper die 348 may be coupled to lower die 346 by a similar adhesive layer 360.
The configuration of the stacked lower die 346 and upper die 348 differs from that of Figure 3A in that, rather than the upper die having an edge set back from edge 314 of the lower die as shown in that figure, in microelectronic device 340 the lower die 346 and upper die 348 can be arranged with edges 364, 366 essentially flush with one another. This configuration would not be possible with conventional wire bonding techniques due to the vertical space required for the wire bonding loops. With such conventional wire bonding techniques, if die were to be stacked with generally aligned edges as shown in Figure 3B, an interposer or other spacer would need to be placed between the die to accommodate the wire bonding, which would then add to the Z-dimension of the die stack.

[0028] Such increased dimensions are avoided by use of the described interconnect 354 to connect with a group of contacts, indicated generally at 350, on lower die 346, and to electrically couple those contacts to respective contacts of group 352 on substrate 342. Again, interconnect 354 may be in the configuration discussed above in reference to Figures 2A-D. As shown in the figure, interconnect 354 can be coupled with contacts of contact group 350 essentially in the vertical space established by adhesive layer 360. A second interconnect 356 can couple respective contacts of a group of contacts, indicated generally at 358, to contacts on substrate 342 (not depicted).

[0029] Referring now to Figure 4, the figure depicts a flowchart of an example process 400 for forming a microelectronic device incorporating at least one interconnect of the type described herein. As indicated at 402, a first semiconductor die is attached to a support structure having a first group of contacts; the first semiconductor die includes a second group of contacts on its active surface, which is placed facing away from the support structure.
In the current example the support structure may be in the form of a substrate. However, as discussed elsewhere herein, in other examples the support structure may be another semiconductor die, an interposer, a spacer, another redistribution structure, or any structure or device for which electrical communication with the first semiconductor die is required. For example, in some embodiments the first semiconductor die may be a processor, while in other embodiments the first semiconductor die may be another device including, but not limited to, a memory device, a chipset, a field programmable gate array (FPGA), a graphics processor, etc. In other examples, the support structure may be a semiconductor die, and may be or include any one or more of a processor, a memory device, a chipset, a field programmable gate array (FPGA), a graphics processor, etc.

[0030] As indicated at 404, contacts of the first group of contacts on the support structure are electrically coupled to respective contacts of the second group of contacts on the first semiconductor die through an interconnect. In the example process, the interconnect may have a structure analogous to that discussed in reference to Figures 2A-D: (i) a first group of multiple conductors extending in spaced relation to one another, with each conductor having contact surfaces on opposing ends, the contact surfaces configured to mechanically and electrically couple to the identified contacts; (ii) a second group of multiple conductors interleaved between the conductors of the first group of multiple conductors, with at least some portion of the conductors of the second group electrically coupled with one another; and (iii) a dielectric structure extending between the conductors of the first and second groups and retaining the first and second groups of multiple conductors in the respective orientations.
The electrical coupling of the contact surfaces of the interconnect with respective contacts of the identified groups can be accomplished through various techniques, including thermal compression bonding (which optionally can include use of anisotropic conductive film), a solder reflow or diffusion process, surface activated bonding, etc. In an example solder reflow process, pre-formed solder paste may be applied to the contacts on both the semiconductor die and the support structure through stencil printing prior to the solder reflow process.

[0031] As indicated at 406, the example process also may include establishing at least one electrical connection between a reference voltage node on the first semiconductor die and the second group of multiple conductors. Similarly, as indicated at 408, at least one electrical connection is established between a reference voltage node on the support structure and the second group of multiple conductors.

[0032] Additionally, in many examples, the interconnect may also include a reference shield structure (again including a laterally extending "layer" as discussed earlier herein) extending over at least a portion of the first and second groups of conductors. In such examples in which a shield layer is present, an electrical connection may also be established between the reference voltage nodes and the shield layer. As described in reference to Figures 2A-D, one desirable way of establishing that electrical connection is through direct electrical connection of the second group of multiple conductors with the shield structure.

[0033] The reference voltage nodes can be at any desired potential for shielding, but commonly may be either ground or Vcc. Where a shield layer extending over the conductors is present, the shield layer may also be coupled to the reference voltage nodes.
In many examples, the connection of the shield layer with the reference voltage nodes may be through the second group of multiple conductors.

[0034] Referring now to Figure 5, the figure depicts a flowchart of an example process 500 for forming a microelectronic device interconnect of the type described herein. The description of the identified process will also be made in reference to Figures 6A-F, which depict schematic cross-sectional representations during example stages of possible implementations of process 500. As indicated at 502, multiple conductors are formed extending in spaced relation with one another on a surface. One example way of forming these conductors is through use of a metal layer 602, which may be, for example, a metal foil laminated onto a carrier 604. That metal layer can then be patterned to define the individual conductors from layer 602, as depicted in Figure 6B. Though in the depicted cross-section the individual conductors are shaped identically, as can be seen in reference to Figure 2C, in other cross-sections the contact surfaces may be formed on some conductors, and the resulting cross-sections will appear different from the depicted cross-section. Because the patterned conductors may be parts of different groups of conductors, they are identified with numbers accordingly, with conductors 610A-C being conductors of the first (signal) group, and conductors 608A-D being conductors of the second (shielding) group.

[0035] The metal foil (or other metal structure) may be patterned, for example, by routing and/or laser cutting of the metal, or by photolithography and etching. In many examples, the metal may be copper or a copper-containing alloy, or in some examples aluminum.
Any suitable metal providing appropriate conductivity may be used in the described interconnects.[0036] In other examples, rather than forming the individual conductors from a patterned metal layer, metal conductors, such as wires (for example, wire bonding wire), may be arranged to extend alongside one another in the desired arrangement. Those wires may then be electrically isolated from one another and retained in their desired orientation through use of a dielectric material, as described below. [0037] An example of the configuration of the multiple conductors can be seen in Figure 2C. In the example of that figure, and in many other examples, the multiple conductors may extend generally parallel to one another. However, that orientation is not required. For example, in some microelectronic devices, the spacing of the first group of contacts on a semiconductor die may be different from the spacing of a second group of contacts on a substrate or other structure to be electrically coupled to the die. In that circumstance, conductors in the interconnect may be arranged, for example, not in parallel, but in a fanned pattern to facilitate the spatial transition between such first and second groups of contacts.[0038] Referring again to Figure 5, as indicated at 504, a dielectric material is then formed over the multiple conductors to electrically isolate at least a portion of the multiple conductors from one another. The dielectric material can be of many suitable compositions, including, for example, one or more of polyimide, polyamide, bismaleimide-triazine resin, benzocyclobutene (BCB), polyurethanes, high-density polyethylene (HDPE), poly(4,4'-oxydiphenylene-pyromellitimide) and polyethylene terephthalate, etc. As noted previously, the dielectric material preferably is sufficiently flexible to ease forming of the interconnect into a shape to provide the connections as described herein.
Thus, flexibility of the dielectric, and of the formed interconnect, refers to the ability of the interconnect to be formed in a first configuration (such as a generally flat structure, as shown in the examples herein), and to be shaped sufficiently to couple to respective groups of contacts. In many examples, the contacts may be oriented vertically offset from one another but facing in the same direction, as is common with applications in which wire bonding is used.[0039] As can be seen in Figure 6C, a dielectric material layer 606 has been deposited over the patterned conductors 610A-C and 608A-D. In the present example, the dielectric material layer 606 thickness and the conductor spacing define the distance between the signal conductors and the shielding conductors. A wide variety of spacings within the interconnect may be utilized as appropriate for any specific application. An example spacing suitable for use with high-frequency signals as discussed earlier herein is 10 to 20 μm, though such spacing may be adjusted depending upon the impedance target for the conductive channel through the interconnect.[0040] As indicated at 506, an electrical connection is formed between selected spaced conductors of the multiple conductors; and as indicated at 508, an optional reference shield structure is formed extending above the multiple conductors. In the present example of Figures 6A-F, the electrical interconnection between spaced conductors of the multiple conductors may be formed through forming of the shield layer.[0041] As shown in Figure 6D, an example implementation of the above operations is for dielectric layer 606 to be patterned to expose surfaces of alternate (shielding) conductors 608A-D, while leaving dielectric extending around and over interleaved (signal) conductors 610A-C.
A metal layer 612 may then be formed to extend from exposed tops of shielding conductors 608A-D and to form a shield layer extending across at least a substantial portion of the width of the interconnect being formed. In the depicted example, the shield layer contacts the shielding conductors 608A-D along at least a portion of their length, and in many examples along substantially all of their length. Thus, viewed in cross-section as in Figure 6E, the shield structure has a comb-like configuration extending over and around the signal conductors 610A-C. In some examples, metal layer 612 may be formed by an electroplating process. In some such processes, a metallic seed layer may be sputtered onto a structure such as that of Figure 6D, and then copper or another material may be electroplated over the seed layer.[0042] In many examples, another dielectric layer 614 may form a portion of the dielectric structure of the interconnect and may extend around the sides of the conductors at the lateral ends of the interconnect. When present, dielectric layer 614 can be of the same material as used for dielectric layer 606; or in some cases may be formed of a different material. For example, it may be desirable to have dielectric layer 614 formed of a more abrasion-resistant material. For applications in which the interconnect is not expected to have another interconnect (or other conductive structure) extending above or nearby, the upper dielectric layer 614 may be omitted. The described example process facilitates forming of flexible interconnects having a thickness of approximately 30 μm to about 70 μm, though as noted above, dimensions of the described structures of the interconnect may be adjusted in response to a desired characteristic impedance of the channel.[0043] Subsequently, the formed interconnect may be singulated and removed from the carrier as a discrete interconnect structure.
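The adjustment of conductor spacing and dielectric thickness toward a characteristic-impedance target, as discussed above, can be illustrated with a standard closed-form microstrip estimate. This is an illustrative sketch only and is not part of the disclosure: the formula is the common IPC-2141-style approximation, and the dimensions and polyimide permittivity below are assumed values, not dimensions taken from the described process.

```python
import math

def microstrip_z0(h_m, w_m, t_m, eps_r):
    """Approximate characteristic impedance (ohms) of a microstrip line.

    IPC-2141-style closed-form estimate:
        Z0 = 87 / sqrt(eps_r + 1.41) * ln(5.98*h / (0.8*w + t))
    h_m:   dielectric thickness between signal conductor and shield (m)
    w_m:   conductor width (m)
    t_m:   conductor thickness (m)
    eps_r: relative permittivity of the dielectric
    """
    return 87.0 / math.sqrt(eps_r + 1.41) * math.log(
        5.98 * h_m / (0.8 * w_m + t_m))

# Assumed illustrative geometry: 15 um signal-to-shield spacing (within the
# 10-20 um range noted above), 15 um wide / 10 um thick copper conductors,
# and a polyimide dielectric (eps_r ~ 3.4).
z0 = microstrip_z0(h_m=15e-6, w_m=15e-6, t_m=10e-6, eps_r=3.4)
print(f"estimated Z0: {z0:.1f} ohms")

# Increasing the dielectric spacing raises the impedance, which is why the
# spacing may be adjusted toward a channel impedance target.
z0_thick = microstrip_z0(h_m=20e-6, w_m=15e-6, t_m=10e-6, eps_r=3.4)
assert z0_thick > z0
```

In practice a field solver would be used in place of such a closed-form estimate, but the sketch shows the qualitative relationship: for a fixed conductor geometry, the signal-to-shield spacing is the primary knob for tuning the channel impedance.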
In some examples, the formed interconnect may be singulated through mechanical sawing or laser cutting. The interconnect may then be used as described herein for connecting a semiconductor die with another die or other support structure.[0044] As noted earlier, many types of semiconductor die may be beneficially packaged together in a microelectronic device in the manner described herein. One example of such a beneficial combination would be in connecting a processor to a substrate or other structure through use of a multi-conductor interconnect as described herein; or wherein another device type, for example memory device(s), a chipset, a field programmable gate array (FPGA), a graphics processor, etc., may be connected with each other, a substrate, or another structure through use of a multi-conductor interconnect. The resulting microelectronic device may then be included in a larger electronic device or system, as described below.[0045] Figure 7 illustrates a system-level diagram, according to one embodiment of the invention. For instance, Figure 7 depicts an example of an electronic device (e.g., system) including one or more microelectronic devices including one or more interconnects as described herein. Figure 7 is included to show an example of a higher-level device application for the present invention. In one embodiment, system 700 includes, but is not limited to, a desktop computer, a laptop computer, a netbook, a tablet, a notebook computer, a personal digital assistant (PDA), a server, a workstation, a cellular telephone, a mobile computing device, a smart phone, an Internet appliance or any other type of computing device. In some embodiments, system 700 is a system on a chip (SOC) system.[0046] In one embodiment, processor 710 has one or more processing cores 712 and 712N, where 712N represents the Nth processor core inside processor 710, where N is a positive integer.
In one embodiment, system 700 includes multiple processors including 710 and 705, where processor 705 has logic similar or identical to the logic of processor 710. In some embodiments, processing core 712 includes, but is not limited to, pre-fetch logic to fetch instructions, decode logic to decode the instructions, execution logic to execute instructions and the like. In some embodiments, processor 710 has a cache memory 716 to cache instructions and/or data for system 700. Cache memory 716 may be organized into a hierarchical structure including one or more levels of cache memory.[0047] In some embodiments, processor 710 includes a memory controller 714, which is operable to perform functions that enable the processor 710 to access and communicate with memory 730 that includes a volatile memory 732 and/or a non-volatile memory 734. In some embodiments, processor 710 is coupled with memory 730 and chipset 720. Processor 710 may also be coupled to a wireless antenna 778 to communicate with any device configured to transmit and/or receive wireless signals. In one embodiment, the wireless antenna 778 operates in accordance with, but is not limited to, the IEEE 802.11 standard and its related family, Home Plug AV (HPAV), Ultra Wide Band (UWB), Bluetooth, WiMax, or any form of wireless communication protocol.[0048] In some embodiments, volatile memory 732 includes, but is not limited to, Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), and/or any other type of random access memory device. Non-volatile memory 734 includes, but is not limited to, flash memory, phase change memory (PCM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), or any other type of non-volatile memory device.[0049] Memory 730 stores information and instructions to be executed by processor 710.
In one embodiment, memory 730 may also store temporary variables or other intermediate information while processor 710 is executing instructions. In the illustrated embodiment, chipset 720 connects with processor 710 via Point-to-Point (PtP or P-P) interfaces 717 and 722. Chipset 720 enables processor 710 to connect to other elements in system 700. In some embodiments of the invention, interfaces 717 and 722 operate in accordance with a PtP communication protocol such as the Intel® QuickPath Interconnect (QPI) or the like. In other embodiments, a different interconnect may be used.[0050] In some embodiments, chipset 720 is operable to communicate with processors 710 and 705, display device 740, and other devices 772, 776, 774, 760, 762, 764, 766, 777, etc. Chipset 720 may also be coupled to a wireless antenna 778 to communicate with any device configured to transmit and/or receive wireless signals.[0051] Chipset 720 connects to display device 740 via interface 726. Display 740 may be, for example, a liquid crystal display (LCD), a plasma display, a cathode ray tube (CRT) display, or any other form of visual display device. In some embodiments of the invention, processor 710 and chipset 720 are merged into a single SOC. In addition, chipset 720 connects to one or more buses 750 and 755 that interconnect various elements 774, 760, 762, 764, and 766. Buses 750 and 755 may be interconnected together via a bus bridge 772. In one embodiment, chipset 720 couples with a non-volatile memory 760, a mass storage device(s) 762, a keyboard/mouse 764, a network interface 766, a smart TV 776, consumer electronic(s) 777, etc. via interface 724.[0052] In one embodiment, mass storage device 762 includes, but is not limited to, a solid state drive, a hard disk drive, a universal serial bus flash memory drive, or any other form of computer data storage medium.
In one embodiment, network interface 766 is implemented by any type of well-known network interface standard including, but not limited to, an Ethernet interface, a universal serial bus (USB) interface, a Peripheral Component Interconnect (PCI) Express interface, a wireless interface and/or any other suitable type of interface. In one embodiment, the wireless interface operates in accordance with, but is not limited to, the IEEE 802.11 standard and its related family, Home Plug AV (HPAV), Ultra Wide Band (UWB), Bluetooth, WiMax, or any form of wireless communication protocol.[0053] While the modules shown in Figure 7 are depicted as separate blocks within the system 700, the functions performed by some of these blocks may be integrated within a single semiconductor circuit or may be implemented using two or more separate integrated circuits. For example, although cache memory 716 is depicted as a separate block within processor 710, cache memory 716 (or selected aspects of 716) can be incorporated into processor core 712.[0054] To better illustrate the methods and apparatuses described herein, a non-limiting set of example embodiments is set forth below as numerically identified examples:[0055] Example 1 is an interconnect for a microelectronic device, including: a first group of multiple conductors extending in spaced relation to one another, with each conductor having contact surfaces on opposing ends; a second group of multiple conductors interleaved between the conductors of the first group of multiple conductors, the conductors of the second group of multiple conductors electrically coupled with one another; and a dielectric structure electrically isolating the conductors of the first group from the conductors of the second group, and retaining the first and second groups of multiple conductors in their respective orientations.[0056] In Example 2, the subject matter of Example 1 optionally includes a shield extending along a first side of the first group of
multiple conductors, where at least a portion of the conductors of the second group of multiple conductors are electrically coupled to one another and to the shield. [0057] In Example 3, the subject matter of Example 2 where at least some conductors of the second group of multiple conductors extend to the shield along at least a portion of their length.[0058] In Example 4, the subject matter of any one or more of Examples 2-3 where the dielectric structure optionally includes: a first dielectric layer extending over an outer surface of the shield; and a second dielectric layer extending between the interleaved conductors of the first and second groups of multiple conductors.[0059] In Example 5, the subject matter of Example 4 where the shield and the second group of multiple conductors form a structure having a comb-like cross-section.[0060] In Example 6, the subject matter of any one or more of Examples 4-5 where the second dielectric layer insulates the conductors of the first group of multiple conductors from the conductors of the second group of multiple conductors and from the shield.[0061] In Example 7, the subject matter of any one or more of Examples 4-6 where the first and second dielectric layers are formed of the same dielectric material.[0062] In Example 8, the subject matter of any one or more of Examples 4-7 where the first and second dielectric layers are formed of different dielectric materials.[0063] In Example 9, the subject matter of any one or more of Examples 4-8 optionally include a first shielding contact surface in electrical communication with at least one of a conductor of the second group of multiple conductors and the shield proximate a first end of the interconnect; and a second shielding contact surface in electrical communication with at least one of a conductor of the second group of multiple conductors and the shield proximate a second end of the interconnect.[0064] In Example 10, the subject matter of any one or more of Examples 1-9
where a first portion of the contact surfaces on a first end of the conductors of the first group of multiple conductors are arranged in linearly spaced relation proximate a first end of the interconnect, and where a second portion of the contact surfaces on a second end of the conductors of the first group of multiple conductors are arranged in linearly spaced relation proximate a second end of the interconnect.[0065] In Example 11, the subject matter of Example 10 where the first and second portions of the contact surfaces proximate the first and second ends of the interconnect are each arranged at a common spacing distance to one another.[0066] In Example 12, the subject matter of any one or more of Examples 10-11 where the contact surfaces on at least one end of the interconnect are configured to be coupled to respective contacts on a semiconductor die.[0067] In Example 13, the subject matter of any one or more of Examples 1-12 where the conductors of the first group of multiple conductors extend generally parallel to one another; and where the conductors of the second group of multiple conductors extend generally parallel to one another.[0068] Example 14 is a microelectronic device, including: a support structure having a first group of contacts; a first semiconductor die extending over at least a portion of the support structure, the first semiconductor die having a second group of contacts on a surface facing away from the support structure; and a first multi-conductor interconnect, including, a first group of multiple conductors extending in spaced relation to one another, with each conductor having first and second contact surfaces on first and second ends, the first contact surfaces coupled to respective contacts of the first group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the second group of contacts on the first semiconductor die, a second group of multiple conductors extending in spaced
relation to the conductors of the first group, and interleaved between the conductors of the first group, and a dielectric structure electrically isolating the conductors of the first group from the conductors of the second group and retaining the conductors of the first and second groups in the interleaved orientations. [0069] In Example 15, the subject matter of Example 14 where the support structure optionally includes a second semiconductor die.[0070] In Example 16, the subject matter of Example 14 where the support structure optionally includes a substrate.[0071] In Example 17, the subject matter of Example 14 where the support structure optionally includes a redistribution layer.[0072] In Example 18, the subject matter of Example 14 where the multi-conductor interconnect further optionally includes a shield structure.[0073] In Example 19, the subject matter of Example 18 where the shield structure is coupled to conductors of the second group of conductors.[0074] In Example 20, the subject matter of Example 14 optionally including at least 50 conductors in the first group of conductors.[0075] In Example 21, the subject matter of Example 14 optionally including at least 100 conductors in the first group of conductors.[0076] In Example 22, the subject matter of Example 14 optionally including at least 200 conductors in the first group of conductors.[0077] In Example 23, the subject matter of Example 14 where the dielectric structure of the multi-conductor interconnect optionally includes dielectric material formed around the first and second sets of conductors.[0078] In Example 24, the subject matter of Example 23 where the multi-conductor interconnect structure further optionally includes a shield structure, and where the dielectric structure further optionally includes dielectric material formed over the shield structure.[0079] In Example 25, the subject matter of Example 14 where the dielectric structure is flexible.[0080] In Example 26, the subject matter of
Example 14 where the multi-conductor interconnect is flexible and is flexed to extend between the first group of contacts on the support structure and the second group of contacts on the first semiconductor die. [0081] In Example 27, the subject matter of Example 14 where the multiple conductors of the first group of multiple conductors alternate with the conductors of the second group of multiple conductors.[0082] In Example 28, the subject matter of any one or more of Examples 14-27 where the support structure further optionally includes a third group of contacts; where the first semiconductor die optionally includes a fourth group of contacts on a surface facing away from the support structure; and further including a second multi-conductor interconnect, the second multi-conductor interconnect including, a respective first group of multiple conductors extending in generally spaced relation to one another, with each conductor having first and second contact surfaces on opposing ends, the first contact surfaces coupled to respective contacts of the third group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the fourth group of contacts on the first semiconductor die, a respective second group of multiple conductors extending in spaced relation to the conductors of the first group of multiple conductors of the second interconnect, and interleaved between the conductors of such first group, and a respective dielectric structure electrically isolating the conductors of the first group of multiple conductors of the second interconnect from the conductors of the second group of multiple conductors of the second interconnect, and retaining such conductors in the described orientations.[0083] In Example 29, the subject matter of Example 28 where the second multi-conductor interconnect is flexible.[0084] In Example 30, the subject matter of Example 28 where at least a portion of each of the first and third groups of
contacts on the support structure extend in parallel to one another; where at least a portion of each of the second and fourth groups of contacts on the semiconductor die extend generally parallel to one another; and where the second multi-conductor interconnect extends above the first multi-conductor interconnect.[0085] In Example 31, the subject matter of Example 28 where at least a portion of each of the first and third groups of contacts on the support structure extend generally linearly along lines generally perpendicular to one another; where at least a portion of each of the second and fourth groups of contacts on the semiconductor die extend generally linearly along different sides of the semiconductor die; and where the second multi-conductor interconnect extends generally perpendicularly to the first multi-conductor interconnect.[0086] In Example 32, the subject matter of any one or more of Examples 14-27 optionally includes where the support structure further optionally includes a third group of contacts, and further including: a second semiconductor die stacked over the first semiconductor die and including a fourth group of contacts on a surface facing away from the support structure; and a second multi-conductor interconnect, the second multi-conductor interconnect including, a respective first group of multiple conductors extending in generally spaced relation to one another, with each conductor having first and second contact surfaces on opposing ends, the first contact surfaces coupled to respective contacts of the third group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the fourth group of contacts on the second semiconductor die, a respective second group of multiple conductors extending in spaced relation to the conductors of the first group of multiple conductors of the second interconnect, and interleaved between the conductors of such first group, and a respective dielectric structure
electrically isolating the conductors of the first group of multiple conductors of the second interconnect from the conductors of the second group of multiple conductors of the second interconnect, and retaining such conductors in the described orientations.[0087] In Example 33, the subject matter of Example 32 where the second semiconductor die is stacked with a first side of the second semiconductor die proximate the fourth group of contacts essentially aligned with a first side of the underlying first semiconductor die proximate the second group of contacts, and where the first multi-conductor interconnect couples to the second group of contacts at a location vertically between the first and second semiconductor die.[0088] In Example 34, the subject matter of Example 32 optionally includes an adhesive layer mechanically coupling the second semiconductor die to the underlying first semiconductor die at a first vertical separation distance; and where the first multi-conductor interconnect is coupled to the second group of contacts within the first vertical separation distance.[0089] In Example 35, the subject matter of any one or more of Examples 14-27 where the first semiconductor die is a processor.[0090] In Example 36, the subject matter of any one or more of Examples 14-27 where the first semiconductor die is a memory device.[0091] In Example 37, the subject matter of any one or more of Examples 14-27 where the first semiconductor die is one of: a processor, a chipset, a memory device, and a graphics processor.[0092] In Example 38, the subject matter of any one or more of Examples 14-27 where the support structure optionally includes one or more of a processor, a chipset, a memory device, a field programmable gate array, and a graphics processor.[0093] Example 39 is a method of making an interconnect for a microelectronic device package, including: forming multiple conductors extending in spaced relation with one another on a surface; forming a dielectric material
over the multiple conductors electrically isolating the multiple conductors from one another; and forming an electrical connection between selected spaced conductors of the multiple conductors.[0094] In Example 40, the subject matter of Example 39 where forming the multiple conductors optionally includes forming the multiple conductors to extend generally parallel to one another.[0095] In Example 41, the subject matter of Example 39 optionally includes forming a shield layer extending above the multiple conductors, where the shield layer forms an electrical connection between selected spaced conductors of the multiple conductors. [0096] In Example 42, the subject matter of Example 39 optionally includes where forming the multiple conductors optionally includes forming a first group of conductors having contact surfaces on each end.[0097] In Example 43, the subject matter of Example 39 where forming an electrical connection between selected spaced conductors of the multiple conductors optionally includes forming the electrical connection between conductors of a second group of conductors separate from the first group of conductors.[0098] In Example 44, the subject matter of Example 39 where forming an electrical connection between selected spaced conductors of the multiple conductors optionally includes forming the electrical connection between a second group of conductors interleaved with the first group of conductors.[0099] In Example 45, the subject matter of Example 44 where conductors of the second group of multiple conductors alternate with the conductors of the first group of multiple conductors.[00100] In Example 46, the subject matter of Example 39 where forming the dielectric layer optionally includes forming openings to the selected spaced conductors of the multiple conductors; and where forming an electrical connection between the selected spaced multiple conductors optionally includes forming a shield layer which extends through the openings to the selected
spaced conductors.[00101] In Example 47, the subject matter of Example 46 where forming the shield layer optionally includes electroplating a metal material to form the shield layer.[00102] In Example 48, the subject matter of Example 46 optionally includes forming a dielectric layer over at least a portion of the shield layer.[00103] In Example 49, the subject matter of Example 46 where the openings are formed by laser drilling.[00104] In Example 50, the subject matter of any one or more of Examples 39-49 where forming the multiple conductors on the surface optionally includes patterning a metal foil layer supported by a carrier. [00105] In Example 51, the subject matter of Example 50 where patterning the metal foil layer optionally includes using a metal routing process.[00106] In Example 52, the subject matter of Example 50 where patterning the metal foil layer optionally includes photolithography and etching of the metal foil layer.[00107] In Example 53, the subject matter of any one or more of Examples 39-44 where forming multiple conductors extending in spaced relation with one another on a surface optionally includes arranging wires in spaced relation with one another on the surface.[00108] In Example 54, the subject matter of any one or more of Examples 39-49 where the dielectric material is flexible.[00109] Example 55 is a method of forming a microelectronic device, including: attaching a first semiconductor die to a support structure having a first group of contacts, the first semiconductor die extending over at least a portion of the support structure, the first semiconductor die having a second group of contacts on a surface facing away from the support structure; electrically coupling contacts of the first group of contacts on the support structure to respective contacts of the second group of contacts on the first semiconductor die through a multi-conductor interconnect, including, a first group of multiple conductors extending in spaced relation to one
another, with each conductor having contact surfaces at first and second ends, a second group of multiple conductors interleaved between the conductors of the first group of multiple conductors, the conductors of the second group of multiple conductors electrically coupled with one another, and a dielectric structure extending between the conductors of the first and second groups and retaining the first and second groups of multiple conductors in their respective orientations, the electrically coupling further including, coupling the first contact surfaces to respective contacts of the first group of contacts on the support structure, and coupling the second contact surfaces to respective contacts of the second group of contacts on the first semiconductor die, and establishing an electrical connection between a first reference voltage node on the first semiconductor die and a second reference voltage node on the support structure through at least some portion of the second group of multiple conductors.[00110] In Example 56, the subject matter of Example 55 where establishing the electrical connection between the first reference voltage node and the second reference voltage node optionally includes: establishing a first electrical connection between the first reference voltage node on the first semiconductor die and a first conductor of the second group of multiple conductors; and establishing a second electrical connection between the second reference voltage node on the support structure and a second conductor of the second group of multiple conductors.[00111] In Example 57, the subject matter of Example 55 where the support structure optionally includes a second semiconductor die.[00112] In Example 58, the subject matter of Example 55 where the support structure optionally includes a substrate.[00113] In Example 59, the subject matter of Example 55 where the support structure optionally includes a redistribution layer.[00114] In Example 60, the subject matter of
Example 55 where the multi-conductor interconnect further optionally includes a shield structure.[00115] In Example 61, the subject matter of Example 60 where the shield structure is coupled to conductors of the second group of conductors.[00116] In Example 62, the subject matter of Example 55 optionally including at least 50 conductors in the first group of conductors.[00117] In Example 63, the subject matter of Example 55 optionally including at least 100 conductors in the first group of conductors.[00118] In Example 64, the subject matter of Example 55 optionally including at least 200 conductors in the first group of conductors.[00119] In Example 65, the subject matter of Example 64 where the dielectric structure of the multi-conductor interconnect extends around the first and second sets of conductors. [00120] In Example 66, the subject matter of Example 65 where the multi-conductor interconnect structure further optionally includes a shield structure, and where the dielectric structure further extends around the shield structure.[00121] In Example 67, the subject matter of any one or more of Examples 55-66: where the support structure further optionally includes a third group of contacts; where the first semiconductor die optionally includes a fourth group of contacts on a surface facing away from the support structure; and further including electrically coupling respective contacts of the third group of contacts to respective contacts of the fourth group of contacts.[00122] In Example 68, the subject matter of Example 67 where electrically coupling respective contacts of the third group of contacts to respective contacts of the fourth group of contacts optionally includes coupling the respective contacts through use of a second multi-conductor interconnect, including: a respective first group of multiple conductors extending in spaced relation to one another, with each conductor having contact surfaces on opposing ends, a respective second group of multiple
conductors interleaved between the conductors of the respective first group of multiple conductors, the conductors of the respective second group of multiple conductors electrically coupled with one another, and a dielectric structure extending over and between the conductors of the first and second groups and retaining the first and second groups of multiple conductors in their respective orientations, the electrically coupling further including, coupling the first contact surfaces to respective contacts of the third group of contacts on the support structure, and coupling the second contact surfaces to respective contacts of the fourth group of contacts on the first semiconductor die, establishing at least one electrical connection between a reference voltage node on the first semiconductor die and the second group of multiple conductors of the second multi-conductor interconnect, and establishing at least one electrical connection between a reference voltage node on the support structure and the second group of multiple conductors of the second multi-conductor interconnect. 
[00123] In Example 69, the subject matter of Example 68: where at least a portion of each of the first and third groups of contacts on the support structure extend in parallel to one another; where at least a portion of each of the second and fourth groups of contacts on the semiconductor die extend generally parallel to one another; and where the second multi-conductor interconnect extends above the first multi-conductor interconnect.[00124] In Example 70, the subject matter of Example 68 where at least a portion of each of the first and third groups of contacts on the support structure extend generally linearly along lines generally perpendicular to one another; where at least a portion of each of the second and fourth groups of contacts on the semiconductor die extend generally linearly along different sides of the semiconductor die; and where the second multi-conductor interconnect extends generally perpendicularly to the first multi-conductor interconnect.[00125] In Example 71, the subject matter of any one or more of Examples 55-66 where the support structure further optionally includes a third group of contacts, and further including: attaching a second semiconductor die over the first semiconductor die, the second semiconductor die including a fourth group of contacts on a surface facing away from the support structure; and electrically connecting contacts of the third group of contacts to the fourth group of contacts through use of a second multi-conductor interconnect, the second multi-conductor interconnect including, a respective first group of multiple conductors extending in spaced generally relation to one another, with each conductor having first and second contact surfaces on opposing ends, the first contact surfaces coupled to respective contacts of the third group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the fourth group of contacts on the second semiconductor die, a respective second 
group of multiple conductors extending in spaced relation to the conductors of the first group of multiple conductors of the second interconnect, and interleaved between the conductors of such first group, and a respective dielectric structure electrically isolating the conductors of the first group of multiple conductors of the second interconnect from the conductors of the second group of multiple conductors of the second interconnect, and retaining such conductors in the described orientations.[00126] In Example 72, the subject matter of Example 71: where attaching the second semiconductor die optionally includes placing the second semiconductor die with a first side of the second semiconductor die essentially aligned with a first side of the underlying first semiconductor die proximate the second group of contacts, and where electrically connecting contacts of the third group of contacts to the fourth group of contacts optionally includes coupling the first multi-conductor interconnect to the second group of contacts at a location vertically between the first and second semiconductor die.[00127] In Example 73, the subject matter of Example 72 where attaching a second semiconductor die over the first semiconductor die optionally includes mechanically coupling the second semiconductor die to the underlying first semiconductor die at a first vertical separation distance through use of an adhesive material layer; and where the first multi-conductor interconnect is coupled to the second group of contacts within the first vertical separation distance.[00128] In Example 74, the subject matter of any one or more of Examples 55-66 where electrically coupling contacts of the first group of contacts on the support structure to respective contacts of the second group of contacts on the first semiconductor die optionally includes attaching respective contact surfaces of the interconnect to respective contacts of the first and second groups of contacts, through use of one or 
more of: a thermal compression bonding process, a solder diffusion process, a solder reflow process, a surface activated bonding process.[00129] Example 75 is an electronic system, including: a microelectronic device, including, a support structure having a first group of contacts; a first semiconductor die coupled to the support structure, the first semiconductor die having a second group of contacts on a surface facing away from the support structure; and a first multi-conductor interconnect, including, a first group of multiple conductors extending in spaced generally relation to one another, with each conductor having first and second contact surfaces on first and second ends, the first contact surfaces coupled to respective contacts of the first group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the second group of contacts on the first semiconductor die, a second group of multiple conductors extending in spaced relation to the conductors of the first group, and interleaved between the conductors of the first group, and a dielectric structure electrically isolating the conductors of the first group from the conductors of the second group.[00130] In Example 76, the subject matter of Example 75 where the first semiconductor die is a processor.[00131] In Example 77, the subject matter of Example 75 where the first semiconductor die is a memory device.[00132] In Example 78, the subject matter of Example 75 optionally includes where the first semiconductor die is one of: a processor, a chipset, a memory device, a field programmable gate array, and a graphics processor.[00133] In Example 79, the subject matter of Example 75 where the support structure optionally includes one or more of a processor, a chipset, a memory device, a field programmable gate array, and a graphics processor.[00134] In Example 80, the subject matter of Example 75 optionally includes at least one of a mass storage device and a network 
interface operably coupled to the microelectronic device.[00135] In Example 81, the subject matter of Example 75 where the support structure optionally includes a second semiconductor die.[00136] In Example 82, the subject matter of Example 75 where the support structure optionally includes a substrate.[00137] In Example 83, the subject matter of Example 75 where the support structure optionally includes a redistribution layer.[00138] In Example 84, the subject matter of Example 75 where the multi-conductor interconnect further optionally includes a shield structure. [00139] In Example 85, the subject matter of Example 84 where the shield structure is coupled to conductors of the second group of conductors.[00140] In Example 86, the subject matter of any one or more of Examples 75-85 where the support structure further optionally includes a third group of contacts, and further including: a second semiconductor die stacked over the first semiconductor die and including a fourth group of contacts on a surface facing away from the support structure; and a second multi-conductor interconnect, the second multi-conductor interconnect including, a respective first group of multiple conductors extending in spaced generally relation to one another, with each conductor having first and second contact surfaces on opposing ends, the first contact surfaces coupled to respective contacts of the third group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the fourth group of contacts on the second semiconductor die, a respective second group of multiple conductors extending in spaced relation to the conductors of the first group of multiple conductors of the second interconnect, and interleaved between the conductors of such first group, and a respective dielectric structure electrically isolating the conductors of the first group of multiple conductors of the second interconnect from the conductors of the second group of 
multiple conductors of the second interconnect, and retaining such conductors in the described orientations.[00141] In Example 87, the subject matter of any one or more of Examples 75-85 where the support structure further optionally includes a third group of contacts; where the first semiconductor die optionally includes a fourth group of contacts on a surface facing away from the support structure; and further including a second multi-conductor interconnect, the second multi-conductor interconnect including, a respective first group of multiple conductors extending in spaced generally relation to one another, with each conductor having first and second contact surfaces on opposing ends, the first contact surfaces coupled to respective contacts of the third group of contacts on the support structure, and the second contact surfaces coupled to respective contacts of the fourth group of contacts on the first semiconductor die, a respective second group of multiple conductors extending in spaced relation to the conductors of the first group of multiple conductors of the second interconnect, and interleaved between the conductors of such first group, and a respective dielectric structure electrically isolating the conductors of the first group of multiple conductors of the second interconnect from the conductors of the second group of multiple conductors of the second interconnect, and retaining such conductors in the described orientations.[00142] In Example 88, the subject matter of Example 87 where at least a portion of each of the first and third groups of contacts on the support structure extend in parallel to one another; where at least a portion of each of the second and fourth groups of contacts on the semiconductor die extend generally parallel to one another; and where the second multi-conductor interconnect extends above the first multi-conductor interconnect.[00143] In Example 89, the subject matter of Example 87 where at least a portion of each of the 
first and third groups of contacts on the support structure extend generally linearly along lines generally perpendicular to one another; where at least a portion of each of the second and fourth groups of contacts on the semiconductor die extend generally linearly along different sides of the semiconductor die; and where the second multi-conductor interconnect extends generally perpendicularly to the first multi-conductor interconnect.[00144] In Example 90, the subject matter of Example 87, including a microelectronic device including any of the structures identified in one or more of Examples 1-38.[00145] In Example 91, the subject matter of Example 87, including a microelectronic device manufactured in accordance with any of the methods set forth in Examples 55-72.[00146] In Example 92, the subject matter of Example 87, including an interconnect manufactured in accordance with any of the methods of Examples 39-54. [00147] Example 93 is a microelectronic device having a structure in accordance with any one or more of Examples 1-54.[00148] In Example 94, the microelectronic device of Example 93 including any one or more components manufactured in accordance with any one or more of Examples 39-54.[00149] The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as "examples." Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. 
Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.[00150] In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In this document, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. [00151] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to comply with 37 C.F.R. §1.72(b), to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. 
Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. 
Methods and systems may involve identifying source content associated with an activity of a user with respect to a first media source. Discovery content may be captured from one or more additional media sources based on the source content, wherein the discovery content may be presented to the user if at least a portion of the discovery content is tangential to the source content. |
1. At least one computer accessible storage medium comprising a set of instructions which, if executed by a processor, cause a computer to: identify source content associated with an activity of a user with respect to a first media source, wherein the source content includes one or more of metadata, one or more keywords, music lyrics, closed captioned information, subtitle information, video information, and audio information; capture discovery content from one or more additional media sources based on the source content; present the discovery content to the user if at least a portion of the discovery content is tangential to the source content; detect one or more user selections from the discovery content; identify one or more differences between the source content and the discovery content based on the one or more user selections; identify one or more macro content areas based on the source content, the discovery content, the one or more user selections, and the one or more differences; determine whether the one or more differences are dependent on whether the user is associated with a group of individuals during the activity; determine whether the one or more differences are media platform dependent; and create a weighted data set based on the source content, the discovery content, the one or more user selections, the one or more differences, and the one or more macro content areas.2. The medium of claim 1, wherein the activity includes watching television.3. The medium of claim 1, wherein the activity includes listening to music.4. The medium of claim 1, wherein the activity includes reading a book.5. 
At least one computer accessible storage medium comprising a set of instructions which, if executed by a processor, cause a computer to: identify source content associated with an activity of a user with respect to a first media source; capture discovery content from one or more additional media sources based on the source content; and present the discovery content to the user if at least a portion of the discovery content is tangential to the source content.6. The medium of claim 5, wherein the instructions, if executed, cause a computer to: detect one or more user selections from the discovery content; and identify one or more differences between the source content and the discovery content based on the one or more user selections.7. The medium of claim 6, wherein the instructions, if executed, cause a computer to identify one or more macro content areas based on the source content, the discovery content, the one or more user selections, and the one or more differences.8. The medium of claim 7, wherein the instructions, if executed, cause a computer to create a weighted data set based on the source content, the discovery content, the one or more user selections, the one or more differences, and the one or more macro content areas.9. The medium of claim 6, wherein the instructions, if executed, cause a computer to determine whether the one or more differences are dependent on whether the user is associated with a group of individuals during the activity.10. The medium of claim 6, wherein the instructions, if executed, cause a computer to determine whether the one or more differences are media platform dependent.11. The medium of claim 5, wherein the activity includes one or more of watching television, listening to music, and reading a book.12. The medium of claim 5, wherein the source content includes one or more of metadata, one or more keywords, music lyrics, closed captioned information, subtitle information, video information, and audio information.13. 
A computing platform comprising: a display device; a source module configured to identify source content associated with an activity of a user with respect to a first media source; a discovery module configured to capture discovery content from one or more additional media sources based on the source content; and a presentation module configured to display the discovery content to the user via the display device if at least a portion of the discovery content is tangential to the source content.14. The platform of claim 13, wherein the discovery module is configured to: detect one or more user selections from the discovery content; and identify one or more differences between the source content and the discovery content based on the one or more user selections.15. The platform of claim 14, wherein the discovery module is configured to identify one or more macro content areas based on the source content, the discovery content, the one or more user selections, and the one or more differences.16. The platform of claim 15, wherein the discovery module is configured to create a weighted data set based on the source content, the discovery content, the one or more user selections, the one or more differences, and the one or more macro content areas.17. The platform of claim 14, wherein the discovery module is configured to determine whether the one or more differences are dependent on whether the user is associated with a group of individuals during the activity.18. The platform of claim 14, wherein the discovery module is configured to determine whether the one or more differences are media platform dependent.19. The platform of claim 13, wherein the activity includes one or more of watching television, listening to music, and reading a book.20. 
The platform of claim 13, wherein the source content includes one or more of metadata, one or more keywords, music lyrics, closed captioned information, subtitle information, video information, and audio information.21. An apparatus comprising logic to: identify source content associated with an activity of a user with respect to a first media source; capture discovery content from one or more additional media sources based on the source content; and present the discovery content to the user if at least a portion of the discovery content is tangential to the source content.22. The apparatus of claim 21, wherein the logic is to: detect one or more user selections from the discovery content; and identify one or more differences between the source content and the discovery content based on the one or more user selections.23. The apparatus of claim 22, wherein the logic is to identify one or more macro content areas based on the source content, the discovery content, the one or more user selections, and the one or more differences.24. The apparatus of claim 23, wherein the logic is to create a weighted data set based on the source content, the discovery content, the one or more user selections, the one or more differences, and the one or more macro content areas.25. The apparatus of claim 22, wherein the logic is to determine whether the one or more differences are dependent on whether the user is associated with a group of individuals during the activity.26. The apparatus of claim 22, wherein the logic is to determine whether the one or more differences are media platform dependent.27. The apparatus of claim 21, wherein the activity includes one or more of watching television, listening to music, and reading a book.28. 
The apparatus of claim 21, wherein the source content includes one or more of metadata, one or more keywords, music lyrics, closed captioned information, subtitle information, video information, and audio information.29. A computer implemented method comprising: identifying source content associated with an activity of a user with respect to a first media source, wherein the source content includes one or more of metadata, one or more keywords, music lyrics, closed captioned information, subtitle information, video information, and audio information; capturing discovery content from one or more additional media sources based on the source content; presenting the discovery content to the user if at least a portion of the discovery content is tangential to the source content; detecting one or more user selections from the discovery content; identifying one or more differences between the source content and the discovery content based on the one or more user selections; identifying one or more macro content areas based on the source content, the discovery content, the one or more user selections, and the one or more differences; determining whether the one or more differences are dependent on whether the user is associated with a group of individuals during the activity; determining whether the one or more differences are media platform dependent; and creating a weighted data set based on the source content, the discovery content, the one or more user selections, the one or more differences, and the one or more macro content areas.30. The method of claim 29, wherein the activity includes watching television.31. The method of claim 29, wherein the activity includes listening to music.32. The method of claim 29, wherein the activity includes reading a book. 
USING DISCOVERY TO UNDERSTAND USER BEHAVIOR, INTERESTS AND PREFERENCES

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of priority to U.S. Provisional Patent Application No. 61/533,457, filed on Sep. 12, 2011.

BACKGROUND

Traditional search engines may strive to select search results based on keywords or topics entered by a user in order to direct the user to a particular end goal. Such a strategy may not be the most effective at understanding user behavior, interests, and preferences.

BRIEF DESCRIPTION OF THE DRAWINGS

The various advantages of the embodiments of the present invention will become apparent to one skilled in the art from the following specification and appended claims, and by reference to the following drawings, in which:

FIG. 1 is a block diagram of an example of a scheme of identifying discovery content according to an embodiment;

FIG. 2 is a block diagram of an example of a discovery architecture according to an embodiment;

FIG. 3 is a flowchart of an example of a method of identifying discovery content according to an embodiment; and

FIG. 4 is a block diagram of an example of a computing platform according to an embodiment.

DETAILED DESCRIPTION

Embodiments may include at least one computer accessible storage medium having a set of instructions which, if executed by a processor, cause a computer to identify source content associated with an activity of a user with respect to a first media source. The instructions may also cause the computer to capture discovery content from one or more additional media sources based on the source content, and present the discovery content to the user if at least a portion of the discovery content is tangential to the source content.

Embodiments may also include a computing platform having a display device and a source module configured to identify source content associated with an activity of a user with respect to a first media source. 
The platform may also include a discovery module configured to capture discovery content from one or more additional media sources based on the source content, and a presentation module configured to present the discovery content to the user via the display device if at least a portion of the discovery content is tangential to the source content.

Other embodiments may include a device having logic configured to identify source content associated with an activity of a user with respect to a first media source and to capture discovery content from one or more additional media sources based on the source content. The logic may also present the discovery content to the user if at least a portion of the discovery content is tangential to the source content.

Moreover, embodiments may include at least one computer accessible storage medium having a set of instructions which, if executed by a processor, cause a computer to identify source content associated with an activity of a user with respect to a first media source. The source content may include one or more of metadata, one or more keywords, music lyrics, closed captioned information, subtitle information, video information, and audio information. The instructions may also cause the computer to capture discovery content from one or more additional media sources based on the source content and present the discovery content to the user if at least a portion of the discovery content is tangential to the source content. Additionally, the instructions may cause the computer to detect one or more user selections from the discovery content, identify one or more differences between the source content and the discovery content based on the one or more user selections, and identify one or more macro content areas based on the source content, the discovery content, the one or more user selections, and the one or more differences. Additionally, the instructions may cause the computer to determine whether the one or more differences are dependent on whether the user is associated with a group of individuals during the activity and determine whether the one or more differences are media platform dependent. 
The instructions may also cause the computer to create a weighted data set based on the source content, the discovery content, the one or more user selections, the one or more differences, and the one or more macro content areas.

Moreover, embodiments may be directed to a computer implemented method in which source content associated with an activity of a user is identified with respect to a first media source, wherein the source content includes one or more of metadata, one or more keywords, music lyrics, closed captioned information, subtitle information, video information, and audio information. The method may also provide for capturing discovery content from one or more additional media sources based on the source content, presenting the discovery content to the user if at least a portion of the discovery content is tangential to the source content, and detecting one or more user selections from the discovery content. Moreover, the method may include identifying one or more differences between the source content and the discovery content based on the one or more user selections, and identifying one or more macro content areas based on the source content, the discovery content, the one or more user selections, and the one or more differences. Additionally, the method may involve determining whether the one or more differences are dependent on whether the user is associated with a group of individuals during the activity, determining whether the one or more differences are media platform dependent, and creating a weighted data set based on the source content, the discovery content, the one or more user selections, the one or more differences, and the one or more macro content areas.

Turning now to FIG. 1, a scheme 10 for identifying discovery content is shown. In general, a user/consumer source activity 12 may be identified, wherein the source activity 12 may include, for example, watching television (TV), listening to music, reading a book, and so forth. 
One or more topics and/or keywords may be extracted from the source activity 12. The topics/keywords may be web based, locally tagged, metadata enabled, and so forth. For example, music lyrics, audio or e-book text tracks, ambient video or audio information, embedded text such as closed caption information or subtitles, or any other associated metadata may be extracted from the source activity 12.

In the illustrated example, the source activity 12 is used to capture discovery content 14 from one or more additional media sources. The discovery content 14 may be presented to the user, wherein the user may select from the discovery content 14, which may result in the identification of further discovery content 16, 18. For example, if the term "Hawaii" is encountered as a closed captioned or subtitled word while the user is watching a program about the U.S. Navy, then "Hawaii" may be used to identify and capture content from a wide range of sources. Many types of Hawaii related content (e.g., photos, recipes, music, architecture, biographies, outriggers, social networking messages, products, etc.) may be displayed to the user. In such an example, the content shown might not involve the military, or some of the content may involve the military, depending on the web sources provided or the web services selected by the user as sources. Thus, at least a portion of the discovery content 14, 16, 18 may, by design, be tangential to the content extracted from the source activity 12.

The user's selections from the displayed discovery content may branch into many areas of interest, whether new areas of interest or old ones. As such, as user interests change and evolve, they are not merely identified, but are used to proactively show the user opportunities for growth and opportunities to learn or experience new things. 
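The extraction and capture flow described above can be sketched in a few lines. This is a hypothetical illustration only: the stop-word list, the title-case heuristic, and the toy photo/recipe/music sources are assumptions standing in for whatever keyword extraction and media services an actual implementation would use.

```python
# Sketch: pull candidate topics (e.g., "Hawaii") out of closed caption text,
# then query several additional media sources for tangential discovery items.

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "on"}

def extract_keywords(caption_text):
    """Return capitalized, non-stop-word tokens as candidate topics."""
    words = caption_text.replace(",", " ").replace(".", " ").split()
    return [w for w in words if w.istitle() and w.lower() not in STOP_WORDS]

def capture_discovery(keywords, media_sources):
    """Query each media source with each keyword and pool the results."""
    discovery = []
    for kw in keywords:
        for name, fetch in media_sources.items():
            discovery.extend({"source": name, "keyword": kw, "item": item}
                             for item in fetch(kw))
    return discovery

# Toy media sources standing in for photo, recipe, and music services.
media_sources = {
    "photos":  lambda kw: [f"{kw} beach photo"],
    "recipes": lambda kw: [f"{kw} poke recipe"],
    "music":   lambda kw: [f"{kw} slack-key guitar track"],
}

caption = "The fleet returned to Hawaii after the exercise."
items = capture_discovery(extract_keywords(caption), media_sources)
print(len(items))  # three tangential items for the single keyword "Hawaii"
```

Note that none of the captured items concern the Navy program itself; each is tangential to the extracted keyword, mirroring the Hawaii example above.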
Thus, the illustrated approach is less a historical insight into the user's past interests, and more a representation of a forward path and a mechanism to preemptively determine and direct which areas of interest the particular user is leaning toward.

Ultimately, such an understanding of the individual user (as opposed to the aggregated majority opinions created by search) provides another side of information about the user based on areas of interest (which may have links to other areas of interest). The illustrated approach may also provide a preemptive mechanism for presenting interesting things to please, entertain, and alert the user. Of course, these bits of interesting content may generate interest and stimulate demand for more relevant content in that area. Briefly, the illustrated scheme 10 can represent the serendipity of exploration and an extension of interests and concepts based on accidental discovery that not only enhances known interests, but also identifies or creates new interests on the part of the end user.

In particular, the user's responses can be captured in a data set that links in combination the source activity 12 and/or content, the resulting end-user-selected discovery content 14, 16, 18, and subsequent user actions. This data set can be used to compare and correlate multiple events and their results at different times/dates on the same or different platforms. Such an approach can create a weighted data set that reflects the user's interests and preferences in various contexts.

The illustrated approach may also highlight new areas of interest that were introduced to the user during the discovery process. As a result, a deeper understanding of the user's inclinations, and of the new things the user has discovered and liked (representing new opportunities), can be achieved.
Indeed, the discovery content 14, 16, 18 can highlight the direction in which the user is actually heading with respect to a macro interest area. For example, "exploration" can be identified as a propensity based on differences in the links between Hawaii, New Guinea, New Zealand, and the Galapagos Islands, based on observations of perceived interests across different types of historical, shopping, and cooking programs.

Of special note with respect to the Hawaii example is that a typical search engine might simply return the most popular or demographically relevant Hawaii items from one or more cloud servers. In the discovery system described herein, however, the user may be presented with non-linear content (e.g., discovery items that are not directly related to the primary experience, such as watching TV). Accordingly, a determination can be made that the user is less interested in Hawaii per se and more interested in isolated tropical islands and exploration. In addition, the discovery approach can be independent of the aggregation of information into statistically large data sets in the cloud (although that is an option). Rather, the data can pertain to the individual in pure form. As a result, the overall accuracy of the user data can be specific to each user, as each platform carries its own user context and preference information.

Moreover, the illustrated scheme 10 can be extended to a broader discovery framework in which discovery data is tracked for multiple users in the same household, or for linked friends who are interacting with one another and may be watching or sharing in the same primary experience. Briefly, the illustrated scheme 10 may enable an understanding of how preferences and selections may change in group settings (e.g., when the family is participating versus when the user is alone).

FIG. 2 illustrates an architecture 20 in which an extractor 22 is used to identify metadata (e.g., tags, keywords, etc.) 26 based on user activity with respect to a device and/or media source such as a TV 24.
In the illustrated example, the metadata 26 is fed into a discovery engine 28, which identifies one or more additional media discovery sources 30 and issues queries 32 for discovery content, such as the discovery content 14, 16, 18 (FIG. 1), to those sources 30. The discovery content obtained from the sources 30 can be presented to the user through a user interface (UI) or other suitable interface of the discovery engine 28, wherein a weighted data set 34 can be created and maintained based on the user's interests and preferences in various contexts.

For example, while a conventional search engine may use parameters entered by a user into a text field to find a "correct" answer, the illustrated architecture 20 relies on neither a user-accessible entry field nor an end result. Rather, the illustrated architecture 20 uses the person's activities/experiences as input (e.g., what they are doing, how they are being entertained or educated, what web services they choose, etc.). This input can be used to capture discovery data, wherein the discovery data highlights the person's overall areas of interest. The discovery data can also be used to generate additional modifiers and coefficients that can be used to tailor target content to the end user's particular interests. In addition, the discovery data can enable new areas to be identified for presentation to the end user in the future.

The concepts reflected in the illustrated architecture 20 can thus be considered in terms of the "primary experience", the "discovery experience", "differences", and "macros". For example, the primary experience can be characterized as the user's primary focus, such as watching TV, listening to music, or reading a book or e-book.
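The weighted data set 34 can be sketched as a small accumulator keyed by topic and viewing context, so that the same topic can carry different weights when the user is alone versus in a group. The class and method names below are illustrative assumptions, not part of the disclosure:

```python
from collections import defaultdict

class WeightedDataSet:
    """Toy weighted data set (cf. item 34): accumulates per-context
    interest weights from discovery-content selections."""

    def __init__(self):
        self.weights = defaultdict(float)

    def record_selection(self, topic, context, weight=1.0):
        # Key each selection by (topic, context) so the same topic can
        # carry different weights when alone vs. with family or friends.
        self.weights[(topic, context)] += weight

    def preference(self, topic, context):
        return self.weights[(topic, context)]

ds = WeightedDataSet()
ds.record_selection("hawaii", "alone")
ds.record_selection("hawaii", "alone")
ds.record_selection("hawaii", "family")
print(ds.preference("hawaii", "alone"))  # 2.0
```

Comparing `preference("hawaii", "alone")` against `preference("hawaii", "family")` is one simple way the engine could detect the group-setting differences discussed above.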
On the other hand, the discovery experience can be viewed as an experience that can be encountered/implemented on the same or a different platform, in which new and tangential content can be introduced to the user to determine whether the user welcomes new and interesting content (and what type of content).

Differences can also be identified, with the recognition that the discovery content selected by the user can represent an extension of the primary experience as part of a larger macro that triggers some interest or provides a broader framework to link the end user's respective interests. Such differences can extend the user's interests based on multiple primary experiences (e.g., TV and e-books) and on associations generated by the user autonomously or with a group of people, where those associations can affect others' interests and decisions when choosing content.

As already mentioned, the macros can constitute a larger content sphere that helps to merge the various selections. For example, selections of a type of product or activity, or of multiple geographic locations, may be "macro'd" into an "adventure" sphere that helps identify the type of user, and identifies how much more aggressively the discovery content can be extended beyond the scope of the primary experience topic.

Turning now to FIG. 3, a method 36 of identifying discovery content is illustrated. The illustrated method 36 can be implemented, for example, in fixed-functionality hardware using circuit technology such as application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS), or transistor-transistor logic (TTL) technology, or any combination thereof, or, for example, as a set of executable logic instructions stored in a machine or computer readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), flash memory, firmware, microcode, and so forth.
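The "macro" grouping described above can be sketched as a simple fold from concrete selections into larger spheres. The mapping and names below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical mapping from concrete topics to larger macro "spheres".
MACRO_MAP = {
    "hawaii": "adventure",
    "new guinea": "adventure",
    "galapagos islands": "adventure",
    "recipes": "cooking",
}

def macro_areas(selected_topics):
    """Fold individual discovery selections into the macro areas that
    link them (e.g., several remote islands -> 'adventure')."""
    return Counter(MACRO_MAP.get(topic, "other") for topic in selected_topics)

counts = macro_areas(["hawaii", "new guinea", "galapagos islands", "recipes"])
print(counts.most_common(1))  # [('adventure', 3)]
```

Here three geographically unrelated selections collapse into a single "adventure" sphere, which is the kind of larger propensity the macros are meant to surface.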
For example, computer program code to carry out the operations shown in the method 36 may be written in any combination of one or more programming languages, including an object oriented programming language such as C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. Additionally, various aspects of the illustrated functionality may be implemented as embedded logic of a processor using any of the aforementioned circuit technologies.

The illustrated processing block 38 provides for identifying source content associated with a user activity with respect to a first media source. As already noted, the activity may involve watching a video program, listening to audio content, reading, and so forth. At block 40, discovery content may be captured from one or more additional media sources based on the source content, wherein illustrated block 42 determines whether at least a portion of the discovery content is sufficiently tangential with respect to the media source (e.g., satisfies a "tangentiality threshold"). If not, the capture of the content can be repeated. Otherwise, the discovery content can be presented to the user at block 44.

FIG. 4 shows a computing platform 64 having a processor 66, system memory 68, a platform controller hub (PCH) 70, a mass storage device (e.g., hard disk drive/HDD, optical disk, flash memory, etc.) 72, a network interface/controller 74, one or more user interface (UI) devices 76, and various other controllers (not shown). The platform 64 can be part of, for example, a laptop computer, a personal digital assistant (PDA), a wireless smart phone, a media player, an imaging device, a mobile Internet device (MID), any smart device such as a smart phone or smart tablet, or the like, or any combination thereof. Additionally, the platform 64 can be part of a smart TV, personal computer (PC), server, workstation, and so forth.
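The flow of blocks 40, 42, and 44 (capture, tangentiality check, present) can be sketched as a loop. The scoring function, the threshold value, and the fetcher below are hypothetical stand-ins, not the method actually claimed:

```python
def tangential_score(source_keywords, item_keywords):
    """Crude overlap-based tangentiality: some shared terms, but not all."""
    shared = len(set(source_keywords) & set(item_keywords))
    return shared / max(len(set(item_keywords)), 1)

def discover(source_keywords, fetch, threshold=0.2, max_rounds=3):
    """Blocks 40/42/44 as a loop: capture content, repeat the capture while
    nothing meets the tangentiality threshold, then present what was kept."""
    for _ in range(max_rounds):
        candidates = fetch(source_keywords)  # block 40: capture
        kept = [c for c in candidates
                if tangential_score(source_keywords, c["keywords"]) >= threshold]
        if kept:                              # block 42: threshold met?
            return kept                       # block 44: present to user
    return []

# Hypothetical fetcher standing in for the additional media sources 30.
def fake_fetch(keywords):
    return [{"title": "Island recipes", "keywords": ["hawaii", "recipes"]},
            {"title": "Quarterly report", "keywords": ["finance"]}]

result = discover(["hawaii", "navy"], fake_fetch)
print([item["title"] for item in result])  # ['Island recipes']
```

The "Island recipes" item shares only one of its terms with the source keywords, so it passes the tangentiality check, while the unrelated finance item is filtered out.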
Accordingly, the processor 66 may include one or more processor cores capable of executing a set of stored logic instructions, and an integrated memory controller (IMC) 78 configured to communicate with the system memory 68. The system memory 68 may include, for example, dynamic random access memory (DRAM) configured as one or more memory modules such as dual inline memory modules (DIMMs), small outline DIMMs (SODIMMs), and so forth.

In the illustrated example, the processor 66 is configured to execute logic 80 that identifies source content associated with a user activity with respect to a first media source, captures discovery content from one or more additional media sources based on the source content, and presents the discovery content to the user via the UI device 76 if at least a portion of the discovery content is tangential to the source content. Thus, the logic 80 may include, for example, a source module, a discovery module, and/or a presentation module configured to implement one or more aspects of the method 36 (FIG. 3), already discussed.

The illustrated PCH 70, sometimes referred to as a Southbridge of a chipset, functions as a host device and can communicate with the network controller 74, which could, for example, provide off-platform wireless communication functionality for a wide variety of purposes.
Such purposes may include cellular telephone (e.g., Wideband Code Division Multiple Access/W-CDMA (Universal Mobile Telecommunications System/UMTS), CDMA2000 (IS-856/IS-2000), etc.), Wi-Fi (Wireless Fidelity, e.g., Institute of Electrical and Electronics Engineers/IEEE 802.11-2007, Wireless Local Area Network/LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications), LR-WPAN (Low-Rate Wireless Personal Area Network, e.g., IEEE 802.15.4-2006), Bluetooth (e.g., IEEE 802.15.1-2005, Wireless Personal Area Networks), WiMax (e.g., IEEE 802.16-2004, LAN/MAN Broadband Wireless LANs), GPS (Global Positioning System), spread spectrum (e.g., 900 MHz), and other radio frequency (RF) telephony purposes.

The network controller 74 may also provide off-platform wired communication functionality (e.g., RS-232 (Electronic Industries Alliance/EIA), Ethernet (e.g., IEEE 802.3-2005), power line communication (e.g., X10, IEEE P1675), USB (e.g., Universal Serial Bus, e.g., USB Specification 3.0, Rev. 1.0, November 12, 2008, USB Implementers Forum), DSL (digital subscriber line), cable modem, T1 connection, etc.). In one example, the platform 64 uses the network controller 74 to obtain the source content from another device, such as the TV 24 (FIG. 2), already discussed. The UI devices 76 (e.g., touch screen, liquid crystal display/LCD, light emitting diode/LED, keyboard, mouse, etc.) may enable the user to interact with and perceive information from the platform 64.

Thus, as an extension of search, or in parallel with search, the techniques described herein may be preemptive rather than reactive. In fact, only a very small percentage of the presented discovery content might be deliberately allocated to touch on the core topic, based on the use of keywords and metadata that reflect the user's real-time experience at that moment in the primary experience.
Because the selected discovery content can be displayed and tracked both when the user is alone and when the user is with others, differences in behavior, and in what the user tends to be interested in, can be identified in those different situations.

In addition, a joint discovery link between the primary experience and the discovery content selection process can be used to achieve a better understanding of the user. Moreover, the results of multiple primary experiences can be used to determine weighted preferences for discovered topics, and the popularity of new content based on the type of primary experience, what platform is being used, and whether the user is alone or with friends or family.

Certain aspects of embodiments of the invention may be implemented in hardware, software, or a combination thereof, and may be implemented in one or more computer systems or other processing systems. Program code can be applied to data entered using an input device to perform the functions described and to generate output information. The output information can be applied to one or more output devices. Those skilled in the art will appreciate that embodiments can be practiced with various computer system configurations, including multiprocessor systems, minicomputers, mainframe computers, and the like. Embodiments can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.

Each program can be implemented in a high level procedural or object oriented programming language to communicate with a processing system. However, programs can be implemented in assembly or machine language, if desired. In any case, the language can be compiled or interpreted.

Program instructions may be used to cause a general purpose or special purpose processing system that is programmed with the instructions to perform the methods described herein.
Alternatively, the methods may be performed by specific hardware components that contain hardwired logic for performing the methods, or by any combination of programmed computer components and custom hardware components. The methods described herein can be provided as a computer program product that can include at least one machine readable medium having stored thereon instructions that can be used to program a processing system or other electronic device to perform the methods. The terms "machine readable medium" and "machine accessible medium" as used herein shall include any medium that is capable of storing or encoding a sequence of instructions for execution by the machine and that causes the machine to perform any one of the methods described herein. The terms "machine readable medium" and "machine accessible medium" may accordingly include, but are not limited to, solid-state memories, optical and magnetic disks, and a carrier wave that encodes a data signal. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, logic, and so on), as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action or produce a result.

The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical, or other connections.
Moreover, the terms "first," "second," and the like may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

While various embodiments of the invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention as defined in the appended claims. Thus, the breadth and scope of the invention should not be limited by any of the above-described exemplary embodiments.
The invention relates to a pitch translation architecture for a semiconductor package including an embedded interconnect bridge. Various embodiments relate to the semiconductor package. The semiconductor package includes a first die, which includes a first bridge interconnect region, and a second die, which includes a second bridge interconnect region. The semiconductor package also includes a bridge die. The bridge die includes a first contact area to connect to the first bridge interconnect region and a second contact area to connect to the second bridge interconnect region. In the semiconductor package, the first bridge interconnect region is larger than the second bridge interconnect region. Additionally, each of the first bridge interconnect region and the second bridge interconnect region includes a plurality of conductive bumps. An average pitch between adjacent bumps of the first bridge interconnect region is larger than an average pitch between adjacent bumps of the second bridge interconnect region.
1. A semiconductor package comprising:
a first die comprising a first bridge interconnect region;
a second die comprising a second bridge interconnect region; and
a bridge die comprising a first contact region connected to the first bridge interconnect region and a second contact region connected to the second bridge interconnect region, wherein:
the first bridge interconnect region is larger than the second bridge interconnect region;
each of the first bridge interconnect region and the second bridge interconnect region comprises a plurality of conductive bumps; and
an average pitch between adjacent bumps of the first bridge interconnect region is larger than an average pitch between adjacent bumps of the second bridge interconnect region.

2. The semiconductor package of claim 1, wherein at least one of the first die and the second die is independently chosen from a central processing unit, a flash memory, a Wi-Fi transmitter, and a global positioning system.

3. The semiconductor package of claim 1 or 2, wherein the average pitch between adjacent bumps of the first bridge interconnect region is in a range of from about 10 times to about 0.25 times the average pitch between adjacent bumps of the second bridge interconnect region.

4. The semiconductor package of any one of claims 1-3, wherein the average pitch between adjacent bumps of the first bridge interconnect region is in a range of from about 75 microns to about 150 microns.

5. The semiconductor package of any one of claims 1-4, wherein the average pitch between adjacent bumps of the second bridge interconnect region is in a range of from about 20 microns to about 70 microns.

6. The semiconductor package of any one of claims 1-5, wherein the first die is larger than the second die according to at least one of surface area and volume.

7. The semiconductor package of any one of claims 1-6, wherein the second die further comprises a first breakout region, the first breakout region comprising a plurality of conductive bumps positioned adjacent the second bridge interconnect region at a first location.

8. The semiconductor package of claim 7, wherein a pitch between adjacent bumps of at least one of the first breakout region and a second breakout region is in a range of from about 10 times to about 0.5 times the pitch between adjacent bumps of the second bridge interconnect region.

9. A semiconductor package comprising:
a first die comprising a first bridge interconnect region;
a second die comprising a second bridge interconnect region; and
a bridge die comprising a first contact region connected to the first bridge interconnect region and a second contact region connected to the second bridge interconnect region, wherein:
the first bridge interconnect region is larger than the second bridge interconnect region;
the first die is larger than the second die according to at least one of surface area and volume;
each of the first bridge interconnect region and the second bridge interconnect region comprises a plurality of conductive bumps; and
an average pitch between adjacent bumps of the first bridge interconnect region is in a range of from about 10 times to about 0.25 times an average pitch between adjacent bumps of the second bridge interconnect region.

10. The semiconductor package of claim 9, wherein the average pitch between adjacent bumps of the first bridge interconnect region is in a range of from about 2 times to about 0.5 times the average pitch between adjacent bumps of the second bridge interconnect region.

11. The semiconductor package of claim 9 or 10, wherein the conductive bumps of at least one of the first bridge interconnect region and the second bridge interconnect region comprise copper.

12. The semiconductor package of any one of claims 9-11, wherein the first bridge interconnect region is in a range of from about 10 times to about 0.5 times the size of the second bridge interconnect region.

13. The semiconductor package of any one of claims 9-12, wherein the first bridge interconnect region is in a range of from about 5 times to about 2 times the size of the second bridge interconnect region.

14. The semiconductor package of any one of claims 9-13, wherein the second die further comprises a first breakout region, the first breakout region comprising a plurality of conductive bumps positioned adjacent the second bridge interconnect region at a first location.

15. The semiconductor package of claim 14, wherein the second die further comprises a second breakout region, the second breakout region comprising a plurality of conductive bumps positioned adjacent the second bridge interconnect region at a second location.

16. A method of making a semiconductor package, the method comprising:
connecting a first die to a bridge die along a first bridge interconnect region; and
connecting a second die to the bridge die along a second bridge interconnect region, wherein:
the first bridge interconnect region is larger than the second bridge interconnect region;
each of the first bridge interconnect region and the second bridge interconnect region comprises a plurality of conductive bumps; and
an average pitch between adjacent bumps of the first bridge interconnect region is larger than an average pitch between adjacent bumps of the second bridge interconnect region.

17. The method of claim 16, further comprising at least partially embedding at least one of the first die, the second die, and the bridge die in a substrate.

18. The method of claim 16, wherein the average pitch between adjacent bumps of the first bridge interconnect region is in a range of from about 10 times to about 0.25 times the average pitch between adjacent bumps of the second bridge interconnect region.

19. The method of any one of claims 16-18, wherein an average pitch between adjacent bumps of the first contact region is in a range of from about 2 times to about 0.5 times the average pitch between adjacent bumps of the second bridge interconnect region.

20. The method of any one of claims 16-19, wherein the average pitch between adjacent bumps of the first contact region is in a range of from about 75 microns to about 150 microns.

21. The method of any one of claims 16-20, wherein the average pitch between adjacent bumps of the first contact region is in a range of from about 75 microns to about 130 microns.

22. The method of any one of claims 16-21, wherein an average pitch between adjacent bumps of the second contact region is in a range of from about 20 microns to about 70 microns.

23. The method of any one of claims 16-22, wherein the first die is larger than the second die according to at least one of surface area and volume.

24. The method of any one of claims 16-23, further comprising forming a plurality of conductive bumps on the bridge die at a location between the first die and the second die.

25. The method of any one of claims 16-24, wherein the first die is larger than the second die according to at least one of surface area and volume.
Pitch translation architecture for semiconductor packages including embedded interconnect bridges

Background

High-bandwidth interconnects on packages are becoming relevant in high-performance computing. The embedded multi-die interconnect bridge (EMIB), developed by Intel®, addresses this need and provides a very high density interconnect between heterogeneous dies on a single package as a lower cost, simpler alternative to 2.5D packaging approaches. Instead of an expensive interposer that spans the entire die complex and uses through silicon vias (TSVs) to connect all of the top dies, a typical EMIB includes a small silicon bridge chip embedded in the package substrate, enabling very high density die-to-die connections only where needed, such as with fine line and spacing (FLS) traces.

Brief Description of the Drawings

In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.

FIG. 1 is a cross-sectional view of a semiconductor package using an embedded multi-die interconnect bridge (EMIB™) architecture, in accordance with various embodiments.

FIG. 2 is a schematic top view of an embodiment of a package, in accordance with various embodiments.

FIG. 3 is a schematic view of a portion of the package of FIG. 2, in accordance with various embodiments.

FIG. 4 is a side view of a portion of the package of FIG. 3, in accordance with various embodiments.

FIG.
5 is a schematic top view of another embodiment of a package including bumps on a bridge die, in accordance with various embodiments.

FIG. 6 is a block diagram of an electronic system, in accordance with various embodiments.

Detailed Description

Reference will now be made in detail to certain embodiments of the disclosed subject matter. While the disclosed subject matter will be described in conjunction with the appended claims, it is to be understood that the described embodiments are not intended to limit the claims to those embodiments.

Throughout this document, values expressed in a range format should be interpreted in a flexible manner to include not only the numerical values explicitly recited as the limits of the range, but also all the individual numerical values or sub-ranges encompassed within that range, as if each numerical value and sub-range were explicitly recited. For example, a range of "about 0.1% to about 5%" or "about 0.1% to 5%" should be interpreted to include not just about 0.1% to about 5%, but also the individual values (e.g., 1%, 2%, 3%, and 4%) and the sub-ranges (e.g., 0.1% to 0.5%, 1.1% to 2.2%, 3.3% to 4.4%) within the indicated range. The statement "about X to Y" has the same meaning as "about X to about Y," unless indicated otherwise. Likewise, the statement "about X, Y, or about Z" has the same meaning as "about X, about Y, or about Z," unless indicated otherwise.

In this document, the terms "a," "an," or "the" are used to include one or more than one unless the context clearly dictates otherwise. The term "or" is used to refer to a nonexclusive "or" unless otherwise indicated. The statement "at least one of A and B" has the same meaning as "A, B, or A and B."
In addition, it is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Any use of section headings is intended to aid reading of the document and is not to be interpreted as limiting; information related to a section heading may occur within or outside of that particular section.

In the methods described herein, the acts can be carried out in any order without departing from the principles of the disclosure, except when a temporal or operational sequence is explicitly recited. Furthermore, specified acts can be carried out concurrently unless explicit claim language recites that they be carried out separately. For example, a claimed act of doing X and a claimed act of doing Y can be conducted simultaneously within a single operation, and the resulting process will fall within the literal scope of the claimed process.

The term "about" as used herein can allow for a degree of variability in a value or range, for example, within 10%, within 5%, or within 1% of a stated value or of a stated limit of a range, and includes the exact stated value or range.

The term "substantially" as used herein refers to a majority of, or mostly, as in at least about 50%, 60%, 70%, 80%, 90%, 95%, 96%, 97%, 98%, 99%, 99.5%, 99.9%, 99.99%, or at least about 99.999% or more, or 100%.

FIG. 1 is a cross-sectional view of a semiconductor package using an embedded multi-die interconnect bridge (EMIB™) architecture. In one example, the package 10 is formed from a substrate 12 that includes an at least partially embedded bridge die 28, which serves as a communication pathway between the surface-mounted first die 14 and second die 16. The first die 14 and the second die 16 can be top-mounted active or passive dies. The embedded bridge die 28 can be an active die or a passive die. A cover 18 covers the substrate 12 and the dies 14 and 16.
As shown in this example, a cooling solution 22, such as a finned heat sink, is attached to the top of the cover 18. Depending on the particular embodiment, a wide variety of different cooling solutions 22 can be used, such as a conductive plate, an integrated heat spreader, liquid cooling, a heat pipe, or a heat sink as shown. Alternatively, the device can be made without the cooling solution 22, and even without the cover 18.

The device substrate 12 may include internal low density interconnect routing for communication between the surface dies 14 and 16. The substrate 12 includes embedded components of a semiconductor material (e.g., silicon, gallium, indium, germanium, or variations or combinations thereof) and one or more insulating layers, such as an organic-based buildup film, a glass-reinforced epoxy such as FR-4, polytetrafluoroethylene (Teflon), cotton-paper-reinforced epoxy (CEM-3), phenolic-glass (G3), paper-phenolic (FR-1 or FR-2), polyester-glass (CEM-5), or any other dielectric layer usable in a printed circuit board (PCB). The substrate 12 can be fabricated using a bumpless buildup layer (BBUL) process or another technique. A BBUL process includes one or more buildup layers formed around components, such as high density interconnect components, the bridge die 28, or the dies 14, 16. A microvia formation process, such as laser drilling, can form connections between the buildup layers and the die bond pads. The buildup layers can be formed using high density integrated patterning techniques.

The dies 14, 16 can be many types of dies. In one example, die 14 can be a memory die and die 16 can be a central processing unit (CPU) die. Other examples of dies may include Wi-Fi transmitters and global positioning systems. In some examples, the two dies may be the same or different. Other examples may include more than two dies. The dies 14 and 16 may be coupled to a power source (not shown) external to the device through C4 bumps 24 and vias 26.
Although only a pair of C4 bumps 24 is shown for each of the dies 14, 16, each coupled to a single via 26, there may be many such connections for each of the dies 14, 16, coupled through a plurality of vias 26, to connect the dies 14, 16 to the device and to external circuitry. The overall package 10 can be connected directly to a printed circuit board (PCB), or to a socket that is in turn attached to some other device such as another PCB.

The dies 14 and 16 may include low density interconnect pads 39 and 42, such as may be used for power, ground, or other electrical coupling. The low density interconnect pads 42 may be electrically coupled to a bus (not shown), such as a power, ground, or data bus, for example by the low density interconnect elements 26. The low density interconnect pads 42 may also be electrically coupled to conductive pads, such as by a conductive adhesive (not shown). The conductive adhesive can be solder (e.g., solder paste), a plated connection, or microballs, such as microballs configured for flip-chip device interconnects (e.g., controlled collapse chip connection (C4) interconnects).

Embedded within the substrate 12 is the bridge die 28, also referred to as an interconnect bridge. The bridge die 28 is made of silicon and has a silicon dioxide surface. The bridge die 28 is connected to the CPU die 16 and the memory die 14 by bumps or solder balls 30 and 32. Interconnect layers 34 within the bridge form connections between the pads on one die and the corresponding pads on the other die 14, 16. In this manner, the CPU and the memory can exchange data and control information within the package 10.

In one example, as shown in FIG. 1, the memory die 14 has a first bridge interconnect region 41 that includes the bumps 32 closest to the CPU die 16, for connection to the CPU die 16 through the embedded bridge die 28, and the CPU die 16 has a second bridge interconnect region 43 that includes the bumps 30 for connection through the embedded bridge die 28.
The bumps 30 and 32 may comprise any conductive metal such as copper, gold, silver, aluminum, zinc, nickel, brass, bronze, iron, and the like.

Bridge die 28 includes conductive pads at least partially over or in the top surface of bridge die 28. The conductive pads may comprise a conductive metal such as copper, gold, silver, aluminum, zinc, nickel, brass, bronze, iron, or the like. The bridge die 28 includes a contact region 40 and a contact region 49 that connect the bumps 30 and 32, respectively.

In addition, power rails 36 above bridge pad layer 35 receive power from outside the device through separate power vias (not shown) and provide this power to memory die 14 and CPU die 16. Power rail 36 may be formed from a metal layer deposited over substrate 12.

In one example, dielectric layer 50 can be formed over bridge die 28 and substrate 12. The dielectric layer 50 accommodates dimensional variation in the placement and embedding of the bridge and electrically isolates all of the interconnect regions. The dielectric layer 50 may be formed of an epoxy-based resin such as a bisphenol A epoxy resin, a bisphenol F epoxy resin, a novolac epoxy resin, an aliphatic epoxy resin, a glycidylamine epoxy resin, or any other resin that includes one or more terminal epoxy groups. In some embodiments, dielectric layer 50 has a thickness in a range from about 5 microns to about 50 microns, from about 15 microns to about 45 microns, or from about 20 microns to about 35 microns, or less than, equal to, or greater than about 15, 20, 25, 30, 35, 40, or 45 microns.

In some examples of package 10, first die 14 and second die 16 may differ in size relative to each other. For example, the first die 14 and the second die 16 may differ in at least one of volume or surface area.
In these examples, it may be desirable to have a heterogeneous distribution of the bumps 30 and 32 relative to each other. By heterogeneous, it is meant that the average pitch between adjacent bumps 30 differs from the average pitch between adjacent bumps 32. The heterogeneous bump distribution may result from the different sizes, in terms of surface area, of the first bridge interconnect region 41 and the second bridge interconnect region 43.

FIG. 2 is a schematic top view of an embodiment of package 10, showing a first die 14 including a first bridge interconnect region 41 and interconnect pads 39; a second die 16 including interconnect pads 42, a second bridge interconnect region 43, and a shunt region 70; and bridge die 28 (shown in outline). Individual bumps are not shown.

The first bridge interconnect region 41 may be within a range from about 10 times to about 2 times the second bridge interconnect region 43, from about 5 times to about 3 times the second bridge interconnect region 43, or less than, equal to, or greater than about 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 8.5, 9, 9.5, or about 10 times the second bridge interconnect region 43. To convey signals between the dies 14 and 16 through the bridge die 28, the bumps 30 are compressed by reducing the average pitch between the bumps 30 relative to the average pitch between the bumps 32. For example, the average pitch between the bumps 32 of the first bridge interconnect region 41 may be within a range from about 10 times to about 0.25 times the average pitch between adjacent bumps 30 of the second bridge interconnect region 43, from about 2 times to about 0.5 times, or less than, equal to, or greater than about 0.25, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 8.5, 9, 9.5, or about 10 times.
By way of example, the average pitch between the bumps 32 of the region 41 can be in a range from about 75 microns to about 150 microns, from about 75 microns to about 130 microns, or less than, equal to, or greater than about 75, 80, 85, 90, 95, 100, 105, 110, 115, 120, 125, 130, 135, 140, 145, or 150 microns. As a further example, the average pitch between the bumps 30 of the region 43 can be in a range from about 20 microns to about 70 microns, from about 30 microns to about 65 microns, or less than, equal to, or greater than about 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, or about 70 microns.

The shunt region 70 is directly adjacent to the second bridge interconnect region 43 and is at least partially surrounded by the bridge die 28. The shunt region 70 includes a plurality of conductive bumps on an outer surface of the die 16. Relative to the bumps 30, the pitch between adjacent bumps 24 of the shunt region 70 may be within a range from about 10 times to about 0.5 times the pitch of the adjacent bumps 30 of the region 43, from about 5 times to about 2 times, or less than, equal to, or greater than about 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 8.5, 9, 9.5, or about 10 times.

The shunt region 70 may allow signals from the die 16 to be routed through the bridge die 28. The ability to form the shunt region 70 is made possible in part by reducing the size of the second interconnect region 43 relative to the first interconnect region 41. That is, the space available on the second die 16 outside of the second interconnect region 43, but in contact with the bridge die 28, is available to the bumps 24 of the shunt region 70.

FIG. 3 is a schematic view of a portion 75 of the package 10 taken from FIG. 2. FIG. 3 is a top view showing the first interconnect region 41 including the bumps 32, the second interconnect region 43 including the bumps 30, and the shunt region 70 including the bumps 76.
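The pitch ratios described above can be made concrete with a short sketch. This is only an illustration of the arithmetic: the coordinate values, region names, and the one-dimensional layout are assumptions for the example, not values taken from the embodiments.

```python
def average_pitch(positions):
    """Average spacing between adjacent bumps laid out along one axis (microns)."""
    positions = sorted(positions)
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    return sum(gaps) / len(gaps)

# Illustrative bump rows (micron coordinates), not values from the embodiments:
# region 41 bumps at a ~100-micron pitch, region 43 bumps at a ~50-micron pitch.
region_41 = [0, 100, 200, 300, 400]
region_43 = [0, 50, 100, 150, 200]

p41 = average_pitch(region_41)
p43 = average_pitch(region_43)
ratio = p41 / p43  # region-41 pitch relative to region-43 pitch

print(p41, p43, ratio)  # 100.0 50.0 2.0
```

A ratio of 2.0, as here, would fall inside the "about 2 times to about 0.5 times" range recited for the first and second bridge interconnect regions.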
FIG. 3 further illustrates an assembly of bridge die 28 that includes an input/output terminal 78 that is coupled to bumps 76 and that is exposed on the surface of bridge die 28. Bridge die 28 further includes VSS 80, VCC 82, and input and output terminals 84 that connect bumps 30 and 32. FIG. 4 is a side view of the package 10 taken from FIG. 3, showing the path of the input/output terminal 78.

FIG. 5 is a schematic top view of another example package 10. Package 10 may include many of the same features as the examples of package 10 shown and described with respect to FIGS. 1-4. In addition to or in place of those features, bridge die 28 can include a plurality of bumps 86 between dies 14 and 16. A bump 86 can be attached to an input/output to transmit or receive signals directly between the bridge die 28 and any other component.

Package 10 can be fabricated in accordance with any suitable method. For example, the bumps 30, 32, and 76 may be formed on the respective interconnect regions 41, 43 and the shunt region 70 by depositing a conductive metal precursor thereon. As an example, the precursor may comprise electrolytic copper. Electrolytic copper can be deposited as a liquid and plated thereon. The bumps may be formed directly on the vias of any of the dies 14, 16, or 28. The bumps 30, 32, and 76 can be connected to a through hole or a transmission line by soldering the corresponding bumps and transmission lines or through holes.

FIG. 6 illustrates a system-level diagram in accordance with an embodiment of the present invention. For example, FIG. 6 depicts an example of an electronic device (e.g., a system) that includes package 10; FIG. 6 is included to illustrate an example of a higher-level device application for the subject matter.
In an embodiment, system 600 includes, but is not limited to, a desktop computer, a laptop, a netbook, a tablet device, a notebook computer, a personal digital assistant (PDA), a server, a workstation, a cellular telephone, a mobile computing device, a smartphone, an Internet appliance, or any other type of computing device. In some embodiments, system 600 is a system-on-chip (SOC) system.

In an embodiment, processor 610 has one or more processing cores 612 and 612N, where 612N represents the Nth processor core internal to processor 610, where N is a positive integer. In an embodiment, system 600 includes a plurality of processors, including 610 and 605, wherein processor 605 has logic similar or identical to the logic of processor 610. In some embodiments, processor core 612 includes, but is not limited to, prefetch logic to fetch instructions, decode logic to decode instructions, execution logic to execute instructions, and the like. In some embodiments, processor 610 has a cache memory 616 that caches instructions and/or data for system 600. Cache memory 616 can be organized into a hierarchical structure that includes one or more levels of cache memory.

In some embodiments, processor 610 includes a memory controller 614 operative to perform functions that enable processor 610 to access and communicate with memory 630, which includes volatile memory 632 and/or non-volatile memory 634. In some embodiments, processor 610 is coupled to memory 630 and chipset 620. Processor 610 can also be coupled to wireless antenna 678 to communicate with any device configured to transmit and/or receive wireless signals.
In an embodiment, wireless antenna 678 operates in accordance with, but is not limited to, the IEEE 802.11 standard and its related family, HomePlug AV (HPAV), Ultra-Wideband (UWB), Bluetooth, WiMax, or any form of wireless communication protocol.

In some embodiments, volatile memory 632 includes, but is not limited to, synchronous dynamic random access memory (SDRAM), dynamic random access memory (DRAM), RAMBUS dynamic random access memory (RDRAM), and/or any other type of random access memory device. Non-volatile memory 634 includes, but is not limited to, flash memory, phase-change memory (PCM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), or any other type of non-volatile memory device.

Memory 630 stores instructions and information to be executed by processor 610. In an embodiment, memory 630 may also store temporary variables or other intermediate information while processor 610 executes instructions. In the illustrated embodiment, chipset 620 is coupled to processor 610 via point-to-point (PtP or P-P) interfaces 617 and 622. Chipset 620 enables processor 610 to connect to other components in system 600. In some embodiments of the invention, interfaces 617 and 622 operate in accordance with a PtP communication protocol, such as Intel® QuickPath Interconnect (QPI) or the like. In other embodiments, different interconnects can be used.

In some embodiments, chipset 620 is operable to communicate with processors 610 and 605, display device 640, and other devices 672, 676, 674, 660, 662, 664, 666, 677, and the like. Chipset 620 can also be coupled to wireless antenna 678 to communicate with any device configured to transmit and/or receive wireless signals.

Chipset 620 is coupled to display device 640 via interface 626. Display device 640 can be, for example, a liquid crystal display (LCD), a plasma display, a cathode ray tube (CRT) display, or any other form of visual display device.
In some embodiments of the invention, processor 610 and chipset 620 are incorporated into a single SOC. In addition, chipset 620 is coupled to one or more buses 650 and 655 that interconnect various components 674, 660, 662, 664, and 666. Buses 650 and 655 can be interconnected via bus bridge 672. In an embodiment, chipset 620 is coupled, via interfaces 624 and/or 626, to non-volatile memory 660, mass storage device(s) 662, keyboard/mouse 664, network interface 666, smart TV 676, consumer electronic device 677, or the like.

In an embodiment, mass storage device 662 includes, but is not limited to, a solid-state drive, a hard drive, a universal serial bus flash memory drive, or any other form of computer data storage medium. In an embodiment, network interface 666 is implemented by any type of well-known network interface standard including, but not limited to, an Ethernet interface, a universal serial bus (USB) interface, a peripheral component interconnect (PCI) Express interface, a wireless interface, and/or any other suitable type of interface. In an embodiment, the wireless interface operates in accordance with, but is not limited to, the IEEE 802.11 standard and its related family, HomePlug AV (HPAV), Ultra-Wideband (UWB), Bluetooth, WiMax, or any form of wireless communication protocol.

Although the modules shown in FIG. 6 are depicted as separate blocks within system 600, the functions performed by some of these blocks may be integrated within a single semiconductor circuit or implemented using two or more separate integrated circuits.
For example, although cache memory 616 is depicted as a separate block within processor 610, cache memory 616 (or selected aspects of cache memory 616) may be incorporated into processing core 612.

The terms and expressions that have been employed are used as terms of description and not of limitation, and there is no intention in the use of such terms and expressions to exclude any equivalents of the features shown and described; rather, various modifications are possible within the scope of the embodiments of the present disclosure. Therefore, it should be understood that although the present disclosure has been described through specific embodiments and optional features, those of ordinary skill in the art may resort to modification and variation of the concepts disclosed herein, and such modifications and variations are considered to be within the scope of embodiments of the present disclosure.

There are many reasons for using the package 10, including the following non-limiting reasons. For example, according to various embodiments, the dies 14 and 16 may differ in size relative to each other. Varying the pitch of the bumps 30 and 32 relative to each other can help ensure that reliable transmission of signals through the bridge die 28 is maintained. Additionally, the reduced size of the second interconnect region 43, as compared to the first interconnect region 41, creates space on the die 16 to allow the shunt region 70 to be located thereon. According to some embodiments, the shunt region 70 may allow signals from the die 16, or from any other die on which the shunt region 70 is located, to be routed directly through the bridge die 28 to external components. According to some embodiments, the presence of the shunt region 70 or bumps 76 may allow testing or debugging of signals transmitted directly through the bridge die 28.

According to some embodiments, in previous designs, a bump-pitch mismatch between dies, where one die has a smaller pitch between adjacent bumps, resulted in bump spacings that were uncorrelated with the bridge die (e.g., the bumps differed in pitch).
However, in accordance with some embodiments, synchronizing the pitch between the bumps of the first or second dies 14 and 16 with the pitch of the bridge die 28 may free surface area on the bridge die 28 that can be utilized to route signals through the bridge to the surface layer of the package 10, making effective use of that surface area.

Additional Embodiments

Embodiment 1 provides a semiconductor package including: a first die comprising a first bridge interconnect region; a second die comprising a second bridge interconnect region; and a bridge die including a first contact region connected to the first bridge interconnect region and a second contact region connected to the second bridge interconnect region, wherein the first bridge interconnect region is larger than the second bridge interconnect region; each of the first bridge interconnect region and the second bridge interconnect region includes a plurality of conductive bumps; and the average pitch between adjacent bumps of the first bridge interconnect region is greater than the average pitch between adjacent bumps of the second bridge interconnect region.

Embodiment 2 provides the semiconductor package of Embodiment 1, further comprising a substrate in which at least one of the first die, the second die, and the bridge die is at least partially embedded.

Embodiment 3 provides the semiconductor package of any of Embodiments 1 or 2, wherein at least one of the first die, the second die, and the bridge die comprises silicon.

Embodiment 4 provides the semiconductor package of any of Embodiments 1-3, wherein the first die and the second die are independently selected from at least one of a central processing unit, a flash memory, a Wi-Fi transmitter, and a global positioning system.

Embodiment 5 provides the semiconductor package of any of Embodiments 1-4, wherein the average pitch between the bumps of the first bridge interconnect region is in a range from about 10 times to about 0.25 times the average pitch between adjacent bumps of the second bridge interconnect region.

Embodiment 6 provides the semiconductor package of any of Embodiments 1-5, wherein the average pitch between the bumps of the first bridge interconnect region is in a range from about 2 times to about 0.5 times the average pitch between adjacent bumps of the second bridge interconnect region.

Embodiment 7 provides the semiconductor package of any of Embodiments 1-6, wherein the average pitch between the bumps of the first bridge interconnect region is in a range from about 75 microns to about 150 microns.

Embodiment 8 provides the semiconductor package of any of Embodiments 1-7, wherein the average pitch between the bumps of the first bridge interconnect region is in a range from about 75 microns to about 130 microns.

Embodiment 9 provides the semiconductor package of any of Embodiments 1-8, wherein the average pitch between the bumps of the second bridge interconnect region is in a range from about 20 microns to about 70 microns.

Embodiment 10 provides the semiconductor package of any of Embodiments 1-9, wherein the average pitch between the bumps of the second bridge interconnect region is in a range from about 30 microns to about 65 microns.

Embodiment 11 provides the semiconductor package of any of Embodiments 1-10, wherein the first die is larger than the second die in at least one of surface area and volume.

Embodiment 12 provides the semiconductor package of any of Embodiments 1-11, wherein the conductive bumps of at least one of the first bridge interconnect region and the second bridge interconnect region comprise copper.

Embodiment 13 provides the semiconductor package of any of Embodiments 1-12, wherein the first bridge interconnect region is in a range from about 10 times to about 0.5 times the size of the second bridge interconnect region.

Embodiment 14 provides the semiconductor package of any of Embodiments 1-13, wherein the first bridge interconnect region is in a range from about 5 times to about 2 times the size of the second bridge interconnect region.

Embodiment 15 provides the semiconductor package of any of Embodiments 1-14, wherein the second die further comprises a first shunt region, the first shunt region comprising a plurality of conductive bumps positioned adjacent the second interconnect region at a first location.

Embodiment 16 provides the semiconductor package of Embodiment 15, wherein the second die further comprises a second shunt region, the second shunt region comprising a plurality of conductive bumps positioned adjacent the second interconnect region at a second location.

Embodiment 17 provides the semiconductor package of any of Embodiments 15 or 16, wherein at least one of the first shunt region and the second shunt region is at least partially surrounded by the bridge die.

Embodiment 18 provides the semiconductor package of any of Embodiments 15-17, further comprising a plurality of inputs and outputs connected to the conductive bumps of at least one of the first shunt region and the second shunt region.

Embodiment 19 provides the semiconductor package of any of Embodiments 15-18, wherein the pitch between adjacent bumps of at least one of the first shunt region and the second shunt region is in a range from about 10 times to about 0.5 times the pitch of adjacent bumps of the second interconnect region.

Embodiment 20 provides the semiconductor package of any of Embodiments 15-19, wherein the pitch between adjacent bumps of at least one of the first shunt region and the second shunt region is in a range from about 5 times to about 2 times the pitch of adjacent bumps of the second interconnect region.

Embodiment 21 provides the semiconductor package of any of Embodiments 1-20, further comprising a plurality of conductive bumps on the bridge die at a location between the first die and the second die.

Embodiment 22 provides the semiconductor package of Embodiment 21, wherein the pitch between adjacent conductive bumps of the bridge can range from about 1 mm to about 5 mm.

Embodiment 23 provides a semiconductor package including: a first die comprising a first bridge interconnect region; a second die comprising a second bridge interconnect region; and a bridge die including a first contact region connected to the first bridge interconnect region and a second contact region connected to the second bridge interconnect region, wherein the first bridge interconnect region is larger than the second bridge interconnect region; the first die is larger than the second die in at least one of surface area and volume; each of the first bridge interconnect region and the second bridge interconnect region includes a plurality of conductive bumps; and the average pitch between the bumps of the first bridge interconnect region is in a range from about 10 times to about 0.25 times the average pitch between adjacent bumps of the second bridge interconnect region.

Embodiment 24 provides the semiconductor package of Embodiment 23, further comprising a substrate in which at least one of the first die, the second die, and the bridge die is at least partially embedded.

Embodiment 25 provides the semiconductor package of any of Embodiments 23 or 24, wherein at least one of the first die, the second die, and the bridge die comprises silicon.

Embodiment 26 provides the semiconductor package of any of Embodiments 23-25, wherein the first die and the second die are independently selected from at least one of a central processing unit, a flash memory, a Wi-Fi transmitter, and a global positioning system.

Embodiment 27 provides the semiconductor package of any of Embodiments 23-26, wherein the average pitch between the bumps of the first bridge interconnect region is in a range from about 2 times to about 0.5 times the average pitch between adjacent bumps of the second bridge interconnect region.

Embodiment 28 provides the semiconductor package of any of Embodiments 23-27, wherein the average pitch between the bumps of the first bridge interconnect region is in a range from about 75 microns to about 150 microns.

Embodiment 29 provides the semiconductor package of any of Embodiments 23-28, wherein the average pitch between the bumps of the first bridge interconnect region is in a range from about 75 microns to about 130 microns.

Embodiment 30 provides the semiconductor package of any of Embodiments 23-29, wherein the average pitch between the bumps of the second bridge interconnect region is in a range from about 20 microns to about 70 microns.

Embodiment 31 provides the semiconductor package of any of Embodiments 23-30, wherein the average pitch between the bumps of the second bridge interconnect region is in a range from about 30 microns to about 65 microns.

Embodiment 32 provides the semiconductor package of any of Embodiments 23-31, wherein the conductive bumps of at least one of the first bridge interconnect region and the second bridge interconnect region comprise copper.

Embodiment 33 provides the semiconductor package of any of Embodiments 23-32, wherein the first bridge interconnect region is in a range from about 10 times to about 0.5 times the size of the second bridge interconnect region.

Embodiment 34 provides the semiconductor package of any of Embodiments 23-33, wherein the first bridge interconnect region is in a range from about 5 times to about 2 times the size of the second bridge interconnect region.

Embodiment 35 provides the semiconductor package of any of Embodiments 23-34, wherein the second die further comprises a first shunt region, the first shunt region comprising a plurality of conductive bumps positioned adjacent the second interconnect region at a first location.

Embodiment 36 provides the semiconductor package of Embodiment 35, wherein the second die further comprises a second shunt region, the second shunt region comprising a plurality of conductive bumps positioned adjacent the second interconnect region at a second location.

Embodiment 37 provides the semiconductor package of any of Embodiments 35 or 36, wherein at least one of the first shunt region and the second shunt region is at least partially surrounded by the bridge die.

Embodiment 38 provides the semiconductor package of any of Embodiments 35-37, further comprising a plurality of inputs and outputs connected to the conductive bumps of at least one of the first shunt region and the second shunt region.

Embodiment 39 provides the semiconductor package of any of Embodiments 35-38, wherein the pitch between adjacent bumps of at least one of the first shunt region and the second shunt region is in a range from about 10 times to about 0.5 times the pitch of adjacent bumps of the second interconnect region.

Embodiment 40 provides the semiconductor package of any of Embodiments 35-39, wherein the pitch between adjacent bumps of at least one of the first shunt region and the second shunt region is in a range from about 5 times to about 2 times the pitch of adjacent bumps of the second interconnect region.

Embodiment 41 provides the semiconductor package of any of Embodiments 23-40, further comprising a plurality of conductive bumps on the bridge die at a location between the first die and the second die.

Embodiment 42 provides the semiconductor package of Embodiment 41, wherein the pitch between adjacent conductive bumps of the bridge can range from about 1 mm to about 5 mm.

Embodiment 43 provides a method of fabricating a semiconductor package, the method comprising: connecting a first die to a bridge die along a first bridge interconnect region; and connecting a second die to the bridge die along a second bridge interconnect region, wherein the first bridge interconnect region is larger than the second bridge interconnect region; each of the first bridge interconnect region and the second bridge interconnect region includes a plurality of conductive bumps; and the average pitch between adjacent bumps of the first bridge interconnect region is greater than the average pitch between adjacent bumps of the second bridge interconnect region.

Embodiment 44 provides the method of Embodiment 43, further comprising at least partially embedding at least one of the first die, the second die, and the bridge die in a substrate.

Embodiment 45 provides the method of any of Embodiments 43 or 44, wherein at least one of the first die, the second die, and the bridge die comprises silicon.

Embodiment 46 provides the method of any of Embodiments 43-45, wherein the first die and the second die are independently selected from at least one of a central processing unit, a flash memory, a Wi-Fi transmitter, and a global positioning system.

Embodiment 47 provides the method of any of Embodiments 43-46, wherein the average pitch between the bumps of the first bridge interconnect region is in a range from about 10 times to about 0.25 times the average pitch between adjacent bumps of the second bridge interconnect region.

Embodiment 48 provides the method of any of Embodiments 43-47, wherein the average pitch between the bumps of the first bridge interconnect region is in a range from about 2 times to about 0.5 times the average pitch between adjacent bumps of the second bridge interconnect region.

Embodiment 49 provides the method of any of Embodiments 43-48, wherein the average pitch between the bumps of the first bridge interconnect region is in a range from about 75 microns to about 150 microns.

Embodiment 50 provides the method of any of Embodiments 43-49, wherein the average pitch between the bumps of the first bridge interconnect region is in a range from about 75 microns to about 130 microns.

Embodiment 51 provides the method of any of Embodiments 43-50, wherein the average pitch between the bumps of the second bridge interconnect region is in a range from about 20 microns to about 70 microns.

Embodiment 52 provides the method of any of Embodiments 43-51, wherein the average pitch between the bumps of the second bridge interconnect region is in a range from about 30 microns to about 65 microns.

Embodiment 53 provides the method of any of Embodiments 43-52, wherein the first die is larger than the second die in at least one of surface area and volume.

Embodiment 54 provides the method of any of Embodiments 43-53, wherein the conductive bumps of at least one of the first bridge interconnect region and the second bridge interconnect region comprise copper.

Embodiment 55 provides the method of any of Embodiments 43-54, wherein the first bridge interconnect region is in a range from about 10 times to about 0.5 times the size of the second bridge interconnect region.

Embodiment 56 provides the method of any of Embodiments 43-55, wherein the first bridge interconnect region is in a range from about 5 times to about 2 times the size of the second bridge interconnect region.

Embodiment 57 provides the method of any of Embodiments 43-56, wherein the second die further comprises a first shunt region, the first shunt region comprising a plurality of conductive bumps positioned adjacent the second interconnect region at a first location.

Embodiment 58 provides the method of Embodiment 57, wherein the second die further comprises a second shunt region, the second shunt region comprising a plurality of conductive bumps positioned adjacent the second interconnect region at a second location.

Embodiment 59 provides the method of any of Embodiments 57 or 58, wherein at least one of the first shunt region and the second shunt region is at least partially surrounded by the bridge die.

Embodiment 60 provides the method of any of Embodiments 57-59, further comprising a plurality of inputs and outputs connected to the conductive bumps of at least one of the first shunt region and the second shunt region.

Embodiment 61 provides the method of any of Embodiments 57-60, wherein the pitch between adjacent bumps of at least one of the first shunt region and the second shunt region is in a range from about 10 times to about 0.5 times the pitch of adjacent bumps of the second interconnect region.

Embodiment 62 provides the method of any of Embodiments 57-61, wherein the pitch between adjacent bumps of at least one of the first shunt region and the second shunt region is in a range from about 5 times to about 2 times the pitch of adjacent bumps of the second interconnect region.

Embodiment 63 provides the method of any of Embodiments 43-62, further comprising a plurality of conductive bumps on the bridge die at a location between the first die and the second die.

Embodiment 64 provides the method of Embodiment 63, wherein the pitch between adjacent conductive bumps of the bridge can range from about 1 mm to about 5 mm.
Methods, systems, and apparatus, including computer programs encoded on non-transitory computer storage medium(s), are directed to improving the completeness of map information and of data related to maps created from sensor data. Map completeness can be improved by determining the object completeness and coverage completeness of a generated map and reducing the amount of unknown area in the generated map.
1. A method for improving the completeness of a map, comprising: obtaining sensor data over time from a plurality of sensors, the plurality of sensors defining a sensor field; generating a map by fusing the obtained sensor data, wherein the generated map includes a plurality of grid cells at least partially covered by the sensor field; determining a completeness of at least a portion of the generated map by determining, from the obtained sensor data, an object completeness and a coverage completeness of the map; and updating the completeness of the at least a portion of the generated map by using the determined object completeness and the determined coverage completeness to reduce an amount of unknown area of the at least a portion of the generated map. 2. The method of claim 1, wherein determining the object completeness comprises: obtaining sensor data from an entrance sensor and an exit sensor; determining, using the data obtained from the entrance sensor and the exit sensor, objects that enter the at least a portion of the generated map and objects that leave the at least a portion of the generated map; determining, from the obtained sensor data, a current number of objects in the at least a portion of the generated map; and tracking, through the obtained sensor data, each of one or more objects in the at least a portion of the generated map. 3. The method of claim 2, wherein determining the object completeness comprises: determining, based on the tracking, the determined number of objects, and a net change of objects in the at least a portion of the generated map, that one or more objects have disappeared from the at least a portion of the generated map. 4. The method of claim 3, wherein, in response to determining that one or more objects have disappeared from the at least a portion of the generated map, the method further comprises: determining a prediction space, wherein the prediction space includes one or more sets of grid cells currently occupiable by the determined disappeared objects. 5. The method of claim 4, wherein determining the prediction space further comprises: for each of the determined disappeared objects, using past sensor data of the disappeared object to determine the possible positions that the disappeared object can currently occupy. 6. The method of claim 5, wherein determining the coverage completeness comprises: determining one or more unknown grid cells of the generated map, wherein the one or more unknown grid cells are grid cells with insufficient sensor information. 7. The method of claim 6, wherein updating the completeness of the at least a portion of the generated map comprises: eliminating unknown grid cells that are members of the determined prediction space. 8. The method of claim 7, wherein eliminating the unknown grid cells comprises: assigning a known state to the eliminated unknown grid cells. 9. The method of claim 6, wherein determining the coverage completeness further comprises: determining one or more known grid cells of the generated map, wherein a grid cell is known in response to the obtained sensor data determining that the grid cell is occupied by an object or not occupied by an object. 10. The method of claim 6, wherein one or more grid cells are determined to be unknown in response to determining that the one or more grid cells are not covered by the sensor field. 11. The method of claim 6, wherein one or more grid cells are determined to be unknown in response to determining that the one or more grid cells are not covered due to a failure of one or more of the plurality of sensors. 12. The method of claim 6, wherein one or more grid cells are determined to be unknown in response to determining that the one or more grid cells are not covered due to occlusion. 13. The method of claim 2, wherein the objects are vehicles. 14. The method of claim 1, further comprising: obtaining a region of interest, wherein the at least a portion of the generated map is the obtained region of interest. 15. The method of claim 1, further comprising: after improving the completeness, determining one or more completeness metrics of the at least a portion of the generated map. 16. The method of claim 2, wherein determining the object completeness comprises: determining, based on the tracking, the determined number of objects, and a net change of objects in the at least a portion of the generated map, that no objects are missing from the at least a portion of the generated map. 17. The method of claim 16, wherein, in response to determining that no objects are missing, all grid cells of the at least a portion are determined to be known. 18. One or more computing devices, comprising one or more processors and at least one non-transitory computer-readable storage medium including instructions that, when executed by the one or more processors, cause the one or more processors to: obtain sensor data over time from a plurality of sensors, the plurality of sensors defining a sensor field; generate a map by fusing the obtained sensor data, wherein the generated map includes a plurality of grid cells at least partially covered by the sensor field; determine a completeness of at least a portion of the generated map by determining, from the obtained sensor data, an object completeness and a coverage completeness of the map; and update the completeness of the at least a portion of the generated map by using the determined object completeness and the determined coverage completeness to reduce an amount of unknown area of the at least a portion of the generated map. 19. The one or more computing devices of claim 18, wherein the executed instructions cause the one or more processors to determine the object completeness by causing the one or more processors to: obtain sensor data from an entrance sensor and an exit sensor; determine, using the data obtained from the entrance sensor and the exit sensor, objects that enter the at least a portion of the generated map and objects that leave the at least a portion of the generated map; determine, from the obtained sensor data, a current number of objects in the at least a portion of the generated map; and track, through the obtained sensor data, each of one or more objects in the at least a portion of the generated map. 20. The one or more computing devices of claim 19, wherein the executed instructions cause the one or more processors to determine the object completeness by causing the one or more processors to: determine, based on the tracked objects, the determined number of objects, and a net change of objects in the at least a portion of the generated map, that one or more objects have disappeared from the at least a portion of the generated map. |
Method and system for improving a map

Technical field

The embodiments generally relate to improving maps.

Background

Automated agents such as robots or cars rely on dynamic maps to navigate in a changing environment. Those maps are created from on-board or remote (for example, infrastructure) sensor detections and need to meet certain quality requirements in order to generate safe motion trajectories. Although state-of-the-art multi-sensor fusion schemes employ a variety of specialized plausibility checks to verify the correctness and accuracy of the sensor measurements performed, almost no attention has been paid to the lack of information about a specific object or area, which is itself a quality attribute. Uncovered areas represent a general safety risk, as they may contain hidden moving objects and should be avoided. Especially in the case of remote infrastructure sensing, this is expected to be highly relevant, since an undetectable area may lie directly in the path the vehicle is about to drive. However, unless a reference to ground truth (i.e., an alternative source of information about the same environment) is available, an automated agent typically does not know the incompleteness of the map it uses, but only the incompleteness of the objects actually detected; the problem therefore persists. As a result, it cannot always make appropriate decisions. Generally speaking, to the extent that previous solutions exist, there is no concept of systematically quantifying the ignorance of map information. Ignorance about individual sensor measurements can be used for data fusion purposes, but no meaningful completeness measure of the map information is reported to the end user. As a result, previous completeness measures are not taken into account in the user's decision making.
This is a safety-critical issue, especially for complex automated environments such as roadside sensor infrastructure.

Description of the drawings

In the drawings, the same reference numerals generally indicate the same parts throughout the different views. The drawings are not necessarily to scale, but generally focus on illustrating the principles of the invention. In the following description, various embodiments of the present invention are described with reference to the following drawings, in which:

Figures 1 and 2 illustrate an exemplary cooperative sensor field according to an exemplary embodiment of the present disclosure.
FIG. 3 illustrates an exemplary method for improving the completeness of a map according to an exemplary embodiment of the present disclosure.
FIG. 4 illustrates an exemplary processing flow according to an exemplary embodiment of the present disclosure.
FIGS. 5A-5D illustrate further exemplary sensor fields according to an exemplary embodiment of the present disclosure.
FIG. 6 illustrates an exemplary process for determining and improving map completeness according to an exemplary embodiment of the present disclosure.
FIG. 7 illustrates an exemplary processing flow according to an exemplary embodiment of the present disclosure.
FIGS. 8 and 9 illustrate an exemplary roadside sensor structure according to an exemplary embodiment of the present disclosure.

Detailed description

In the following detailed description, reference is made to the accompanying drawings, which illustrate specific details and embodiments in which the present invention can be implemented. The word "exemplary" is used in this application to mean "serving as an example, instance, or illustration."
Any embodiment or design described as "exemplary" in this application is not necessarily to be construed as preferred or advantageous over other embodiments or designs. The words "plurality" and "multiple" in the specification and claims refer to a quantity greater than one. The terms "group", "set", "sequence", and the like refer to a quantity equal to or greater than one. Any term expressed in the plural without expressly stating "plurality" or "multiple" similarly refers to a quantity equal to or greater than one. The term "smaller subset" refers to a subset of a set that contains fewer than all elements of the set. Any vector and/or matrix notation utilized herein is exemplary in nature and is adopted for explanation purposes. Aspects of the present disclosure described using vector and/or matrix notation are not limited to being implemented using vectors and/or matrices, and the associated processing and calculations can be performed in an equivalent manner using collections or sequences of data or other information. As used herein, the term "a" or "an" shall mean one or more than one. The term "another" is defined as a second or more. The terms "include" and/or "have" are open ended (e.g., including). The term "and/or" used herein is interpreted such that "A and/or B" means any of the following: A only; B only; A and B. Similarly, "A, B, and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B, and C. As used herein, "memory" is understood as a non-transitory computer-readable medium in which data or information can be stored for retrieval. References to "memory" included herein can therefore be understood as referring to volatile or non-volatile memory, including random access memory (RAM), read-only memory (ROM), flash memory, solid-state storage, magnetic tape, hard drives, optical drives, and the like, or any combination thereof. In this context, registers, shift registers, processor registers, data buffers, and the like
may also be encompassed by the term memory. The term "software" refers to any type of executable instruction(s), including, for example, firmware. Unless explicitly specified, the term "transmit" encompasses both direct (point-to-point) and indirect (via one or more intermediate points) transmission. Similarly, the term "receive" encompasses both direct and indirect reception. In addition, the terms "transmit", "receive", "communicate", and other similar terms encompass both physical transmission (e.g., the transmission of radio signals) and logical transmission (e.g., the transmission of digital data over a logical software-level connection). For example, a processor or controller can transmit or receive data in the form of radio signals with another processor or sensor over a software-level connection, where the physical transmission and reception are handled by radio-layer components such as an RF transceiver and antenna, and the logical transmission and reception over the software-level connection are performed by the processor or controller. The term "transmission" encompasses one or both of transmitting and receiving, that is, unidirectional or bidirectional transmission in one or both of the incoming and outgoing directions. The term "calculate" encompasses both 'direct' calculations via a mathematical expression/formula/relationship and 'indirect' calculations via lookup tables or hash tables and other array indexing or search operations. Exemplary embodiments of the present disclosure relate to systems, devices, and/or methods for estimating (e.g., in real time) the completeness of a map (e.g., a dynamically created map) or of selected sub-regions of the map, especially in the absence of external ground truth. In various exemplary embodiments, details of the sensor field design, such as, but not limited to, position, range, orientation, etc., are known.
That is, each exemplary embodiment herein relates to a method for estimating the completeness of information of a dynamic occupancy grid map without ground truth. In one or more embodiments of the present disclosure, an area with a supervised boundary (e.g., a closed road section) may allow object counting to enhance map completeness and thereby improve the quality of the dynamic map information. Figures 1 and 2 illustrate examples of a cooperative sensor field according to an exemplary embodiment of the present disclosure. In FIG. 1, the area of interest (AoI) 10 is covered by the fields of view (FoV) of the spatially distributed sensors 20a, 20b, 20c, 20d, and 20e. In FIG. 1, FoVs 30a, 30b, 30c, 30d, and 30e do not completely cover the AoI. Each sensor sends or transmits its sensor data to the central computing node 50. In some embodiments, the central computing node is called a fog node, or simply a fog. Further, each sensor can be characterized by its FoV, range, resolution, orientation, and sensor location, and can have a unique sensor index. In various embodiments, the sensor information or related sensor information may be known to the fog 50 at any time (thus, if necessary, the fog is dynamically updated). Generally speaking, the sensors described herein can transmit sensor data to a computing node (e.g., a central computing node) or fog through any suitable means (e.g., directly or indirectly via wired or wireless means). The central computing node can be any suitable device or devices. Further, according to various embodiments of the present disclosure, sensor detection information (for example, target position, target speed, size, etc.) can be reported from the sensors (for example, by wire or wirelessly) or electronically communicated to the central node.
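Because the position, range, and orientation of each sensor are assumed known, the portion of a grid that the sensor field covers can be computed geometrically. The following is a minimal sketch, not part of the disclosed embodiments: each sensor is modeled as a range-limited angular cone, and a cell counts as covered when its center lies inside at least one cone. The dictionary keys and the cone model are assumptions of this example.

```python
import math

def covered_cells(grid_rows, grid_cols, cell_size, sensors):
    """Return the set of (row, col) grid cells whose centers lie inside
    at least one sensor's field of view (range-limited angular cone)."""
    covered = set()
    for r in range(grid_rows):
        for c in range(grid_cols):
            cx = (c + 0.5) * cell_size  # cell-center world coordinates
            cy = (r + 0.5) * cell_size
            for s in sensors:
                dx, dy = cx - s["x"], cy - s["y"]
                if math.hypot(dx, dy) > s["range"]:
                    continue  # cell center is beyond the sensor's range
                bearing = math.atan2(dy, dx)
                # Smallest signed angle between the bearing and the
                # sensor's orientation.
                diff = math.atan2(math.sin(bearing - s["orientation"]),
                                  math.cos(bearing - s["orientation"]))
                if abs(diff) <= s["fov"] / 2:
                    covered.add((r, c))
                    break  # one covering sensor is enough
    return covered
```

Cells outside the returned set are candidates for the "unknown" state discussed below, since no sensor can report on them.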
This sensor information can be repeatedly and/or continuously transmitted to the fog over time. Generally speaking, the sensors described herein can be radar sensors, camera devices, light sensors, lidar sensors, and/or any other suitable sensors (for example, sonar sensors, video/camera image sensors, or V2X sensors). For example, one sensor on a vehicle (e.g., a motor vehicle) may be an installed rotating lidar sensor. In various cases, for simplicity, it can be assumed that if a sensor has a direct line of sight to the target, the probability of a false positive or false negative detection is negligible. If the target is obscured by other objects or is not in a sensor's FoV, the target is missed by the sensor field. Further, in some exemplary embodiments, false positive measurements can be largely eliminated by using tracking algorithms. For example, the possibility of a false negative detection of an object directly seen by a sensor may depend on the technical capability of the sensor, but this possibility is usually small. In various embodiments, sensors having the ability to detect or read the size of the detected object are used. A computing node (e.g., a central computing node) or fog 50 may include or may be one or more computing devices configured to dynamically generate a map using the sensor data obtained from the sensors. More specifically, the central node or fog can implement a fusion mapping algorithm for dynamically generating a map from the acquired sensor data. That is, the fog hosts the computing process that fuses the individual sensor data to create the dynamic map. The map can be dynamically created or updated in real time.
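The per-cell fusion step that such a fog node performs can be illustrated with the common log-odds combination of independent occupancy estimates. This is a generic sketch of one well-known option, not the specific algorithm of this disclosure; the uniform 0.5 prior and the probability values are assumptions of the example.

```python
import math

def fuse_cell_evidence(p_prior, sensor_probs):
    """Fuse independent per-sensor occupancy probabilities for a single
    grid cell using log-odds, a common occupancy-grid simplification."""
    def log_odds(p):
        return math.log(p / (1.0 - p))

    l = log_odds(p_prior)
    for p in sensor_probs:
        # Each sensor contributes its evidence relative to the prior.
        l += log_odds(p) - log_odds(p_prior)
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Two sensors each reporting 0.8 occupancy against an uninformed 0.5
# prior reinforce each other to a higher fused probability.
fused = fuse_cell_evidence(0.5, [0.8, 0.8])
```

With no sensor evidence the function simply returns the prior, which matches the intuition that an unobserved cell stays at its prior (unknown) state.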
In various embodiments, an occupancy grid map (OGM) can be built from evenly spaced cells that form a discretized environment model. In addition to using the received sensor data to dynamically create a map, the fog may be further configured to monitor the health of the sensors, for example via a heartbeat update signal, and to immediately detect or identify sensor outages. In further exemplary embodiments, map information (e.g., a generated map) may be transmitted to an agent (e.g., via wired or wireless technology) for cooperative sensing and navigation purposes. The agent may be a process or algorithm implemented by the fog or by another computing device. In one example, a vehicle can use the map input to inform driving decisions. The agent may have to be able to associate its "self-perspective" with the map content for self-positioning; that is, the agent and the sensor field share a reference coordinate system, for example if both the agent and the sensor field have a Global Navigation Satellite System (GNSS) module for positioning. In various embodiments of the present disclosure, an area of interest (AoI) is a well-defined area (or, in some cases, a set of areas) in space. The AoI can be determined by a user (e.g., defined by the user through user input) and/or can cover the area relevant for the task at hand. The AoI may, but does not necessarily, overlap with the area monitored by the cooperative sensor field. For example, the AoI may be a specific sub-region of the map that is of interest for an upcoming driving operation. If the AoI has only a small or no overlap with the range of the dynamic map, this presents a substantial design incompleteness. Referring back to Figure 1, the AoI 10 monitored by sensors 20a, 20b, 20c, 20d, and 20e is shown. The sensors 20 (sensors 20a, 20b, 20c, 20d, and 20e) of FIG. 1 may be implemented as part of a roadside sensor infrastructure for supporting automated roads.
However, FoVs 30a-30e do not completely overlap or cover AoI 10. In the example of FIG. 2, the sensors and the fog are included in or integrated with the vehicle 40. That is, the vehicle 40 has on-board sensors for environment sensing. In the case of Figure 2, the AoI is not fixed or bounded. In FIG. 2, the sensors have FoVs 30a, 30b, 30c, 30d, and 30e, and these FoVs are fixed relative to the vehicle 40. In this example, the AoI is the area in front of the vehicle 40. FIG. 3 illustrates an exemplary method for improving the completeness of a dynamically created map according to at least one exemplary embodiment of the present disclosure. FIG. 3 can be understood in conjunction with the exemplary embodiment depicted in FIG. 4. That is, FIG. 4 illustrates a processing flow related to the method of FIG. 3 according to at least one exemplary embodiment. The method depicted in FIG. 3 may be implemented by one or more processors of one or more computing devices. The one or more computing devices can be a central node or fog. At 310 of FIG. 3, one or more computing devices obtain sensor data or sensor information from multiple sensors over time, where the multiple sensors define a sensor field. For example, the sensors may be spatially distributed as shown in Figures 1 and 2. At 320 of FIG. 3, the one or more computing devices dynamically generate a map by fusing the obtained sensor data, where the generated map includes a plurality of grid cells at least partially covered by the sensor field. That is, the generated map may be or may include information representing the external environment covered by the sensors; for example, each grid cell may include or be associated with external or environmental information. A grid cell (or simply, a cell) may be of any suitable shape or size, which may depend on the characteristics of the sensors. That is, the grid cells are not necessarily uniform in shape or size.
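A minimal sketch of such a grid cell model might look as follows. The class and names are illustrative, not taken from the disclosure, and the sketch assumes uniform square cells for simplicity even though, as noted above, cells need not be uniform; each cell carries a three-valued occupancy state.

```python
from enum import Enum

class CellState(Enum):
    UNKNOWN = 0   # insufficient sensor information for this cell
    FREE = 1      # observed and not occupied by an object
    OCCUPIED = 2  # observed and occupied by an object

class OccupancyGridMap:
    """Discretized environment model built from evenly spaced cells."""

    def __init__(self, width_cells, height_cells, cell_size_m):
        self.cell_size_m = cell_size_m
        # Every cell starts as UNKNOWN until sensor evidence arrives.
        self.cells = [[CellState.UNKNOWN] * width_cells
                      for _ in range(height_cells)]

    def world_to_cell(self, x_m, y_m):
        # Map a world coordinate to its (row, col) grid index.
        return int(y_m // self.cell_size_m), int(x_m // self.cell_size_m)

    def set_state(self, row, col, state):
        self.cells[row][col] = state

    def count(self, state):
        return sum(c is state for row in self.cells for c in row)

# Illustrative use: mark the cell containing a detection as occupied.
ogm = OccupancyGridMap(width_cells=4, height_cells=3, cell_size_m=0.5)
ogm.set_state(*ogm.world_to_cell(0.6, 0.2), CellState.OCCUPIED)
```

Starting every cell in the UNKNOWN state mirrors the idea that completeness is gained only as sensor evidence flips cells to a known (FREE or OCCUPIED) state.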
Any suitable sensor fusion algorithm can be used to create a sensor map or maps from the sensor data. These sensor fusion algorithms include, but are not limited to, algorithms involving or applying the central limit theorem, Kalman filters, Bayesian networks, evidence theory (Dempster-Shafer), convolutional neural networks, and other techniques. Further, in one or more embodiments, the exemplary method of FIG. 3 may further include, for example, receiving an AoI from a user. Correspondingly, the generated map may include the AoI or may be limited to the AoI, which may be a sub-region or sub-part of the map. FIG. 4 shows that sensor data or sensor information 405 flows to the fog 450 to generate a map at 410. FIG. 4 also shows that input specifying the AoI may be received at 415, for example from a user or an electronic source. The AoI is transmitted to or obtained by the fog, and is used to update or refine the map 410 into the AoI map 420. Referring back to FIG. 3, at 330, the one or more computing devices determine the completeness of the generated map by determining the object completeness and coverage completeness of the generated map from the obtained sensor data. In one or more embodiments, the one or more computing devices may determine the completeness of a sub-portion or sub-region (e.g., the AoI) of the generated map. FIG. 4 shows the sensor information 405 used by the fog 450 to determine the completeness of the map by calculating the unknowns at 425. As explained in more detail below, the completeness determination may include determining the unknown areas of the map. For example, an unknown area may be an area (e.g., a cell) whose occupancy state is unknown or undeterminable due to a lack of sufficient sensor information for that area. At 340 of FIG. 3, the one or more computing devices may update (or improve) the completeness of the generated map by using the determined object completeness and the determined coverage completeness to eliminate unknown areas of the generated map.
In other words, the map or the map information can be updated to reduce the amount or number of unknown areas. That is, one or more unknown areas of the map can now be considered "known". For example, the one or more computing devices may use the completeness information to eliminate or reduce the amount or number of unknown areas in the obtained AoI, instead of reducing the unknown areas in the entire map. For example, FIG. 4 shows that the map of the AoI with unknowns is updated at 430 and that completeness information is also determined or calculated at 435. This updated information can be used further. For example, the information associated with determining completeness may include the calculation of completeness metrics (e.g., Γ(AoI)). Therefore, the updated map (with reduced unknown area) and/or the completeness metrics may be transmitted or used as input in further decision-making processes at 440. In one example, the updated AoI map and completeness metric can be input into a self-driving algorithm or any other computer process that uses the data. In some embodiments, the information is simply presented to the user (e.g., visually), for example through a display device, in order to be relied upon by the user. Generally speaking, map completeness reflects the extent to which the map or sensors sufficiently represent the domain of discourse (for example, the AoI). Map completeness is generally reduced by the portion of information that fails to capture the ground truth (for example, passing objects that are missed by the sensors). Referring back to FIG. 3, at 330, two types of completeness are determined or calculated, namely object completeness and coverage completeness. FIG.
6 illustrates a process for determining and improving the completeness of the map by determining the object completeness and the coverage completeness according to at least one exemplary embodiment. Generally speaking, in an object-based environment representation model, object completeness can be measured as the ratio of the objects (including attributes such as size, location, and speed) reported in a map (for example, an AoI map) to the objects present according to the ground truth. That is, the known objects are compared with the objects currently present. However, an incomplete object count alone yields no information about the locations of the missing objects. In addition, in a real dynamic environment, the corresponding ground truth is usually not available. In a grid-based environment model, on the other hand, the information unit is not an object entity but a grid cell (a Euclidean region) with a known or unknown state. In this case, the coverage completeness can be determined or calculated by comparing the number of known grid cells with the total number of grid cells. This is equivalent to the ratio of the spatially covered part of the AoI to the full extent of the AoI. Generally speaking, coverage completeness and object completeness are different metrics and coincide only in the case of completely uniform traffic and minimal cell occlusion. This can be understood as follows: in the ideal case of a perfectly uniform vehicle distribution with constant vehicle density in every area of both the covered and uncovered parts of the AoI, the coverage completeness measure and the object completeness measure will, on average, be equal or coincide with each other. However, passing objects may cast shadows on the grid cells behind them, and, depending on the design of the sensor field, the shadowed area may be very different from the area corresponding to the vehicles occluded in that shadow.
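The two ratios described above can be expressed in a minimal sketch; the function names are illustrative and the empty-denominator convention (an empty AoI counts as complete) is an assumption of this example.

```python
def object_completeness(num_reported, num_present):
    """Ratio of objects reported in the AoI map to objects actually
    present (the present count coming, e.g., from supervised boundary
    counting rather than external ground truth)."""
    return 1.0 if num_present == 0 else num_reported / num_present

def coverage_completeness(num_known_cells, num_total_cells):
    """Ratio of grid cells with a known occupancy state to all cells
    of the AoI."""
    return 1.0 if num_total_cells == 0 else num_known_cells / num_total_cells
```

Both functions return a value in [0, 1] when their inputs are consistent, so the two metrics can be compared directly or combined into an overall completeness metric such as Γ(AoI).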
Therefore, in practice, the two measurements may differ. As discussed with respect to FIG. 3, the completeness of the generated map is improved by using both the determined object completeness and the determined coverage completeness to eliminate unknown areas. In various embodiments of the present application, determining the object completeness may include determining whether any objects are missing from the map (for example, from the AoI). As described in various embodiments, the map may be generated by sensor fusion (for example, by an implementation of a sensor fusion algorithm that operates on the obtained sensor data). Sensor fusion algorithms generally include methods, techniques, or processes that combine data from several sensors or sources for the purpose of improving application or system performance. (See, for example, https://en.wikipedia.org/wiki/Sensor_fusion). In various exemplary embodiments, the sensor data may capture environmental properties from various sources, and the sensor data may be intelligently merged or combined for positioning and mapping of an outdoor or external environment. For example, at 610 in FIG. 6, one or more computing devices (e.g., the fog) are configured to determine objects entering and leaving at least a portion of the generated map, and to determine the current number of objects in the at least a portion of the generated map. This can be achieved by monitoring the flow of objects entering and leaving the map area (AoI). In other words, the entry and exit areas of the map or AoI can be supervised so that objects cannot enter or exit unnoticed. Sensors other than, or independent of, the sensors used to generate the fused map can be deployed to monitor or supervise the entry and exit of objects. Such additional sensors can include, for example, cameras, gratings (light barriers), or devices for wireless handshake messages at the entry and exit points. In short, the fog can receive sensor information to dynamically or continuously track the number of objects leaving and entering and the number of objects currently in the AoI, counting the objects leaving and entering as well as the objects currently in the AoI. The counting of objects can be performed at any time to establish a dynamic ground truth in terms of the number of objects. As a prerequisite, the ground truth may need to be calibrated once, for example on a blank or empty scene. In addition, determining the object completeness may include individually or independently monitoring or tracking each of the objects in the map or AoI. At 620 of FIG. 6, the fog tracks or monitors each of the one or more objects in at least a portion of the generated map. The tracking of an object can be accomplished through the cooperation of the sensors that form or define the sensor field. In this case, these sensors can detect the presence of the object and then be used to track its movement. Moreover, the sensors can be used to detect and record various attributes of objects in the sensor field, including, for example, target position, target speed, size, orientation, and so on. Such sensor information is reported or transmitted to the fog, which uses that information to track each object. In various embodiments of the present disclosure, such sensor data associated with each object, for example detections (e.g., where the object was detected, when the object was detected, etc.) and object attributes (size, position, speed, orientation, etc.), may be stored in any suitable computer storage medium, for example in one or more databases operatively connected to the fog. In addition, the status of the sensors (sensor operation status, sensor position, sensor FoV, etc.) can also be stored.
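The boundary-supervised counting described above, calibrated once on an empty scene, can be sketched as follows. The counter class and method names are illustrative, not taken from the disclosure; the sketch assumes a single supervised boundary and a perfectly observed entry/exit stream.

```python
class ObjectCounter:
    """Track the ground-truth object count in the AoI from supervised
    entry/exit sensors and compare it against the fused-map tracks."""

    def __init__(self, initial_count=0):
        # Calibrated once, e.g. on a blank or empty scene.
        self.count = initial_count

    def on_entry(self):
        self.count += 1   # an object crossed into the AoI

    def on_exit(self):
        self.count -= 1   # an object crossed out of the AoI

    def missing_objects(self, num_tracked):
        # Objects present in the AoI but not currently tracked by the map;
        # a positive value triggers the prediction-space step below.
        return max(self.count - num_tracked, 0)

counter = ObjectCounter()
for _ in range(3):
    counter.on_entry()   # three vehicles enter the AoI
counter.on_exit()        # one vehicle leaves
missing = counter.missing_objects(num_tracked=1)  # the map tracks only one
```

When `missing_objects` returns zero, the tracking accounts for every object and, per the logic above, the AoI can be declared object-complete.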
Once an object is detected, the fog can uniquely assign an identification to the tracked object (which can be stored along with the associated attributes of the tracked object). In short, the fog can retrieve such information (for example, historical sensor data) at any time. In FIG. 6, at 630, the fog may determine the object completeness of at least a portion of the generated map. That is, based on verifying the current number of objects (in the AoI) and the number of objects leaving or entering the AoI against the tracking, the one or more computing devices can determine the degree of object completeness. For example, object completeness holds when the object tracking does not detect the disappearance of any object and the tracking is consistent with the current number of objects, including the number of objects entering or leaving. That is, based on the current number of objects and the numbers entering and leaving, the tracking information may indicate whether any objects are missing. If no object is missing, for example if none of the tracked objects is "missed" and the net change of objects in the AoI is accounted for by entering or leaving objects, the map can be determined to be complete. That the map is determined to be complete means that no objects are unaccounted for, and therefore there are no grid cells with unknown occupancy, at least within the AoI of the map. The map can then be updated at step 660. In the case where no object is missing, it is considered that there is no unknown area, that is, no area (grid cell) whose occupancy is unknown. However, in the case where the map is incomplete, for example where one or more objects are missing or unaccounted for, then at 640 the prediction space is determined.
For example, when one or more objects are determined to be missing, for example based on analysis of the tracking and counting information (the current-number information and the number of objects entering and exiting), the fog determines or calculates a prediction space for each missing object. The union of the prediction spaces of all missing objects is the unified prediction space.

According to an exemplary embodiment of the present disclosure, the sensors may be used to detect various events associated with an object. For example, the fog can use the acquired sensor data to detect or identify a disappearance event, as discussed. The disappearance of an object can be detected by comparing the detection history in the previous time step(s) with the detection(s) in the current time step, so that the absence of a stable continuation of the vehicle path can be verified. The fog can therefore infer that the object (e.g., a vehicle) has left the FoV of the sensor field or is occluded. In various embodiments, the fog uses the acquired sensor data to determine an exit event. An exit event is equivalent to a disappearance event, except for the fact that the vehicle has left the map (AoI) within a specified time interval. The fog can be configured to determine or identify an entry event. An entry event is equivalent or similar to an exit event, except for the fact that the vehicle has entered the AoI within the specified time interval. The fog can also be configured to determine or detect an appearance event. The appearance of an object can be detected or discovered by comparing the detection history in the previous time step(s) with the detection(s) in the current time step: an appearance occurs for a vehicle that has no continuous tracking history.
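The four event types above can be derived from set differences between the previous and current detections, combined with the entry/exit sensor reports. The following is a sketch under that assumption; all names are illustrative.

```python
def classify_events(prev_ids, curr_ids, left_aoi, entered_aoi):
    """Sketch of the event detection: compare the previous time step's
    detections with the current ones. `left_aoi`/`entered_aoi` are the IDs
    reported by the exit/entry sensors in this interval."""
    gone = prev_ids - curr_ids   # no stable continuation of the path
    new = curr_ids - prev_ids    # no continuous tracking history
    return {
        "exit": gone & left_aoi,              # left the AoI in this interval
        "disappearance": gone - left_aoi,     # occluded or outside sensor FoV
        "entry": new & entered_aoi,           # entered the AoI in this interval
        "appearance": new - entered_aoi,      # reappeared from occlusion
    }

events = classify_events(
    prev_ids={"v1", "v2", "v3"}, curr_ids={"v1", "v4"},
    left_aoi={"v2"}, entered_aoi={"v4"},
)
print(events)  # v2 exited, v3 disappeared (occluded), v4 entered
```

Note that an appearance (`new - entered_aoi`) triggers the ID reassignment and prediction-space reset described below, while a disappearance triggers a prediction-space update.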
When an object was occluded or outside the FoV before a certain time step, the fog can infer or determine that the object has now entered, or just entered, the sensor FoV. Once an appearance is identified, the fog can delete the prediction space associated with the reappearing object in order to free up the grid cells involved. Further, the fog can reassign the object ID to specify which of the previously occluded objects has reappeared. Since more than one object can be located in the prediction space, the reassignment may not always be unique; in this case, part of the prediction space should be retained. If the object type is also registered and stored along with the ID, reassignment is facilitated.

In each exemplary embodiment of the present disclosure, the state of the objects is managed to accommodate the above events. The following is exemplary pseudo code illustrating such logic. Two lists featuring all objects in the AoI are maintained for sequential time steps (previous and current):

  list_prev={v_1,v_2,...};
  list_curr={v_1,v_2,...};

Each vehicle object is a structure having at least the following attributes:

  v=struct(ID, position, velocity, size, status, prediction space);
  status∈{vis, invis, noAoI};

The status can be any of visible (vis), invisible (invis), or not in the AoI (noAoI).

  Initialize list_prev, list_curr;
  WHILE time<time limit
    FOR all detections
      Write list_curr.position, list_curr.velocity, list_curr.size,
        list_curr.status=vis or noAoI;
    ENDFOR
    Assign list_curr IDs by matching positions, velocity of list_curr to list_prev;
    Complement list_curr with items present in list_prev but not in list_curr
      (so list_curr.status=invis);
    Identify exit/entry/appearance/disappearance + continuing invisibility
      events from list_curr and list_prev status;
    FOR all events
      IF disappearance or continuing invisibility
        Update prediction space using last known position and velocity in list_prev;
      ELSEIF appearance event
        Reassign ID and reset respective prediction space;
      ELSEIF exit event
        Delete from list_curr;
      ENDIF
    ENDFOR
    time=time+1;
    list_prev=list_curr;
  ENDWHILE

In terms of coverage integrity, a map (for example, a map generated by a fusion algorithm based on collaborative sensor data as described herein) may include multiple grid cells. Referring again to FIG. 6, at 650, the coverage integrity is determined by the computing device/fog calculating or determining the unknown grid cells of the generated map, or of the AoI of the generated map. In other words, in order to determine the coverage integrity, the status of each grid cell is evaluated. In at least one embodiment of the present disclosure, the computing device determines whether there is sufficient sensor information for each grid cell to determine whether the specific grid cell is occupied. Moreover, the sensor information is evaluated as to whether it is sufficient to confirm whether the occupancy state of the grid cell is known.
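The per-cell status evaluation at step 650 can be sketched as a small classifier. The confidence threshold and all names here are assumptions for illustration; the disclosure only requires that a cell be labeled unknown when sensor information is absent or insufficient.

```python
def cell_status(covered, sensor_confidence, occupied_evidence,
                min_confidence=0.5):
    """Sketch of the step-650 evaluation: a cell's occupancy state is
    known only when it is covered by the sensor field AND the sensor
    information is sufficient; otherwise it is 'unknown'."""
    if not covered or sensor_confidence < min_confidence:
        return "unknown"
    return "occupied" if occupied_evidence else "unoccupied"

print(cell_status(covered=False, sensor_confidence=0.9, occupied_evidence=True))
print(cell_status(covered=True, sensor_confidence=0.2, occupied_evidence=True))
print(cell_status(covered=True, sensor_confidence=0.9, occupied_evidence=True))
print(cell_status(covered=True, sensor_confidence=0.9, occupied_evidence=False))
```

The first branch captures both causes of unknownness discussed next: no coverage at all (design or fault) and insufficient information (e.g., occlusion).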
For example, if there is no sensor information, or insufficient sensor information, for a specific grid cell, that grid cell is determined to be in, and/or assigned, an "unknown" state. The "unknown" state can be invoked, for example, when no sensor information for the grid cell is available because the grid cell is not covered by the sensor field, or when the sensor information for the grid cell is insufficient to confirm its occupancy state. Further, when the state of a grid cell is known, the grid cell can be further determined to have, or be assigned, an "occupied" or "unoccupied" status. That is, if the sensor data indicates that there is at least one object in the grid cell, the grid cell may be determined to be in, or assigned, an occupied state. Similarly, if the sensor data indicates that there is no object in the grid cell, the grid cell may be determined to be in, or assigned, an "unoccupied" state. According to an exemplary embodiment, an object may be almost anything that occupies a grid cell. In some embodiments where the generated fusion map or AoI includes roads, the object(s) may be motor vehicles, bicycles, pedestrians, animals, rocks, etc.

In this disclosure, the unified set of all grid cells with unknown labels is denoted as S unknown. The "unknownness" of grid cells can be attributed to different causes, such as design integrity, system integrity, and sensing-related integrity. Grid cells unknown due to design integrity (denoted as S design) occur when parts of the map (for example, the AoI) are not covered by the sensor field due to design constraints.
Grid cells unknown due to system integrity (denoted as S fault) occur when a part of the map (for example, the AoI) is currently not covered by the sensor field due to one or more sensor failures, but would otherwise be covered by a fully functional sensor field. Grid cells unknown due to sensing-related integrity (denoted as S occlusion) occur when a part of the map (e.g., the AoI) is currently not covered by the sensor field even though the system is working properly or correctly; for example, this may be caused by objects that occlude or block the sensing of the grid cells.

According to various exemplary embodiments of the present disclosure, the sets S design, S fault, S occlusion, and S unknown can be depicted in FIGS. 5A-5D. As shown, FIGS. 5A-5D include the sensors, sensor field, AoI, etc. depicted in FIG. 2. Further included in these figures are multiple objects 60, which in this example are vehicles within the AoI. By comparing the known sensor FoVs and orientations with the map or AoI, the one or more grid cells belonging to the set S design can be directly determined or detected. S design can change due to displacement of the region of interest or by reconfiguration of the sensor field. In the example of FIG. 5A, the cell 80a is a cell that is unknown due to design constraints and therefore belongs to S design. As shown, the cell 80a is not covered by the sensors, for example, it is outside the FoV of the sensors, but the cell 80a is a cell in the AoI. One or more grid cells belonging to the set S fault can be determined or deduced by locating the faulty sensor in the sensor field by virtue of its unique ID and analyzing the change in the coverage of the map or AoI. This can be easily achieved where the configuration of each sensor is known to the fog node. S fault can vary over a typically very large time frame, of the order of the average lifetime of a sensor, or S fault can vary if the map or AoI is modified. In the example of FIG.
5B, the cell 80b is a cell that is unknown due to a sensor failure and therefore belongs to S fault. As shown, the cell 80b is within the field of view of the malfunctioning sensor 20b. In other words, these cells are in the AoI and within the designed sensor coverage, but are unknown due to the sensor failure. One or more grid cells belonging to the set S occlusion can be detected or determined based on the object detections made by the sensor field. For example, the fog node can project the subdomain of the map or AoI that is occluded behind a detected object, while taking the measurement uncertainty into account. This set changes continuously as the objects move. Sensing-related integrity can be particularly important for roadside sensor infrastructure, because the sensing angle of view may be very different from that of vehicles on the road, and therefore occluded areas may occur in the immediate vicinity of moving objects. In the example of FIG. 5C, the cell 80c is a cell that is unknown due to occlusion and therefore belongs to S occlusion. As shown, the cell 80c is in the field of view of some of the sensors but is blocked by the object (vehicle) 60c.

Finally, FIG. 5D shows the total set of cells unknown due to design constraints (S design), cells unknown due to sensor failure (S fault), and cells unknown due to occlusion (S occlusion). Therefore, the total set of unknown cells is based on the union of the coverage integrity categories, which is:

S unknown = S design ∪ S fault ∪ S occlusion

Therefore, without further modification, the overall coverage completeness c of the map or AoI is:

c = 1 − |S unknown ∩ Γ| / |Γ|

where S is the set of all grid cells including the entire region of interest, and Γ usually refers to the selected AoI of the map. For any traffic scene and any AoI, the coverage completeness defined above can serve as a quality measure of the usability of the dynamic map.
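The union of the three unknown sets and the resulting completeness measure can be sketched directly with set operations. The completeness formula used here (fraction of AoI cells whose occupancy is known) is an assumption consistent with the surrounding definitions; the names are illustrative.

```python
def coverage_completeness(s_design, s_fault, s_occlusion, aoi):
    """Sketch: S_unknown = S_design ∪ S_fault ∪ S_occlusion, and the
    completeness is the fraction of AoI cells that are NOT unknown."""
    s_unknown = s_design | s_fault | s_occlusion
    unknown_in_aoi = s_unknown & aoi
    c = 1 - len(unknown_in_aoi) / len(aoi)
    return s_unknown, c

aoi = {(x, y) for x in range(4) for y in range(4)}   # a 4x4 AoI, 16 cells
s_unknown, c = coverage_completeness(
    s_design={(0, 0)},
    s_fault={(1, 1), (0, 0)},           # overlaps with s_design are fine
    s_occlusion={(2, 2), (2, 3)},
    aoi=aoi,
)
print(len(s_unknown), c)  # 4 unknown cells -> completeness 0.75
```

Because the three sets may overlap (a cell can be both outside a failed sensor's coverage and occluded), the union rather than the sum of sizes is what matters.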
Since all occluded areas are regarded as potential risk areas, this kind of general estimate leads to an upper bound on the incompleteness and therefore to extremely cautious decision-making. Further, the coverage completeness is determinable even in the absence of a ground truth for the number of objects, and can therefore be particularly useful in very dynamic environments with low predictability (such as densely populated urban settings).

According to various embodiments, the fog or central computing node may be configured to determine the total set of unknown cells and/or the total completeness. The fog can determine this information dynamically, for example, the information is determined or calculated as the acquired sensor information is received or updated. According to an exemplary embodiment of the present disclosure, if the fog node recognizes a disappearance event, the fog node may be triggered to calculate a separate prediction space for the corresponding object (for example, a vehicle). For example, a prediction scheme can be implemented by the fog in order to estimate the set of all grid cells that the object can physically occupy. Thus, for example, once a disappearance event is identified from the sensor data, the fog can resort to the recent object history in order to determine or estimate the reachable region of the object, for example, the possible positions (grid cells) the object can occupy. In various embodiments, the determination of the prediction space may rely on a mechanical motion model of the disappearing object based on one or more last known positions and velocities. Any suitable algorithm or process can be used to estimate the reachable region. For example, in the case where the object is a vehicle, the fog can use the maximum steering angle and the physically possible acceleration to determine the vehicle's reachable region. The set of grid cells in the union of all such object reachable regions identifies the unified prediction space S physics.
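A minimal motion-model prediction space can be sketched with a point-mass bound: from the last known position and velocity, after time `dt` the object lies within `0.5*a_max*dt**2` of the ballistic position. This is a deliberately loose stand-in for the disclosure's vehicle model (which would also use the maximum steering angle); all names and the grid resolution are assumptions.

```python
import math

def prediction_space(pos, vel, dt, a_max, cell=1.0):
    """Sketch of S_physics for one disappeared object: all grid cells
    within the point-mass reachability radius around the position
    predicted from the last known position and velocity."""
    cx, cy = pos[0] + vel[0] * dt, pos[1] + vel[1] * dt   # ballistic center
    r = 0.5 * a_max * dt * dt                             # reachability radius
    cells = set()
    n = int(math.ceil(r / cell)) + 1
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            x, y = cx + i * cell, cy + j * cell
            if math.hypot(x - cx, y - cy) <= r:
                cells.add((round(x), round(y)))
    return cells

# Vehicle last seen at the origin moving at 10 m/s in x, max accel 4 m/s^2:
s_physics = prediction_space(pos=(0.0, 0.0), vel=(10.0, 0.0), dt=1.0, a_max=4.0)
print((10, 0) in s_physics, (0, 0) in s_physics)  # True False
```

The unified S_physics for several missing objects would then be the union of such per-object sets, as the text describes.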
In other words, S physics includes the possible locations of the missing objects, for example, the one or more grid cells that can be occupied by the one or more missing objects.

Returning to FIG. 6, at 660, the computing device or fog updates the map completeness by reducing the number of unknown grid cells. For example, according to an exemplary embodiment, after the object integrity is determined (for example, after S physics is determined), the coverage completeness can then be calculated and improved. Updating the map completeness can be achieved by eliminating grid cells that are "unknown", that is, by re-identifying or reassigning certain identified unknown grid cells as "known". Specifically, unknown grid cells (for example, in S unknown) that are not members of the unified prediction space (for example, S physics) can be eliminated: these unknown grid cells are grid cells that cannot be occupied. Therefore, the new or updated set of unknown grid cells consists of those cells that belong to the prediction space and that also overlap with the unknown cells that were originally determined or calculated. In the case where all objects are accounted for (complete object integrity), the computing device therefore recognizes that the original unknown cells are not occupied by any object. Likewise, in the case where there is a disappearance event, initially unknown grid cells can be eliminated as unknown wherever the prediction space indicates that the "missing object" is not, and will not be, in those grid cells. In an exemplary embodiment, the fog may use the unified prediction space to eliminate unknown grid cells from the previous set of unknown grid cells.
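The elimination step at 660 reduces to a set intersection: only unknown cells that a missing object could physically occupy remain unknown. A minimal sketch (illustrative names):

```python
def refine_unknown(s_unknown, s_physics):
    """Sketch of the step-660 refinement: S_unknown' = S_unknown ∩ S_physics.
    Cells outside the prediction space are released, since the object
    count verifies that no unaccounted object can be there."""
    still_unknown = s_unknown & s_physics
    released = s_unknown - s_physics   # reassigned a known (unoccupied) state
    return still_unknown, released

still_unknown, released = refine_unknown(
    s_unknown={(0, 0), (1, 0), (5, 5)},
    s_physics={(1, 0), (1, 1)},
)
print(still_unknown, released)  # {(1, 0)} stays unknown; the rest is released
```

In the limiting case of complete object integrity, `s_physics` is empty, so every unknown cell is released, matching the "temporarily complete" map described below.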
This refinement by the fog can be expressed as follows:

S unknown′ = S unknown ∩ S physics
S unknown′ = (S design ∪ S fault ∪ S occlusion) ∩ S physics

In short, S unknown′ is the intersection of the previous S unknown and S physics. As discussed in the next section, depending on the system design, this refinement can significantly improve the completeness quality measure. The limiting case of complete object detection is important: if the existence of all objects is verified, that is, there is no disappearance event and therefore no missing object, then all unknown grid cells can be safely considered unoccupied, in order to obtain a temporarily complete dynamic map.

The elimination of unknown grid domains is possible only because the object count verifies the absence of objects in these areas. If the sensor field does not cover the entry and exit points, S physics can still be evaluated, but this does not help the coverage completeness: it cannot be ruled out that an object may appear somewhere, at some time, in a temporarily occluded domain (for example, a pedestrian may step onto the road from the sidewalk). Therefore, even if it is known that a vehicle has just moved from the monitored area into the occluded area, the predicted object or vehicle reachable region cannot provide any useful insight.

FIG. 7 shows the updated flow of FIG. 4 according to at least one exemplary embodiment of the present disclosure, in which the computation of the unknown part is updated as described herein. FIG. 8 illustrates an exemplary roadside sensor structure for a road section 800 according to at least one exemplary embodiment of the present disclosure. According to an exemplary embodiment, the fog may implement a method for dynamically improving the completeness of the map. The roadside sensor structure may include a plurality of sensors 805 that operatively communicate with the fog to monitor a section of road (AoI) 800 that currently includes cars 870a, 870b, 875 and a truck 880.
The fields of view of the sensors may overlap to provide specific emphasis or redundant supervision at the pre-defined entry point 810a and exit point 810b. Through the detection of all entering and exiting objects, the sensor field can establish a notion of ground truth in terms of the number of vehicles in the AoI. As shown in FIG. 8, a car 875 has just entered the occluded area behind the truck 880. Therefore, the sensor field can be used to identify a temporary object incompleteness (for example, the disappearance of the car 875). In response, the fog can determine the domain of physically possible locations of the missed car based on the use and analysis of the most recent detection history. The intersection of this set of cells with all unknown cells defines the incompleteness of the dynamic coverage of the AoI, which represents the quality measure of the AoI map. The light-colored cells 820 are the cells that are determined to be known through the coverage integrity analysis, that is, the occupancy state of these cells is initially known. Similarly, the darker cells 830 are cells that are determined to be unknown through the coverage integrity analysis. For example, the sensor 805f among the sensors 805 in FIG. 8 is a faulty sensor, so the cells 830f are unknown due to this faulty sensor 805f. In FIG. 8, the dark cells 840 are initially part of the unknown cells 830; in other words, the dark cells 840 are initially part of the unknown cells 830 due to the occlusion caused by the truck 880. In response to the fog recognizing, through sensor tracking of the car 875, that the car 875 has disappeared, the fog immediately accesses and uses past or historical sensor data to implement a prediction scheme for estimating or determining the possible locations of the car 875. In this case, the dark cells 840 are the cells in which the cells determined from the prediction space intersect or overlap with the original unknown cells 830.
Therefore, the remaining cells outside of, or not part of, 840 may all be identified and considered to have a known occupancy state, for example, unoccupied.

In an exemplary embodiment, the generated map with updated completeness and completeness metrics can be used by other agents (human or computer) to make decisions. For example, FIG. 9 illustrates an exemplary roadside sensor structure of a road section 800 according to at least one exemplary embodiment of the present disclosure. According to an exemplary embodiment, the fog may implement a method for dynamically improving the completeness of the map. Similar to the road section of FIG. 8, the road section of FIG. 9 may include roadside sensors 905 in operative communication with the fog. In addition, the vehicles 910a and 920a also include sensors (not shown). In this case, the road segment is a two-lane highway in which the car 910a is blocked by a slow vehicle (truck 920a) that it wants to overtake. The AoI for this maneuver of the car 910a extends far back, because fast vehicles (such as 910b in the left lane) must be anticipated in order to make a safe driving decision. However, in this case, the on-board sensing range of the vehicle 910a is insufficient to cover this area. Therefore, the corresponding candidate map for the AoI of the vehicle 910a is highly incomplete (by design). However, the roadside infrastructure can provide an almost complete dynamic map of the AoI, and therefore verify that overtaking is currently a safe option. In this example, neither of these two information sources has detected any immediate threat in the AoI, although the vehicle-mounted sensor field alone provides no evidence of this at all. However, using these two information sources (for example, dynamic maps with improved or updated completeness information), an agent that uses the infrastructure sensors to proactively verify that there is no safety risk can improve its decision-making.
Therefore, the decision is not based on collision avoidance but on the completeness measure.

In each embodiment of the present disclosure, in contrast to the number of objects, the extent of a specific AoI is technically known, so the corresponding ground truth (for example, the number of all potentially visible grid cells) can be inferred. The uncovered grid domains can only give hints at possible object positions, and do not necessarily contain any missing object; the resulting completeness measure is therefore extremely cautious.

Exemplary embodiments of the present disclosure may be implemented by computing device(s) executing the methods described herein or similar methods. For example, a computing device may include one or more processors configured to execute instructions (e.g., computer/hardware instructions) that are stored on, and can be operatively accessed from, a suitable non-transitory computer-readable medium. Therefore, the processor(s) of the computing device can execute instructions that will cause the computing device to implement the methods, or variations of the methods, discussed herein.

Although the above description uses various exemplary use cases, these specific examples are used to enhance the clarity of the description and do not limit the applicability or scope of the technology described herein. Although the above description and the related drawings may depict electronic device components as separate elements, the skilled person will appreciate the various possibilities of combining or integrating discrete elements into a single element. Such possibilities may include: combining two or more circuits to form a single circuit, mounting two or more circuits on a common chip or base to form an integrated component, implementing discrete software components on a common processor core, and so on.
In turn, the skilled person will realize that a single component can be divided into two or more discrete components, such as decomposing a single circuit into two or more separate circuits, dividing a chip or base to separate components originally placed on it, dividing a software component into two or more parts and executing each part on a separate processor core, and so on.

The following examples relate to further aspects of this disclosure:

Example 1 is a method for execution by one or more computing devices, the method comprising: obtaining sensor data from a plurality of sensors over time, the plurality of sensors covering a sensor field; generating a map by fusing the obtained sensor data, wherein the generated map includes multiple grid cells at least partially covered by the sensor field; using the obtained sensor data, determining the integrity of at least a portion of the generated map by determining the object integrity and the coverage integrity of the map from the obtained sensor data; and updating the integrity of at least the portion of the generated map by using the determined object integrity and the determined coverage integrity to reduce the amount of unknown area of at least the portion of the generated map.

In Example 2, the subject matter of Example 1, wherein determining the object integrity may include: obtaining sensor data from an entrance sensor and an exit sensor; determining, by using the data obtained from the entrance sensor and the exit sensor, the objects entering at least a portion of the generated map and the objects leaving at least a portion of the generated map; determining, from the obtained sensor data, the current number of objects in at least a portion of the generated map; and tracking, through the obtained sensor data, each of one or more objects in at least a portion of the generated map.

In Example 3, the subject matter of Example 2, wherein determining the object integrity may include: determining, based on the tracking, the
determined number of objects, and the net change of objects in at least a portion of the generated map, that one or more objects have disappeared from at least part of the generated map.

In Example 4, the subject matter of Example 3, wherein, in response to determining that one or more objects have disappeared from at least a part of the generated map, the method may further include: determining a prediction space, wherein the prediction space includes a set of grid cells that can be occupied by the one or more determined disappearing objects.

In Example 5, the subject matter of Example 4, wherein determining the prediction space may further include: for each determined disappearing object, using past sensor data of that disappearing object to determine the possible locations that the disappearing object can currently occupy.

In Example 6, the subject matter of Example 5, wherein determining the coverage integrity may include: determining one or more unknown grid cells of the generated map, wherein the one or more unknown grid cells are grid cells for which insufficient sensor information is available.

In Example 7, the subject matter of Example 6, wherein updating the integrity of at least part of the generated map may include eliminating unknown grid cells that are not members of the determined prediction space.

In Example 8, the subject matter of Example 7, wherein eliminating the unknown grid cells may include: assigning a known state to the eliminated unknown grid cells.

In Example 9, the subject matter of any one of Examples 6 to 8, wherein determining the coverage integrity may further include: determining one or more known grid cells of the generated map, wherein a grid cell is known in response to determining from the obtained sensor data that the grid cell is occupied or not occupied by an object.

In Example 10, the subject matter of any one of Examples 6 to 9, wherein, in response to determining that one or more grid cells are not covered by the sensor field, the one or more grid
cells may be determined to be unknown.

In Example 11, the subject matter of any one of Examples 6 to 10, wherein, in response to determining that one or more grid cells are not covered due to a failure of one or more of the plurality of sensors, the one or more grid cells may be determined to be unknown.

In Example 12, the subject matter of any one of Examples 6 to 11, wherein, in response to determining that one or more grid cells are not covered due to occlusion, the one or more grid cells may be determined to be unknown.

In Example 13, the subject matter of any one of Examples 2 to 12, wherein the object may be a vehicle.

In Example 14, the subject matter of any one of Examples 1 to 13, wherein the method may further include: obtaining a region of interest, wherein at least a portion of the generated map is the obtained region of interest.

In Example 15, the subject matter of any one of Examples 1 to 14, wherein the method may further include: after improving the integrity, determining one or more integrity metrics for at least a portion of the generated map.

In Example 16, the subject matter of any one of Examples 2 to 15, wherein determining the object integrity may include: determining, based on the tracking, the determined number of objects, and the net change of objects in at least a portion of the generated map, that no objects are missing from at least part of the generated map.

In Example 17, the subject matter of Example 16, wherein, in response to determining that no objects are missing, the method may include determining that all grid cells of at least the portion are known.

Example 18 is one or more computing devices.
The one or more computing devices include: one or more processors; and at least one non-transitory computer-readable storage medium including instructions that, when executed by the one or more processors, cause the one or more processors to: obtain sensor data from multiple sensors over time, the multiple sensors covering a sensor field; generate a map by fusing the obtained sensor data, wherein the generated map includes multiple grid cells at least partially covered by the sensor field; use the obtained sensor data to determine the integrity of at least a part of the generated map by determining the object integrity and the coverage integrity of the map from the obtained sensor data; and update the integrity of at least the part of the generated map by using the determined object integrity and the determined coverage integrity to reduce the amount of unknown area of at least the part of the generated map.

In Example 19, the subject matter of Example 18, wherein the executed instructions may cause the one or more processors to determine the object integrity by causing the one or more processors to: obtain sensor data from an entrance sensor and an exit sensor; determine, by using the data obtained from the entrance sensor and the exit sensor, the objects entering at least a part of the generated map and the objects leaving at least a part of the generated map; determine, from the obtained sensor data, the current number of objects in at least a part of the generated map; and track, through the obtained sensor data, each of one or more objects in at least a part of the generated map.

In Example 20, the subject matter of Example 19, wherein the executed instructions cause the one or more processors to determine the object integrity by causing the one or more processors to: based on the tracked objects, the determined number of objects
and the net change of objects in at least a part of the generated map, determine the disappearance of one or more objects from at least part of the generated map.

In Example 21, the subject matter of Example 20, wherein, in response to causing the one or more processors to determine that one or more objects have disappeared from at least a portion of the generated map, the executed instructions may further cause the one or more processors to determine a prediction space, wherein the prediction space includes a set of grid cells that the one or more determined disappearing objects can currently occupy.

In Example 22, the subject matter of Example 21, wherein the executed instructions may cause the one or more processors to determine the prediction space by further causing the one or more processors to: for each determined disappearing object, use past sensor data of that disappearing object to determine the possible locations that the disappearing object can currently occupy.

In Example 23, the subject matter of Example 22, wherein the executed instructions causing the one or more processors to determine the coverage integrity may further include the executed instructions causing the one or more processors to: determine one or more unknown grid cells of the generated map, wherein the one or more unknown grid cells are grid cells for which insufficient sensor information is available.

In Example 24, the subject matter of Example 23, wherein the executed instructions causing the one or more processors to update the integrity of at least part of the generated map may further include the executed instructions causing the one or more processors to eliminate unknown grid cells that are not members of the determined prediction space.

In Example 25, the subject matter of Example 24, wherein the executed instructions causing the one or more processors to eliminate unknown grid cells may include the executed instructions
cause the one or more processors to eliminate The unknown grid cells are assigned a known state.In Example 26, the subject matter of any one of Examples 23 to 25, wherein the executed instructions cause one or more processors to determine the coverage integrity may further include the executed instructions cause the one or more The processor is used to determine one or more known grid units of the generated map, wherein, in response to determining from the obtained sensor data that the grid unit is occupied or not occupied by the object, the grid unit is known of.In Example 27, the subject matter of any one of Examples 23 to 26, wherein, in response to determining that one or more grid cells are not covered by the sensor field, the one or more grid cells may be determined to be Unknown.In Example 28, the subject of any one of Examples 23 to 27, wherein, in response to determining that one or more grid cells are not covered due to a failure of one or more of the plurality of sensors, the one Or multiple grid cells can be determined to be unknown.In Example 29, the subject of any one of Examples 23 to 28, wherein, in response to determining that one or more grid cells are not covered due to occlusion, the one or more grid cells may be determined to be Is unknown.In Example 30, the subject matter of any one of Examples 19 to 29, wherein the object may be a vehicle.In Example 31, the subject matter of any one of Examples 18 to 30, wherein the executed instructions may further cause one or more processors to: obtain a region of interest, wherein at least a portion of the generated map Is the obtained region of interest.In Example 32, the subject matter of any one of Examples 18 to 31, wherein the executed instructions may further cause one or more processors to: after improving completeness, determine at least a portion of the generated map One or more measures of integrity.In Example 33, the subject matter of any one of Examples 19 to 32, wherein the executed 
instructions cause one or more processors to determine the integrity of the object may further include the executed instructions cause the one or more The processor is configured to determine that no objects are missing from at least part of the generated map based on the tracked objects, the determined number of objects, and the net change of the objects in at least a part of the generated map.In Example 34, the subject matter of Example 33, wherein, in response to determining that no objects are missing, the executed instructions cause one or more processors to determine that at least a portion of all grid cells are known.Example 35 is a system that includes: one or more sensors; one or more computing devices, wherein the one or more computing devices are configured to: obtain sensor data from multiple sensors over time, the Multiple sensors cover the sensor field; generate a map by fusing the obtained sensor data, wherein the generated map includes multiple grid units at least partially covered by the sensor field; using the obtained sensor data, Determine the object integrity and coverage integrity of the map from the sensor data to determine the integrity of at least a part of the generated map; and reduce at least a portion of the generated map by using the determined object integrity and the determined coverage integrity The amount of unknown area to update the integrity of at least this part of the generated map.In Example 36, the subject matter described in Example 35 may further include one or more inlet sensors and one or more outlet sensors, and wherein the one or more computing devices may be further configured to: The entrance sensor and the exit sensor obtain sensor data; the objects entering at least a part of the generated map and the objects leaving at least a part of the generated map are determined by the data obtained from the entry sensor and the exit sensor; from the obtained sensor data Determine the current number of objects in at least a 
part of the generated map; and track each of one or more objects in at least a part of the generated map through the obtained sensor data.In Example 37, the subject matter of Example 36, wherein one or more computing devices may be further configured to determine the integrity of the object through the following operations: based on tracking, the number of objects determined, and the generated The net change of objects in at least part of the map determines the disappearance of one or more objects from at least part of the generated map.In Example 38, the subject matter of Example 37, wherein, in response to determining that one or more objects have disappeared from at least a portion of the generated map, the one or more computing devices may be further configured to: determine the prediction Space, where the prediction space includes one or more sets of grid units currently occupied by the determined disappearing object.In Example 39, the subject matter of Example 38, wherein the one or more computing devices configured to determine the prediction space may further include: for each determined disappearing object, the one or more computing devices It is configured to use past sensor data of each of the disappearing objects to determine the possible positions that the disappearing object can currently occupy.In Example 40, the subject matter of Example 39, wherein the one or more computing devices determining the coverage integrity may include one or more computing devices determining one or more unknown grid cells of the generated map, wherein, The one or more unknown grid cells are grid cells for which there is insufficient sensor information.In Example 41, the subject matter of Example 40, wherein the one or more computing devices configured to update at least part of the integrity of the generated map may include one or more computing devices further configured to use Yu: Eliminate unknown grid units that are members of the determined prediction space.In 
Example 42, the subject matter of Example 41, wherein the one or more computing devices are configured to eliminate the unknown grid unit may include one or more computing devices configured to eliminate the unknown grid Cells are assigned a known state.In Example 43, the subject matter of any one of Examples 41 or 42, wherein the one or more computing devices configured to determine the coverage integrity may further include one or more computing devices further configured to One or more known grid units for determining the generated map, wherein the grid unit is known in response to determining that the grid unit is occupied or not occupied by the object from the obtained sensor data.In Example 44, the subject matter of any one of Examples 41 to 43, wherein the one or more computing devices are configured to: in response to determining that one or more grid cells are not covered by the sensor field, change The one or more grid cells are determined to be unknown.In Example 45, the subject matter of any one of Examples 41 to 44, wherein the one or more computing devices are configured to: in response to determining that the one or more grid cells are due to one of the multiple sensors The failure of one or more sensors without being covered determines the one or more grid cells as unknown.In Example 46, the subject matter of any one of Examples 41 to 45, wherein the one or more computing devices are configured to: in response to determining that one or more grid cells are not covered due to occlusion, The one or more grid cells are determined to be unknown.In Example 47, the subject matter of any one of Examples 36 to 46, wherein the object may be a vehicle.In Example 48, the subject matter of any one of Examples 35 to 47, wherein the one or more computing devices may be further configured to: obtain a region of interest, wherein at least a part of the generated map is The obtained region of interest.In Example 49, the subject matter of any one of Examples 35 to 
48, wherein the one or more computing devices may be further configured to: after improving the integrity, determine that at least a part of the generated map One or more integrity measures.In Example 50, the subject matter of any one of Examples 36 to 49, wherein the one or more computing devices configured to determine the integrity of the object may include the one or more computing devices further configured to : Determine that no objects are missing from at least part of the generated map based on the tracked objects, the number of determined objects, and the net change of the objects in at least a part of the generated map.In Example 51, the subject matter of Example 50, wherein one or more computing devices may be configured to: in response to determining that no objects are missing, determine that at least a portion of all grid cells are known.It should be noted that one or more of the features of any of the above examples may be appropriately combined with any of the other examples.The foregoing description is given as an example only, and those skilled in the art will appreciate that modifications can be made without departing from the broader spirit or scope of the present invention as set forth in the claims. Therefore, the description and drawings should be viewed in an illustrative rather than restrictive sense.Therefore, the scope of the present disclosure is indicated by the appended claims and therefore is intended to cover all modifications falling within the equivalent meaning and scope of the claims.Although the present invention has been specifically shown and described with reference to specific embodiments, those skilled in the art should understand that various changes in form and details can be made to the present invention without departing from the present invention as defined by the appended claims Spirit and scope. 
Therefore, the scope of the present invention is indicated by the appended claims and is therefore intended to cover all changes falling within the equivalent meaning and scope of the claims. |
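The object- and coverage-completeness scheme recited in the examples above can be sketched concretely. Everything below — the grid encoding, the Manhattan-distance motion bound, and all names — is an illustrative assumption, not the claimed implementation:

```python
# Minimal sketch of the completeness update recited in the examples above.
UNKNOWN, FREE, OCCUPIED = 0, 1, 2

class GridMap:
    def __init__(self, width, height):
        # All grid cells start unknown (insufficient sensor information).
        self.cells = {(x, y): UNKNOWN for x in range(width) for y in range(height)}

    def fuse(self, observations):
        # Coverage completeness: cells with sufficient fused sensor data
        # become known (FREE or OCCUPIED); the rest stay UNKNOWN.
        self.cells.update(observations)

    def unknown_cells(self):
        return {c for c, s in self.cells.items() if s == UNKNOWN}

def prediction_space(last_pos, steps, max_speed=1):
    # Cells a disappeared object could occupy now, from its last observed
    # position and a simple max-speed motion bound (Manhattan distance).
    x0, y0 = last_pos
    r = steps * max_speed
    return {(x0 + dx, y0 + dy)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)
            if abs(dx) + abs(dy) <= r}

def improve_completeness(grid, disappeared):
    # Object completeness: for each tracked object that disappeared from
    # the map, eliminate unknown cells inside its prediction space by
    # assigning them a known state.
    for last_pos, steps in disappeared:
        for cell in prediction_space(last_pos, steps) & grid.unknown_cells():
            grid.cells[cell] = OCCUPIED

grid = GridMap(5, 5)
# Sensor field covers rows 0-2; rows 3-4 remain unknown (e.g. occlusion).
grid.fuse({(x, y): FREE for x in range(5) for y in range(3)})
before = len(grid.unknown_cells())                     # 10 unknown cells
improve_completeness(grid, disappeared=[((2, 2), 1)])  # object last seen at (2, 2)
after = len(grid.unknown_cells())                      # 9: cell (2, 3) eliminated
```

In this toy run, the single unknown cell inside the disappeared object's prediction space is assigned a known state, which is exactly the "reduce the amount of unknown area" step of Example 35.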
A common (ground) of a low voltage regulator is connected to a virtual common (ground) of an integrated circuit device, the virtual ground being connected to transistor sources but isolated from a true ground connected to the substrate of the integrated circuit device. The regulated output voltage from the low voltage regulator rises by the same amount as the virtual ground voltage when the virtual ground is back-biased sufficiently to reduce leakage current to an acceptable level in a given process technology. Therefore, the output of the low voltage regulator maintains a normal operating voltage for the logic during a power-saving back-biased condition.
What is claimed is:

1. A low voltage regulator coupled to source back-biased capable power domains, comprising: a low voltage regulator having a common thereof coupled to a virtual ground of at least one power domain in an integrated circuit die that is capable of being back-biased, an input coupled to a supply voltage, and an output coupled to and supplying a regulated voltage to transistors in the at least one power domain; and a true ground is coupled to a substrate of the integrated circuit die, wherein when the virtual ground is back-biased relative to the true ground sufficient to reduce leakage current to an acceptable level in a given process technology, the output voltage of the low voltage regulator rises with the virtual ground voltage so as to maintain substantially the same voltage to the transistors in the at least one power domain during back-biasing thereof.

2. The low voltage regulator according to claim 1, wherein the regulated voltage from the low voltage regulator is approximately the normal operating voltage for logic minus an offset voltage at the virtual ground sufficient to reduce the leakage current to the acceptable level in the given process technology.

3. The low voltage regulator according to claim 2, wherein the regulated voltage from the low voltage regulator is approximately 1.2 volts for 180 nanometer process technology.

4. The low voltage regulator according to claim 1, wherein the at least one power domain is back-biased with a ground offset voltage relative to the true ground sufficient to reduce leakage current to an acceptable level in the given process technology.

5. The low voltage regulator according to claim 4, wherein the ground offset voltage is about 0.6 volts for 180 nanometer process technology.

6. The low voltage regulator according to claim 1, wherein the true ground is at substantially zero (0) volts.

7.
The low voltage regulator according to claim 1, wherein bias current of the low voltage regulator is about 100 nanoamperes which is typical for 180 nanometer process technology.

8. The low voltage regulator according to claim 7, wherein the substrate is a p-substrate having holes as majority carriers.

9. The low voltage regulator according to claim 8, wherein the virtual ground is coupled to sources of n-mos transistors fabricated in the p-substrate.

10. The low voltage regulator according to claim 1, wherein the low voltage regulator is used to power the at least one power domain during back-biasing thereof.

11. A method for powering a source back-biased capable power domain with a low voltage regulator, said method comprising the steps of: providing a low voltage regulator having a common thereof coupled to a virtual ground of at least one power domain in an integrated circuit die that is capable of being back-biased, an input coupled to a supply voltage, and an output coupled to and supplying a regulated voltage to transistors in the at least one power domain; coupling a true ground to a substrate of the integrated circuit die; and back-biasing the virtual ground relative to the true ground sufficient to reduce leakage current to an acceptable level in a given process technology, wherein the output voltage of the low voltage regulator rises with the virtual ground voltage so as to maintain substantially the same voltage to the transistors in the at least one power domain during the step of back-biasing thereof.

12. The method according to claim 11, wherein the regulated voltage from the low voltage regulator is approximately the normal operating voltage for logic minus an offset voltage at the virtual ground sufficient to reduce the leakage current to the acceptable level in the given process technology.

13.
The method according to claim 11, wherein during the step of back-biasing the virtual ground voltage is a ground offset voltage sufficient to reduce leakage current to an acceptable level in the given process technology.

14. The method according to claim 11, wherein the true ground is at substantially zero (0) volts.

15. The method according to claim 11, wherein bias current of the low voltage regulator is about 100 nanoamperes which is typical for 180 nanometer process technology.

16. The method according to claim 11, wherein the substrate is a p-substrate having holes as majority carriers.

17. The method according to claim 11, wherein the virtual ground comprises the step of coupling sources of n-mos transistors fabricated in the p-substrate to the virtual ground.

18. The method according to claim 11, further comprising the step of powering the at least one power domain during back-biasing thereof with the low voltage regulator.
USING LOW VOLTAGE REGULATOR TO SUPPLY POWER TO A SOURCE-BIASED POWER DOMAIN

This application claims priority to commonly owned United States Provisional Patent Application Serial Number 61/451,202; filed March 10, 2011; entitled "Using Ultra-Low Power Voltage Regulator to Supply Power to a Source-Biased Power Domain," by James Muha, Tim Wilson, DC Sessions and Yong Yuenyongsgool; which is hereby incorporated by reference herein for all purposes.

TECHNICAL FIELD

The present disclosure relates to voltage regulators, and, more particularly, to using a low voltage regulator to significantly reduce standby, sleep mode current draw in source-biased power domains of an integrated circuit device.

BACKGROUND

An integrated circuit device may electrically alter the threshold voltage of its NMOS transistors by raising the Vss power rail voltage above the bulk (e.g., well, tub, or substrate) voltage of the integrated circuit substrate (sometimes referred to as a "virtual ground"). This technique is commonly used to reduce the power consumption of the integrated circuit device due to sub-threshold leakage. Generally, the integrated circuit device will have two or more independent voltage domains to service respective core logic circuits that have signal paths therebetween; some of these voltage domains may operate on the virtual ground, and other voltage domains may operate on true ground. Separate voltage supplies may be used to connect to N-MOS and P-MOS bulk regions in multiple well CMOS technologies. Modification of these voltages with respect to the primary power and ground supplies is called well-biasing. These supplies can be modulated to provide a back-bias voltage which causes an increase in the MOS device threshold voltage, Vth, thereby reducing the sub-threshold leakage. Back-bias tap cells have a basic function to provide access to the wells and/or substrate independent of the source connected transistors therein.
Back bias tap cells provide power for wells of always-on cells while power is gated for retention of flip-flops states, power gates with buffers and always-on buffers. They also provide well access such that back biasing can be used for leakage optimization. One way to dramatically lower the current of an integrated circuit device in a sleep state is to raise the ground rail voltage used by standard cells above the substrate voltage, commonly referred to as back-biasing. This reduces leakage current. Another way to reduce current while in a sleep state is to utilize a low voltage regulator since a loosely regulated, lower voltage is sufficient to maintain the logic cell states. This reduces bias current of not only the voltage regulator but of supporting macro cells like a band gap voltage reference. The aforementioned two techniques cannot be combined since the low voltage regulator does not provide a high enough voltage to maintain adequate noise margin when standard cells are in a back-biased state. A normal voltage regulator must be used to maintain adequate noise margin. One problem with implementing source back-biasing is that the effective voltage across the biased circuits decreases due to the ground (common source) voltage rising which in turn reduces the reliability of the biased circuits. For example, in a source-biased power domain in 180 nanometer technology, the ground rail, called virtual ground, is raised to approximately 0.6 volts, so it is necessary to supply 1.8 volts to the power rail to allow for 1.2 volts of noise margin. Presently, that requires that the main voltage regulator be in operation since the output voltage of a low voltage regulator in 180 nanometer technology, for example, is only 1.2 volts, leaving just 0.6 volts of noise margin which is insufficient. 
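The noise-margin arithmetic in this background can be checked in a few lines. This sketch is purely illustrative; the constant and function names are assumptions, and the voltages are the 180 nanometer figures quoted above.

```python
# Noise-margin arithmetic for the 180 nm example above: a source-biased
# domain raises virtual ground (VGND) to ~0.6 V, and the standard cells
# need ~1.2 V of effective voltage above VGND.
V_VGND = 0.6            # back-biased virtual ground, volts
REQUIRED_MARGIN = 1.2   # effective voltage the biased cells need, volts

def margin(v_rail_tgnd):
    # Effective voltage across source-biased cells when the regulator
    # output is referenced to true ground (TGND).
    return v_rail_tgnd - V_VGND

main_margin = margin(1.8)  # main regulator: 1.2 V margin, sufficient
low_margin = margin(1.2)   # TGND-referenced low regulator: 0.6 V, insufficient

assert main_margin >= REQUIRED_MARGIN
assert low_margin < REQUIRED_MARGIN
```

This is why, in the conventional arrangement, the main voltage regulator must remain in operation during back-biased sleep.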
SUMMARY

Therefore it would be desirable for source back-biased circuits to retain the same effective voltage for noise margin when being powered by a low voltage regulator as when these circuits are not being back-biased. According to an embodiment, a low voltage regulator coupled to source back-biased capable power domains may comprise: a low voltage regulator having a common thereof coupled to a virtual ground of at least one power domain in an integrated circuit die that is capable of being back-biased, an input coupled to a supply voltage, and an output coupled to and supplying a regulated voltage to transistors in the at least one power domain; and a true ground is coupled to a substrate of the integrated circuit die, wherein when the virtual ground is back-biased relative to the true ground sufficient to reduce leakage current to an acceptable level in a given process technology, the output voltage of the low voltage regulator rises with the virtual ground voltage so as to maintain substantially the same voltage to the transistors in the at least one power domain during back-biasing thereof. According to a further embodiment, the regulated voltage from the low voltage regulator is approximately the normal operating voltage for logic minus an offset voltage at the virtual ground sufficient to reduce the leakage current to the acceptable level in the given process technology. According to a further embodiment, the regulated voltage from the low voltage regulator is approximately 1.2 volts for 180 nanometer process technology. According to a further embodiment, the at least one power domain is back-biased with a ground offset voltage relative to the true ground sufficient to reduce leakage current to an acceptable level in the given process technology. According to a further embodiment, the ground offset voltage is about 0.6 volts for 180 nanometer process technology. According to a further embodiment, the true ground is at substantially zero (0) volts.
According to a further embodiment, bias current of the low voltage regulator is about 100 nanoamperes which is typical for 180 nanometer process technology. According to a further embodiment, the substrate is a p-substrate having holes as majority carriers. According to a further embodiment, the virtual ground is coupled to sources of n-mos transistors fabricated in the p-substrate. According to a further embodiment, the low voltage regulator is used to power the at least one power domain during back-biasing thereof. According to another embodiment, a method for powering a source back-biased capable power domain with a low voltage regulator may comprise the steps of: providing a low voltage regulator having a common thereof coupled to a virtual ground of at least one power domain in an integrated circuit die that is capable of being back-biased, an input coupled to a supply voltage, and an output coupled to and supplying a regulated voltage to transistors in the at least one power domain; coupling a true ground to a substrate of the integrated circuit die; and back-biasing the virtual ground relative to the true ground sufficient to reduce leakage current to an acceptable level in a given process technology, wherein the output voltage of the low voltage regulator rises with the virtual ground voltage so as to maintain substantially the same voltage to the transistors in the at least one power domain during the step of back-biasing thereof. According to a further embodiment of the method, the regulated voltage from the low voltage regulator is approximately the normal operating voltage for logic minus an offset voltage at the virtual ground sufficient to reduce the leakage current to the acceptable level in the given process technology. According to a further embodiment of the method, during the step of back-biasing, the virtual ground voltage is a ground offset voltage sufficient to reduce leakage current to an acceptable level in the given process technology.
According to a further embodiment of the method, the true ground is at substantially zero (0) volts. According to a further embodiment of the method, bias current of the low voltage regulator is about 100 nanoamperes which is typical for 180 nanometer process technology. According to a further embodiment of the method, the substrate is a p-substrate having holes as majority carriers. According to a further embodiment of the method, the virtual ground comprises the step of coupling sources of n-mos transistors fabricated in the p-substrate to the virtual ground. According to a further embodiment of the method, the step of powering the at least one power domain during back-biasing thereof is done with the low voltage regulator.

BRIEF DESCRIPTION OF THE DRAWINGS

A more complete understanding of the present disclosure may be acquired by referring to the following description taken in conjunction with the accompanying drawings wherein:

Figure 1 illustrates a schematic elevational view of a portion of an integrated circuit device showing separate substrate and source common (ground) connections that are used to source back-bias transistors in the integrated circuit device, according to a specific example embodiment of this disclosure;

Figure 2 illustrates a greatly simplified schematic diagram of a standard voltage regulator;

Figure 3 illustrates a greatly simplified schematic diagram of a low voltage regulator;

Figure 4 illustrates a greatly simplified schematic diagram of a low voltage regulator, modified according to a specific example embodiment of this disclosure;

Figure 5 illustrates a schematic diagram of a low voltage regulator for source-biased power domains, according to a specific example embodiment of this disclosure; and

Figure 6 illustrates a schematic block diagram of an integrated voltage regulator comprising switchable main and low voltage regulators, according to a specific example embodiment of this disclosure.
While the present disclosure is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific example embodiments is not intended to limit the disclosure to the particular forms disclosed herein, but on the contrary, this disclosure is to cover all modifications and equivalents as defined by the appended claims.

DETAILED DESCRIPTION

If the common (ground) of a low voltage regulator is connected to a virtual ground of the integrated circuit die, the regulated output voltage from the low voltage regulator is raised by approximately the same amount that the back-biased virtual ground voltage is raised. Therefore, the output of the low voltage regulator will be approximately the normal operating voltage for logic minus the ground offset voltage. For example, in 180 nanometer process technology, this voltage level is approximately 1.8 volts which is about 1.2 volts above a 0.6 volt virtual ground. The bias current of the main voltage regulator is in the one to two microampere range, while the bias current of the low voltage regulator may be 100 nanoamperes for typical 180 nanometer process technology. Therefore, significant power savings may be realized without sacrificing adequate noise margin for standard cells by replacing the main voltage regulator with a low voltage regulator, modifying the integrated circuit design such that transistors that previously were connected to true ground are now connected to virtual ground, and substrate taps are connected to true ground. Several microamperes of current may thereby be eliminated in a sleep or deep sleep state while maintaining adequate noise margin. Additionally, the bias current of a band gap voltage reference can be eliminated, thereby saving several more microamperes.
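The relationship described above — the low regulator's output riding on the virtual ground — can be restated as a two-line model. The function names are illustrative assumptions; the voltages are the 180 nm figures from the text.

```python
import math

V_REG_LOW = 1.2  # low voltage regulator output, referenced to its common (VGND)

def vout_tgnd(v_vgnd):
    # Output relative to true ground rises one-for-one with the virtual
    # ground: normal operating voltage minus ground offset, plus offset.
    return V_REG_LOW + v_vgnd

def logic_voltage(v_vgnd):
    # Voltage the source-biased cells actually see stays constant.
    return vout_tgnd(v_vgnd) - v_vgnd

# With a 0.6 V back-bias the output is ~1.8 V above true ground, and the
# cells see the full 1.2 V at any offset.
assert math.isclose(vout_tgnd(0.6), 1.8)
assert math.isclose(logic_voltage(0.0), 1.2) and math.isclose(logic_voltage(0.6), 1.2)
```

The bias-current figures above (one to two microamperes for the main regulator versus roughly 100 nanoamperes for the low regulator) are what make routing sleep-mode power through this VGND-referenced regulator worthwhile.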
Referring now to the drawings, the details of a specific example embodiment are schematically illustrated. Like elements in the drawings will be represented by like numbers, and similar elements will be represented by like numbers with a different lower case letter suffix. Referring to Figure 1, depicted is a schematic elevational view of a portion of an integrated circuit device showing separate substrate and source common (ground) connections that are used to source back-bias transistors in the integrated circuit device, according to a specific example embodiment of this disclosure. An integrated circuit die may comprise a p-substrate 102 having n-mos and p-mos transistors formed therein. A typical n-mos transistor comprises an n+ source 106, a gate 108 and an n+ drain 110. A typical p-mos transistor comprises a p+ drain 112, a gate 114 and a p+ source 116. The p-mos transistor is fabricated in an n-well 120 formed in the p-substrate 102. An n+ tap 122 is formed in the n-well 120 and is coupled to VDD and the p+ source 116 with a metal connection 118. A p+ tap 104 separate from the n+ source 106 of the n-mos transistor couples the p-substrate 102 to a true ground 128, TGND, connection, and the n+ source 106 is therefore independently connected to a virtual ground 130, VGND, connection. Insulating oxides are not shown for illustrative clarity. Referring to Figure 2, depicted is a greatly simplified schematic diagram of a standard voltage regulator. A standard (main) voltage regulator 232 has a common rail connected to the same true ground, TGND, connection 128 that is also coupled to the p+ (substrate) taps 104. The regulated output voltage of the regulator 232 has to be the normal operating voltage for logic in a given process technology; for example, in 180 nanometer process technology, this voltage level is approximately 1.8 volts to maintain logic circuits that have been source back-biased to reduce current therein.
The voltage regulator 232 uses significant current for its own operation, thus limiting battery life. Referring to Figure 3, depicted is a greatly simplified schematic diagram of a low voltage regulator. A low voltage regulator 334 has a common rail connected to the same true ground, TGND, connection 128 that is also coupled to the p+ (substrate) taps 104. The low voltage regulator 334 has an output voltage that is too low to maintain logic circuits that have been source back-biased to reduce current therein. Referring to Figure 4, depicted is a greatly simplified schematic diagram of a low voltage regulator, modified according to a specific example embodiment of this disclosure. The low voltage regulator 436 has a common rail connected to the virtual ground, VGND, connection 130 that is only coupled to the n+ source 106. The low voltage regulator 436 has an output voltage that is substantially the normal operating voltage for the logic minus the ground offset voltage, e.g., 1.2 volts for 180 nanometer process technology. Since the output voltage of the low voltage regulator 436 is referenced to the virtual ground, VGND, connection 130 and not the true ground, TGND, connection 128, it can maintain an output voltage providing substantially the normal operating voltage referenced to true ground, TGND, with reference to the n+ source 106, thereby maintaining logic circuits that have been source back-biased to reduce current therein. Referring to Figure 5, depicted is a schematic diagram of a low voltage regulator for source-biased power domains, according to the teachings of this disclosure. There are two GND inputs to the voltage regulator: "true ground," called TGND, and "virtual ground," called VGND. The TGND connection 128 is connected to only substrate taps to keep the substrate as close to zero (0) volts as possible.
The VGND connection 130 is connected to various transistor drains, gates, or sources, as dictated by the voltage regulator design, the design of which is not covered herein. The output voltage VOUT is relative to VGND since the circuitry of the regulator 436 connects only to VGND and not TGND. Thus as VGND rises above zero (0) volts the output voltage, VOUT, will rise similarly. Referring to Figure 6, depicted is a schematic block diagram of an integrated voltage regulator comprising switchable main and low voltage regulators, according to a specific example embodiment of this disclosure. An integrated voltage regulator 640 may comprise the main voltage regulator 232 and the low voltage regulator 436 previously described hereinabove, and voltage steering switches 642 and 644, e.g., field effect transistor (FET) switches. A direct current voltage (power) source 646, e.g., 3 volt supply, battery, etc., is coupled to the voltage steering switch 642 and supplies either the main voltage regulator 232 or the low voltage regulator 436 when in normal operation or low power back-biased standby, respectively. The other voltage steering switch 644 couples either the main voltage regulator 232 or the low voltage regulator 436 to VDD for the integrated circuit transistors when in normal operation or low power back-biased standby, respectively. It is contemplated and within the scope of this disclosure that the main voltage regulator 232, the low voltage regulator 436, and the voltage steering switches 642 and 644 may be separate or integrated into the integrated voltage regulator 640. While embodiments of this disclosure have been depicted, described, and are defined by reference to example embodiments of the disclosure, such references do not imply a limitation on the disclosure, and no such limitation is to be inferred.
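The Figure 6 arrangement amounts to a two-position selector: both steering switches move together, so exactly one regulator receives the supply and drives VDD at a time. A toy model follows; the function name, dict keys, and the 1.5 µA midpoint bias figure are illustrative assumptions, with voltages taken from the 180 nm figures in the text.

```python
def select_regulator(mode):
    # Switch 642 routes the supply to the active regulator; switch 644
    # routes that regulator's output to VDD.
    if mode == "normal":
        # Main regulator 232, TGND-referenced, ~1-2 uA bias current.
        return {"regulator": "main", "vdd_tgnd": 1.8, "bias_ua": 1.5}
    if mode == "standby":
        # Low regulator 436, VGND-referenced: 1.2 V above the 0.6 V
        # virtual ground, i.e. ~1.8 V relative to true ground, ~0.1 uA.
        return {"regulator": "low", "vdd_tgnd": 1.2 + 0.6, "bias_ua": 0.1}
    raise ValueError(mode)

normal = select_regulator("normal")
standby = select_regulator("standby")
# VDD relative to true ground is maintained across the mode change...
assert abs(normal["vdd_tgnd"] - standby["vdd_tgnd"]) < 1e-9
# ...while regulator bias current falls by an order of magnitude.
assert standby["bias_ua"] < normal["bias_ua"] / 10
```

The point of the model is that the logic's supply rail never sees a step when the switches change position, only the regulator bias current changes.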
The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent art and having the benefit of this disclosure. The depicted and described embodiments of this disclosure are examples only, and are not exhaustive of the scope of the disclosure. |
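The ground-referencing arithmetic described above can be sketched numerically. This is a minimal illustration only; the regulator output and VGND offset values below are hypothetical, not taken from the disclosure.

```python
# Sketch of the virtual-ground referencing described above: a regulator whose
# circuitry connects only to VGND produces an output that rides on the VGND
# offset when expressed relative to true ground (TGND).
# All voltage values are hypothetical, chosen only for illustration.

def vout_relative_to_tgnd(vgnd_offset_v: float, vout_vgnd_ref_v: float) -> float:
    """VGND-referenced regulator output expressed relative to TGND."""
    return vgnd_offset_v + vout_vgnd_ref_v

# Normal operation: VGND sits at ~0 V, so both references coincide.
normal = vout_relative_to_tgnd(0.0, 1.2)

# Source back-biased standby: VGND raised (hypothetically) 0.3 V above TGND.
# As VGND rises, VOUT relative to TGND rises by the same amount, so the
# back-biased logic still sees its nominal supply relative to its source.
standby = vout_relative_to_tgnd(0.3, 1.2)

print(normal, standby)  # the standby output is 0.3 V higher relative to TGND
```

The point of the sketch is simply that an output referenced to VGND tracks VGND when measured against TGND, which is why the back-biased logic retains substantially its normal operating voltage.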
A device structure includes a first interconnect line along a longitudinal direction and a second interconnect line parallel to the first interconnect line, where the first interconnect line is within a first metallization level and the second interconnect line is within a second metallization level. A first transistor and a laterally separated second transistor are on a same plane above the second interconnect line, where a gate of the first transistor is coupled to the first interconnect line and a gate of the second transistor is coupled to the second interconnect line. A first capacitor is coupled to a first terminal of the first transistor and a second capacitor is coupled to a first terminal of the second transistor. A third interconnect line couples a second terminal of the first transistor with a second terminal of the second transistor.
1. A device structure comprising: a first interconnect line along a longitudinal direction, wherein the first interconnect line is within a first metallization level; a second interconnect line parallel to the first interconnect line, wherein the second interconnect line is within a second metallization level; a first transistor and a second transistor on a same plane, the second transistor laterally separated from the first transistor, wherein a gate of the first transistor is coupled to the first interconnect line and wherein a gate of the second transistor is coupled to the second interconnect line; a via between the first interconnect line and the gate of the first transistor; a first capacitor coupled to a first terminal of the first transistor and a second capacitor coupled to a first terminal of the second transistor; and a third interconnect line coupling a second terminal of the first transistor with a second terminal of the second transistor, the third interconnect line extending along a direction orthogonal to the longitudinal direction.

2. The device structure of claim 1, wherein the second interconnect line is laterally separated from the first interconnect line by a first distance and the second transistor is laterally separated from the first transistor by a second distance.

3. The device structure of claim 2, wherein the first distance is less than the second distance.

4. The device structure of any one of claims 2-3, wherein the first distance is zero.

5. The device structure of any one of claims 1-4, wherein the first interconnect line laterally overlaps the second interconnect line.

6. The device structure of any one of claims 1-5, wherein the via is a first via and the device structure further comprises a second via coupled directly between the second interconnect line and the gate of the second transistor.

7. The device structure of any one of claims 1-6, wherein the first interconnect line is separated from the second interconnect line by a first vertical thickness measured along a second direction orthogonal to the first and the longitudinal directions, wherein the second interconnect line has a second vertical thickness measured along the second direction, wherein the metallization structure has a third vertical thickness measured along the second direction, wherein the via has a fourth vertical thickness measured along the second direction, and wherein the fourth vertical thickness is substantially equal to a sum of the first, the second and the third vertical thicknesses.

8. The device structure of any one of claims 1-7, wherein the first interconnect line and the second interconnect line each have a respective first lateral width as measured along a third direction orthogonal to the longitudinal direction, and wherein the first transistor and the second transistor each have a respective second lateral width as measured along the third direction, and wherein the first lateral width is greater than the second lateral width.

9. The device structure of any one of claims 1-8, wherein a first terminal of the first capacitor is coupled to the first terminal of the first transistor and a first terminal of the second capacitor is coupled to the first terminal of the second transistor, and wherein a second terminal of the first capacitor is coupled to a second terminal of the second capacitor.

10. The device structure of any one of claims 1-9, wherein the via is a first via and the device structure further comprises: a third transistor and a fourth transistor on a same plane, the third transistor laterally separated from the fourth transistor, wherein a gate of the third transistor is coupled to the first interconnect line and wherein a gate of the fourth transistor is coupled to the second interconnect line; a third via between the first interconnect line and the gate of the third transistor; a third capacitor coupled to a first terminal of the third transistor and a fourth capacitor coupled to a first terminal of the fourth transistor; and a fourth interconnect line coupling a second terminal of the third transistor with a second terminal of the fourth transistor, the fourth interconnect line extending along a direction orthogonal to the longitudinal direction.

11. The device structure of any one of claims 1-10, wherein the via is a first via and the device structure further comprises a second via coupled directly between the second interconnect line and the gate of the second transistor.

12. The device structure of any one of claims 1-11, wherein the first metallization level and the second metallization level are vertically separated by at least 20 nm.

13. A method to fabricate a device structure, the method comprising: forming a first interconnect line within a first metallization level, the first interconnect line extending along a first direction; forming a second interconnect line within a second metallization level, wherein the second metallization level is above the first metallization level; forming a first via on the second interconnect line; forming a second via on the first interconnect line; forming a first transistor on the first via; forming a second transistor on the second via; forming a first capacitor on a first terminal of the first transistor; forming a second capacitor on a first terminal of the second transistor; and forming a third interconnect line connecting a second terminal of the first transistor and a second terminal of the second transistor, wherein the third interconnect line extends orthogonally to the first interconnect line.

14. The method of claim 13, wherein the forming the first via and the second via comprises: depositing a first etch stop layer on the first interconnect line; depositing a dielectric on the first etch stop layer; depositing a second etch stop layer on the dielectric; etching a first opening in the second etch stop layer, in the dielectric and in the first etch stop layer; depositing a first conductive material in the first opening on the first interconnect line; removing excess first conductive material from a region outside of the first opening; forming a second opening in the second etch stop layer; depositing a second conductive material in the second opening on the second interconnect line; and removing excess second conductive material from a region outside of the second opening.

15. The method of any one of claims 13-14, wherein the forming the second interconnect line further comprises extending the second interconnect line laterally to overlap the first interconnect line, and wherein forming the first capacitor and second capacitor further comprises forming a bridging plate between a top electrode of the first capacitor and a top electrode of the second capacitor.
BACKGROUND

Generally, interconnect lines are arranged in a manner where a series of word lines extend longitudinally in a first direction and are spatially limited to a first plane, and a series of bit lines are orthogonal to the word lines and extend longitudinally on a second plane. As memory devices that are connected to word lines and bit lines are scaled in size and spacing, word lines (and bit lines) that are on a single plane are brought closer together. Such an arrangement may cause an increase in word line capacitance, for example. Thus, it is necessary to explore interconnect architectures that enable memory device scaling while simultaneously minimizing additional capacitance.

BRIEF DESCRIPTION OF THE DRAWINGS

The material described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Also, various physical features may be represented in their simplified "ideal" forms and geometries for clarity of discussion, but it is nevertheless to be understood that practical implementations may only approximate the illustrated ideals. For example, smooth surfaces and square intersections may be drawn in disregard of finite roughness, corner-rounding, and imperfect angular intersections characteristic of structures formed by nanofabrication techniques.
Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.

Figure 1A is an isometric illustration of a device structure including a pair of word lines that are separated vertically, in accordance with an embodiment of the present disclosure.

Figure 1B illustrates a cross-sectional view through a line A-A' of the device structure in Figure 1A.

Figure 1C illustrates a cross-sectional view through a line A-A' of the device structure in Figure 1A, in an embodiment where word lines overlap, in accordance with an embodiment of the present disclosure.

Figure 1D illustrates a cross-sectional view of a first transistor coupled to a first word line on a lower level through a deep via.

Figure 1E illustrates a cross-sectional view of a second transistor coupled to a second word line on an upper level through a shallow via.

Figure 1F illustrates a cross-sectional view of the first transistor coupled to a first capacitor and the second transistor coupled to a second capacitor, in accordance with an embodiment of the present disclosure.

Figure 2A is an isometric illustration of a device structure including a pair of word lines that are separated vertically, and a pair of transistors orthogonally spanning the pair of word lines, in accordance with an embodiment of the present disclosure.

Figure 2B is a cross-sectional illustration through a longitudinal direction of a word line in the pair of word lines in the structure of Figure 2A.

Figure 2C is a cross-sectional illustration across the pair of word lines and through a single transistor in the structure of Figure 2A.

Figure 2D is a plan-view illustration of the structure in Figure 2A.

Figure 3 is a flow diagram illustrating a method to fabricate the device structure in Figure 1A.

Figure 4A is a cross-sectional illustration of a workpiece including a first word line fabricated in a first dielectric formed above a substrate, in accordance with
an embodiment of the present disclosure.

Figure 4B is a cross-sectional illustration of the structure in Figure 4A following the formation of a first etch stop layer, a second dielectric on the first etch stop layer, and fabrication of a second word line in the second dielectric.

Figure 4C is a cross-sectional illustration of the structure in Figure 4B following the process to deposit a second etch stop layer on the second word line and following the process to form a first via on the second word line.

Figure 4D is a cross-sectional illustration of the structure in Figure 4C following the process to form an opening in a portion of the second etch stop layer, in the second dielectric, and in the first etch stop layer to expose a portion of the first word line.

Figure 4E is a cross-sectional illustration of the structure in Figure 4D following the process to form a second via in the opening.

Figure 4F is a cross-sectional illustration of the structure in Figure 4E following the process to deposit a material layer stack to fabricate a pair of transistors.

Figure 5A is a cross-sectional illustration of the structure in Figure 4F following the process to form a first transistor on the second via and a second transistor on the first via, in accordance with an embodiment of the present disclosure.

Figure 5B is a plan-view illustration of the structure of Figure 5A following the formation of source and drain terminals in each of the first and the second transistors.

Figure 6A is a cross-sectional illustration of the structure in Figure 5A following the process to fabricate a spacer laterally adjacent to the first and second transistors, a third dielectric therebetween, a first interconnect on the first transistor, and a second interconnect on the second transistor, in accordance with an embodiment of the present disclosure.

Figure 6B is a cross-sectional illustration of the structure in Figure 6A following the process to deposit a fourth dielectric on the third dielectric and on the
first and on the second interconnects and fabricate portions of a first capacitor on the first interconnect and portions of a second capacitor on the second interconnect.

Figure 6C is a cross-sectional illustration of the structure in Figure 6B following the process to deposit an electrode material.

Figure 7A is a cross-sectional illustration of the structure in Figure 6C following the process to pattern the electrode to form the first and the second capacitor.

Figure 7B is a plan view of the structure in Figure 7A.

Figure 8 illustrates a computing device in accordance with embodiments of the present disclosure.

Figure 9 illustrates an integrated circuit (IC) structure.

DETAILED DESCRIPTION

Various multilevel wordline assemblies for embedded DRAM are described. In the following description, numerous specific details are set forth, such as structural schemes and detailed fabrication methods, in order to provide a thorough understanding of embodiments of the present disclosure.
It will be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features, such as transistor operations and operations associated with capacitors, are described in lesser detail in order to not unnecessarily obscure embodiments of the present disclosure. Furthermore, it is to be understood that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale.In some instances, in the following description, well-known methods and devices are shown in block diagram form, rather than in detail, to avoid obscuring the present disclosure. Reference throughout this specification to "an embodiment" or "one embodiment" or "some embodiments" means that a particular feature, structure, function, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Thus, the appearances of the phrase "in an embodiment" or "in one embodiment" or "some embodiments" in various places throughout this specification are not necessarily referring to the same embodiment of the disclosure. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive.As used in the description and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. 
It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.

The terms "coupled" and "connected," along with their derivatives, may be used herein to describe functional or structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical, optical, or electrical contact with each other. "Coupled" may be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) physical, electrical, or magnetic contact with each other, and/or that the two or more elements co-operate or interact with each other (e.g., as in a cause and effect relationship).

The terms "over," "under," "between," and "on" as used herein refer to a relative position of one component or material with respect to other components or materials where such physical relationships are noteworthy. For example, in the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two materials or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies.
As used throughout this description, and in the claims, a list of items joined by the term "at least one of" or "one or more of" can mean any combination of the listed terms.

The term "adjacent" here generally refers to a position of a thing being next to (e.g., immediately next to or close to with one or more things between them) or adjoining another thing (e.g., abutting it).

The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on."

The term "device" may generally refer to an apparatus according to the context of the usage of that term. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures having active and/or passive elements, etc. Generally, a device is a three-dimensional structure with a plane along the x-y direction and a height along the z direction of an x-y-z Cartesian coordinate system. The plane of the device may also be the plane of an apparatus which comprises the device.

Unless otherwise specified in the explicit context of their use, the terms "substantially equal," "about equal" and "approximately equal" mean that there is no more than incidental variation between two things so described. In the art, such variation is typically no more than +/-10% of a predetermined target value.

The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions.
For example, the terms "over," "under," "front side," "back side," "top," "bottom," and "on" as used herein refer to a relative position of one component, structure, or material with respect to other referenced components, structures, or materials within a device, where such physical relationships are noteworthy. These terms are employed herein for descriptive purposes only and predominantly within the context of a device z-axis and therefore may be relative to an orientation of a device. Hence, a first material "over" a second material in the context of a figure provided herein may also be "under" the second material if the device is oriented upside-down relative to the context of the figure provided. In the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two materials or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies.

The term "between" may be employed in the context of the z-axis, x-axis, or y-axis of a device. A material that is between two other materials may be in contact with one or both of those materials, or it may be separated from both of the other two materials by one or more intervening materials. A material "between" two other materials may therefore be in contact with either of the other two materials, or it may be coupled to the other two materials through an intervening material.
A device that is between two other devices may be directly connected to one or both of those devices, or it may be separated from both of the other two devices by one or more intervening devices.

In semiconductor devices such as DRAMs (Dynamic Random-Access Memory), generally each memory cell (bitcell) includes one transistor (such as a thin-film transistor, or TFT) and one capacitor for storing a bit (logical 1 or 0). TFTs may be moved to the back end of line (BEOL) layers of an advanced complementary metal-oxide-semiconductor (CMOS) process, which means that their corresponding capacitors can be implemented in the upper metal layers with correspondingly thicker inter-layer dielectric (ILD) and larger metal pitch to achieve higher capacitance.

Interconnect lines or word lines (WLs) in back-end-of-the-line (BEOL) TFT-based embedded DRAM utilize dense metal lines that are parallel to each other, exit the array, and connect to WL drivers. However, architectures considered for scaled embedded DRAM, such as 6F2 cells or angled arrays, may impose strict pitch limitations on a WL layout. With device scaling, reduction in feature sizes of transistors and spacing between adjacent transistors also reduces spacing between interconnect lines that are coupled with the transistors. A reduction in WL-to-WL spacing can increase WL capacitance. To preserve capacitance, it becomes necessary to reduce a width of the WLs. However, reduction in WL width can increase electrical line resistance. Increase in electrical line resistance can effectively increase ramp-up and ramp-down times of WLs, slowing down bitcell operation.

To solve problems arising from increased capacitance and increased line resistance, a pair of parallel word lines can be vertically spaced apart, i.e., placed on two different levels. For example, a first WL in the pair of parallel word lines may be on a lower level and a second WL on an upper level.
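The resistance/capacitance tradeoff described above can be sketched with the standard conductor formula R = ρL/(W·T) and a parallel-plate estimate of line-to-line coupling, C ≈ εA/s. All dimensions and material constants below are hypothetical placeholders chosen from the ranges discussed in this disclosure, not values it specifies.

```python
# Sketch of the WL scaling tradeoff: halving line width doubles resistance,
# while halving line-to-line spacing doubles the coupling capacitance.
# All numbers are hypothetical placeholders, not values from the disclosure.

RHO_CU = 1.7e-8            # ohm*m, bulk copper resistivity (ignores size effects)
EPS_LOWK = 2.7 * 8.85e-12  # F/m, assumed low-k dielectric (k ~ 2.7)

def line_resistance(length_m, width_m, thickness_m, rho=RHO_CU):
    """Electrical resistance of a rectangular interconnect line, R = rho*L/(W*T)."""
    return rho * length_m / (width_m * thickness_m)

def sidewall_capacitance(length_m, thickness_m, spacing_m, eps=EPS_LOWK):
    """Parallel-plate estimate of coupling between two facing line sidewalls."""
    return eps * (length_m * thickness_m) / spacing_m

L = 100e-6  # hypothetical 100 um word line
T = 30e-9   # thickness within the 10-50 nm range mentioned in the text

r_wide = line_resistance(L, 40e-9, T)       # 40 nm wide line
r_narrow = line_resistance(L, 20e-9, T)     # halving the width...
c_far = sidewall_capacitance(L, T, 40e-9)   # 40 nm spacing
c_near = sidewall_capacitance(L, T, 20e-9)  # ...or halving the spacing

print(r_narrow / r_wide, c_near / c_far)  # both ratios are ~2
```

The sketch makes the dilemma concrete: shrinking width to hold capacitance down raises resistance by the same factor, which is the motivation for moving the second word line to a different metallization level instead.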
Vertical spacing can enable two such WLs to be brought closer as spacing between two adjacent transistors (one transistor coupled to each WL) is reduced. The vertical space between two such word lines is typically occupied by a material having a low dielectric constant (low-K).

In exemplary embodiments, each word line, extending longitudinally, may connect to an array of thousands of transistors that are longitudinally spaced apart. In general, the arrays of transistors that are coupled to each WL pair are on a single plane above each WL, even though the WLs in the WL pair themselves are on two different levels. Each transistor in a single longitudinal array can be coupled to an upper or a lower WL by a single via. For example, a short via may be utilized to couple each transistor in a first array to a word line on an upper level. Likewise, a tall via may be utilized to couple each transistor in a second array to a word line on a lower level. Both short and tall vias can have smaller footprints compared to a surface area of the WL on which they land. Because a tall via has a vertical dimension that is significantly less than a length of a WL, the tall via does not appreciably increase capacitance.

In some embodiments, vertical separation between two WLs can also facilitate lateral overlap between the WLs on two different levels. A lateral overlap between two WLs can obviate the need to reduce line width because pitch requirements can be relaxed for all WLs on a given upper and lower level.

In a different embodiment, a single transistor may span across an upper and lower WL pair. A second transistor may be directly adjacent to a first transistor, where both transistors span over two vertically separated upper and lower WLs. In some such transistor architectures, the two transistors may share a common source or drain terminal, channel layer, and gate dielectric to further increase DRAM density.
In one such embodiment, each transistor includes a separate gate electrode and a separate source or drain terminal opposite the shared source or drain terminal. To provide memory storage, each transistor may be coupled to a capacitor, such as a metal-insulator-metal (MIM) capacitor extending over a respective non-shared source or drain terminal of each transistor. To tune a capacitor size, the MIM capacitor may span over more than one WL.

In accordance with an embodiment of the present disclosure, a device structure includes a first interconnect line along a longitudinal direction, where the first interconnect line is within a first metallization level. A second interconnect line is parallel to the first interconnect line, where the second interconnect line is within a second metallization level. A first transistor and a second transistor are on a same plane. The second transistor is laterally separated from the first transistor, where a gate of the first transistor is coupled to the first interconnect line and a gate of the second transistor is coupled to the second interconnect line. A via is between the first interconnect line and the gate of the first transistor. A first capacitor is coupled to a first terminal of the first transistor and a second capacitor is coupled to a first terminal of the second transistor. A third interconnect line couples a second terminal of the first transistor with a second terminal of the second transistor. The third interconnect line extends along a direction orthogonal to the longitudinal direction.

Figure 1A is an isometric illustration of a device structure 100. The device structure 100 includes an interconnect line 102 along a longitudinal direction (y-direction), where the interconnect line 102 is within a lower metallization level 105A. An interconnect line 104 is parallel to the interconnect line 102, where the interconnect line 104 is within an upper metallization level 105B.
Interconnect lines 102 and 104 may be herein referred to as word lines 102 and 104, respectively. Word lines 102 and 104 are vertically separated by a distance, SV. In some embodiments, SV is at least 20 nm but less than 200 nm.

The device structure 100 further includes a plurality of transistors, such as a transistor 106 and a transistor 108. Transistors 106 and 108 may be thin-film transistors (TFTs) utilized in a back end of the line (BEOL). Transistors 106 and 108 may be back gated as shown, where a respective gate of transistors 106 and 108 is below source and drain terminals. As shown, transistors 106 and 108 are on a same plane. The transistor 108 is laterally separated from the transistor 106, where a gate 110 of the transistor 106 is coupled to the interconnect line 102 and a gate 112 of the transistor 108 is coupled to the interconnect line 104. A via 114 is between the interconnect line 102 and the gate 110 of the transistor 106. The device structure 100 further includes a capacitor 116 coupled to a terminal 118 of the transistor 106 and a capacitor 120 coupled to a terminal 122 of the transistor 108. Each transistor-capacitor combination, such as transistor 106 and capacitor 116, for example, constitutes a memory bitcell.

For operational advantages, the interconnect line 124 couples a terminal 126 of the transistor 106 with a terminal 128 of the transistor 108. The interconnect line 124 extends, along the x-direction, orthogonal to the word lines 102 and 104.

Word lines 102 and 104 can be laterally separated or overlap. In the illustrative embodiment, the word lines 102 and 104 are laterally separated by a distance, WLS. Because of vertical separation, word lines 102 and 104 can be laterally brought closer together (decreasing WLS) without prohibitively increasing capacitance. Lateral separation, WLS, may be determined by a minimum spacing between transistors 106 and 108 connected to the word lines 102 and 104, respectively.
WLS may also be dependent on a lateral width (along the x-direction) of the capacitors 116 and 120. The word lines 102 and 104 each have a lateral width, WLW, and a thickness, WLT, where WLW and WLT may be dependent on a minimum line resistance required. In some embodiments, WLT is between 10 nm and 50 nm and WLW is between 10 nm and 50 nm.

Figure 1B is a cross-sectional illustration of the structure in Figure 1A through a line A-A'. Figure 1B includes one or more layers that are omitted in Figure 1A for clarity. In the illustrative embodiment, an upper portion of via 114 is laterally surrounded by an etch stop layer 138 and a lower portion of via 114 is surrounded by etch stop layer 140. Additionally, as shown, a dielectric 136 is between the etch stop layer 138 and etch stop layer 140. Dielectric 136 and etch stop layers 138 and 140 include materials that have a low dielectric constant to prevent capacitance buildup in the word lines 102 and 104. The device structure 100 further includes a via 134 between transistor 108 and word line 104, where the via 134 is spatially confined within etch stop layer 138. As shown, word line 104 is laterally surrounded by dielectric 136, and etch stop layer 138 extends on a portion of an uppermost surface 104A of the word line 104.

Via 114 has a vertical thickness or height, H1, as measured from an uppermost surface 102A of the word line 102. Via 134 has a vertical thickness or height, H2, as measured from surface 104A. The etch stop layer 138 has a thickness that is substantially equal to H2. In exemplary embodiments, H2 is less than H1. As shown, H1 is substantially equal to a sum of WLT, H2, and SV.

The dimensions and materials of the word lines 102 and 104 can be customized for the purpose of reducing word line resistance. As shown, word lines 102 and 104 each have a lateral width, WLW, and a vertical thickness, WLT (along the x and z directions, respectively).
A total cross-sectional area, given by a product of WLW and WLT, determines the conductance of the word lines 102 and 104. In addition to the cross-sectional area, conductivity of word lines 102 and 104 is determined by a choice of materials utilized. The word lines 102 and 104 may include a material such as copper or aluminum. In exemplary embodiments, word lines 102 and 104 include copper.

Depending on embodiments, the transistors 106 and 108 may each have a lateral width that is either confined within or extends beyond the respective lateral width of word lines 102 and 104. Each of the transistors 106 and 108 has a respective lateral width, WT (also measured along the x-direction). In the illustrative embodiment, WLW is greater than WT. WT may be determined by a target pitch/density of memory bit cells and by transistor performance characteristics, as WT can influence drive current.

In the illustrative embodiment, transistors 106 and 108 are laterally separated by a distance, TS. TS is determined partially by a lateral thickness of a spacer 130 adjacent to transistor 106 and a spacer 132 adjacent to transistor 108. In the illustrative embodiment, WLS is less than TS. In some embodiments, WLS is between 5 nm and 50 nm. In other embodiments, word lines 102 and 104 extend laterally such that a spacing, S1, between the word line 104 and via 114 is non-zero. In some embodiments, S1 can be 10 nm or less but greater than 1 nm. It is to be appreciated that flexibility in reducing S1 by scaling WLW can be advantageous when transistors are scaled and TS is reduced between them. Independently scaling WLW can advantageously facilitate a minimum line conductivity of word lines 102 and 104 to be preserved when WT and TS are reduced. Reducing transistor gate lengths (into a plane of the Figure) and WT can increase memory density.
However, as illustrated, vertical separation SV can enable WLW to be held fixed as WT is scaled.
As discussed above, in some embodiments, word lines 102 and 104 can overlap when TS and/or WT is reduced, such as is illustrated in Figure 1C. In some embodiments, the overlap, WLO, is a result of a reduction in spacing between transistors 106 and 108. Overlap between word lines 102 and 104 can enable preserving a low line resistance, such as a line resistance below 5000 Ohm. The overlap may also be tuned to provide word lines 102 and 104 with a desired range of electrical line resistance. In some embodiments, the overlap, WLO, may be between 0 nm and 20 nm.
WT, TS, WLW and WLS may be independently selected; however, there are limitations on how much the word lines 102 and 104 may extend laterally along the x-direction. In general, WLW is substantially the same for each word line 102 and 104. However, the lateral width of the upper word line 104 is constrained compared to a lateral width of the lower word line 102 because of the presence of the via 114. As shown, via 114 is laterally distant from a sidewall of word line 104 by a spacing S1. In embodiments, via 114 has a substantially vertical or a tapered sidewall. In the illustrative embodiment, via 114 has a substantially vertical sidewall (as measured from a normal to surface 102A) and S1 is a minimum separation between the word line 104 and via 114. In embodiments, S1 is at least 5 nm.
In various embodiments, via 114 has a footprint that is smaller than an uppermost surface of word line 102 and less than a width, WT, of the transistor 106. Via 114 has a lateral width, WV1, that is less than WT and WLW. As shown, via 114 is on a portion of uppermost surface 102A. In embodiments, WV1 is between 15 nm and 40 nm. In various embodiments, via 134 has a footprint that is smaller than an uppermost surface of word line 104 and less than a width, WT, of the transistor 108.
Via 134 has a maximum lateral width, WV2, that is less than WT and WLW.
Figure 1D is a cross-sectional illustration through a line B-B' of the structure in Figure 1A. The capacitor 116 and interconnect line 124 are not shown for clarity. Depending on the relative width of via 114 and transistor 106, the via 114 may extend under different portions of the transistor 106. As shown, transistor 106 has a lateral width, WT1, and the via 114 has a lateral width, WV1. In the illustrative embodiment, the via 114 extends laterally under terminals 126 and 118. As shown, terminals 126 and 118 are isolated by a dielectric 139A. The lateral width of the dielectric 139A defines a gate length, LG, of transistor 106. In an embodiment, terminals 126 and 118 may include any suitable electrically conductive material, alloy, or a stack of multiple electrically conductive materials. In some embodiments, terminals 126 and 118 include one or more metals or metal alloys, with metals such as copper, ruthenium, palladium, platinum, cobalt, nickel, hafnium, zirconium, titanium, tantalum, and aluminum, tantalum nitride, titanium nitride, tungsten, doped silicon, doped germanium, or alloys and mixtures of these. In some embodiments, terminals 126 and 118 include one or more electrically conductive alloys, oxides, or carbides of one or more metals. In some embodiments, the terminals 126 and 118 include a doped semiconductor, such as silicon or another semiconductor doped with an n-type dopant or a p-type dopant, or a compound semiconductor. Metals may provide higher conductivity, while doped semiconductors may be easier to pattern during fabrication. In some embodiments, the terminals 126 and 118 have a thickness (i.e., a dimension measured along the z-axis) between about 2 nm and 1000 nm, preferably between about 2 nm and 100 nm.
Figure 1E is a cross-sectional illustration through a line D-D' of the structure in Figure 1A. The capacitor 120 and interconnect line 124 are not shown for clarity.
Depending on the relative width of the via 134 and transistor 108, the via 134 may extend under different portions of the transistor 108. In the illustrative embodiment, via 134 extends laterally under terminals 128 and 122. As shown, transistor 108 has a lateral width, WT2, and the via 134 has a lateral width, WV2. In exemplary embodiments, WT2 is greater than WV2. As shown, terminals 128 and 122 are isolated by a dielectric 139B. The lateral width of the dielectric 139B defines a gate length, LG, of transistor 108. In exemplary embodiments, terminals 128 and 122 include a material that is the same or substantially the same as the material of the terminals 126 or 118 described in association with Figure 1D.
Figure 1F is a cross-sectional illustration through a line C-C' of the structure in Figure 1A. Figure 1F includes layers that are not illustrated in Figure 1A, such as dielectric 142. Capacitors 116 and 120 are laterally surrounded by dielectric 142. Capacitor 116 includes an electrode 116A, an insulator 116B on and adjacent to electrode 116A, and an electrode 116C on and adjacent to insulator 116B. Capacitor 120 includes an electrode 120A, an insulator 120B on and adjacent to electrode 120A, and an electrode 120C on and adjacent to the insulator 120B. As shown, electrode 116A of capacitor 116 is coupled to transistor 106 through an interconnect 146, and electrode 120A of capacitor 120 is coupled to transistor 108 through an interconnect 144. In the illustrative embodiment, interconnect 144 is coupled with terminal 122 and interconnect 146 is coupled with terminal 118. Interconnects 144 and 146 may have a wider or a narrower footprint compared to capacitors 116 or 120, respectively. As shown, the interconnects 144 and 146 have a narrower footprint compared to capacitors 116 and 120, respectively. In an embodiment, the electrodes 116A and 120A include a conductive material such as titanium nitride, tantalum or tantalum nitride.
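As a rough sense of scale for such electrode-insulator-electrode storage capacitors, an ideal parallel-plate estimate C = eps0 * eps_r * A / d can be sketched. The plate area, insulator thickness, and relative permittivity below are illustrative assumptions, not values from this disclosure, and practical cell capacitors often use three-dimensional geometries to gain area.

```python
# Parallel-plate capacitance estimate: C = eps0 * eps_r * A / d.
# All numbers are illustrative assumptions for a nanoscale capacitor.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(eps_r, area_m2, thickness_m):
    """Ideal parallel-plate capacitance, ignoring fringing fields."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Assumed cell: 30 nm x 30 nm plates, 3 nm insulator, eps_r = 3.9 (oxide-like).
c = plate_capacitance(3.9, 30e-9 * 30e-9, 3e-9)
print(f"estimated capacitance: {c * 1e18:.1f} aF")
```

The estimate lands in the attofarad range for these assumed dimensions, which illustrates why the capacitor width WC and the choice of insulator permittivity both matter for stored charge.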
In an embodiment, the insulators 116B and 120B each include a dielectric material such as silicon dioxide, carbon-doped silicon glass or other low dielectric constant oxides. In an embodiment, the electrodes 116C and 120C each include a conductive material such as titanium nitride, tantalum or tantalum nitride.
In the illustrative embodiment, electrodes 116C and 120C are electrically coupled by a bridging plate 148 that extends between the capacitors 116 and 120. Electrically coupling electrodes 116C and 120C enables a single programming voltage to be applied on bridging plate 148. Programming of capacitors 116 and 120 can then be accomplished by individually applying voltages on electrodes 116A and 120A, respectively. In an embodiment, each of the capacitors 116 and 120 has a lateral width WC. In exemplary embodiments, WC is substantially the same for each capacitor 116 and 120. WC may be less than or greater than WT.
Referring again to Figure 1B, transistors 106 and 108 include gate electrodes 110 and 112, respectively, gate dielectric layers 162A and 162B, respectively, and channel layers 164A and 164B, respectively. In the illustrative embodiment, isolations 139A and 139B are above the channel layers 164A and 164B, respectively. Terminals 118, 122, 126 and 128 are not illustrated in the cross-sectional illustration.
The channel layers 164A and 164B may include semiconductor materials including, for example, n-type or p-type materials. In some embodiments, the channel layers 164A and 164B may include a high mobility oxide semiconductor material, such as tin oxide, antimony oxide, indium oxide, indium tin oxide, titanium oxide, zinc oxide, indium zinc oxide, indium gallium zinc oxide, gallium oxide, titanium oxynitride, ruthenium oxide, or tungsten oxide.
In general, the channel layers 164A and 164B may include one or more of tin oxide, cobalt oxide, copper oxide, antimony oxide, ruthenium oxide, tungsten oxide, zinc oxide, gallium oxide, titanium oxide, indium oxide, titanium oxynitride, indium tin oxide, indium zinc oxide, nickel oxide, niobium oxide, copper peroxide, indium gallium zinc oxide (IGZO), indium telluride, molybdenite, molybdenum diselenide, tungsten diselenide, tungsten disulfide, n- or p-type amorphous or polycrystalline silicon, germanium, indium gallium arsenide, silicon germanium, gallium nitride, aluminum gallium nitride, indium phosphide, and black phosphorus, each of which may be doped with one or more of gallium, indium, aluminum, fluorine, boron, phosphorus, arsenic, nitrogen, tantalum, tungsten, and magnesium. In particular, the channel layers 164A and 164B may be formed of a thin film material. Some such materials may be deposited at relatively low temperatures, which allows depositing them within the thermal budgets imposed on back-end fabrication to avoid damaging any front-end components. In some embodiments, the channel layers 164A and 164B may have a thickness between about 5 nm and 30 nm.
In various embodiments, the gate dielectric layers 162A and 162B include one or more high-k dielectric materials and may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric layers 162A and 162B may include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, tantalum silicon oxide, lead scandium tantalum oxide, and lead zinc niobate.
In some embodiments, an annealing process may be carried out on the gate dielectric layers 162A and 162B during manufacture of the transistors 106 and 108 to improve the quality of the gate dielectric layers 162A and 162B. In some embodiments, the gate dielectric layers 162A and 162B have a thickness between about 0.5 nanometers and 3 nanometers, including all values and ranges therein, e.g., between about 1 and 3 nanometers, or between about 1 and 2 nanometers.
The gate electrodes 110 and 112 may include at least one p-type work function metal or n-type work function metal, depending on whether the transistors 106 and 108, respectively, are P-type metal oxide semiconductor (PMOS) transistors or N-type metal oxide semiconductor (NMOS) transistors. For a PMOS transistor, the gate electrodes 110 and 112 may include a metal such as, but not limited to, ruthenium, palladium, platinum, cobalt or nickel, or conductive metal oxides (e.g., ruthenium oxide). For an NMOS transistor, gate electrodes 110 and 112 may include a metal such as, but not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, or carbides of these metals (e.g., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide). In some embodiments, the gate electrodes 110 and 112 include a stack of two or more metal layers, where one or more metal layers are work function metal layers and at least one metal layer is a fill metal layer. Further metal layers may be included for other purposes, such as to act as a diffusion barrier layer, where the diffusion barrier layer may be directly adjacent to the via 134 or 114.
Referring again to Figure 1A, device 100 further includes a transistor 150 and a transistor 152 on the same plane. The transistor 152 is laterally separated from the transistor 150 (along the x-direction).
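The polarity-dependent selection of work function metals described above can be summarized in a small lookup sketch. The helper name is hypothetical; the metal lists simply mirror the examples given in the text and are not an exhaustive design rule.

```python
# Hypothetical helper mirroring the work function metal examples above:
# PMOS gates favor p-type work function metals, NMOS gates n-type ones.

P_TYPE_METALS = {"ruthenium", "palladium", "platinum", "cobalt", "nickel"}
N_TYPE_METALS = {"hafnium", "zirconium", "titanium", "tantalum", "aluminum"}

def candidate_gate_metals(device_type):
    """Return the example work function metals for a transistor polarity."""
    if device_type == "PMOS":
        return P_TYPE_METALS
    if device_type == "NMOS":
        return N_TYPE_METALS
    raise ValueError(f"unknown device type: {device_type}")
```

A gate stack built for transistor 106 or 108 would draw its work function layer from the set matching the transistor's polarity, with fill and barrier metals layered separately as the text notes.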
As shown, a gate 154 of the transistor 150 is coupled to the word line 102 and a gate 156 of the transistor 152 is coupled to the word line 104. A via 158 is coupled between the word line 102 and gate 154 of transistor 150, and a via 174 is coupled between the word line 104 and gate 156. Device 100 further includes a capacitor 160 coupled to a terminal 162 of the transistor 150 and a capacitor 164 coupled to a terminal 166 of the transistor 152. As shown, an interconnect line 168 couples a terminal 170 of the transistor 150 with a terminal 172 of transistor 152. In the illustrative embodiment, interconnect line 168 extends along a direction parallel to the interconnect line 124.
In some embodiments, transistors 150 and 152 have one or more features of transistor 106 or 108. In exemplary embodiments, transistors 150 and 152 are the same or substantially the same as transistors 106 or 108. In some embodiments, capacitors 160 and 164 have one or more features of capacitors 116 or 120. In exemplary embodiments, capacitors 160 and 164 are the same or substantially the same as capacitors 116 or 120.
Although not shown, the word lines 102 and 104 extend along the y-direction and can each accommodate hundreds of transistors, such as transistors 106 and 108, respectively, sufficient to fabricate at least a 256K-bitcell memory.
In other embodiments, device density can be increased with a word line structure similar to 102 and 104, but with a different transistor architecture. In some such embodiments, a pair of transistors can share one or more elements such as, for example, a channel layer and/or a terminal to reduce space between the transistors.
Figure 2A is an isometric illustration of a device structure 200 that includes a pair of transistors 210 and 216 that each straddle a pair of word lines. As shown, device structure 200 includes a word line 202 and a word line 204, extending along a y-direction in the Figure.
Word lines 202 and 204 are arranged identically to word lines 102 and 104, respectively, and have one or more features of word lines 102 and 104 (described in association with Figures 1A-1F). Word line 202 is within a metallization level 205A, and word line 204 is within a metallization level 205B and is parallel to word line 202. The device structure 200 further includes a via 206 coupled between the word line 202 and a gate 208 of transistor 210. A via 212 (not visible) is also coupled between the word line 204 and a gate 214 of a transistor 216.
The transistors 210 and 216 include many shared components to facilitate a smaller footprint without loss of functionality. The transistors 210 and 216 each include a shared gate dielectric 218 (herein gate dielectric 218) on each of the respective gates 208 and 214, and a shared channel layer 220 (herein channel layer 220) on the gate dielectric 218. The transistors 210 and 216 also include a shared terminal 226, where a portion of the terminal 226 is over the gate 208 and a portion of the terminal 226 is over gate 214. The shared terminal 226 can simultaneously function as a source or a drain for transistors 210 and 216. Transistor 210 further includes a terminal 224 on a portion of the channel layer 220, where terminal 224 is separated from the terminal 226 by isolation 250. In the illustrative embodiment, terminal 224 is over a portion of the gate 214. Transistor 216 further includes a terminal 222 on a portion of the channel layer 220, where terminal 222 is separated from the terminal 226 by isolation 252.
It is to be appreciated that respective gates, channel layers, gate dielectric layers and terminals of each of transistors 210 and 216 extend over each of the word lines 202 and 204, as illustrated in the plan view illustration of Figure 2D taken across a plane through gates 208 and 214.
Referring again to Figure 2A, transistors 210 and 216 each have a length L1 and a width, W1 and W2, respectively. In embodiments, L1 is less than the combined lateral widths, WLW, of word lines 202 and 204 and the spacing WLS. Transistors 210 and 216 having a length L1 that is less than or equal to 2WLW + WLS can facilitate a large collection of word line pairs. The terminals 222 and 224 are sufficiently long to provide a length for coupling with capacitors above. As shown, capacitor 228 is coupled to the terminal 224 and capacitor 230 is coupled to the terminal 222. Capacitors 228 and 230 may also extend over both word lines 202 and 204 to provide a larger volume for increased charge storage. In the illustrative embodiment, capacitors 228 and 230 do not extend a full length of the transistors. In other examples, the capacitors 228 and 230 can extend a full length, L1, of transistors 210 and 216.
Although not shown, the word lines 202 and 204 extend along the y-direction and can accommodate hundreds of transistor pairs, such as transistors 210 and 216, sufficient to fabricate at least a 256K-bitcell memory.
Figure 2B is a cross-sectional illustration along the line A-A' in Figure 2A. Figure 2B includes layers that are not illustrated in Figure 2A, such as dielectrics 232 and 234, one or more layers within capacitors 228 and 230, and etch stop layer 238. In embodiments, the capacitors 228 and 230 have one or more features of the capacitors 116 and 120 (described in association with Figure 1F). Capacitors 228 and 230 are laterally surrounded by dielectric 234.
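The layout budget above, that a transistor of length L1 fits over a word line pair when L1 is at most 2·WLW + WLS, can be checked with a short sketch. The dimensions used are illustrative assumptions chosen within the ranges stated earlier (WLW between 10 nm and 50 nm, WLS between 5 nm and 50 nm).

```python
# Check the transistor-length budget over a word line pair:
# L1 must not exceed 2 * WLW + WLS. Values are illustrative assumptions.

def fits_over_word_line_pair(l1_nm, wlw_nm, wls_nm):
    """True when a transistor of length l1_nm spans at most both
    word lines plus the spacing between them."""
    return l1_nm <= 2 * wlw_nm + wls_nm

# Assumed pair: WLW = 30 nm, WLS = 20 nm gives an 80 nm budget.
print(fits_over_word_line_pair(75, 30, 20))  # within budget: True
print(fits_over_word_line_pair(90, 30, 20))  # exceeds budget: False
```

Overlapping the word lines (WLO greater than zero) effectively relaxes this budget, which is one reason the text ties overlap to transistor scaling.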
Dielectrics 232 and 234 each include a material that is the same or substantially the same as the material of dielectric 136.
Capacitor 228 includes an electrode 228A, an insulator 228B on and adjacent to electrode 228A, and an electrode 228C on and adjacent to insulator 228B. Capacitor 230 includes an electrode 230A, an insulator 230B on and adjacent to electrode 230A, and an electrode 230C on and adjacent to the insulator 230B. As shown, electrode 228A of capacitor 228 is coupled to terminal 224 through an interconnect 242 and electrode 230A of capacitor 230 is coupled to terminal 222 through an interconnect 244. In the illustrative embodiment, interconnects 242 and 244 have a smaller footprint than capacitors 228 or 230, respectively. In some embodiments, electrodes 228A and 230A include a material that is the same or substantially the same as the material of the electrode 116A (described in association with Figure 1F). In some embodiments, electrodes 228C and 230C include a material that is the same or substantially the same as the material of the electrode 116C (described in association with Figure 1F). In some embodiments, insulators 228B and 230B include a material that is the same or substantially the same as the material of the insulator 116B (described in association with Figure 1F).
In the illustrative embodiment, electrodes 228C and 230C are electrically coupled by an electrode 246 that extends between the capacitors 228 and 230. Electrically coupling electrodes 228C and 230C enables a single programming voltage to be applied on electrode 246. Programming of capacitors 228 and 230 can then be accomplished by individually applying voltages on electrodes 228A and 230A, respectively. In an embodiment, each of the capacitors 228 and 230 has a lateral width WC. In exemplary embodiments, WC is substantially the same for each capacitor 228 and 230.
WC is less than W1 or W2.
The lateral spacing between terminals 224 and 226, and between terminals 222 and 226, defines a respective gate length, LG, for transistors 210 and 216. In the illustrative embodiment, terminals 224 and 226 are separated by isolation 250, and terminals 222 and 226 are separated by isolation 252, where isolations 250 and 252 extend along a length (into the negative x-direction) of the transistors 210 and 216, respectively. As shown, the interconnect 242 is partially on the terminal 224 and on the isolation 250. In other embodiments, interconnect 242 is only on the terminal 224. As shown, the interconnect 244 is partially on the terminal 222 and on the isolation 252. In other embodiments, interconnect 244 is only on the terminal 222.
In the illustrative embodiment, via 212 is laterally surrounded by etch stop layer 238. In the illustrative embodiment, via 212 and etch stop layer 238 have a substantially same height that is a result of a process flow utilized to fabricate device 200. Via 212 has a height, H2, as measured from an uppermost surface 204A of the word line 204. Via 212 may be substantially confined (along the y-direction) to gate 214 to prevent shorting with gates of adjacent transistors that may be present along the y-direction. In the illustrative embodiment, via 212 is laterally confined within a boundary of gate 214. In the illustrative embodiment, via 206 and word line 202 are superimposed for illustrative purposes. Via 206 and word line 202 are on a different plane.
Figure 2C is a cross-sectional illustration along the line B-B' through the structure in Figure 2A. Figure 2C includes layers that are not illustrated in Figure 2A, such as dielectric 232 and etch stop layers 238 and 240. As shown, via 206 extends from an uppermost surface 202A of the word line 202 to the gate 208. Via 206 has a vertical thickness, H1, as measured from an uppermost surface 202A. Via 206 is laterally surrounded by dielectric 232 and etch stop layers 238 and 240.
Dielectric 232 and etch stop layers 238 and 240 include materials that have a low dielectric constant to prevent capacitance buildup. Via 206 has one or more features of via 114 (described in association with Figure 1B). In the illustrative embodiment, via 206 includes a liner layer 206A adjacent to the dielectric 232 and etch stop layers 238 and 240, and a fill metal 206B adjacent to the liner layer 206A. As shown, the liner layer 206A and fill metal 206B are both in contact with the gate 208.
In some embodiments, word lines 202 and 204 can overlap when L1 is reduced, such as is indicated by extensions 202B and 204B within dashed lines. Overlap between word lines 202 and 204 can enable preserving a low line resistance, such as a line resistance below 5000 Ohm. The overlap, WLO, may also be tuned to provide word lines 202 and 204 with a desired range of electrical line resistance. In some embodiments, the overlap, WLO, may be between 0 nm and 20 nm.
In general, WLW is substantially the same for each word line 202 and 204. Word line 202 may extend laterally under word line 204. In some embodiments, word line 202 may extend laterally beyond a sidewall 204C of the word line 204; however, it is desirable to confine word line 202 to within sidewall 204C to avoid contacting adjacent word lines on level 205A. The lateral width, WLW, of the upper word line 204 is constrained compared to the lateral width of the lower word line 202 because of the presence of the via 206. As shown, via 206 is laterally distant from sidewall 204D of word line 204 by a spacing S1. In general, the spacing S1 is dependent on a profile of the sidewalls of the via 206. In some embodiments, via 206 has a substantially tapered sidewall profile. In some such embodiments, S1 is a minimum spacing between via 206 and word line 204, where the minimum spacing, S1, is at a top surface of the word line 204 and the spacing gradually increases towards word line 202.
In the illustrative embodiment, via 206 has a substantially vertical sidewall profile, and S1 is substantially fixed. In embodiments, S1 is at least 5 nm. In the illustrative embodiment, via 212 is superimposed for illustrative purposes. Via 212 and via 206 are on different planes.
Figure 3 is a flow diagram of a method 300 to fabricate a device structure having word lines on multiple levels coupled with transistors and capacitors, in accordance with an embodiment of the present disclosure. The method 300 begins at operation 310 by receiving a workpiece including a first word line within a first dielectric. The method 300 continues at operation 320 with deposition of a first etch stop layer and a second dielectric on the workpiece, followed by the formation of a second word line within the second dielectric. The method 300 continues at operation 330 with the formation of a first via on the second word line. The method 300 continues at operation 340 with the formation of a second via in a second opening. The method 300 continues at operation 350 with the formation of a first transistor on the first via and a second transistor on the second via. The method 300 concludes at operation 360 with the formation of a first capacitor coupled with the first transistor and a second capacitor coupled with the second transistor.
Figure 4A is a cross-sectional illustration of a workpiece 400 including a word line 102 fabricated in a dielectric 404, formed above a substrate 402, in accordance with an embodiment of the present disclosure. In an embodiment, the dielectric 404 is blanket deposited by a plasma-enhanced chemical vapor deposition (PECVD) or a chemical vapor deposition (CVD) process. In an embodiment, the dielectric 404 includes silicon and one or more of nitrogen, oxygen and carbon, such as silicon nitride, silicon dioxide, carbon-doped silicon nitride, silicon oxynitride or silicon carbide. In some embodiments, an opening is formed in the dielectric 404 and a conductive material is deposited into the opening.
In exemplary embodiments, the conductive material is copper (Cu), which provides much lower resistance compared to other metals such as aluminum, tungsten or titanium. The conductive material is then planarized to form the word line 102. The lateral width, WLW, and vertical thickness, WLT, of the word line 102 are chosen to obtain a requisite line resistance. In some embodiments, a metal-diffusion barrier material such as ruthenium, tantalum nitride (TaN), tantalum (Ta), titanium zirconium nitride (e.g., TixZr1-xN, where x = 0.53), titanium nitride (e.g., TiN) or titanium tungsten (TiW) is deposited prior to deposition of the conductive material.
Figure 4B is a cross-sectional illustration of the structure in Figure 4A following the formation of etch stop layer 140, a dielectric 136 on the etch stop layer 140 and fabrication of a word line 104 in the dielectric 136. In an embodiment, etch stop layer 140 is blanket deposited on the dielectric 404 and on the word line 102 by a PECVD or a CVD process. The etch stop layer 140 includes a material that can prevent or help prevent diffusion or migration of copper (Cu) from the word line 102 into the dielectric 136. In exemplary embodiments, etch stop layer 140 includes silicon, nitrogen and one or more of oxygen and carbon. In an embodiment, the dielectric 136 is blanket deposited by a PECVD or a CVD process. In an embodiment, the dielectric 136 includes silicon and one or more of nitrogen, oxygen and carbon, such as silicon nitride, silicon dioxide, carbon-doped silicon nitride, silicon oxynitride or silicon carbide. The etch stop layer 140 and dielectric 136 are deposited to have a combined vertical thickness, H4.
H4 may be determined by a desired height of a via to be fabricated above word line 102 and a vertical thickness of word line 104 to be fabricated in the dielectric 136.
In some embodiments, an opening is formed in the dielectric 136 and a conductive material is deposited into the opening. The conductive material is then planarized to form the word line 104. The lateral width, WLW, and vertical thickness, WLT, of the word line are chosen to obtain a requisite line resistance in the word line 104. The word line 104 includes a material that is the same or substantially the same as the material of the word line 102 and may be fabricated in the same or substantially the same manner as word line 102.
Figure 4C is a cross-sectional illustration of the structure in Figure 4B following the process to deposit an etch stop layer 138 on the word line 104 and following the process to form a via 134 on the word line 104. In an embodiment, etch stop layer 138 may be deposited to a thickness that is favorable for forming a via on the word line 104. The etch stop layer 138 includes a material that is the same or substantially the same as the material of the etch stop layer 140 and has one or more properties of etch stop layer 140. The etch stop layer 138 includes a material that can prevent or help prevent diffusion or migration of copper (Cu) from the word line 104 towards transistors to be fabricated above the via 134.
In an embodiment, an opening is formed in the etch stop layer 138. As shown, the opening has a lateral width, WV2. In an exemplary embodiment, WV2 is less than WLW of word line 104. After formation of the opening, a conductive material is deposited into the opening on the word line 104 and on an uppermost surface of the etch stop layer 138. The excess conductive material above the etch stop layer 138 may be removed via a planarization process.
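This vertical budgeting mirrors the stack relation described in association with Figure 1B, where the deep via height H1 from word line 102 to the gate is the sum of the word line separation SV, the word line thickness WLT, and the upper via height H2. The dimensions below are illustrative assumptions within the WLT range stated earlier.

```python
# Vertical stack budget from Figure 1B: H1 = SV + WLT + H2.
# All dimensions in nanometers; values are illustrative assumptions.

def lower_via_height(sv_nm, wlt_nm, h2_nm):
    """Height H1 of the via from word line 102 up to the transistor gate."""
    return sv_nm + wlt_nm + h2_nm

# Assumed stack: SV = 20 nm separation, WLT = 30 nm word line, H2 = 15 nm via.
h1 = lower_via_height(20, 30, 15)
print(f"H1 = {h1} nm")  # the deep via must clear the full stack; prints 65 nm
```

In process terms, the combined etch stop and dielectric thickness H4 has to accommodate SV plus WLT so that the subsequent via etch of Figure 4D lands on word line 102 at the intended depth.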
In an embodiment, a chemical mechanical planarization (CMP) process may be utilized to isolate and form via 134.
Figure 4D is a cross-sectional illustration of the structure in Figure 4C following the process to form an opening 406 in a portion of the etch stop layer 138 and in the dielectric 136 to expose a portion of the word line 102. In an embodiment, a mask 408 is formed on the dielectric 136 and on the via 134. A plasma etch process may be utilized to form the opening 406 by etching the etch stop layer 138, the dielectric 136 and the etch stop layer 140 until a portion of the word line 102 is exposed, as shown. The profile of the opening 406 may be tapered or substantially vertical, as shown. The opening 406 has a width, WV1, that is less than WLW of word line 102. In some embodiments, where the mask 408 includes a photoresist material that is lithographically patterned, the mask 408 is removed after the etch process.
Figure 4E is a cross-sectional illustration of the structure in Figure 4D following the process to form a via 114 in the opening. In an embodiment, a barrier layer 114A is deposited into the opening 406 on the word line 102, on the etch stop layer 138 and on the via 134. A conductive fill metal 114B is deposited on the surface of the barrier layer 114A, filling the opening 406. The barrier layer 114A may facilitate adhesion for the conductive fill metal 114B. In embodiments, the barrier layer includes a material such as ruthenium, tantalum nitride (TaN), tantalum (Ta), titanium zirconium nitride (e.g., TixZr1-xN, where x = 0.53), titanium nitride (e.g., TiN) or titanium tungsten (TiW). In exemplary embodiments, the fill metal can include a material such as tungsten or cobalt.
Figure 4F is a cross-sectional illustration of the structure in Figure 4E following the process to deposit a material layer stack 410 to fabricate a pair of transistors. In an embodiment, forming the material layer stack 410 includes sequentially depositing individual layers.
In an embodiment, a gate electrode material 412 is blanket deposited on vias 134 and 114 and on etch stop layer 138. A gate dielectric layer 414 is deposited on the gate electrode material 412 and a channel material 416 is deposited on the gate dielectric layer 414. In the illustrative embodiment, a conductive material 418 is deposited on the channel material 416. Conductive material 418 is patterned in a downstream operation to provide a source or a drain. In an embodiment, the gate electrode material 412 is blanket deposited by a physical vapor deposition (PVD), a PECVD or an atomic layer deposition (ALD) process. In exemplary embodiments, gate electrode material 412 includes a material that is the same or substantially the same as the material of the gate electrode 110 or 112. In an embodiment, the gate dielectric layer 414 is deposited by an ALD process. In exemplary embodiments, gate dielectric layer 414 includes a material that is the same or substantially the same as the material of the gate dielectric layer 162A or 162B. In an embodiment, the channel material 416 is deposited on the gate dielectric layer 414 by a PVD or a PECVD process. In embodiments, channel material 416 includes a material that is the same or substantially the same as the material of the channel layer 164A or 164B. In an embodiment, the conductive material 418 is deposited by a PVD or a PECVD process. In exemplary embodiments, conductive material 418 includes a material that is the same or substantially the same as the material of the terminal 118 or 126.
In an embodiment, a mask 420 is formed on the conductive material 418. The mask 420 may be lithographically patterned or may include a dielectric material. The mask 420 defines a location where transistors are formed above each via 134 and 114.
The mask 420 also defines the lateral width, WT, of transistors to be formed and a space, TS, between two adjacent transistors. In some embodiments, the conductive material may be deposited by a damascene process after patterning of the channel material 416, gate dielectric layer 414 and the gate electrode material 412.

Figure 5A is a cross-sectional illustration of the structure in Figure 4F following the process to form a transistor 106 on the via 114 and a transistor 108 on the via 134. The material layer stack 410 (described in association with Figure 4F) is patterned to form transistors 106 and 108. In an embodiment, a plasma etch process is utilized to etch the material layer stack 410 and form gate electrode 110, gate dielectric layer 162A and channel layer 164A in transistor 106, and gate electrode 112, gate dielectric layer 162B and channel layer 164B in transistor 108. The conductive material portions 418A and 418B are also formed by patterning conductive material 418, where each of the conductive material portions 418A and 418B has a same or substantially the same footprint as the channel layer 164A or 164B. As shown, a spacer 130 is formed laterally adjacent to sidewalls of transistor 106 and a spacer 132 is formed laterally adjacent to sidewalls of the transistor 108 after patterning. The spacers 130 and 132 laterally surround the transistors 106 and 108, respectively, as shown in the plan-view illustration of Figure 5B.

Figure 5B is a plan-view illustration of the structure of Figure 5A following the formation of source and drain terminals in each of the transistors 106 and 108. In an embodiment, a dielectric 422 is deposited on the structure of Figure 5A and planarized. In some embodiments, a mask is formed that defines an opening above the conductive material portions 418A and 418B. The conductive material portions 418A and 418B may be etched to expose channel layers 164A and 164B (not visible in the illustration). The etch forms terminals 118, 122, 126 and 128.
In the illustrative embodiment, a dielectric material is then blanket deposited and planarized to form isolation 139A between terminals 118 and 126 and isolation 139B between terminals 122 and 128.

Figure 6A is a cross-sectional illustration of the structure in Figure 5A following the process to fabricate an interconnect 144 on the transistor 108 and an interconnect 146 on the transistor 106. In an embodiment, a dielectric 424 is deposited on the transistors 106 and 108 and on the dielectric 422. The dielectric 424 is then patterned to form openings above the terminals 118 and 122. In some embodiments, dielectric 424 includes a material that is the same or substantially the same as the material of the dielectric 422. In the illustrative embodiment, a conductive material is deposited into the openings and planarized to form interconnects 144 and 146. The interconnects may be wider or narrower compared to WT. As shown, the interconnects have a lateral width that is less than WT. While not on the plane of Figure 6A, an interconnect line structure such as line structure 124 (indicated by dashed line) is fabricated connecting terminals on transistors 106 and 108. The terminals are on a same plane as the plane of the interconnect structure 124. In an embodiment, interconnect line structure 124 is also formed by a damascene process.

Figure 6B is a cross-sectional illustration of the structure in Figure 6A following the process to deposit a dielectric 426 on the dielectric 424 and on the interconnects 144 and 146, and portions of a respective capacitor in each of openings 427A and 427B. In some embodiments, dielectric 426 includes a material that is the same or substantially the same as the material of the dielectric 424 and is deposited by a PECVD, CVD or a PVD process. In an embodiment, openings 427A and 427B are formed in the dielectric 426 to expose interconnects 144 and 146, respectively.
The openings 427A and 427B may be formed by a plasma etch process. Fabrication of capacitors 116 and 120 includes deposition of an electrode material in the openings formed in the dielectric 426. The electrode material is deposited and may be patterned to form electrodes 116A and 120A. In the illustrative embodiment, the electrodes 116A and 120A are recessed below an uppermost surface of the dielectric 426 during the patterning process. An insulator layer is deposited on the electrodes 116A and 120A. The insulator layer may be patterned or removed from above the dielectric 426 by a planarization process to form insulators 116B and 120B.

Figure 6C is a cross-sectional illustration of the structure in Figure 6B following the process to deposit an electrode material 428 on the insulators 116B and 120B and on the dielectric 426. In an embodiment, the electrode material 428 is blanket deposited into the openings 427A and 427B and planarized. In some embodiments, a CMP process may be utilized to planarize the electrode material 428.

Figure 7A is a cross-sectional illustration of the structure in Figure 6C following the process to pattern the electrode. In an embodiment, a mask is formed on the electrode material 428. The mask may be designed to form individual capacitors 116 and 120. In the illustrative embodiment, the mask includes a bridging portion between capacitors 116 and 120. In an embodiment, a plasma etch process is utilized to etch the electrode material 428 to form electrodes 116C and 120C and a bridging plate 148 connecting electrodes 116C and 120C.

Figure 7B is a plan view of the structure in Figure 7A. As shown, electrodes 116C and 120C are connected by a bridging plate 148. In the illustrative embodiment, electrodes 116C and 120C laterally extend, in both x and y directions, beyond external sidewalls of electrodes 116A and 120A.
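The capacitors 116 and 120 described above are metal-insulator-metal stacks (bottom electrode, insulator, top electrode). As a rough, non-authoritative aid, the ideal parallel-plate relation C = ε0·εr·A/d bounds the achievable capacitance of such a stack; every number below is hypothetical and not taken from the patent:

```python
# Hedged illustration: ideal parallel-plate capacitance of a MIM stack.
# All geometry and permittivity values are assumptions for illustration only.
EPS0 = 8.854e-12  # F/m, vacuum permittivity

def parallel_plate_capacitance(eps_r, area_m2, thickness_m):
    """Ideal parallel-plate capacitance in farads: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Example: hafnia-like insulator (eps_r ~ 20), 100 nm x 100 nm plate, 5 nm thick.
c = parallel_plate_capacitance(20.0, 100e-9 * 100e-9, 5e-9)
```

Real embedded capacitors gain additional area from the recessed, three-dimensional electrode profile shown in Figures 6B-6C, so this planar estimate is only a lower-bound sketch.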
While bridging plate 148 facilitates voltage to be applied simultaneously to electrodes 116C and 120C, capacitors 116 and 120 can be programmed independently. While two transistor-capacitor pairs have been discussed with respect to Figures 4A-7B, the process described can be extended to the formation of a large array of transistor-capacitor pairs.

Figure 8 illustrates a computing device 800 in accordance with embodiments of the present disclosure. As shown, computing device 800 houses a motherboard 802. Motherboard 802 may include a number of components, including but not limited to a processor 801 and at least one communications chip 804 or 805. Processor 801 is physically and electrically coupled to the motherboard 802. In some implementations, communications chip 805 is also physically and electrically coupled to motherboard 802. In further implementations, communications chip 805 is part of processor 801.

Depending on its applications, computing device 800 may include other components that may or may not be physically and electrically coupled to motherboard 802. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset 806, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

Communications chip 805 enables wireless communications for the transfer of data to and from computing device 800.
The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. Communications chip 805 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. Computing device 800 may include a plurality of communications chips 804 and 805. For instance, a first communications chip 805 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communications chip 804 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

Processor 801 of the computing device 800 includes an integrated circuit die packaged within processor 801. In some embodiments, the integrated circuit die of processor 801 includes one or more interconnect structures, volatile memory devices, non-volatile memory devices, and device structures such as device structures 100 or 200 described in association with Figures 1A-1F and 2A-2C, respectively. Referring again to Figure 8, the term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

Communications chip 805 also includes an integrated circuit die packaged within communication chip 805.
In another embodiment, the integrated circuit die of communications chips 804, 805 includes one or more interconnect structures, non-volatile memory devices and transistors coupled with capacitors. Depending on its applications, computing device 800 may include other components that may or may not be physically and electrically coupled to motherboard 802. These other components may include, but are not limited to, volatile memory (e.g., DRAM) 807, 808, non-volatile memory (e.g., ROM) 810, a graphics CPU 812, flash memory, a global positioning system (GPS) device 813, a compass 814, a chipset 806, an antenna 816, a power amplifier 809, a touchscreen controller 811, a touchscreen display 817, a speaker 815, a camera 803, and a battery 818, as illustrated, and other components such as a digital signal processor, a crypto processor, an audio codec, a video codec, an accelerometer, a gyroscope, and a mass storage device (such as a hard disk drive, solid state drive (SSD), compact disk (CD), digital versatile disk (DVD), and so forth), or the like. In further embodiments, any component housed within computing device 800 and discussed above may contain a stand-alone integrated circuit memory die that includes one or more arrays of non-volatile memory devices.

In various implementations, the computing device 800 may be a laptop, a netbook, a notebook, an Ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 800 may be any other electronic device that processes data.

Figure 9 illustrates an integrated circuit (IC) structure 900 that includes one or more embodiments of the disclosure.
The integrated circuit (IC) structure 900 is an intervening substrate used to bridge a first substrate 902 to a second substrate 904. The first substrate 902 may be, for instance, an integrated circuit die. The second substrate 904 may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of an integrated circuit (IC) structure 900 is to spread a connection to a wider pitch or to reroute a connection to a different connection. For example, an integrated circuit (IC) structure 900 may couple an integrated circuit die to a ball grid array (BGA) 907 that can subsequently be coupled to the second substrate 904. In some embodiments, the first substrate 902 and the second substrate 904 are attached to opposing sides of the integrated circuit (IC) structure 900. In other embodiments, the first substrate 902 and the second substrate 904 are attached to the same side of the integrated circuit (IC) structure 900. And in further embodiments, three or more substrates are interconnected by way of the integrated circuit (IC) structure 900.

The integrated circuit (IC) structure 900 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In further implementations, the integrated circuit (IC) structure may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials.

The integrated circuit (IC) structure may include metal interconnects 908 and vias 910, including but not limited to through-silicon vias (TSVs) 912. The integrated circuit (IC) structure 900 may further include embedded devices 914, including both passive and active devices.
Such embedded devices 914 include capacitors, resistors, inductors, fuses, diodes, transformers, and device structures including transistors, such as device structures 100 or 200 described in association with Figures 1A-1F and 2A-2D, respectively. Referring again to Figure 9, the integrated circuit (IC) structure 900 may further include embedded devices 914 such as one or more resistive random-access devices, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radiofrequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the integrated circuit (IC) structure 900.

Example 1: A device structure includes a first interconnect line along a longitudinal direction where the first interconnect line is within a first metallization level, and a second interconnect line parallel to the first interconnect line, where the second interconnect line is within a second metallization level. The device structure further includes a first transistor and a second transistor on a same plane, where the second transistor is laterally separated from the first transistor, where a gate of the first transistor is coupled to the first interconnect line and where a gate of the second transistor is coupled to the second interconnect line. There is a via between the first interconnect line and the gate of the first transistor. The device structure further includes a first capacitor coupled to a first terminal of the first transistor and a second capacitor coupled to a first terminal of the second transistor.
A third interconnect line couples a second terminal of the first transistor with a second terminal of the second transistor, where the third interconnect line extends along a direction orthogonal to the longitudinal direction.

Example 2: The device structure according to example 1, where the second interconnect line is laterally separated from the first interconnect line by a first distance and the second transistor is laterally separated from the first transistor by a second distance.

Example 3: The device structure according to any one of examples 1 through 2, where the first distance is less than the second distance.

Example 4: The device structure according to any one of examples 1 through 3, where the first distance is zero.

Example 5: The device structure according to any one of examples 1 through 4, where the first interconnect line laterally overlaps the second interconnect line.

Example 6: The device structure according to any one of examples 1 through 5, where the via is a first via and the device structure further includes a second via coupled directly between the second interconnect line and the gate of the second transistor.

Example 7: The device structure according to any one of examples 1 through 6, where the first interconnect line is separated from the second interconnect line by a first vertical thickness measured along a second direction orthogonal to the first and the longitudinal directions, where the second interconnect line has a second vertical thickness measured along the second direction, where the metallization structure has a third vertical thickness measured along the second direction, where the via has a fourth vertical thickness measured along the second direction, and where the fourth vertical thickness is substantially equal to a sum of the first, the second and the third vertical thicknesses.

Example 8: The device structure according to any one of examples 1 through 7, where the first interconnect line and the second interconnect line
each have a respective first lateral width as measured along a third direction orthogonal to the longitudinal direction, and where the first transistor and the second transistor each have a respective second lateral width as measured along the third direction and where the first lateral width is greater than the second lateral width.

Example 9: The device structure according to any one of examples 1 through 8, where a first terminal of the first capacitor is coupled to the first terminal of the first transistor and a first terminal of the second capacitor is coupled to the first terminal of the second transistor and where a second terminal of the first capacitor is coupled to a second terminal of the second capacitor.

Example 10: The device structure according to any one of examples 1 through 9, where the via is a first via and the device structure further includes a third transistor and a fourth transistor on a same plane, where the third transistor is laterally separated from the fourth transistor, where a gate of the third transistor is coupled to the first interconnect line and where a gate of the fourth transistor is coupled to the second interconnect line. A third via is between the first interconnect line and the gate of the third transistor, a third capacitor is coupled to a first terminal of the third transistor and a fourth capacitor is coupled to a first terminal of the fourth transistor.
A fourth interconnect line couples a second terminal of the third transistor with a second terminal of the fourth transistor, where the fourth interconnect line extends along a direction orthogonal to the longitudinal direction.

Example 11: The device structure according to any one of examples 1 through 10, where the via is a first via and the device structure further includes a second via coupled directly between the second interconnect line and the gate of the second transistor.

Example 12: The device structure according to any one of examples 1 through 11, where the first metallization level and the second metallization level are vertically separated by at least 20 nm.

Example 13: A device structure includes a first interconnect line along a longitudinal direction where the first interconnect line is within a first metallization level. A second interconnect line is parallel to the first interconnect line, where the second interconnect line is within a second metallization level. A first via is coupled between the first interconnect line and a first gate of a first transistor. A second via is coupled between the second interconnect line and a second gate of a second transistor, where the first transistor and the second transistor further include a shared gate dielectric on each of the respective first gate and the second gate, a shared channel layer on the shared gate dielectric and a shared third terminal between the first terminal and the second terminal, where the shared third terminal is over a portion of the first and the second gates. The first transistor further includes a first terminal on a first portion of the shared channel layer, where the first terminal is over a portion of the first gate, where the second transistor further includes a second terminal on a second portion of the shared channel layer and where the second terminal is over a portion of the second gate.
A first capacitor is coupled to the first terminal of the first transistor and a second capacitor is coupled to the second terminal.

Example 14: The device structure according to example 13, where each of the first gate and the second gate, the gate dielectric and the shared channel layer laterally extend over the first interconnect line and the second interconnect line.

Example 15: The device structure according to any one of examples 13 through 14, where the first gate is laterally separated along the longitudinal direction from the second gate by a first distance.

Example 16: The device structure according to any one of examples 13 through 15, where the second interconnect line is laterally separated from the first interconnect line by a first distance.

Example 17: The device structure according to any one of examples 13 through 16, where the first interconnect line laterally overlaps the second interconnect line.

Example 18: A method to fabricate a device structure includes forming a first interconnect line within a first metallization level, where the first interconnect line extends along a first direction. The method further includes forming a second interconnect line within a second metallization level, where the second metallization level is above the first metallization level, and forming a first via on the second interconnect line. The method further includes forming a second via on the first interconnect line and forming a first transistor on the first via. The method further includes forming a second transistor on the second via and forming a first capacitor on a first terminal of the first transistor.
The method further includes forming a second capacitor on a first terminal of the second transistor and forming a third interconnect line connecting a second terminal of the first transistor and a second terminal of the second transistor, where the third interconnect line extends orthogonally to the first interconnect line.

Example 19: The method according to example 18, where the forming the first via and the second via includes depositing a first etch stop layer on the first interconnect line, depositing a dielectric on the first etch stop layer and depositing a second etch stop layer on the dielectric. The method further includes etching a first opening in the second etch stop layer, in the dielectric and in the first etch stop layer and depositing a first conductive material in the first opening on the first interconnect line. The method further includes removing excess first conductive material from a region outside of the first opening, forming a second opening in the second etch stop layer, depositing a second conductive material in the second opening on the second interconnect line and removing excess second conductive material from a region outside of the second opening.

Example 20: The method according to example 18, where the forming the second interconnect line further includes extending the second interconnect line laterally to overlap the first interconnect line, and where forming the first capacitor and the second capacitor further includes forming a bridging plate connecting a top electrode of the first capacitor with a top electrode of the second capacitor.

Device structures including vertically and laterally separated word lines, each coupled with transistors that are further coupled with a respective capacitor to form bitcells, are described herein. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of certain embodiments.
It will be apparent, however, to one skilled in the art that certain embodiments can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description.

Besides what is described herein, various modifications may be made to the disclosed embodiments and implementations thereof without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive, sense. The scope of the invention should be measured solely by reference to the claims that follow.
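The thickness relation recited in Example 7, together with the at-least-20 nm vertical separation of Example 12, can be sanity-checked numerically; every value below is hypothetical, chosen only to satisfy those two constraints:

```python
# Numeric sanity check of Example 7: the via's vertical thickness t4 equals
# the sum of (t1) the separation between the first and second interconnect
# lines, (t2) the second line's thickness, and (t3) the metallization
# structure's thickness. All values are illustrative assumptions.
t1_nm = 25.0   # line-to-line vertical separation (>= 20 nm per Example 12)
t2_nm = 30.0   # second interconnect line thickness
t3_nm = 15.0   # metallization structure thickness
t4_nm = t1_nm + t2_nm + t3_nm  # via thickness per Example 7
```

The via must therefore span the full vertical distance from the lower word line up through the upper metallization, which is why the deep via 114 of Figure 4E is taller than via 134.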
Methods, systems, computer-readable media, and apparatuses for providing intuitive, functional, and convenient ways of enabling a user of a head-mounted display unit or another augmented reality enabled device to interact with various user interfaces and other features provided by such a unit or device are presented. In some embodiments, a computing device, such as a head-mounted display unit, may receive camera input of a scene. Subsequently, the computing device may identify at least one reference object in the scene, for example, based on detecting one or more rectangles in the received camera input. The computing device then may receive input that defines a surface segment relative to the at least one reference object. Thereafter, the computing device may render the surface segment. |
WHAT IS CLAIMED IS: 1. A method comprising: receiving camera input of a scene; identifying at least one reference object in the scene, wherein the at least one reference object is a physical object; receiving input defining a surface segment relative to the at least one reference object; and causing the surface segment to be rendered. 2. The method of claim 1, wherein identifying at least one reference object in the scene comprises detecting one or more feature points in the received camera input. 3. The method of claim 1, wherein identifying at least one reference object in the scene comprises receiving a wireless communication from the at least one reference object. 4. The method of claim 1, wherein identifying at least one reference object in the scene comprises detecting one or more rectangles in the received camera input. 5. The method of claim 4, further comprising: determining, based on the one or more detected rectangles, a current perspective of the scene, wherein the surface segment is rendered relative to the current perspective of the scene. 6. The method of claim 5, further comprising: determining that the current perspective of the scene has changed; and dynamically updating the rendered surface segment. 7. The method of claim 1, wherein the input defining the surface segment is a finger movement defining a shape of the surface segment, wherein the finger movement is performed by a user of a device causing the surface segment to be rendered. 8. The method of claim 7, wherein causing the surface segment to be rendered includes: rendering, in a virtual workspace, the surface segment at a first angle; and rendering, in the virtual workspace, a second surface segment defined by the user at a second angle different from the first angle. 9. The method of claim 8, further comprising: determining that the at least one reference object has been removed from the scene; and closing the virtual workspace. 10.
The method of claim 9, further comprising: determining that the at least one reference object has been reintroduced into the scene or another scene; and opening the virtual workspace. 11. The method of claim 8, further comprising: detecting a second reference object different from the at least one reference object; and opening a second virtual workspace different from the virtual workspace. 12. The method of claim 1, wherein the identifying the at least one reference object is based on an image displayed by the at least one reference object. 13. The method of claim 1, further comprising: transmitting a data signal comprising a virtual workspace that includes the surface segment to at least one other device, wherein the virtual workspace is defined by a first user that provided the input defining the surface segment, and wherein the at least one other device is associated with a second user different from the first user. 14. The method of claim 13, wherein causing the surface segment to be rendered comprises: rendering the virtual workspace for the first user; and based on interaction of the second user with one or more virtual objects included in the virtual workspace, dynamically updating the rendering of the virtual workspace for the first user. 15. The method of claim 1, further comprising: determining a pose of the at least one reference object based at least in part on the camera input; and determining a virtual surface separate from the at least one reference object based at least in part on the determined pose, wherein the surface segment is rendered on the virtual surface. 16. A system comprising: an optical sensor configured to detect an image of a scene; a display; a processor configured to: identify at least one reference object in the scene, wherein the at least one reference object is a physical object; receive input defining a surface segment relative to the at least one reference object; and render the surface segment in the display. 17.
The system of claim 16, wherein the processor is configured to identify the at least one reference object in the scene based on detecting one or more feature points in the image of the scene. 18. The system of claim 16, wherein the processor is configured to identify the at least one reference object in the scene based on receiving a wireless communication from the at least one reference object. 19. The system of claim 16, wherein the processor is configured to identify the at least one reference object in the scene based on detecting one or more rectangles in the image of the scene. 20. The system of claim 19, wherein the processor is further configured to: determine, based on the one or more detected rectangles, a current perspective of the scene, wherein the surface segment is rendered relative to the current perspective of the scene. 21. The system of claim 20, wherein the processor is further configured to: determine that the current perspective of the scene has changed; and dynamically update the rendered surface segment. 22. The system of claim 16, wherein the input defining the surface segment is a finger movement defining a shape of the surface segment, wherein the finger movement is performed by a user of a device causing the surface segment to be rendered. 23. The system of claim 22, wherein rendering the surface segment includes: rendering, in a virtual workspace, the surface segment at a first angle; and rendering, in the virtual workspace, a second surface segment defined by the user at a second angle different from the first angle. 24. The system of claim 23, wherein the processor is further configured to: determine when the at least one reference object has been removed from the scene; and close the virtual workspace in response to determining that the at least one reference object has been removed. 25.
The system of claim 24, wherein the processor is further configured to: determine that the at least one reference object has been reintroduced into the scene or another scene; and open the virtual workspace in response to determining that the at least one reference object has been reintroduced into the scene or another scene. 26. The system of claim 23, wherein the processor is further configured to: detect a second reference object different from the at least one reference object; and open a second virtual workspace different from the virtual workspace. 27. The system of claim 16, wherein the identifying the at least one reference object is based on an image displayed by the at least one reference object. 28. The system of claim 16, further comprising: a transmitter configured to transmit a data signal comprising a virtual workspace that includes the surface segment to at least one other device, wherein the virtual workspace is defined by a first user that provided the input defining the surface segment, and wherein the at least one other device is associated with a second user different from the first user. 29. The system of claim 28, further comprising: a receiver configured to receive the data signal; and wherein rendering the surface segment includes: rendering the virtual workspace for the first user; and based on interaction of the second user with one or more virtual objects included in the virtual workspace, dynamically updating the rendering of the virtual workspace for the first user. 30. The system of claim 16, wherein the processor is further configured to determine a pose of the at least one reference object based at least in part on the image of the scene, and determine a virtual surface separate from the at least one reference object based at least in part on the determined pose, wherein the surface segment is rendered on the virtual surface. 31.
A system comprising: a means for optically sensing an image of a scene; a means for identifying at least one reference object in the scene, wherein the at least one reference object is a physical object; a means for receiving input defining a surface segment relative to the at least one reference object; and a means for rendering the surface segment. 32. A non-transient computer readable medium comprising program code, which when executed by a processor is configured to cause the processor to: receive a camera input of a scene; identify at least one reference object in the scene, wherein the at least one reference object is a physical object; receive input defining a surface segment relative to the at least one reference object; and cause the surface segment to be rendered. |
AUGMENTED REALITY SURFACE DISPLAYING BACKGROUND [0001] Aspects of the disclosure relate to computing technologies, including computer software and computer hardware. In particular, various aspects of the disclosure relate to techniques and devices that can provide augmented reality (AR). [0002] Increasingly, people are using various types of existing and new computing devices in a number of different ways for a number of different purposes. One type of device that has been proposed and may become increasingly popular is the head-mounted display (HMD) unit. Such a head-mounted display unit may, for example, include processing components, or communicate with another device that includes one or more processing components, to render and/or otherwise provide content to a user of the head-mounted display unit. These user interfaces may, for instance, be rendered by the head-mounted display unit on special lenses that are worn by the user over his or her eyes, such that the content appears to be wholly or partially overlaid on, and/or otherwise displayed in relation to, the user's actual physical surroundings. [0003] Conventional and/or currently available head-mounted display units are typically limited by the processing power and other resources that are required to provide these functionalities. Further, the content provided by these head-mounted display units may be rudimentary and/or inconvenient. BRIEF SUMMARY [0004] Certain embodiments are described that provide more intuitive, functional, and convenient ways of enabling a user of a head-mounted display unit to interact with various user interfaces and other features provided by the head-mounted display unit. [0005] In some embodiments, and as discussed in greater detail below, a real-world surface may be segmented in real time, using a combination of rectangle tracking techniques and finger tracking techniques that utilize input received via a head-mounted camera.
In at least one arrangement, an everyday object, such as a smartphone, may be used as a reference object for tracking various objects and determining surface plane alignment. Such object tracking and surface plane alignment determinations may, for example, be subsequently used in rendering, via the head-mounted display unit, a user interface or other virtual object that is correctly aligned to the physical surface upon which it is anchored. [0006] Some examples of the user interfaces and other virtual objects that can, in accordance with various aspects of the disclosure, be rendered on a surface segment via a head-mounted display unit include: web pages, shared and/or collaborative workspaces, navigable applications, games, virtual keyboards and/or other virtual peripherals and input devices, video and/or media playback applications, statistical visualizations and/or other data representations, and various three-dimensional objects. While these types of user interfaces and virtual objects are listed here as examples of what can be rendered using a head-mounted display unit, any other type of user interface or virtual object likewise may be rendered and/or otherwise provided instead of and/or in addition to those listed above. [0007] In some embodiments, a computing device, such as a head-mounted display unit, may receive camera input of a scene. Subsequently, the computing device may identify at least one reference object in the scene, for example, based on detecting one or more rectangles in the received camera input. The computing device then may receive input that defines a surface segment relative to the at least one reference object. Thereafter, the computing device may render the surface segment.
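The sequence of steps just described — receive camera input, identify a reference object, accept input defining a segment, and render the segment — can be sketched as a minimal pipeline. This is an illustrative sketch only, not the disclosed implementation; the class names, the detection stub, and the textual "rendering" are all assumptions made for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class SurfaceSegment:
    # Offset and size are expressed relative to the reference object,
    # so the segment stays anchored as the viewing perspective changes.
    offset: tuple
    size: tuple

@dataclass
class Scene:
    reference_object: str = None
    segments: list = field(default_factory=list)

def identify_reference_object(camera_frame):
    # Stub detector: in the described system this step would run
    # rectangle or feature-point detection on the camera input.
    return camera_frame.get("detected_object")

def define_segment(scene, offset, size):
    # Input (e.g., a finger-drawn shape) defines a segment relative
    # to the reference object.
    scene.segments.append(SurfaceSegment(offset, size))

def render(scene):
    # Return a textual description of what would be drawn.
    return [f"segment {s.size} at {s.offset} relative to {scene.reference_object}"
            for s in scene.segments]

frame = {"detected_object": "smartphone"}
scene = Scene(reference_object=identify_reference_object(frame))
define_segment(scene, offset=(0.1, 0.0), size=(0.3, 0.2))
print(render(scene))
```

The segment stores only reference-relative coordinates, reflecting the idea that the rendered content is anchored to the physical object rather than to fixed screen positions.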
[0008] In some embodiments a system for augmented reality surface segmentation may comprise a means for optically sensing an image of a scene; a means for identifying at least one reference object in the scene, wherein the reference object is a physical object; a means for receiving input defining a surface segment relative to the at least one reference object; and a means for causing the surface segment to be rendered. [0009] In some embodiments of the system, the means for identifying at least one reference object in the scene includes means for detecting one or more feature points in the received camera input. [0010] In some embodiments of the system, the means for identifying at least one reference object in the scene includes means for receiving a wireless communication from the at least one reference object. [0011] In some embodiments of the system, the means for identifying at least one reference object in the scene includes means for detecting one or more rectangles in the received camera input. [0012] In some embodiments the system for augmented reality surface segmentation may further comprise: a means for determining, based on the one or more detected rectangles, a current perspective of the scene, wherein the surface segment is rendered relative to the current perspective of the scene. [0013] In some embodiments the system for augmented reality surface segmentation may further comprise: a means for dynamically updating the rendered surface segment based on determining that the current perspective of the scene has changed. [0014] In some embodiments of the system, the input defining the surface segment is a finger movement defining a shape of the surface segment, wherein the finger movement is performed by a user of a device causing the surface segment to be rendered.
[0015] In some embodiments of the system, the means for causing the surface segment to be rendered comprises: means for rendering, in a virtual workspace, the surface segment at a first angle; and means for rendering, in the virtual workspace, a second surface segment defined by the user at a second angle different from the first angle. [0016] In some embodiments the system for augmented reality surface segmentation may further comprise: a means for closing the virtual workspace based on determining that the at least one reference object has been removed from the scene. [0017] In some embodiments the system for augmented reality surface segmentation may further comprise: a means for opening the virtual workspace based on determining that the at least one reference object has been reintroduced into the scene or another scene. [0018] In some embodiments the system for augmented reality surface segmentation may further comprise: a means for opening a second virtual workspace different from the virtual workspace based on detecting a second reference object different from the at least one reference object. [0019] In some embodiments of the system, detecting the second reference object comprises identifying the second reference object based on a code displayed by the second reference object. [0020] In some embodiments the system for augmented reality surface segmentation may further comprise: a means for transmitting a data signal comprising a virtual workspace that includes the surface segment to at least one other device. [0021] In some embodiments of the system, the virtual workspace is defined by a first user that provided the input defining the surface segment, and the at least one other device is associated with a second user different from the first user.
[0022] In some embodiments of the system, the means for causing the surface segment to be rendered comprises: means for rendering the virtual workspace for the first user; and means for dynamically updating, based on the second user's interaction with one or more virtual objects included in the virtual workspace, the rendering of the virtual workspace for the first user. [0023] In some embodiments the system for augmented reality surface segmentation may further comprise a means for determining a pose of the at least one reference object based at least in part on the camera input, and a means for determining a virtual surface separate from the at least one reference object based at least in part on the determined pose, wherein the surface segment is rendered on the virtual surface. [0024] In some embodiments a system for augmented reality surface segmentation may comprise a non-transient computer readable medium comprising program code, which when executed by a processor is configured to cause the processor to: receive a camera input of a scene; identify at least one reference object in the scene, wherein the reference object is a physical object; receive input defining a surface segment relative to the at least one reference object; and render the surface segment. [0025] In some embodiments identifying at least one reference object in the scene includes detecting one or more feature points in the received camera input. [0026] In some embodiments identifying at least one reference object in the scene includes receiving a wireless communication from the at least one reference object. [0027] In some embodiments identifying at least one reference object in the scene includes detecting one or more rectangles in the received camera input.
[0028] In some embodiments a system for augmented reality surface segmentation may further comprise program code, which when executed by a processor is configured to cause the processor to: determine, based on the one or more detected rectangles, a current perspective of the scene, wherein the surface segment is rendered relative to the current perspective of the scene. [0029] In some embodiments a system for augmented reality surface segmentation may further comprise program code, which when executed by a processor is configured to cause the processor to: dynamically update the rendered surface segment based on determining that the current perspective of the scene has changed. [0030] In some embodiments the input defining the surface segment is a finger movement defining a shape of the surface segment, wherein the finger movement is performed by a user of a device causing the surface segment to be rendered. [0031] In some embodiments rendering the surface segment comprises: rendering, in a virtual workspace, the surface segment at a first angle; and rendering, in the virtual workspace, a second surface segment at a second angle different from the first angle. [0032] In some embodiments a system for augmented reality surface segmentation may further comprise program code, which when executed by a processor is configured to cause the processor to: determine that the at least one reference object has been removed from the scene, and close the virtual workspace based on determining that the at least one reference object has been removed from the scene.
[0033] In some embodiments a system for augmented reality surface segmentation may further comprise program code, which when executed by a processor is configured to cause the processor to: determine that the at least one reference object has been reintroduced into the scene or another scene, and open the virtual workspace based on determining that the at least one reference object has been reintroduced into the scene or another scene. [0034] In some embodiments a system for augmented reality surface segmentation may further comprise program code, which when executed by a processor is configured to cause the processor to: detect a second reference object different from the at least one reference object, and open a second virtual workspace different from the virtual workspace based on detecting the second reference object different from the at least one reference object. [0035] In some embodiments detecting the second reference object includes identifying the second reference object based on a code displayed by the second reference object. [0036] In some embodiments a system for augmented reality surface segmentation may further comprise program code, which when executed by a processor is configured to cause the processor to: transmit a data signal comprising a virtual workspace that includes the surface segment to at least one other device. [0037] In some embodiments the virtual workspace is defined by a first user that provided the input defining the surface segment, and the at least one other device is associated with a second user different from the first user.
[0038] In some embodiments rendering the surface segment comprises: rendering the virtual workspace for the first user; determining the second user's interaction with one or more virtual objects included in the virtual workspace; and dynamically updating the rendering of the virtual workspace for the first user based on the second user's interaction with the one or more virtual objects included in the virtual workspace. [0039] In some embodiments a system for augmented reality surface segmentation may further comprise program code, which when executed by a processor is configured to cause the processor to determine a pose of the at least one reference object based at least in part on the camera input, and determine a virtual surface separate from the at least one reference object based at least in part on the determined pose, wherein the surface segment is rendered on the virtual surface. [0040] In some embodiments a method for use with an augmented reality enabled device may comprise detecting motion of a user; and defining a virtual workspace based on the detected motion. [0041] In some embodiments detecting the motion comprises tracking an eye gaze of the user. [0042] In some embodiments detecting the motion comprises tracking at least one finger of the user. [0043] In some embodiments detecting the motion comprises using at least one inertial sensor to determine motion of the device. [0044] In some embodiments the virtual workspace is defined with respect to a virtual object. [0045] In some embodiments the virtual workspace comprises one or more windows for display on the augmented reality enabled device, the windows being displayed with respect to one or more surfaces in a scene visible to a user of the augmented reality enabled device. [0046] In some embodiments a method for use with an augmented reality enabled device may further comprise saving a representation of the virtual workspace for transmission or future access.
[0047] In some embodiments a method for use with an augmented reality enabled device may comprise obtaining information descriptive of a user-defined virtual workspace; identifying at least one anchor object; identifying a surface based at least in part on the identified anchor object; and rendering at least a portion of the virtual workspace with respect to the surface. [0048] In some embodiments the obtaining comprises accepting user input from a camera. [0049] In some embodiments the obtaining comprises receiving the information from the anchor object. [0050] In some embodiments the obtaining comprises downloading the information from a server. [0051] In some embodiments a method for use with an augmented reality enabled device may comprise receiving information describing a virtual workspace for display to a first user on a first augmented reality enabled device, the virtual workspace having been defined relative to a remote object by a remote user of a second augmented reality enabled device; identifying a reference object; and causing at least a portion of the virtual workspace to be displayed to the first user on the first augmented reality enabled device, the virtual workspace being displayed relative to the reference object. [0052] In some embodiments the virtual workspace is displayed to the first user such that it appears upright to the first user. [0053] In some embodiments an upright appearance is defined by how the remote user is viewing the virtual workspace on the first augmented reality enabled device. [0054] In some embodiments the identified reference object comprises the remote object. [0055] In some embodiments the absolute positions of elements of the virtual workspace are maintained with respect to the remote object regardless of a position of the first user or the remote user. [0056] In some embodiments the identified reference object comprises the remote object.
[0057] In some embodiments elements of the virtual workspace are displayed in positions or orientations relative to the remote object which are different than positions or orientations of the elements as displayed relative to the remote object to the remote user. [0058] In some embodiments the remote object and the identified reference object are different objects. [0059] In some embodiments a method for use with an augmented reality enabled device may further comprise determining whether the first user has permission to adjust the virtual workspace, and if the first user has permission to adjust the virtual workspace, determining whether to adjust the virtual workspace locally or whether to adjust the virtual workspace remotely. [0060] In some embodiments remote adjustment of the virtual workspace adjusts how the virtual workspace is being displayed to the remote user on the second augmented reality enabled device. BRIEF DESCRIPTION OF THE DRAWINGS [0061] Aspects of the disclosure are illustrated by way of example. In the accompanying figures, like reference numbers indicate similar elements, and: [0062] FIG. 1 illustrates a simplified diagram of a system that may incorporate one or more embodiments; [0063] FIGS. 2-7 illustrate a sequence of diagrams that depict an example of providing augmented reality surface segmentation using reference object detection according to some embodiments; [0064] FIG. 8 illustrates a flowchart that depicts an example method of providing augmented reality surface segmentation using reference object detection according to some embodiments; [0065] FIG. 9 illustrates a flowchart that depicts an example method of providing augmented reality surface segmentation using reference object detection according to some embodiments; and [0066] FIG. 10 illustrates an example of a computing system in which one or more embodiments may be implemented.
DETAILED DESCRIPTION [0067] Several illustrative embodiments will now be described with respect to the accompanying drawings, which form a part hereof. While particular embodiments, in which one or more aspects of the disclosure may be implemented, are described below, other embodiments may be used and various modifications may be made without departing from the scope of the disclosure or the spirit of the appended claims. [0068] As noted above, various aspects of the disclosure relate to new ways of interacting with head-mounted display units, particularly head-mounted display units that are capable of detecting hand movements, such as hand movements and/or finger movements, which can be interpreted as interactions with virtual content that is displayed through the head-mounted display unit. Using the technologies described herein, a virtual desktop can be provided in which a user may "lay" out different pieces of data, and have representations of such data rendered through their head-mounted display unit, in close proximity to and/or otherwise in relation to a physical surface that exists in the user's physical environment. [0069] In one or more arrangements, a head-mounted display unit may render such a virtual desktop, as well as one or more surface segments which may form the virtual desktop, in relation to a physical surface on which a reference object is placed. The reference object may, for instance, be a smart phone or other physical object (e.g., a pad of paper, a stack of sticky notes, etc.) that can be detected by the head-mounted display unit, and can further be used in determining the current perspective at which the user of the head-mounted display unit is viewing the physical surface on which the reference object is placed and, correspondingly, the perspective at which the user is viewing the virtual workspace being rendered by the head-mounted display unit.
In particular, the head-mounted display unit, and/or a processing device connected to the head-mounted display unit, may use one or more reference object detection algorithms to determine the current perspective at which the user of the head-mounted display unit is viewing both the physical surface(s) and the virtual surface(s). These reference object detection algorithms may, for instance, be able to determine such a perspective, which may also be referred to as the "camera pose," by identifying one or more rectangles that are visible in a scene captured by a head-mounted camera, and by subsequently determining a current viewing angle of the identified rectangle(s) based on the fact that, when viewed straight on, such rectangle(s) would each have perpendicular corners and parallel sides. [0070] As a result of these and other features, various aspects of the disclosure provide a number of advantages over existing and conventional computing devices. For example, some embodiments may allow a user to place, position, and/or otherwise lay out digital content and/or other virtual items in a virtual workspace that is larger in size than what might typically be capable of being displayed on a conventional computer display screen. In addition, the variable perspective and variable object alignment, as may be provided via a head-mounted display unit, may allow for better customization of a user experience. [0071] Furthermore, a real world object may be used as a reference point not only for positioning an interface that may be displayed via a head-mounted display unit, but also as a reference identifier for accessing one or more particular virtual workspaces across the durations of one or more sessions provided via the head-mounted display unit.
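The "camera pose" determination described above — recovering the viewing perspective from a rectangle whose known right-angled, parallel-sided shape appears distorted in the camera image — is commonly formulated as a homography estimation problem. The following is a generic computer-vision sketch using the standard direct linear transform (DLT), not the specific algorithm disclosed; the function name and the use of NumPy are assumptions for illustration.

```python
import numpy as np

def homography_from_rectangle(model_pts, image_pts):
    """Estimate the 3x3 homography mapping a rectangle's four
    model-space corners to their observed image positions via the
    direct linear transform: each correspondence contributes two
    linear constraints, and the homography is the null-space vector
    of the stacked constraint matrix."""
    A = []
    for (x, y), (u, v) in zip(model_pts, image_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # The singular vector for the smallest singular value spans the
    # null space of A and holds the nine homography entries.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale

# A fronto-parallel view of a unit rectangle yields (up to noise)
# the identity homography.
model = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(homography_from_rectangle(model, model))
```

Once the homography is known, the viewing angle (and, with camera intrinsics, the full pose) can be decomposed from it; that decomposition step is omitted here for brevity.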
This functionality in turn may, for example, allow a user to pack up their digital data very quickly, without having to worry about needing to recreate their virtual workspace, or other particular layout of digital data, in the future. [0072] For example, in accordance with some embodiments, when a user of a head-mounted display unit picks up an object that is being used as a reference object, such as their smartphone, all of the windows and/or other virtual objects included in a virtual workspace displayed by the head-mounted display unit may automatically disappear from the rendered display. Subsequently, when the user places the reference object back down and/or otherwise in view of the head-mounted display unit, the head-mounted display unit may render the virtual workspace as it was when the reference object was removed, thereby allowing the user to resume their session at the point at which the user left it. The head-mounted display unit may, for instance, detect such introduction and/or removal of a particular reference object using a camera included in and/or otherwise communicatively coupled to the head-mounted display unit. Not only may the relative positions of different surface segments be maintained between when the reference object is removed and reintroduced, but the current state of programs or functionality associated with one or more of the surface segments may be maintained. For example, a movie playing in one of the segments may be paused and automatically resumed, or a document in one of the segments may be saved and redisplayed in its previous state. [0073] Various embodiments will now be discussed in greater detail with reference to the accompanying figures, beginning with FIG. 1. [0074] FIG. 1 illustrates a simplified diagram of a system 100 that may incorporate one or more embodiments. As seen in FIG.
1, system 100 may include a memory 105, as well as multiple subsystems, including an input/output subsystem 110, a reference object detection subsystem 115, a surface segment management subsystem 120, a control object tracking subsystem 125, and a rendering subsystem 140. One or more communication paths may be provided that enable the one or more subsystems to communicate with and exchange data with each other. In addition, the various subsystems illustrated in FIG. 1 may be implemented in software, hardware, or combinations thereof. In some embodiments, system 100 may be incorporated in a computing device, such as a computing device that is communicatively coupled to a head-mounted display (HMD) unit. In some other embodiments, system 100 may be incorporated directly into an HMD unit itself or another type of heads-up display. In some embodiments, the elements of system 100 may be incorporated into a type of augmented reality-enabled device - for example, a mobile phone, a tablet computer, and/or a television configured to implement augmented reality - other than an HMD. In some embodiments, all of the components shown in FIG. 1 may be incorporated into an HMD. In other embodiments, some of the components shown in FIG. 1 may be incorporated into an HMD, while the remainder of the components may be incorporated into another device that is communicatively connected to the HMD. For example, some components shown in FIG. 1 may be incorporated into an HMD, and the remainder of the components shown in FIG. 1 may be incorporated into a mobile device, such as a smartphone, that is communicatively connected to the HMD. [0075] In various embodiments, system 100 may include other subsystems than those shown in FIG. 1. Additionally, the embodiment shown in FIG. 1 is only one example of a system that may incorporate some embodiments, and in other embodiments, system 100 may have more or fewer subsystems than those illustrated in FIG.
1, may combine two or more subsystems, or may have a different configuration or arrangement of subsystems. [0076] In some embodiments, input/output subsystem 110 may provide one or more interfaces that enable input to be received from, and/or output to be provided to, a user of system 100. For example, input/output subsystem 110 may include one or more input devices, such as one or more buttons or keys, one or more ports (e.g., a serial port), and/or other input devices. In at least one arrangement, input/output subsystem 110 further may include one or more cameras. In some instances, at least one of the cameras included in input/output subsystem 110 may, for example, be worn by a user in such a way as to operate as a head-mounted camera, and may further be configured to capture an image of a scene viewed by the user. In other arrangements, input/output subsystem 110 may additionally or alternatively include one or more other input systems and/or sensors that may be configured to capture input from a user of system 100, such as one or more inertial sensors, one or more microphones, one or more gaze tracking sensors, one or more grip sensors, and/or the like. In other embodiments, one or more of ultrasound or other audio, infrared, ultraviolet, electromagnetic radiation, microelectromechanical systems (MEMS) devices, etc. may form a component of input/output subsystem 110. In addition, input/output subsystem 110 may include one or more output devices, such as one or more display screens, one or more audio speakers, and/or other output devices. In some instances, at least one of the display screens included in input/output subsystem 110 may be worn by a user in a way that wholly or partially encompasses the user's field of view, which may thereby enable system 100 to operate as a head-mounted display unit.
[0077] In some embodiments, reference object detection subsystem 115 may enable system 100 to identify and/or otherwise detect one or more reference objects in an image of a scene captured by system 100. In addition, reference object detection subsystem 115 may enable system 100 to determine, based on the one or more identified reference objects, a camera pose for system 100, or some other description of the user's perspective of the scene that is currently being viewed. As noted above, the camera pose may define the perspective at which a user of system 100 is viewing the scene that includes the one or more identified and/or otherwise detected reference objects. Additionally, detecting and/or using one or more reference objects in this way may provide better power efficiency and/or pose detection functionalities than some other techniques. In some arrangements, reference object detection subsystem 115 may additionally or alternatively be configured to detect other objects that are not rectangular in shape (e.g., by detecting other shapes in an image of a scene captured by system 100). Additionally, in some instances, multiple objects may be detected and/or used in determining a camera pose, and reference object detection subsystem 115 may be user-configurable, in that a user may be able to set or select which objects and/or shapes are detected (e.g., based on feature points associated with such objects and/or shapes). For example, reference object detection subsystem 115 may be configured to detect one or more of a plurality of shapes. Further, reference object detection subsystem 115 may be configured to detect shapes such as rectangles, squares, circles, or triangles. In other embodiments, reference object detection subsystem 115 may be configured to detect specific objects, e.g., devices, artwork, writing utensils, hand drawings, or images displayed on one or more devices.
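One building block of such shape-based detection is deciding whether a four-sided contour found in the image is a plausible rectangle, using the observation (noted earlier) that a rectangle has perpendicular corners. The following is a minimal geometric sketch; the function names and the 10-degree tolerance are illustrative assumptions, not values taken from the disclosure.

```python
import math

def corner_angles(quad):
    """Interior angle, in degrees, at each vertex of a quadrilateral
    given as four (x, y) corners in order."""
    angles = []
    for i in range(4):
        p_prev, p, p_next = quad[i - 1], quad[i], quad[(i + 1) % 4]
        v1 = (p_prev[0] - p[0], p_prev[1] - p[1])
        v2 = (p_next[0] - p[0], p_next[1] - p[1])
        cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (
            math.hypot(*v1) * math.hypot(*v2))
        # Clamp to guard against floating-point drift outside [-1, 1].
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, cos_a)))))
    return angles

def looks_like_rectangle(quad, tol_deg=10.0):
    """Accept the quad as a candidate reference rectangle when every
    corner is within tol_deg of a right angle."""
    return all(abs(a - 90.0) <= tol_deg for a in corner_angles(quad))
```

In a full pipeline this test would be applied to quadrilaterals extracted from the camera image (e.g., by contour detection and polygon approximation), with the tolerance loosened to admit rectangles viewed at an angle.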
In some embodiments, reference object detection subsystem 115 may be configured to detect a mobile phone and/or a rectangle, for example when the reference object is a mobile phone of the user. Such embodiments may be beneficial because the user may be likely to have their phone with them and thus may be able to conveniently utilize one or more of the embodiments described here. Further, rectangle detection may be efficiently implemented in some embodiments, as described in greater detail below. [0078] In some embodiments, surface segment management subsystem 120 may enable system 100 to render a virtual workspace and/or one or more surface segments included in such a virtual workspace. For example, in one embodiment, a virtual workspace may comprise one or more virtual objects associated with user applications. In other embodiments, a virtual workspace may comprise virtual objects associated with a user's virtual desktop. These virtual objects may comprise tools the user uses to complete various tasks, e.g., graphical software, text software, presentation software, or other types of software commonly associated with a workplace. In some embodiments, the virtual workspace comprises one or more segments or regions which a user has defined and/or associated with a particular application or function. For example, one segment may have been designated by the user as including a media player, while another segment may have been designated by the user for use with a messaging application such as an SMS application or an email program. In some embodiments, one or more segments may be associated with two-dimensional content such that the segment appears to be flush against a surface, while one or more other segments may be associated with three-dimensional content, for example such that the content may appear to the user as if it is a hologram. Any number of other segments or configurations may be implemented.
In some embodiments, surface segment management subsystem 120 may be configured to generate one or more user interfaces and/or other virtual objects to be rendered by system 100, determine the perspective at which such user interfaces and/or virtual objects should be rendered at any particular moment (e.g., based on the current perspective determined by reference object detection subsystem 115), and/or provide the generated user interfaces and/or virtual objects, along with any relevant perspective information, to rendering subsystem 140. [0079] In some embodiments, control object tracking subsystem 125 may enable system 100 to identify and/or otherwise detect one or more reference objects, such as one or more reference objects with respect to which one or more surface segments and/or one or more virtual workspaces can be provided. For example, control object tracking subsystem 125 may enable system 100 to identify a particular reference object based on the unique features of the reference object that may be detected in an image of a physical scene which includes the reference object. In addition, control object tracking subsystem 125 may enable system 100 to identify a particular reference object that is in the vicinity of system 100 based on one or more electronic signals transmitted to and/or received from the reference object. Further still, control object tracking subsystem 125 may enable system 100 to identify and/or load one or more particular virtual workspaces and/or surface segments based on the identity of the reference object and/or a code displayed by the reference object or other captured information associated with the reference object that uniquely identifies one or more particular virtual workspaces and/or surface segments to be displayed (e.g., when the reference object is in view of a user of system 100).
For example, a visual indicator, such as a Quick Response (QR) code, may be displayed on and/or by a reference object, and control object tracking subsystem 125 of system 100 may be configured to identify the reference object based on the visual indicator. In some embodiments this visual indicator may comprise another indicator, for example, a known pattern such as a bar code, a multidimensional bar code, or a known image. In other instances, control object tracking subsystem 125 may be configured to identify a reference object based on a wireless signal transmitted by the reference object (e.g., a Bluetooth signal, a Near Field Communications (NFC) signal, etc.). [0080] In some embodiments, memory 105 may be configured to store and/or retrieve various types of information that may be used by system 100 and/or the various subsystems included in system 100. For example, memory 105 may store reference object information 130 and virtual workspace information 135. Reference object information 130 may, for instance, include information describing and/or defining various properties of one or more reference objects, such as properties that uniquely identify particular reference objects. For example, in one embodiment, a mobile device may be in communication with an HMD or other Augmented Reality enabled device, and the mobile device may be the reference object. In such an embodiment, the mobile device may be configured to recognize itself as the reference object. In another embodiment, the mobile device may display an image, such as a graphic or bar code associated with the reference object. In such an embodiment, this may enable the HMD or other Augmented Reality enabled device to detect the mobile device as the reference object.
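The indicator-based identification described above reduces, at its core, to mapping a decoded payload onto a stored workspace definition. The sketch below illustrates that lookup; the payload strings and workspace names are hypothetical, and in a real system the payload would come from a QR/barcode decoder or from a received wireless signal rather than being passed in directly.

```python
# Hypothetical mapping from a decoded visual indicator (e.g., a QR payload)
# to a stored virtual-workspace identifier. None of these strings appear in
# the specification; they are illustrative placeholders.
WORKSPACE_REGISTRY = {
    "ar-ws://phone-220/home": "home_media_workspace",
    "ar-ws://phone-220/office": "office_email_workspace",
}

def identify_workspace(decoded_payload, registry=WORKSPACE_REGISTRY):
    """Return the workspace id for a decoded indicator, or None if unknown."""
    return registry.get(decoded_payload)
```

An unrecognized payload simply yields no workspace, leaving the system free to fall back on other identification mechanisms (e.g., the wireless-signal path described above).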
In addition, virtual workspace information 135 may include information describing and/or defining various properties of one or more virtual workspaces that can be provided via system 100, including various properties of one or more surface segments, one or more user interfaces, and/or one or more other virtual objects that may be included in such virtual workspaces. While these types of information are listed here as examples of the types of information that may be stored by memory 105 in some embodiments, memory 105 may store one or more other types of information instead of and/or in addition to the types of information discussed here. [0081] In some embodiments, rendering subsystem 140 may enable system 100 to draw, render, and/or otherwise display one or more virtual workspaces, one or more surface segments, and/or one or more other virtual objects. For example, one or more other subsystems of system 100 may provide rendering subsystem 140 with information about one or more virtual workspaces, one or more surface segments, and/or one or more other virtual objects to be rendered, and rendering subsystem 140 may accordingly cause the one or more virtual workspaces, one or more surface segments, and/or one or more other virtual objects to be displayed by system 100. In some instances, the information provided to and/or otherwise used by rendering subsystem 140 may include camera pose information and/or other perspective information, as this may enable rendering subsystem 140 to draw, render, and/or otherwise display various virtual objects at particular orientations and/or angles relative to the physical surroundings of system 100. [0082] An example of the ways in which a device such as system 100 can be used will now be discussed in greater detail with respect to FIGS. 2-7. In particular, FIGS.
2-7 illustrate a sequence of diagrams that depict an example of providing augmented reality surface segmentation using reference object detection according to some embodiments. As illustrated in FIG. 2, a user 205 of a head-mounted display unit 210 or other Augmented Reality enabled device may view a physical surface 215, which may be a table or desk, for instance, on which a mobile device 220 has been placed. The mobile device 220 may, for example, be used by head-mounted display unit 210 as a reference object in providing a virtual workspace, in accordance with various aspects of the disclosure. In addition, a camera included in head-mounted display unit 210 may feature a wide field of view, and further may be used by head-mounted display unit 210 in tracking the position and/or movement of rectangular objects in the field of view, as well as the position and/or movement of other objects in the field of view, such as the position and/or movement of the user's fingers. In some embodiments, separate algorithms that are concurrently executed may be used to track the position and/or movement of rectangular objects in the field of view and the position and/or movement of control objects (e.g., the user's fingers) in the field of view. In addition, these tracking algorithms may be executed on head-mounted display unit 210 itself in some embodiments, while in other embodiments, these tracking algorithms may be executed on a mobile computing device that is wirelessly connected to head-mounted display unit 210, such as mobile device 220. In some embodiments, the mobile device 220 comprises a mobile telephone or tablet computer. [0084] Subsequently, as illustrated in FIG. 3, one or more of the tracking algorithms being executed may detect and/or track the shape of mobile device 220.
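The concurrently executed tracking algorithms described above can be sketched as two independent per-frame trackers whose results are combined into one tracking state. The tracker callables and the frame representation below are illustrative assumptions only; real trackers would consume camera images.

```python
# Hypothetical pair of per-frame trackers run side by side, as described
# above: one follows rectangular reference objects, the other follows the
# user's fingertips (the "control object"). Both are passed in as callables
# so the dispatch logic, not the computer vision, is what is shown.
def track_frame(frame, rect_tracker, finger_tracker):
    return {
        "rectangles": rect_tracker(frame),
        "fingertips": finger_tracker(frame),
    }
```

In a deployed system the two trackers could run on separate threads or on a wirelessly connected companion device, with the combined state consumed by the rendering pipeline each frame.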
In addition, based on the detection and/or tracking of the shape of mobile device 220, a visual highlight 305 may be rendered via head-mounted display unit 210 (e.g., for viewing by user 205). [0085] Thereafter, as illustrated in FIG. 4, user 205 may place his index fingers 405 and 410 at a starting point 415 that is in the proximity of mobile device 220 on the physical surface 215. Subsequently, as illustrated in FIG. 5, user 205 may draw his index fingers 405 and 410 apart. A highlight or marquee 505 that delineates a rectangle may follow the movement of the fingertips of index fingers 405 and 410, with each of the fingertips of index fingers 405 and 410 corresponding to a corner of the rectangle delineated by marquee 505. In one or more embodiments, the rectangle delineated by marquee 505 may be aligned with a plane of the physical surface 215 on which mobile device 220 has been placed. In addition, mobile device 220 and/or any other tracked reference object may be used as a reference for the rectangle defined by marquee 505. In some instances, where a fixed aspect ratio may be required by certain content, the dimensions of the rectangle that is delineated by marquee 505 may expand and/or contract in a fixed ratio in proportion to the distance traveled by the fingertips of index fingers 405 and 410 from their starting point 415. Further, in some embodiments input to a user interface or input defining a segment may be provided using a touchscreen, for example similar to how a touchpad or trackpad may be used, or other input of mobile device 220, whether or not mobile device 220 is itself the reference object. For example, in one embodiment, the reference object may comprise a painting on a wall, and in such an embodiment, the user may be able to use an input device on mobile device 220 to define the segments of a virtual workspace, even though the mobile device 220 is not itself the reference object. [0086] While the example illustrated in FIG.
4 and discussed above involves a highlight or marquee 505 that is rectangular in shape, in some instances, the shape of the highlight or marquee, along with the shape of the surface segment that may be formed based on the highlight or marquee (e.g., as discussed below with respect to FIG. 6), might not be rectangular. For example, the highlight or marquee may be circular in shape, or may have some other shape that varies from the rectangular and circular shapes discussed in these examples. In addition, rather than drawing his or her fingers apart to create the shape of the highlight or marquee, as in the example discussed above, in some additional and/or alternative embodiments, the user may trace an outline of the shape that is to be formed. In other instances, other input may be provided by the user to define such a shape via any and/or all of the interfaces discussed above with respect to input/output subsystem 110 of system 100. For example, in one embodiment, the user may draw an outline of a shape using optical tracking technology. In such an embodiment, input/output subsystem 110 may comprise eye tracking software. In such an embodiment, the user may define a shape by moving his or her eyes to form an outline of that shape. In another embodiment, input/output subsystem 110 may comprise a touchscreen. In such an embodiment, the user may form a shape by drawing an outline of the shape on the surface of the touchscreen. In still another embodiment, input/output subsystem 110 may comprise inertial sensors. Thus, in such an embodiment, the user may form a shape by moving the device in a pattern forming an outline of the shape. The inertial sensors may detect this motion, and thus transmit a signal associated with the outline of the shape to be formed. [0087] Subsequently, as illustrated in FIG.
6, once user 205 lifts his fingertips of index fingers 405 and 410 off of the physical surface 215, a rectangle 605 may be created in the virtual workspace and rendered via head-mounted display unit 210. The size of rectangle 605 may, for instance, correspond to the size of the rectangle delineated by marquee 505 at the point at which user 205 lifted his fingertips away from the surface. In addition, rectangle 605 may be populated with a user interface 610, which may be rendered via head-mounted display unit 210, so as to enable user 205 to view and/or interact with the user interface. Furthermore, user interface 610 may include one or more active contact areas that user 205 may interact with using his fingertips 405 and 410, in a manner similar to how the user can interact with a user interface displayed on a touch-sensitive display screen. [0088] Turning now to FIG. 7, in instances in which a more precise or unique reference object might be needed (e.g., to enable multi-user collaboration in a virtual workspace), a dynamically generated marker 705 can be displayed on the screen of mobile device 220. Marker 705 may, for example, be encoded with a unique pattern that can be interpreted by another augmented reality or head-mounted display unit or other connected device, thereby enabling the other augmented reality or head-mounted display unit or other connected device to initiate a connection with one or more other head-mounted display units that are rendering, facilitating interaction with, and/or otherwise providing the virtual workspace, such as head-mounted display unit 210. [0089] While the example discussed above with respect to FIGS. 2-7 illustrates how a single segmentation task may be completed (e.g., to create a single surface segment, namely, rectangle 605, in a particular virtual workspace), in other instances, multiple areas can similarly be segmented by the same user using the same reference object (e.g., mobile device 220, as in the example above).
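The fixed-aspect-ratio marquee behavior described with respect to FIG. 5 can be sketched as a small geometric computation: the rectangle grows in proportion to how far the two fingertips have traveled from the starting point, while its width-to-height ratio stays locked. The averaging rule and the default aspect ratio below are illustrative assumptions, not taken from the specification.

```python
import math

def fixed_aspect_marquee(start, finger_a, finger_b, aspect=16 / 9):
    """Rectangle (width, height) for a fixed-aspect marquee.

    The rectangle's diagonal is set to the mean fingertip travel from the
    starting point, so the marquee expands and contracts in a fixed ratio
    in proportion to the distance the fingertips have moved.
    """
    travel = (math.dist(start, finger_a) + math.dist(start, finger_b)) / 2.0
    height = travel / math.hypot(aspect, 1.0)  # diagonal == travel
    width = aspect * height
    return width, height
```

Under this rule, drawing the fingers farther apart scales the marquee uniformly, which matches the requirement that certain content keep a fixed aspect ratio as the segment is resized.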
In this way, a number of different virtual windows can be delineated and populated in a virtual workspace across a wider area of the physical surface on which the reference object is placed (e.g., on the tabletop or desktop that is before the user). [0090] In some embodiments, when a user moves his or her head, and thus changes the field of view of a camera included in the head-mounted display unit, tracking of a reference object may be lost (e.g., if the user turns around to face the opposite direction and the reference object is no longer within the field of view of the camera). At this point, the user interfaces and/or other virtual objects included in the virtual workspace being rendered by the head-mounted display unit may disappear. However, the virtual workspace, including its associated user interfaces and other virtual objects, may be re-rendered at the same size and position relative to the reference object, once the reference object is re-acquired by the camera (e.g., if and when the user turns around to again face the reference object being tracked via the camera included in the head-mounted display unit). [0091] In some embodiments, multiple reference objects may be tracked across a wider area, and this may enable greater interaction with a larger surface, and allow for a wider range of head movement on the part of the user before tracking of the reference object(s) is lost. Additionally, two or more head-mounted display units may be linked, along with one or more other connected devices that may be equipped with cameras, and be worn and/or used by different users. In this way, the different users can share the signature(s) associated with the various reference object(s), and a linked virtual workspace that includes one or more user interfaces and/or other virtual objects may be shared among and/or otherwise provided to the various users of the linked devices to allow for collaborative interactions with the virtual workspace. [0092] FIG.
8 illustrates a flowchart that depicts an example method of providing augmented reality surface segmentation using reference object detection according to some embodiments. The processing illustrated in FIG. 8 may be implemented in software (e.g., computer-readable instructions, code, programs, etc.) that can be executed by one or more processors and/or other hardware components. Additionally or alternatively, the software may be stored on a non-transitory computer-readable storage medium. In some embodiments, the method illustrated in FIG. 8 may be performed by a head-mounted display unit, while in other embodiments, the method illustrated in FIG. 8 may be performed by a computing device that is communicatively coupled to, connected to, and/or otherwise linked to a head-mounted display unit. In still other embodiments, the method illustrated in FIG. 8 may be performed in combination by a head-mounted display unit and a computing device that is communicatively coupled to, connected to, and/or otherwise linked to the head-mounted display unit. [0093] As seen in FIG. 8, the method may be initiated in step 805, in which camera input may be received. For example, in step 805, image and/or video input may be received as camera input by a head-mounted display unit, and/or a computing device connected to the head-mounted display unit, from one or more cameras included in the head-mounted display unit. The camera input may, for instance, include one or more images of a scene that is before a user of the head-mounted display unit. As in the examples discussed above, such a scene may include a physical surface on which one or more reference objects may be placed, and such reference object(s) may be used by the head-mounted display unit in providing a virtual workspace. In some embodiments, in step 805, system 100 may receive camera input using input/output subsystem 110. [0094] In step 810, one or more reference objects may be detected.
In some embodiments, these reference objects may comprise one or more rectangles. For example, in step 810, the head-mounted display unit, and/or a computing device connected to the head-mounted display unit, may analyze the camera input received in step 805 in order to detect the presence of one or more reference objects included in the camera input. In detecting the one or more reference objects, one or more reference object detection algorithms may be used, which may identify one or more reference objects included in the scene based on identifying the physical object(s) in the image data associated with the camera input that match the expected profile of a reference object (e.g., characteristics such as parallel sides and perpendicular corners for a rectangular reference object, or in other embodiments rounded corners, a specific image, etc.). In some embodiments, in step 810, system 100 may detect one or more reference objects using reference object detection subsystem 115. [0095] In some embodiments, a current perspective of the scene also may be determined based on the one or more detected reference objects. In particular, the perspective at which the user of the head-mounted display unit is viewing the scene may be determined based on the one or more detected reference objects and the angles at which such reference objects appear. For example, when the reference object comprises a rectangle, this perspective may, for instance, be determined based on these parameters in view of the fact that, when viewed straight on, such rectangle(s) would have parallel sides and perpendicular corners, among other characteristics. [0096] In step 815, one or more reference objects may be identified. For example, in step 815, one or more reference objects may be identified by the head-mounted display unit, and/or a computing device connected to the head-mounted display unit, based on information describing one or more unique properties of the various reference objects.
In some embodiments, the various reference objects may be rectangular in shape, and identifying reference object(s) in the camera input may be based on the results of the reference object detection performed in step 810. In particular, such reference object detection may be used to identify candidates of real-world objects that may be reference objects, and subsequently, the head-mounted display unit and/or the connected computing device may analyze the candidate objects in order to determine which of the candidate object(s) is or are reference objects. In some embodiments, in step 815, system 100 may identify one or more reference objects using control object tracking subsystem 125. [0097] Subsequently, in step 820, input that defines one or more surface segments may be received. Such input may, in some embodiments, define one or more surface segments relative to a physical surface included in the scene and/or relative to one or more reference objects, such as the one or more reference objects identified in step 815, that may be placed on the physical surface. In some instances, the input defining the one or more surface segments may, for example, be user input that delineates and/or otherwise corresponds to a rectangle outlined by the user on the physical surface. For example, the user may place his or her fingers at a starting point in view of the camera included on the head-mounted display, and then draw his or her fingers outwards to define the opposite corners of a rectangle in which a user interface and/or other virtual objects may be rendered, as in the example discussed above with respect to FIGS. 2-7. In some embodiments, in step 820, system 100 may receive input that defines one or more surface segments using input/output subsystem 110. [0098] Referring again to FIG. 8, in step 825, one or more surface segments of a virtual workspace may be rendered.
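The sequence of steps 805 through 825 can be sketched as a minimal, hypothetical orchestration loop. Each stage is passed in as a callable so that only the control flow of FIG. 8 is shown; the stage implementations, parameter names, and frame representation are illustrative assumptions, not a definitive implementation.

```python
def run_segmentation_pipeline(camera_frames, detect, identify, get_user_input, render):
    """Hypothetical sketch of the FIG. 8 flow, one iteration per camera frame."""
    rendered = []
    for frame in camera_frames:                           # step 805: camera input
        candidates = detect(frame)                        # step 810: detect objects
        references = [c for c in candidates if identify(c)]   # step 815: identify
        if not references:
            continue
        segment = get_user_input(frame)                   # step 820: segment input
        if segment is not None:
            rendered.append(render(references[0], segment))   # step 825: render
    return rendered
```

Frames that contain no identified reference object are skipped, mirroring the behavior in which a virtual workspace disappears while its reference object is out of view.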
For example, in step 825, the head-mounted display unit and/or the connected computing device may render the surface segment defined in step 820, along with one or more other virtual objects and/or other user interfaces associated with a virtual workspace that includes the defined surface segment. As discussed above, such a surface segment and/or the entire virtual workspace may be associated with a particular reference object, such as the reference object identified in step 815, such that removal and/or replacement of the reference object results in the closing and/or opening, respectively, of the virtual workspace. In some embodiments, in step 825, system 100 may render one or more surface segments of a virtual workspace using surface segment management subsystem 120 and/or rendering subsystem 140. [0099] As discussed above, some embodiments provide a head-mounted display unit that is configured to perform finger tracking and rectangle recognition, and thereby provide a user of such a head-mounted display unit with the ability to look at a combination of real-world objects and virtual objects that are rendered in a perspective matching the user's own field of view (e.g., of the real-world objects). In addition, certain embodiments might not require the user of such a head-mounted display unit to handle a physical device when working with interfaces presented in a virtual workspace. [00100] While some conventional systems may provide other ways of displaying information, these conventional systems are typically inconvenient to use and require a great deal of computational power to provide. For example, some conventional systems may be capable of performing three-dimensional reconstruction of a physical surface by analyzing depth data. But this approach may need to computationally reconstruct a full three-dimensional scene.
Thus this approach may require the use of more power-hungry depth sensors, as well as a great deal of computational power, to reconstruct a three-dimensional scene. Further, such devices may be heavier and more expensive, thus reducing the likelihood that users will quickly adopt these technologies. In contrast, the present disclosure provides systems and methods for determining a surface plane which may circumvent the need for these power-hungry and expensive processors and sensors. [00101] Rather than using the three-dimensional reconstruction techniques that may be implemented by some conventional systems, some embodiments instead may incorporate reference object detection (for example, rectangle detection when the reference object comprises a rectangle, such as when the reference object comprises a mobile phone) and object tracking functionalities, which may be more computationally efficient than conventional techniques. In addition, by using reference object detection techniques such as rectangle detection, as discussed above, a head-mounted display unit might not require depth data to correctly render a virtual workspace and/or one or more virtual objects included in such a workspace. Rather, detection of a reference object alone may enable determination of a camera pose. In particular, by knowing what a reference object should look like when viewed head on, knowing that a particular object is, for example, a rectangle, and knowing what the object looks like in currently captured camera data, the camera pose or actual perspective of the head-mounted display unit can be determined, and virtual objects may be rendered, based on how such virtual objects should appear in relation to what is known to be the reference object.
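The idea that a known rectangle alone fixes the viewing geometry can be illustrated with a plane-to-image homography: given the head-on model of the rectangle and its four detected corners in the image, a standard direct-linear-transform solve recovers the mapping, after which any point on the reference plane can be projected. This is a simplified stand-in for full camera-pose recovery (it omits camera intrinsics and the pose decomposition), offered only as a sketch; the corner values in the usage below are made up.

```python
import numpy as np

def homography_from_rectangle(model_corners, image_corners):
    """Solve the 3x3 plane homography H from four corner correspondences.

    model_corners: the rectangle as it looks when viewed head on (known
    aspect ratio); image_corners: the same corners as detected in the
    camera image, in matching order. Plain DLT with h33 fixed to 1.
    """
    rows, rhs = [], []
    for (x, y), (u, v) in zip(model_corners, image_corners):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    h = np.linalg.solve(np.array(rows, float), np.array(rhs, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, point):
    """Map a point on the reference plane into the image via H."""
    u, v, w = H @ np.array([point[0], point[1], 1.0])
    return u / w, v / w
```

With the homography in hand, virtual objects defined on the reference plane can be drawn at the correct perspective without any depth data, which is the computational saving the passage describes.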
For example, a virtual surface may be defined with respect to the reference object based on the camera pose or a pose of the reference object, and a virtual workspace may be rendered within that virtual surface or with respect to that virtual surface. [00102] In some embodiments, a virtual workspace may include segments, user interfaces, and/or other virtual objects that are placed on a number of different surfaces. For example, one window in a virtual workspace can be aligned with a desk that is before a user, and another window in the virtual workspace can be aligned with a wall that is before the user and behind the desk. In this example, the two windows may be displayed at different angles, as a result of their alignment with different real-world objects. In other embodiments, windows within a virtual workspace might not be aligned with any real-world objects. Rather, such windows may simply be defined at any angle and at any position in virtual space, in relation to one or more reference objects. [00103] In some embodiments, a user may be able to define the contents of particular windows in a virtual workspace using his or her fingers. For example, a user may be able to define a first window in a virtual workspace that includes a web browser, and a second window in the virtual workspace that includes a media player. As a user defines the rectangles corresponding to these windows with his or her fingers, a head-mounted display unit can render the windows and allow the user to populate each window by specifying what application(s) should be loaded in each of the spaces, for example by selecting from a list of potential applications or by performing a gesture indicating a particular application. [00104] In some embodiments, as a component of input/output subsystem 110, one or more wide-angle cameras may be incorporated into a head-mounted display unit in order to enhance the head-mounted display unit's ability to track various reference objects.
In some instances, even if a user cannot see a particular reference object (e.g., because the reference object is not in the user's field of view), one of the tracking cameras included in the head-mounted display unit may be able to see the reference object, and the head-mounted display unit can render a virtual workspace accordingly. For example, in some embodiments, one or more of the cameras included in the head-mounted display unit may feature a fisheye lens that enables such camera(s) to have a wider field of view than might otherwise be achieved. [00105] In some embodiments, the reference object detection algorithms and/or other tracking algorithms used by a head-mounted display unit can transition between various reference objects included in a field of view in order to provide larger and extended virtual workspaces. For example, based on moments in time in which two or more reference objects are in the field of view of the one or more tracking cameras of the head-mounted display unit, the head-mounted display unit may be able to determine a spatial relationship between the various reference objects. In addition, the head-mounted display unit may subsequently use this spatial relationship in providing the virtual workspace(s). [00106] In some embodiments, particular virtual workspaces of a number of different virtual workspaces may be associated with particular reference objects of a plurality of available reference objects. For example, a user may have one virtual workspace that is defined in relation to his or her smartphone, and another virtual workspace that is defined in relation to his or her tablet computer. In some embodiments, a user can place both of these reference objects next to each other on a physical surface (e.g., the user's desk), and a head-mounted display unit may render the two virtual workspaces as being adjacent to each other.
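The spatial relationship between two reference objects, captured while both are in view, can be sketched as recording one object's offset from the other and reusing that offset later, for example when only one object is still tracked. This translation-only sketch (no rotation) uses made-up coordinates and is an illustration of the idea, not the disclosed method.

```python
def relative_offset(pose_a, pose_b):
    """While both reference objects are in view, record B's offset from A."""
    return (pose_b[0] - pose_a[0], pose_b[1] - pose_a[1])

def predict_pose(pose_a, offset):
    """Later, with only A tracked, extrapolate where B's workspace belongs."""
    return (pose_a[0] + offset[0], pose_a[1] + offset[1])
```

The same store-relative/re-place-relative principle underlies re-rendering a workspace at its previous layout once a lost reference object is re-acquired; a full implementation would use rigid transforms rather than bare translations.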
In other embodiments, the head-mounted display unit may prompt the user to select one of the virtual workspaces to be displayed. In some embodiments, where multiple virtual workspaces are available (e.g., in the example above, when the user's smartphone and tablet computer are in view of a tracking camera included in a head-mounted display unit), a user may be able to move user interfaces and/or other virtual objects between the various virtual workspaces. [00107] In some embodiments, a virtual workspace may be generated only when a plurality of reference objects are present (e.g., in the example above, a smartphone and a desktop computer). Furthermore, in some embodiments, multiple workspaces may be associated with a single object. In some such embodiments, the user may be prompted to select which workspace is correct. In other embodiments, a most recent workspace may be automatically opened, or a context of the user may be used to automatically determine an appropriate workspace. For example, if the user is at work, a workspace including email and word processing segments may be opened, but if the user is at home, a workspace including a media player segment and a social media segment may be opened. It may be possible in some embodiments for the user to scroll through different workspaces associated with an object, for example using specific hand gestures or voice commands. In some embodiments, a special icon may be shown on a reference object, or another notification might be given to the user, to alert the user that a virtual workspace can be opened for that object. In such an embodiment, the user might perform a motion (e.g., nodding his or her head, a specific eye motion, a specific hand motion, a movement of a mobile device, etc.) or "click" on the icon to indicate that the user wants to open a specific one of the available virtual workspaces.
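The context rule described above, in which the same reference object opens a work-oriented workspace at the office and a leisure-oriented one at home, reduces to a lookup with a fallback. The context labels, segment names, and default below are illustrative placeholders.

```python
# Hypothetical context-to-workspace rule from the passage; none of these
# names are specified in the disclosure.
CONTEXT_WORKSPACES = {
    "work": ["email", "word_processing"],
    "home": ["media_player", "social_media"],
}

def select_workspace(user_context, default=("notes",)):
    """Pick the segment list for the user's current context, or a default."""
    return CONTEXT_WORKSPACES.get(user_context, list(default))
```

A real system might derive `user_context` from location, time of day, or calendar data before consulting such a table.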
The icon may be displayed by that reference object, for example when the reference object is a mobile phone, or may be displayed by the HMD so as to appear to be located on or near the reference object. In some embodiments, a reference object may display a visual indicator, such as, for example, a Quick Response (QR) code or some other kind of recognizable code or image, in order to cause a particular virtual workspace to be displayed and/or otherwise rendered by rendering subsystem 140 of the head-mounted display unit. Such a code may, for example, comprise all of the information needed to link to another device and enable a virtual sharing with the other device. For instance, such a code may advertise that certain augmentations and/or other virtual workspaces are available to be displayed by a head-mounted display unit, so that other devices in the vicinity of the reference object displaying the code may render and/or otherwise provide the virtual workspace, and so that other users of such devices can interact with and/or collaborate in the virtual workspace. In some instances, a code might be transmitted as a signal by a reference object, rather than being displayed as an image. For example, a reference object may transmit a Bluetooth signal (or any other wireless signal as may be desired) which notifies devices in the vicinity that are capable of providing the virtual workspace that such a virtual workspace is available. In some embodiments, the code may contain information describing the virtual workspace. In some embodiments, the code may comprise information indicating where a definition of the workspace may be retrieved from, or may merely indicate that a virtual workspace is available, for example from a known source or social networking function. [00109] In some embodiments, a user may exit the virtual workspace by concealing the recognizable code so that it is no longer recognizable to reference object detection subsystem 115.
For example, in an embodiment where a user wishes to be in a private virtual workspace, the user may place the reference object with its display facing down. In such an embodiment, when the recognizable code is no longer visible, the virtual workspace may be closed. Or, in some other embodiments, when the visual indicator is concealed, the system may place the user in a private virtual workspace that other users cannot access. [00110] In some embodiments, one or more virtual workspaces may be stored in the cloud (e.g., on a remote server), so as to further enhance the ability to share virtual workspaces between different users and different devices. For example, one person can give a particular reference object to another person, and the other person may then view and/or interact with a virtual workspace associated with the reference object while using his or her own head-mounted display unit, which can load data associated with the virtual workspace from a remote server. Additionally or alternatively, multiple users of multiple different head-mounted display units can interact with the same virtual workspace simultaneously in either a local sharing scenario (e.g., in which all users are sitting at the same table and viewing the same reference object) or in a remote sharing scenario (e.g., in which users are physically located at different locations, but interacting with the same virtual workspace, in a shared session). [00111] In some instances in which a virtual workspace is shared between different users and/or different devices, each individual device may adjust the viewing angle for the virtual workspace for its corresponding user. 
In other words, while the contents and layout of different user interfaces and/or other virtual objects of a virtual workspace may be defined relative to a reference object, the perspective or viewing angle at which such user interfaces and/or other virtual objects are presented might vary for each of the different users sharing the virtual workspace. Additionally, in some embodiments, a remote user (e.g., a user of a device that is not physically located at the same place as the other users and/or the reference object with respect to which the virtual workspace is being provided to the other users) may be able to select his or her own reference object at his or her own location to be used in providing the virtual workspace. In some instances, such a reference object may be manually selected by the remote user, while in other instances, the remote user's augmented reality or head-mounted display unit may automatically select a reference object to be used in providing the shared virtual workspace. [00112] In some embodiments, virtual workspaces that are stored in the cloud may be accessed by a link, such as a hyperlink, that may be shared by and/or between the various users of the virtual workspace (e.g., via email, via text message, through a QR code, etc.). In addition, any and/or all of the information relevant to the virtual workspace may be directly communicated to the various devices and/or users thereof, for example, visually through a displayed code, wirelessly via a transmitted signal, and/or using other means. In some embodiments, permissions may be defined for each workspace and used to determine whether other users are able to access and/or edit a respective workspace. For example, a user may openly share his workspace with everyone, may create private workspaces that only the user can access, or may grant permissions to everyone within a social circle or friends network. 
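The workspace permission model just described (open sharing, private workspaces, and social-circle grants) could be sketched as follows. The class name, visibility levels, and method names are illustrative assumptions for this example, not part of any implementation described in the disclosure.

```python
# Hypothetical sketch of per-workspace access permissions. Visibility levels
# mirror the three cases in the text: share with everyone, owner-only, or
# everyone within a social circle / friends network.

PUBLIC, PRIVATE, SOCIAL_CIRCLE = "public", "private", "social_circle"

class VirtualWorkspace:
    def __init__(self, owner, visibility=PRIVATE, circle=None):
        self.owner = owner
        self.visibility = visibility
        self.circle = set(circle or [])  # owner's social circle or friends network

    def can_access(self, user):
        if user == self.owner or self.visibility == PUBLIC:
            return True
        if self.visibility == SOCIAL_CIRCLE:
            return user in self.circle
        return False  # PRIVATE: only the owner can access

ws = VirtualWorkspace("alice", visibility=SOCIAL_CIRCLE, circle=["bob"])
print(ws.can_access("bob"))    # True
print(ws.can_access("carol"))  # False
```

A real system would likely also distinguish view from edit rights, since the text separates the ability to access a workspace from the ability to edit it.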
[00113] In some embodiments, a reference object might not be an electronic device. Rather, in some embodiments, a reference object may be another physical object, for example one that is rectangular in shape (e.g., a notepad, a business card, a piece of paper, etc.). Whether or not the reference object is an electronic device, a head-mounted display unit may, in accordance with one or more embodiments, provide one or more virtual workspaces in the various ways discussed above. [00114] In still other embodiments, a reference object may be a physical object of any shape. For example, an augmented reality or head-mounted display unit may be configured to identify unique features of the object, and use the object as a reference object in accordance with the various features discussed above. For instance, a circular object, such as a coaster (e.g., a coaster that may be placed on a coffee table), may be used as a reference object, and a head-mounted display unit may be configured to detect the shape of the coaster in a captured image, select the shape as a reference object, and define one or more virtual workspaces in relation to the coaster, similar to how such virtual workspaces may be defined in the examples discussed above. [00115] FIG. 9 illustrates a flowchart that depicts an example method of providing augmented reality surface segmentation using reference object detection according to some embodiments. The processing illustrated in FIG. 9 may be implemented in software (e.g., computer-readable instructions, code, programs, etc.) that can be executed by one or more processors and/or other hardware components. Additionally or alternatively, the software may be stored on a non-transitory computer-readable storage medium. In some embodiments, the method illustrated in FIG. 9 may be performed by a head-mounted display unit, while in other embodiments, the method illustrated in FIG. 
9 may be performed by a computing device that is communicatively coupled to, connected to, and/or otherwise linked to a head-mounted display unit. In still other embodiments, the method illustrated in FIG. 9 may be performed in combination by a head-mounted display unit and a computing device that is communicatively coupled to, connected to, and/or otherwise linked to the head-mounted display unit. As seen in FIG. 9, the method may be initiated in step 905, in which camera input of a scene is received. For example, in step 905, image and/or video input may be received as camera input by a head-mounted display unit, and/or a computing device connected to the head-mounted display unit, from one or more cameras included in the head-mounted display unit. The camera input may, for instance, include one or more images of a scene that is before a user of the head-mounted display unit. As in the examples discussed above, such a scene may include a physical surface on which one or more reference objects may be placed, and such reference object(s) may be used by the head-mounted display unit in providing a virtual workspace. In some embodiments, in step 905, system 100 may receive camera input using input/output subsystem 110. [00116] The method continues to step 910, in which at least one reference object in the scene is identified. In some embodiments, these reference objects may be physical objects in the scene. For example, the reference objects may comprise physical three-dimensional objects. For example, in step 910, one or more reference objects may be identified by the head-mounted display unit, and/or a computing device connected to the head-mounted display unit, based on information describing one or more unique properties of the various reference objects. In some embodiments, the various reference objects may be rectangular in shape, and identifying reference object(s) in the camera input may be based on the results of the reference object detection step (not shown in FIG. 9). 
In particular, such reference object detection may be used to identify candidates of real-world objects that may be reference objects, and subsequently, the head-mounted display unit and/or the connected computing device may analyze the candidate objects in order to determine which of the candidate object(s) is or are reference objects. In some embodiments, in step 910, system 100 may identify one or more reference objects using control object tracking subsystem 125. [00117] Subsequently, in step 915, input defining a surface segment is received. Such input may, in some embodiments, define one or more surface segments relative to a physical surface included in the scene and/or relative to one or more reference objects, such as the one or more reference objects identified in step 910 that may be placed on the physical surface. In some instances, the input defining the one or more surface segments may, for example, be user input that delineates and/or otherwise corresponds to a rectangle outlined by the user on the physical surface. For example, the user may place his or her fingers at a starting point in view of the camera included on the head-mounted display, and then draw his or her fingers outwards to define the opposite corners of a rectangle in which a user interface and/or other virtual objects may be rendered, as in the example discussed above with respect to FIGS. 2-7. In some embodiments, in step 915, system 100 may receive input that defines one or more surface segments using input/output subsystem 110. [00118] Referring again to FIG. 9, in step 920, the surface segment is caused to be rendered. For example, in step 920, the head-mounted display unit and/or the connected computing device may render the surface segment defined in step 915, along with one or more other virtual objects and/or other user interfaces associated with a virtual workspace that includes the defined surface segment. 
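The four steps just described (905: receive camera input; 910: identify a reference object; 915: receive input defining a surface segment; 920: render the segment) could be sketched as a simple pipeline. The stub classes below are stand-ins for the subsystems named in the text, not a real HMD API; all names and return values are assumptions for illustration.

```python
# Illustrative sketch of the FIG. 9 method. Each stub class stands in for one
# of the subsystems the text names (input/output, reference object detection,
# surface segment management, rendering).

class Camera:
    def capture(self):
        return {"objects": ["smartphone"]}      # pretend frame contents

class Detector:
    def find_reference_object(self, frame):
        # Step 910: pick a known reference object out of the frame, if any.
        return next((o for o in frame["objects"] if o == "smartphone"), None)

class InputSource:
    def read_segment(self, ref):
        # Step 915: a user-drawn rectangle, defined relative to the reference object.
        return {"anchor": ref, "rect": (0, 0, 320, 240)}

class Renderer:
    def render(self, segment):
        # Step 920: draw the surface segment in the virtual workspace.
        return f"rendered {segment['rect']} anchored to {segment['anchor']}"

def run_pipeline(camera, detector, input_source, renderer):
    frame = camera.capture()                    # step 905: receive camera input
    ref = detector.find_reference_object(frame) # step 910: identify reference object
    if ref is None:
        return None                             # no reference object, no workspace
    segment = input_source.read_segment(ref)    # step 915: input defining a segment
    return renderer.render(segment)             # step 920: render the segment

print(run_pipeline(Camera(), Detector(), InputSource(), Renderer()))
```

Returning `None` when no reference object is found mirrors the text's behavior of closing (or not opening) the workspace when its reference object is absent.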
As discussed above, such a surface segment and/or the entire virtual workspace may be associated with a particular reference object, such as the reference object identified in step 910, such that removal and/or replacement of the reference object results in the closing and/or opening, respectively, of the virtual workspace. In some embodiments, in step 920, system 100 may render one or more surface segments of a virtual workspace using surface segment management subsystem 120 and/or rendering subsystem 140. [00119] FIG. 10 illustrates an example of a computing system in which one or more embodiments may be implemented. In some embodiments, a computer system 1000 as illustrated in FIG. 10 may be incorporated as part of a computing device, which may implement, perform, and/or execute any and/or all of the features, methods, and/or method steps described herein. For example, computer system 1000 may represent some of the components of a head-mounted display unit, a mobile device, or any other computing device, such as a laptop computer, a tablet computer, a smart phone, or a desktop computer. In addition, computer system 1000 may represent some of the components of system 100 of FIG. 1 (e.g., memory 1035 may represent memory 105; input devices 1015 and output device 1020 may represent input/output subsystem 110; processor 1010 and/or memory 1035 may provide one or more of the various subsystems of system 100 discussed above, such as reference object detection subsystem 115, surface segment management subsystem 120, control object tracking subsystem 125, rendering subsystem 140, etc.). FIG. 10 provides a schematic illustration of one embodiment of a computer system 1000 that can perform the methods provided by various other embodiments, as described herein. FIG. 10 is meant only to provide a generalized illustration of various components, any and/or all of which may be utilized as appropriate. FIG. 
10, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner. [00120] The computer system 1000 is shown comprising hardware elements that can be electrically coupled via a bus 1005 (or may otherwise be in communication, as appropriate). The hardware elements may include one or more processors 1010, including without limitation one or more general-purpose processors and/or one or more special-purpose processors (such as digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 1015, which can include without limitation a camera, a mouse, a keyboard and/or the like; and one or more output devices 1020, which can include without limitation a display unit, a printer and/or the like. [00121] The computer system 1000 may further include (and/or be in communication with) one or more non-transitory storage devices 1025, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable and/or the like. Such storage devices may be configured to implement any appropriate data storage, including without limitation, various file systems, database structures, and/or the like. [00122] The computer system 1000 might also include a communications subsystem 1030, which can include without limitation a modem, a network card (wireless or wired), an infrared communication device, a wireless communication device and/or chipset (such as a Bluetooth® device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities, etc.), and/or the like. 
The communications subsystem 1030 may permit data to be exchanged with a network (such as the network described below, to name one example), other computer systems, and/or any other devices described herein. In many embodiments, the computer system 1000 will further comprise a non-transitory working memory 1035, which can include a RAM or ROM device, as described above. [00123] The computer system 1000 also can comprise software elements, shown as being currently located within the working memory 1035, including an operating system 1040, device drivers, executable libraries, and/or other code, such as one or more application programs 1045, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above, for example as described with respect to FIG. 8, might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods. [00124] A set of these instructions and/or code might be stored on a computer-readable storage medium, such as the storage device(s) 1025 described above. In some cases, the storage medium might be incorporated within a computer system, such as computer system 1000. In other embodiments, the storage medium might be separate from a computer system (e.g., a removable medium, such as a compact disc), and/or provided in an installation package, such that the storage medium can be used to program, configure and/or adapt a general purpose computer with the instructions/code stored thereon. 
These instructions might take the form of executable code, which is executable by the computer system 1000, and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer system 1000 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.), then takes the form of executable code. [00125] Substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed. [00126] Some embodiments may employ a computer system (such as the computer system 1000) to perform methods in accordance with the disclosure. For example, some or all of the procedures of the described methods may be performed by the computer system 1000 in response to processor 1010 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 1040 and/or other code, such as an application program 1045) contained in the working memory 1035. Such instructions may be read into the working memory 1035 from another computer-readable medium, such as one or more of the storage device(s) 1025. Merely by way of example, execution of the sequences of instructions contained in the working memory 1035 might cause the processor(s) 1010 to perform one or more procedures of the methods described herein, for example one or more steps of the method(s) described with respect to FIG. 8. [00127] The terms "machine-readable medium" and "computer-readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. 
In an embodiment implemented using the computer system 1000, various computer-readable media might be involved in providing instructions/code to processor(s) 1010 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical and/or magnetic disks, such as the storage device(s) 1025. Volatile media include, without limitation, dynamic memory, such as the working memory 1035. Transmission media include, without limitation, coaxial cables, copper wire and fiber optics, including the wires that comprise the bus 1005, as well as the various components of the communications subsystem 1030 (and/or the media by which the communications subsystem 1030 provides communication with other devices). Hence, transmission media can also take the form of waves (including without limitation radio, acoustic and/or light waves, such as those generated during radio-wave and infrared data communications). [00128] Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code. [00129] Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 1010 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. 
A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer system 1000. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention. [00130] The communications subsystem 1030 (and/or components thereof) generally will receive the signals, and the bus 1005 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 1035, from which the processor(s) 1010 retrieves and executes the instructions. The instructions received by the working memory 1035 may optionally be stored on a non-transitory storage device 1025 either before or after execution by the processor(s) 1010. [00131] The methods, systems, and devices discussed above are examples. Various embodiments may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods described may be performed in an order different from that described, and/or various stages may be added, omitted, and/or combined. Also, features described with respect to certain embodiments may be combined in various other embodiments. Different aspects and elements of the embodiments may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples that do not limit the scope of the disclosure to those specific examples. [00132] Specific details are given in the description to provide a thorough understanding of the embodiments. However, embodiments may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the embodiments. 
This description provides example embodiments only, and is not intended to limit the scope, applicability, or configuration of the invention. Rather, the preceding description of the embodiments will provide those skilled in the art with an enabling description for implementing embodiments of the invention. Various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention. [00133] Also, some embodiments were described as processes depicted as flow diagrams or block diagrams. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Furthermore, embodiments of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the associated tasks may be stored in a computer-readable medium such as a storage medium. Processors may perform the associated tasks. [00134] Having described several embodiments, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may merely be a component of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not limit the scope of the disclosure. |
Circuitry configured for dynamically adjusting clock signal quality based on an operating mode for power savings is described. The circuitry includes clock generation circuitry. The circuitry also includes mode control circuitry. The mode control circuitry provides a drive signal based on an operating mode. The circuitry also includes clock buffer circuitry coupled to the clock generation circuitry and to the mode control circuitry. The clock buffer circuitry adjusts a clock signal quality based on the drive signal. |
CLAIMS1. Circuitry configured for dynamically adjusting clock signal quality based on an operating mode for power savings, comprising:clock generation circuitry;mode control circuitry, wherein the mode control circuitry provides a drive signal based on an operating mode; andclock buffer circuitry coupled to the clock generation circuitry and to the mode control circuitry, wherein the clock buffer circuitry adjusts a clock signal quality based on the drive signal.2. The circuitry of claim 1, wherein the clock signal quality is continually adjusted based on an operating mode indicator.3. The circuitry of claim 1, wherein a drive signal strength is reduced and the clock signal quality is reduced for a reduced quality operating mode.4. The circuitry of claim 3, wherein reducing the drive signal strength conserves power.5. The circuitry of claim 1, wherein a drive signal strength is increased and the clock signal quality is increased for a highest quality operating mode.6. The circuitry of claim 1, wherein the operating mode is based on the clock signal quality required for proper operation of recipient circuitry.7. The circuitry of claim 1, wherein the clock signal quality is based on one of a group consisting of phase noise, frequency drift, amplitude, temperature compensation, jitter and another clock quality parameter.8. The circuitry of claim 1, wherein the clock generation circuitry comprises a crystal and crystal oscillator circuitry.9. The circuitry of claim 1, wherein the mode control circuitry and the clock buffer circuitry are included in a power management circuit.10. The circuitry of claim 1, wherein the mode control circuitry and the clock buffer circuitry are included in an electronic device.11. 
A method for dynamically adjusting clock signal quality by circuitry based on an operating mode for power savings, comprising:generating a clock signal;providing a drive signal based on an operating mode; andadjusting a clock signal quality based on the drive signal.12. The method of claim 11, wherein the clock signal quality is continually adjusted based on an operating mode indicator.13. The method of claim 11, wherein a drive signal strength is decreased and the clock signal quality is decreased for a reduced quality operating mode.14. The method of claim 13, wherein decreasing the drive signal strength conserves power.15. The method of claim 11, wherein a drive signal strength is increased and the clock signal quality is increased for a highest quality operating mode.16. The method of claim 11, wherein the operating mode is based on the clock signal quality required for proper operation of recipient circuitry.17. The method of claim 11, wherein the clock signal quality is based on one of a group consisting of phase noise, frequency drift, amplitude, temperature compensation, jitter and another clock quality parameter.18. The method of claim 11, wherein the clock signal is generated using a crystal and crystal oscillator circuitry.19. The method of claim 11, wherein the method is performed by circuitry included in a power management circuit.20. The method of claim 11 , wherein the method is performed by circuitry included in an electronic device.21. A computer-program product for dynamically adjusting clock signal quality based on an operating mode for power savings, comprising a non-transitory tangible computer- readable medium having instructions thereon, the instructions comprising:code for causing circuitry to generate a clock signal;code for causing the circuitry to provide a drive signal based on an operating mode; andcode for causing the circuitry to adjust a clock signal quality based on the drive signal.22. 
The computer-program product of claim 21, wherein the clock signal quality is continually adjusted based on an operating mode indicator.23. The computer-program product of claim 21, wherein a drive signal strength is decreased and the clock signal quality is decreased for a reduced quality operating mode.24. The computer-program product of claim 23, wherein decreasing the drive signal strength conserves power.25. The computer-program product of claim 21, wherein a drive signal strength is increased and the clock signal quality is increased for a highest quality operating mode.26. The computer-program product of claim 21, wherein the clock signal quality is based on one of a group consisting of phase noise, frequency drift, amplitude, temperature compensation, jitter and another clock quality parameter.27. An apparatus for dynamically adjusting clock signal quality based on an operating mode for power savings, comprising:means for generating a clock signal;means for providing a drive signal based on an operating mode; andmeans for adjusting a clock signal quality based on the drive signal.28. The apparatus of claim 27, wherein the clock signal quality is continually adjusted based on an operating mode indicator.29. The apparatus of claim 27, wherein a drive signal strength is decreased and the clock signal quality is decreased for a reduced quality operating mode.30. The apparatus of claim 29, wherein decreasing the drive signal strength conserves power.31. The apparatus of claim 27, wherein a drive signal strength is increased and the clock signal quality is increased for a highest quality operating mode.32. The apparatus of claim 27, wherein the clock signal quality is based on one of a group consisting of phase noise, frequency drift, amplitude, temperature compensation, jitter and another clock quality parameter. |
DYNAMICALLY ADJUSTING CLOCK BUFFER CIRCUITRY FOR POWER CONSERVATION RELATED APPLICATIONS[0001] This application is related to and claims priority from U.S. Provisional Patent Application Serial No. 61/349,751 filed May 28, 2010 for "DYNAMIC CLOCK BUFFER POWER OPTIMIZATION BASED ON MODES OF OPERATION."TECHNICAL FIELD[0002] The present disclosure relates generally to electronic devices. More specifically, the present disclosure relates to dynamically adjusting clock buffer circuitry for power conservation.BACKGROUND[0003] In the last several decades, the use of electronics has become common. In particular, advances in electronic technology have reduced the cost of increasingly complex and useful electronic devices. Cost reduction and consumer demand have proliferated the use of electronic devices such that they are practically ubiquitous in modern society. As the use of electronic devices has expanded, so has the demand for new and improved features of electronics. More specifically, electronic devices that perform functions faster, more efficiently or with higher quality are often sought after.[0004] Many electronic devices (e.g., electronic circuits, cellular phones, smart phones, computers, etc.) use clock signals. These electronic devices may use clock signals for various purposes. For example, an electronic device may use a clock signal to time processing operations, to perform signal processing, to track time, to transmit and/or receive signals, etc. For instance, a cellular phone may use a clock signal for signal processing (e.g., modulation/demodulation, encoding, etc.) and coordinating communications. In another instance, a computer may use a clock signal to time processing operations. [0005] Clock signals are often derived from a source such as a physical crystal, whose output is often processed in order to improve its quality. For example, some devices or components may require higher quality clock signals than others. 
However, processing clock signals requires electrical power. Increased electrical power is often needed to produce increased clock signal quality. Providing a higher quality clock signal than is required may thus consume more electrical power than is needed, wasting energy. Systems and methods that help to conserve power may be beneficial.SUMMARY[0006] Circuitry configured for dynamically adjusting clock signal quality based on an operating mode for power savings is disclosed. The circuitry includes clock generation circuitry. The circuitry also includes mode control circuitry. The mode control circuitry provides a drive signal based on an operating mode. The circuitry also includes clock buffer circuitry coupled to the clock generation circuitry and to the mode control circuitry. The clock buffer circuitry adjusts a clock signal quality based on the drive signal. The clock generation circuitry may include a crystal and crystal oscillator circuitry.[0007] The clock signal quality may be continually adjusted based on an operating mode indicator. A drive signal strength may be reduced and the clock signal quality may be reduced for a reduced quality operating mode. Reducing the drive signal strength may conserve power. A drive signal strength may be increased and the clock signal quality may be increased for a highest quality operating mode.[0008] The operating mode may be based on the clock signal quality required for proper operation of recipient circuitry. The clock signal quality may be based on one of a group consisting of phase noise, frequency drift, amplitude, temperature compensation, jitter and another clock quality parameter.[0009] The mode control circuitry and the clock buffer circuitry may be included in a power management circuit. 
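A minimal numerical sketch of the relationship described above between operating mode, drive signal strength, and buffer power. The mode names, drive levels, and the linear power model are illustrative assumptions for this example, not values from the disclosure; real hardware would adjust bias currents in the clock buffer circuitry rather than software values.

```python
# Hypothetical mapping from operating mode to drive signal strength.
# A stronger drive yields better clock quality (e.g., lower phase noise and
# jitter) at the cost of more power; a weaker drive conserves power when the
# recipient circuitry tolerates reduced quality.

DRIVE_LEVELS = {
    "highest_quality": 1.0,   # full drive strength: best quality, most power
    "normal":          0.5,
    "reduced_quality": 0.25,  # weakest drive the recipient circuitry tolerates
}

def drive_signal_for_mode(mode):
    # Mode control circuitry: provide a drive signal based on the operating mode.
    return DRIVE_LEVELS[mode]

def buffer_power(drive, full_power_mw=10.0):
    # Simplifying assumption: buffer power scales linearly with drive strength.
    return full_power_mw * drive

print(buffer_power(drive_signal_for_mode("reduced_quality")))  # 2.5
```

In this toy model, switching from the highest-quality mode to the reduced-quality mode cuts buffer power from 10.0 mW to 2.5 mW, illustrating why dynamically lowering the drive signal conserves power.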
The mode control circuitry and the clock buffer circuitry may be included in an electronic device.[0010] A method for dynamically adjusting clock signal quality by circuitry based on an operating mode for power savings is also disclosed. The method includes generating a clock signal. The method also includes providing a drive signal based on an operating mode. The method further includes adjusting a clock signal quality based on the drive signal.[0011] A computer-program product for dynamically adjusting clock signal quality based on an operating mode for power savings is also disclosed. The computer-program product includes a non-transitory tangible computer-readable medium with instructions. The instructions include code for causing circuitry to generate a clock signal. The instructions also include code for causing the circuitry to provide a drive signal based on an operating mode. The instructions further include code for causing the circuitry to adjust a clock signal quality based on the drive signal.[0012] An apparatus for dynamically adjusting clock signal quality based on an operating mode for power savings is also disclosed. The apparatus includes means for generating a clock signal. The apparatus also includes means for providing a drive signal based on an operating mode. 
The apparatus further includes means for adjusting a clock signal quality based on the drive signal.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] Figure 1 is a block diagram illustrating one configuration of clock buffer circuitry that may be dynamically adjusted for power conservation;

[0014] Figure 2 is a flow diagram illustrating one configuration of a method for dynamically adjusting clock buffer circuitry for power conservation;

[0015] Figure 3 is a flow diagram illustrating a more specific configuration of a method for dynamically adjusting clock buffer circuitry for power conservation;

[0016] Figure 4 is a block diagram illustrating a more specific configuration of clock buffer circuitry that may be dynamically adjusted for power conservation;

[0017] Figure 5 is a diagram illustrating one example of dynamically adjusting clock buffer circuitry for power conservation;

[0018] Figure 6 is a block diagram illustrating one example of clock buffer circuitry that may be dynamically adjusted for power conservation;

[0019] Figure 7 is a block diagram illustrating another example of clock buffer circuitry that may be dynamically adjusted for power conservation;

[0020] Figure 8 is a block diagram illustrating one configuration of power management circuitry;

[0021] Figure 9 is a block diagram illustrating one configuration of a wireless communication device in which systems and methods for dynamically adjusting clock buffer circuitry for power conservation may be implemented;

[0022] Figure 10 illustrates various components that may be utilized in an electronic device; and

[0023] Figure 11 illustrates certain components that may be included within a wireless communication device.

DETAILED DESCRIPTION

[0024] The systems and methods disclosed herein may be applied to a variety of electronic devices.
Examples of electronic devices include voice recorders, video cameras, audio players (e.g., Moving Picture Experts Group-1 (MPEG-1) or MPEG-2 Audio Layer 3 (MP3) players), video players, audio recorders, desktop computers, laptop computers, personal digital assistants (PDAs), gaming systems, tablet devices, appliances, etc. One kind of electronic device is a communication device, which may communicate with another device. Examples of communication devices include telephones, laptop computers, desktop computers, cellular phones, smartphones, wireless or wired modems, e-readers, tablet devices, gaming systems, cellular telephone base stations or nodes, access points, wireless gateways and wireless routers.

[0025] An electronic device or communication device may operate in accordance with certain industry standards, such as International Telecommunication Union (ITU) standards and/or Institute of Electrical and Electronics Engineers (IEEE) standards (e.g., Wireless Fidelity or "Wi-Fi" standards such as 802.11a, 802.11b, 802.11g, 802.11n and/or 802.11ac). Other examples of standards that a communication device may comply with include IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access or "WiMAX"), Third Generation Partnership Project (3GPP), 3GPP Long Term Evolution (LTE), Global System for Mobile Communications (GSM) and others (where a communication device may be referred to as a User Equipment (UE), Node B, evolved Node B (eNB), mobile device, mobile station, subscriber station, remote station, access terminal, mobile terminal, terminal, user terminal, subscriber unit, etc., for example).
While some of the systems and methods disclosed herein may be described in terms of one or more standards, this should not limit the scope of the disclosure, as the systems and methods may be applicable to many systems and/or standards.[0026] It should be noted that some communication devices may communicate wirelessly and/or may communicate using a wired connection or link. For example, some communication devices may communicate with other devices using an Ethernet protocol. The systems and methods disclosed herein may be applied to communication devices that communicate wirelessly and/or that communicate using a wired connection or link. In one configuration, the systems and methods disclosed herein may be applied to a communication device that communicates with another device using a satellite.[0027] Some configurations of the systems and methods disclosed herein allow dynamic clock buffer power conservation or savings (e.g., optimization) based on modes of operation. For example, clock buffers in a power management integrated circuit (PMIC) may have several modes of operation that vary in power consumption and performance. Leaving the buffers set at a fixed high-power setting can result in wasted power for modes that do not have stringent phase noise (PN), jitter and other clock signal quality requirements. Rather, lower power modes of operation for the clock buffers may be used for such modes. The systems and methods disclosed herein may help solve the wasted power problem by dynamically changing the power settings of the buffers (analog and digital) based on the requirements of the loads. This may result in power savings at a battery, for instance.[0028] The systems and methods disclosed herein may provide power reduction techniques. For example, clock buffers may be dynamically configured according to different power modes based on the load requirements in different operating modes (e.g., modes of operation). This is in contrast to a traditional approach. 
Traditionally, clock buffers (e.g., PMIC clock buffers) are left in a static configuration for all modes of operation. However, some modes can tolerate worse performance and could be put in a lower power consuming mode.[0029] The systems and methods disclosed herein may reduce (e.g., optimize) power consumption. As dictated by concurrency requirements, for example, sensitive loads that need a cleaner clock may use higher power modes. However, loads that can handle a noisier clock may use the lower power modes. This may result in power savings. Such power savings may be particularly useful for devices that use a battery to provide power. Thus, the systems and methods disclosed herein may provide power savings due to configurable clock buffers. The clock buffers may be configured based on a load's clock signal quality requirements. The systems and methods disclosed herein may be applied to a wide variety of devices, such as electronic circuitry, computing devices, wireless communication devices, etc.[0030] It should be noted that the terms "couple," "coupling," "coupled" or other variations of the word couple as used herein may indicate either an indirect connection or a direct connection. For example, if a first component is "coupled" to a second component, the first component may be either indirectly connected (e.g., through another component) to the second component or directly connected to the second component.[0031] It should be noted that as used herein, designating a component, element or entity (e.g., transistor, capacitor, resistor, power supply, circuit, etc.) as a "first," "second," "third" or "fourth" component may be arbitrary and is used to distinguish components for explanatory clarity. It should also be noted that labels used to designate a "second," "third" or "fourth," etc., do not necessarily imply that elements using preceding labels "first," "second" or "third," etc., are included or used. 
For example, simply because an element or component is labeled a "third" component does not necessarily imply that "first" and "second" elements or components exist or are used. In other words, the numerical labels (e.g., first, second, third, fourth, etc.) are labels used for ease in explanation and do not necessarily imply a particular number of elements or a particular structure. Thus, the components may be labeled or numbered in any manner. [0032] It should be noted that the term "circuitry" as used herein may denote one or more circuit components (e.g., resistors, capacitors, inductors, transistors, etc.). Circuitry may additionally or alternatively use other components, such as processing and/or memory cells, etc. Thus, "circuitry" may be implemented in hardware, software or a combination of both. Examples of circuitry include integrated circuits (ICs), application specific integrated circuits (ASICs), processors, memory cells, registers, amplifiers, etc.[0033] Various configurations are now described with reference to the Figures, where like reference numbers may indicate functionally similar elements. The systems and methods as generally described and illustrated in the Figures herein could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of several configurations, as represented in the Figures, is not intended to limit scope, as claimed, but is merely representative of the systems and methods.[0034] Figure 1 is a block diagram illustrating one configuration of clock buffer circuitry 112 that may be dynamically adjusted for power conservation. For example, Figure 1 illustrates circuitry 100 configured for dynamically adjusting clock signal quality based on an operating mode for power savings. The circuitry 100 may include clock generation circuitry 108 (that may or may not include a crystal, for example), mode control circuitry 102 and/or clock buffer circuitry 112. 
In some configurations, the circuitry 100 may include recipient circuitry 116. The clock buffer circuitry 112 may be coupled to the mode control circuitry 102, the clock generation circuitry 108 and/or the recipient circuitry 116. The clock generation circuitry 108 may generate an input clock signal 110. For example, the clock generation circuitry 108 may comprise a crystal and crystal oscillator circuitry used to generate the input clock signal 110. In some configurations, the clock generation circuitry 108 may include components used to compensate for variations in the input clock signal 110. The input clock signal 110 may include variations and other impairments. For example, the input clock signal 110 may not have adequate peak-to-peak amplitude for some applications and/or may be subject to phase noise, jitter, frequency drift and/or temperature variation.

[0035] The clock buffer circuitry 112 may be used to improve one or more aspects of the input clock signal 110. For example, the clock buffer circuitry 112 may amplify the input clock signal 110, may filter the input clock signal 110 and/or may convert the input clock signal 110 to a digital (e.g., square wave) signal. Additionally or alternatively, the clock buffer circuitry 112 may compensate for phase noise, frequency drift and/or temperature variation in the input clock signal 110.

[0036] The clock buffer circuitry 112 may operate based on a drive signal 106. For example, the clock buffer circuitry 112 may modify the input clock signal 110 to produce an output clock signal 114 based on a drive signal 106 strength. For instance, the clock buffer circuitry 112 may provide a "cleaner" or higher quality output clock signal 114 with increased drive signal 106 strength. A "cleaner" or higher quality output clock signal 114 may exhibit reduced phase noise, temperature variation and/or frequency drift and/or may provide a more accurate (e.g., desirable for the recipient circuitry 116) peak-to-peak amplitude.
However, with decreased drive signal 106 strength, the clock buffer circuitry 112 may provide an output clock signal 114 that exhibits increased phase noise, jitter, temperature variation, frequency drift and/or a less accurate (e.g., lower, increased variation in, less desirable, etc.) peak-to-peak amplitude.[0037] In one configuration, an operating mode, a drive signal 106 (e.g., drive signal 106 strength) and an output clock signal 114 may be characterized in terms of a highest quality operating mode and one or more reduced quality operating modes. For example, the recipient circuitry 116 may operate according to a set of operating modes including at least one highest quality operating mode and one or more reduced quality operating modes. The highest quality operating mode of the recipient circuitry 116 may require an output clock signal 114 that satisfies the most stringent operating requirements of the recipient circuitry 116. For instance, the highest quality operating mode of the recipient circuitry 116 may require a particular peak-to-peak amplitude, less phase noise, less frequency drift, less jitter and/or less temperature variation of the output clock signal 114 to operate properly in the highest quality operating mode. In other words, the highest quality operating mode may require a higher quality output clock signal 114 compared to the one or more reduced quality operating modes in the set of operating modes. The highest quality operating mode may correspond to a highest drive signal 106 strength in a set of drive signal 106 strengths and to a highest output clock signal 114 quality in a set of output clock signal 114 qualities. It should be noted that the use of the term "highest" may not denote an absolute "highest possible," but may denote a "highest in a set." 
In one example, global positioning system (GPS) circuitry may require a high quality clock with low jitter and low frequency drift.

[0038] Hypothetically speaking, providing a reduced quality output clock signal 114 to a recipient circuitry 116 when in a highest quality operating mode may cause the recipient circuitry 116 to provide degraded functionality and/or to malfunction. In other words, a highest quality output clock signal 114 (and hence, a highest drive signal 106 strength) may be required for the recipient circuitry 116 to operate properly while in a highest quality operating mode. It should be noted that the recipient circuitry 116 would also operate properly in a reduced quality operating mode if a highest quality output clock signal 114 were provided. However, the recipient circuitry 116 may function properly in a reduced quality operating mode when provided with only a reduced quality output clock signal 114. Providing the reduced quality output clock signal 114 instead of the highest quality output clock signal 114 may conserve power, for instance, since the drive signal 106 strength may be reduced.

[0039] A reduced quality operating mode may correspond to a reduced drive signal 106 strength and to a lower quality output clock signal 114 when respectively compared to the highest drive signal 106 strength and to a highest quality output clock signal 114. For example, a reduced quality operating mode may correspond to a reduced drive signal 106 strength compared to a highest drive signal 106 strength in a set. Furthermore, a reduced quality operating mode may correspond to a reduced output clock signal 114 quality compared to a highest output clock signal 114 quality in a set. For example, a reduced quality output clock signal 114 may exhibit increased phase noise, increased frequency drift, a lower peak-to-peak amplitude, etc.
However, the reduced quality operating mode may provide a reduced quality output clock signal 114 that allows the recipient circuitry 116 to operate properly while providing power savings. In one example, modem circuitry may tolerate a low quality clock with slewed clock edges and some jitter.

[0040] In some configurations, the clock buffer circuitry 112 may provide multiple output clock signals 114. For example, the clock buffer circuitry 112 may provide different output clock signals 114 of the same or differing qualities. For instance, the clock buffer circuitry 112 may provide a low-quality output clock signal 114, a medium-quality output clock signal 114 and/or a high-quality output clock signal 114. The output clock signal(s) 114 may be provided to recipient circuitry 116. For example, a first output clock signal 114 may be provided to a first recipient circuitry 116 and a second output clock signal 114 may be provided to a second recipient circuitry 116.

[0041] The recipient circuitry 116 may use the output clock signal(s) 114 to perform one or more operations. Examples of recipient circuitry 116 include processors, global positioning system (GPS) circuitry, Bluetooth circuitry, a frequency modulation (FM) receiver chip, interface circuitry (e.g., ports, etc.), signal processing circuitry (e.g., radio frequency (RF) chips), communications circuitry (e.g., modulators, demodulators, encoders, etc.) and/or timers, etc. For instance, the recipient circuitry 116 may use the output clock signal 114 to execute instructions, receive a signal, transmit a signal, encode a signal, decode a signal, modulate a signal, demodulate a signal, track time and/or coordinate communications, etc.

[0042] In some configurations, the recipient circuitry 116 may function according to differing operating modes.
For example, the recipient circuitry 116 may require a particular quality of output clock signal 114 while in a first operating mode (e.g., highest quality operating mode), but may not require the same quality of output clock signal 114 in a second operating mode (e.g., reduced quality operating mode). For instance, an RF chip may require a high quality output clock signal 114 while actively transmitting and receiving payload data, but may be able to tolerate a lower quality output clock signal 114 (with increased phase noise, frequency drift, etc., for example) while not transmitting or receiving payload data.

[0043] In one configuration, the recipient circuitry 116 may send an operating mode indicator 104 to the mode control circuitry 102. In other words, the recipient circuitry 116 may control changes (e.g., transitions) in an operating mode by providing the operating mode indicator 104. The operating mode indicator 104 may explicitly or implicitly indicate an operating mode for the recipient circuitry 116. The mode control circuitry 102 may control the drive signal 106 based on the operating mode indicator 104. In one configuration, the mode control circuitry 102 may include mode mapping circuitry and one or more registers. The mode mapping circuitry may map the operating mode indicator 104 to register bits that control drive signal 106 strength. For example, if the operating mode indicator 104 indicates that a high quality output clock signal 114 is required, the mode mapping circuitry may produce a set of corresponding register bits. The register bits may configure one or more registers to increase the drive signal 106 strength in order to cause the clock buffer circuitry 112 to provide a high quality output clock signal 114.
The mode control circuitry 102 may similarly decrease the drive signal 106 strength when an operating mode indicator 104 indicates that a lower quality output clock signal 114 is sufficient.[0044] In one example, if an RF chip (e.g., recipient circuitry 116) is about to enter an active operating mode, the RF chip may send an operating mode indicator 104 to the mode control circuitry 102 indicating an operating mode that requires a high quality output clock signal 114. The mode control circuitry 102 may increase the drive signal 106 strength, thereby causing the clock buffer circuitry 112 to output a high quality output clock signal 114. Continuing the example, if the RF chip (e.g., recipient circuitry 116) is about to enter a passive mode, the RF chip may send an operating mode indicator 104 to the mode control circuitry 102 indicating an operating mode that does not require a high quality output clock signal 114 (when the RF chip or recipient circuitry 116 can tolerate a lower quality output clock signal 114). Accordingly, the mode control circuitry 102 may reduce the drive signal 106 strength, thereby causing the clock buffer circuitry 112 to provide a lower quality output clock signal 114 to the RF chip. Electrical power may be conserved by reducing the drive signal 106 strength while in operating modes that do not require a high quality output clock signal 114. Thus, the clock buffer circuitry 112 and the recipient circuitry 116 may operate more efficiently. This may be particularly useful in a configuration where the mode control circuitry 102, clock buffer circuitry 112 and/or one or more recipient circuitries 116 are included in an electronic device powered by a battery.[0045] In some configurations, multiple recipient circuitries 116 may be used in accordance with the systems and methods disclosed herein. 
For example, a first recipient circuitry 116 may require a high quality output clock signal 114 during an active operating mode, but not during a passive operating mode. A second recipient circuitry 116 may only require a lower quality output clock signal 114 (with increased phase noise, frequency drift, etc.). While in an active operating mode, the mode control circuitry 102 may increase drive signal 106 strength to provide a high quality output clock signal 114. This may come as a result of an active operating mode indicator 104 provided by the first recipient circuitry 116. The high quality output clock signal 114 may be provided to both the first recipient circuitry 116 and the second recipient circuitry 116. However, if the operating mode indicator 104 indicates that a lower quality output clock signal 114 is sufficient, the mode control circuitry 102 may lower the drive signal 106 strength, causing the clock buffer circuitry 112 to provide a lower quality output clock signal 114. This may reduce power consumption.[0046] While in a high quality operating mode, some recipient circuitry 116 may require high quality output clock signals 114. In one configuration, global positioning system (GPS) circuitry (e.g., recipient circuitry 116) and radio frequency (RF) circuitry (e.g., recipient circuitry 116) may require high quality output clock signals 114 while in a high quality operating mode. On the other hand, some recipient circuitry 116 may be able to tolerate low quality output clock signals 114. In one configuration, modem circuitry (e.g., recipient circuitry 116) may be able to tolerate a low quality output clock signal 114. It should be noted, however, that GPS circuitry, RF circuitry and modem circuitry may have different requirements and different tolerances. 
Furthermore, the differing requirements and differing tolerances may vary according to operating mode (e.g., high quality operating mode, medium quality operating mode, low quality operating mode, etc.).[0047] In some configurations, multiple different operating modes may be used. For example, one or more recipient circuitries 116 may require a range of output clock signal 114 qualities based on multiple operating modes. One or more recipient circuitries 116 may thus provide multiple operating mode indicators 104. The mode control circuitry 102 may accordingly provide multiple drive signal 106 strengths. Furthermore, the clock buffer circuitry 112 may provide multiple output clock signal 114 qualities. Additionally or alternatively, the mode control circuitry 102 may provide multiple drive signals 106 to different clock buffers included in the clock buffer circuitry 112, thereby allowing multiple output clock signals 114 of the same or differing qualities.[0048] In some configurations, one or more operating mode indicators 104 may additionally or alternatively be provided from circuitry other than the recipient circuitry 116. For example, controller circuitry (not illustrated in Figure 1) may dictate when an operating mode may change in addition to or alternatively from the recipient circuitry 116. In such a case, the controller circuitry may provide one or more operating mode indicators 104 to the mode control circuitry 102. Examples of controller circuitry include processors, computer-program products, integrated circuits (ICs), modems, etc. In one configuration, controller circuitry may control one or more aspects of operation of the clock generation circuitry, the mode control circuitry 102, the clock buffer circuitry 112 and/or the recipient circuitry 116. Thus, the controller circuitry may control changes (e.g., transitions) in an operating mode by providing the operating mode indicator 104. 
This may be in addition to, or as an alternative to, control provided by the recipient circuitry 116 for changes (e.g., transitions) in the operating mode.

[0049] It should be noted that the output clock signal 114 quality may be continually adjusted based on an operating mode indicator 104. For example, operating modes (e.g., the required output clock signal 114 quality) may vary with time, which may be reflected by the operating mode indicator 104.

[0050] It should be noted that the drive signal 106 may be implemented as a current, a voltage or data. Thus, for example, the drive signal 106 strength may be increased or decreased by respectively increasing or decreasing an electrical current, by increasing or decreasing a voltage and/or by sending data (e.g., a message, indicator, etc.).

[0051] Figure 2 is a flow diagram illustrating one configuration of a method 200 for dynamically adjusting clock buffer circuitry for power conservation. Clock generation circuitry 108 may generate 202 a clock signal (e.g., input clock signal 110). For example, the clock generation circuitry 108 may include a crystal and crystal oscillator circuitry for generating the clock signal 110.

[0052] Mode control circuitry 102 may provide 204 a drive signal 106 to clock buffer circuitry 112 based on an operating mode. For example, the mode control circuitry 102 may receive an operating mode indicator 104 that indicates an operating mode of recipient circuitry 116. The mode control circuitry 102 may control a drive signal 106 strength based on the operating mode indicator 104. For example, the mode control circuitry 102 may set the drive signal 106 strength that will produce a clock signal 114 quality that corresponds to the recipient circuitry 116 operating mode.

[0053] The clock buffer circuitry 112 may adjust 206 a clock signal 114 quality based on the drive signal 106.
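The steps of method 200 above (generate 202, provide 204 a drive signal based on an operating mode, and adjust 206 the clock signal quality) can be sketched loosely in code. The following is a minimal illustrative model only: the mode names, drive strengths and the simple gain formula are invented here and are not part of the disclosed circuitry.

```python
# Illustrative sketch of method 200 (generate 202, provide 204, adjust 206).
# All names and numeric values are invented for illustration.

# Hypothetical map from operating mode to drive signal strength (arbitrary units).
DRIVE_STRENGTH = {
    "highest_quality": 4,  # e.g., GPS active: low jitter/drift required
    "reduced_quality": 1,  # e.g., modem idle: tolerates a noisier clock
}

def provide_drive_signal(operating_mode):
    """Mode control: choose a drive signal strength from the operating mode."""
    return DRIVE_STRENGTH[operating_mode]

def adjust_clock_quality(input_amplitude, drive_strength):
    """Clock buffer: in this toy model, a stronger drive signal yields a
    higher quality (here, simply fuller-amplitude) output clock."""
    gain = 0.5 + 0.5 * drive_strength
    return input_amplitude * gain

# Generate 202 (raw crystal output), provide 204, adjust 206:
raw_clock_amplitude = 0.3
drive = provide_drive_signal("reduced_quality")
output_amplitude = adjust_clock_quality(raw_clock_amplitude, drive)
```

A reduced quality mode draws a weaker drive signal and hence a lower-amplitude (lower-power) output clock, which is the power-saving mechanism described above.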
In one configuration, the clock signal 114 quality may be adjusted such that it is sufficient to adequately support the recipient circuitry 116 in the current operating mode. For example, if the recipient circuitry 116 requires increased clock signal 114 quality for the current operating mode, the clock buffer circuitry 112 may increase the clock signal 114 quality according to the drive signal 106 such that the recipient circuitry 116 may function properly in the current operating mode. However, if the recipient circuitry 116 can tolerate a lower quality output clock signal 114 in the current operating mode, the clock buffer circuitry 112 may reduce the clock signal 114 quality according to the drive signal 106 in order to conserve energy or power.

[0054] In one configuration, a clock signal 114 quality that is sufficient to support proper operation of the recipient circuitry 116 in an operating mode may be expressed with a margin or tolerance of operation. For example, each operating mode may specify an amount of tolerable frequency variation, an amount of tolerable jitter, an amount of tolerable phase noise, etc. For instance, a highest quality operating mode for recipient circuitry 116 may specify a tolerance that is less than or equal to ±10 parts per million (ppm) for frequency (e.g., frequency drift). Reduced quality operating modes may allow a larger tolerance for frequency (e.g., frequency drift), for example.

[0055] Figure 3 is a flow diagram illustrating a more specific configuration of a method 300 for dynamically adjusting clock buffer circuitry for power conservation. Clock generation circuitry 108 may generate 302 a clock signal (e.g., input clock signal 110). For example, the clock generation circuitry 108 may include a crystal and crystal oscillator circuitry for generating the clock signal 110.

[0056] Mode control circuitry 102 may receive 304 an operating mode indicator 104.
For example, the mode control circuitry 102 may receive 304 a message, signal, bit(s), etc., that may indicate a current or anticipated operating mode of the recipient circuitry 116. The operating mode indicator 104 may be an explicit or implicit indicator. For instance, the recipient circuitry 116 or controller circuitry may send an explicit indicator that may be received 304 by the mode control circuitry 102 and that corresponds to a particular operating mode of the recipient circuitry 116. Additionally or alternatively, the mode control circuitry 102 may receive an implicit indicator (e.g., an RF chip begins communication procedures) that indicates a particular operating mode.

[0057] The mode control circuitry 102 may determine 306 a drive signal 106 strength based on the operating mode indicator 104. For example, the mode control circuitry 102 may determine 306 a drive signal 106 strength that is required to produce a clock signal 114 quality sufficient for the operating mode of the recipient circuitry 116. This determination 306 may be made differently based on the configuration of the systems and methods used. In one configuration, the mode control circuitry 102 could use a look-up table to determine the drive signal 106 strength that corresponds to a particular operating mode (as given by the operating mode indicator 104, for example) of the recipient circuitry 116. In another configuration, the mode control circuitry 102 may include a multiplexer that produces register bits that correspond to a drive signal 106 strength required to produce a sufficient clock signal 114 quality to satisfy the recipient circuitry 116 in a current or anticipated operating mode.

[0058] Mode control circuitry 102 may provide 308 the drive signal 106 to clock buffer circuitry 112.
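The look-up-table form of determination 306 described above can be pictured as a small mapping from an operating mode indicator to the register bits that set drive strength. This is only an illustrative sketch: the mode names and bit patterns below are invented, since the actual register layout is device specific.

```python
# Illustrative look-up table for determination 306. The mode names and the
# register bit patterns are invented; an actual PMIC defines its own layout.

MODE_TO_REGISTER_BITS = {
    "high_quality":   0b11,  # strongest drive: cleanest output clock
    "medium_quality": 0b10,
    "low_quality":    0b01,  # weakest drive: lowest power consumption
}

def determine_drive_register_bits(operating_mode_indicator):
    """Map an operating mode indicator to the register bits that set the
    drive signal strength for a sufficient clock signal quality."""
    return MODE_TO_REGISTER_BITS[operating_mode_indicator]
```

A multiplexer-based implementation would produce the same bits combinationally rather than via a table in memory; the mapping itself is the same either way.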
For example, the mode control circuitry 102 may provide the drive signal 106 strength to the clock buffer circuitry 112 that will produce a clock signal 114 quality that corresponds to the recipient circuitry 116 operating mode.

[0059] The clock buffer circuitry 112 may adjust 310 a clock signal 114 quality based on the drive signal 106. In one configuration, the clock signal 114 quality may be adjusted such that it is sufficient to adequately support the recipient circuitry 116 in the current operating mode. For example, if the recipient circuitry 116 requires increased clock signal 114 quality for the current operating mode, the clock buffer circuitry 112 may increase the clock signal 114 quality according to the drive signal 106 such that the recipient circuitry 116 may function properly in the current operating mode. However, if the recipient circuitry 116 can tolerate a lower quality output clock signal 114 in the current operating mode, the clock buffer circuitry 112 may reduce the clock signal 114 quality according to the drive signal 106 in order to conserve energy or power.

[0060] The clock buffer circuitry 112 may provide 312 the clock signal 114 to the recipient circuitry 116. For example, the clock buffer circuitry 112 may provide 312 the clock signal 114 of the quality indicated by the operating mode indicator 104 to the corresponding recipient circuitry 116. In one configuration, the clock buffer circuitry 112 may provide 312 different clock signal 114 qualities and/or different clock signal 114 types (e.g., analog, digital) in accordance with the operating mode specified.

[0061] Figure 4 is a block diagram illustrating a more specific configuration of clock buffer circuitry 412 that may be dynamically adjusted for power conservation. For example, Figure 4 illustrates circuitry 400 configured for dynamically adjusting clock signal quality based on an operating mode for power savings.
The clock buffer circuitry 412 may be coupled to mode control circuitry 402, clock generation circuitry 408 and/or one or more recipient circuitries 416. The clock generation circuitry 408 may generate one or more input clock signals 410. For example, the clock generation circuitry 408 may comprise a crystal and crystal oscillator circuitry used to generate the input clock signal(s) 410. In some configurations, the clock generation circuitry 408 may include components used to compensate for variations in the input clock signal 410. The input clock signal 410 may include variations and other impairments. For example, the input clock signal 410 may not have adequate peak-to-peak amplitude for some applications and/or may be subject to phase noise, jitter, frequency drift and/or temperature variation.[0062] The clock buffer circuitry 412 may be used to improve one or more aspects of the input clock signal(s) 410. The clock buffer circuitry 412 may include one or more clock buffers 418. Each of the one or more clock buffers 418 may include one or more of temperature compensation circuitry 420, frequency drift compensation circuitry 422, jitter compensation circuitry 424, phase noise compensation circuitry 426, amplification circuitry 428, conversion circuitry 430 and other circuitry used to improve the characteristics of the input clock signal(s) 410. The temperature compensation circuitry 420 may compensate for temperature variation in the input clock signal(s) 410. The frequency drift compensation circuitry 422 may compensate for variations in the frequency of the input clock signal(s) 410. The jitter compensation circuitry 424 may compensate for variations in time (e.g., phase), frequency and/or amplitude of the input clock signal(s) 410. The phase noise compensation circuitry 426 may compensate for variations in phase of the input clock signal(s) 410. The amplification circuitry 428 may amplify the input clock signal(s) 410. 
The conversion circuitry 430 may convert one or more of the input clock signals 410 to a digital (e.g., square wave) signal. Other circuitry (e.g., filtering circuitry) or circuitries may be used to additionally or alternatively enhance the input clock signal(s) 410. For example, other circuitries in the clock buffer(s) 418 may be used to improve a slew rate and/or reduce distortion in the output clock signal(s) 414.[0063] Additionally or alternatively, one or more clock buffers 418 may be configured according to one or more power modes 459 and/or according to one or more other parameters 461. For example, a power mode 459 may allow a clock buffer 418 to be configured to operate with a specified amount of power consumption. For instance, a number of power modes 459 may each configure a clock buffer 418 to consume a given current (e.g., an average number of amperes) while in operation. The configured power mode 459 may affect the functioning of one or more of the temperature compensation circuitry 420, frequency drift compensation circuitry 422, jitter compensation circuitry 424, phase noise compensation circuitry 426, amplification circuitry 428, conversion circuitry 430 and other circuitry or circuitries. For example, a reduced power mode (corresponding to a reduced quality operating mode, for instance) may consume less power at the expense of reduced output clock signal 414 quality. However, a highest (or high) power mode (corresponding to a highest (or high) quality operating mode) may provide a higher quality output clock signal 414 at the expense of greater power consumption. It should be noted that multiple power modes 459 may be used that offer a range of output clock signal 414 qualities (e.g., low, medium, high, etc.) in trade for power consumption.
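A minimal sketch of such a set of power modes 459, assuming illustrative current and quality figures that are not part of the disclosure:

```python
# Hypothetical power modes for a clock buffer: each trades output clock
# signal quality against supply current (all values are assumptions).
POWER_MODES = {
    "low":    {"current_ma": 0.1, "quality": 1},
    "medium": {"current_ma": 0.4, "quality": 2},
    "high":   {"current_ma": 1.0, "quality": 3},
}

def cheapest_mode_for(required_quality):
    """Pick the power mode drawing the least current that still meets
    the required output clock signal quality."""
    candidates = [(mode["current_ma"], name)
                  for name, mode in POWER_MODES.items()
                  if mode["quality"] >= required_quality]
    return min(candidates)[1]
```

Here cheapest_mode_for(2) selects the "medium" mode, mirroring how a reduced quality operating mode permits a lower-power buffer configuration.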
In one configuration, the power mode 459 may be set or configured based on one or more drive signals 406 (e.g., current, voltage or data).[0064] The one or more clock buffers 418 may additionally or alternatively be configured to operate in accordance with one or more other parameters 461. The one or more other parameters 461 may be adjustable to control the performance of the clock buffer(s) 418 (e.g., the output clock signal 414 quality). For example, one other parameter 461 may be used to adjust a slew rate of one or more output clock signals 414. Another parameter 461 may be used to adjust distortion in the output clock signal(s) 414. One or more other parameters may affect the functioning of one or more of the temperature compensation circuitry 420, frequency drift compensation circuitry 422, jitter compensation circuitry 424, phase noise compensation circuitry 426, amplification circuitry 428, conversion circuitry 430 and other circuitry or circuitries. Changing the other parameter(s) 461 may affect power consumption in trade for output clock signal 414 quality. In general, reducing an output clock signal 414 quality based on one or more other parameters 461 may reduce power consumption. Conversely, increasing an output clock signal 414 quality based on one or more other parameters 461 may increase power consumption. In one configuration, the other parameter(s) 461 may be set or configured based on one or more drive signals 406 (e.g., current, voltage or data).[0065] It should be noted that as the one or more drive signals 406 are reduced, the performance of the one or more clock buffers 418 may accordingly be reduced. For example, reducing a drive signal (e.g., reducing a current, reducing a voltage, changing a data message, etc.) 406 may reduce the capability of a clock buffer 418 to improve one or more characteristics of the input clock signal 410.
For example, in a reduced quality operating mode, the performance of one or more of the circuitries 420, 422, 424, 426, 428, 430 may be lessened or even disabled. In one configuration, a drive signal 406 may be reduced to the point that the output clock signal 414 is substantially equivalent to the input clock signal 410. It should be noted however, that a range or many different levels of operation of the one or more circuitries 420, 422, 424, 426, 428, 430 may be achieved. For instance, only a selection of the circuitries 420, 422, 424, 426, 428, 430 may be disabled when a drive signal 406 is reduced. Additionally or alternatively, the performance of one or more of the circuitries 420, 422, 424, 426, 428, 430 may be reduced with the reduced drive signal 406. It should be noted that a clock buffer 418 need not include all of the circuitries illustrated 420, 422, 424, 426, 428, 430 or indeed, any. Rather, a clock buffer 418 may include one or more of the circuitries 420, 422, 424, 426, 428, 430 illustrated or some other circuitry that improves a characteristic of the input clock signal 410.[0066] The clock buffer circuitry 412 (e.g., the one or more clock buffers 418) may operate based on one or more drive signals 406. For example, the clock buffer circuitry 412 may modify one or more of the input clock signals 410 to produce one or more output clock signals 414 based on the strength of the one or more drive signals 406. For instance, a clock buffer 418 may provide a "cleaner" or higher quality output clock signal 414 with increased drive signal 406 strength. A "cleaner" or higher quality output clock signal 414 may exhibit reduced phase noise, temperature variation, frequency drift, jitter and/or may provide a more accurate (e.g., desirable for the recipient circuitry 416) peak-to-peak amplitude. 
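One way to model the selective lessening or disabling of the circuitries 420, 422, 424, 426, 428, 430 at reduced drive is a per-stage threshold. The stage names follow the description, while the thresholds and the gating scheme are assumptions for illustration:

```python
# Assumed minimum drive strength for each compensation stage; a stage
# runs only when the drive signal meets its threshold (values are
# illustrative, not from the disclosure).
STAGE_THRESHOLDS = {
    "amplification": 1,
    "conversion": 1,
    "jitter_compensation": 2,
    "phase_noise_compensation": 2,
    "frequency_drift_compensation": 3,
    "temperature_compensation": 3,
}

def enabled_stages(drive_strength):
    """Return the stages a clock buffer can run at the given drive
    strength; at strength 0 the output clock signal is essentially the
    unmodified input clock signal."""
    return sorted(stage for stage, threshold in STAGE_THRESHOLDS.items()
                  if drive_strength >= threshold)
```

Under this model, lowering the drive strength disables a selection of the stages rather than all of them at once, matching the range of operating levels noted above.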
However, with decreased drive signal 406 strength, a clock buffer 418 may provide an output clock signal 414 that exhibits increased phase noise, jitter, temperature variation, frequency drift and/or a less accurate (e.g., lower, increased variation in, less desirable, etc.) peak-to-peak amplitude.[0067] In some configurations, the clock buffer circuitry 412 may provide multiple output clock signals 414. For example, the clock buffer circuitry 412 may provide different output clock signals 414 of the same or differing qualities. For instance, the clock buffer circuitry 412 may provide a low-quality output clock signal 414, a medium-quality output clock signal 414 and/or a high-quality output clock signal 414. The output clock signal(s) 414 may be provided to one or more recipient circuitries 416. For example, a first output clock signal 414 may be provided to a first recipient circuitry 416 and a second output clock signal 414 may be provided to a second recipient circuitry 416.[0068] The one or more recipient circuitries 416 may use the output clock signal(s) 414 to perform one or more operations. Examples of recipient circuitries 416 include processors, global positioning system (GPS) circuitry, Bluetooth circuitry, a frequency modulation (FM) receiver chip, interface circuitry (e.g., ports, etc.), signal processing circuitry (e.g., radio frequency (RF) chips), communications circuitry (e.g., modulators, demodulators, encoders, etc.) and/or timers, etc. For instance, the one or more recipient circuitries 416 may use the output clock signal(s) 414 to execute instructions, receive a signal, transmit a signal, encode a signal, decode a signal, modulate a signal, demodulate a signal, track time and/or coordinate communications, etc.[0069] In some configurations, the one or more recipient circuitries 416 may function according to differing operating modes.
For example, one recipient circuitry 416 may require a particular quality of output clock signal 414 while in a first operating mode (e.g., highest quality operating mode), but may not require the same quality of output clock signal 414 in a second operating mode (e.g., reduced quality operating mode). For instance, an RF chip may require a high quality output clock signal 414 while in an active operating mode (e.g., while transmitting and receiving payload data), but may be able to tolerate a lower quality output clock signal 414 (with increased phase noise, frequency drift, etc., for example) while in a passive operating mode (e.g., while not transmitting or receiving payload data).[0070] In one configuration, one or more of the one or more recipient circuitries 416 may send one or more operating mode indicators 404 to the mode control circuitry 402. An operating mode indicator 404 may explicitly or implicitly indicate an operating mode for one or more recipient circuitries 416. The mode control circuitry 402 may control the one or more drive signals 406 based on the operating mode indicator 404. In one configuration, the mode control circuitry 402 may include mode mapping circuitry and one or more registers. The mode mapping circuitry may map the one or more operating mode indicators 404 to register bits that control the strength of the one or more drive signals 406. For example, if an operating mode indicator 404 indicates that a high quality output clock signal 414 is required by a recipient circuitry 416, the mode mapping circuitry may produce a set of corresponding register bits. The register bits may configure one or more registers to increase the strength of a drive signal 406 in order to cause a clock buffer 418 to provide a high quality output clock signal 414 to the recipient circuitry 416. 
The mode control circuitry 402 may similarly decrease the strength of a drive signal 406 when an operating mode indicator 404 indicates that a lower quality output clock signal 414 is sufficient for an operating mode of the recipient circuitry 416.[0071] In one example, if an RF chip (e.g., a recipient circuitry 416) is about to enter an active operating mode, the RF chip may send an operating mode indicator 404 to the mode control circuitry 402 indicating an operating mode that requires a high quality output clock signal 414. The mode control circuitry 402 may increase a drive signal 406 strength, thereby causing a clock buffer 418 to output a high quality output clock signal 414. Continuing the example, if the RF chip (e.g., recipient circuitry 416) is about to enter a passive mode, the RF chip may send an operating mode indicator 404 to the mode control circuitry 402 indicating an operating mode that does not require a high quality output clock signal 414 (when the RF chip or recipient circuitry 416 can tolerate a lower quality output clock signal 414). Accordingly, the mode control circuitry 402 may reduce the drive signal 406 strength, thereby causing the clock buffer circuitry 412 to provide a lower quality output clock signal 414 to the RF chip. Electrical power may be conserved by reducing the drive signal 406 strength while in operating modes that do not require a high quality output clock signal 414. Thus, the clock buffer circuitry 412 and the recipient circuitry 416 may operate more efficiently.[0072] In some configurations, multiple recipient circuitries 416 may be used in accordance with the systems and methods disclosed herein. For example, a first recipient circuitry 416 may require a high quality output clock signal 414 during an active operating mode, but not during a passive operating mode. A second recipient circuitry 416 may only require a lower quality output clock signal 414 (with increased phase noise, frequency drift, etc.). 
While in an active operating mode, the mode control circuitry 402 may increase drive signal 406 strength to provide a high quality output clock signal 414. This may come as a result of an active operating mode indicator 404 provided by the first recipient circuitry 416. The high quality output clock signal 414 may be provided to both the first recipient circuitry 416 and the second recipient circuitry 416. However, if the operating mode indicator 404 indicates that a lower quality output clock signal 414 is sufficient, the mode control circuitry 402 may lower the drive signal 406 strength, causing the clock buffer circuitry 412 to provide a lower quality output clock signal 414 for the first recipient circuitry 416 and the second recipient circuitry 416. This may reduce power consumption.[0073] In some configurations, multiple different operating modes may be used. For example, one or more recipient circuitries 416 may require a range of output clock signal 414 qualities based on multiple operating modes. One or more recipient circuitries 416 may thus provide multiple operating mode indicators 404. The mode control circuitry 402 may accordingly provide multiple drive signal 406 strengths. Furthermore, the clock buffer circuitry 412 may provide multiple output clock signal 414 qualities. Additionally or alternatively, the mode control circuitry 402 may provide multiple drive signals 406 to different clock buffers 418 included in the clock buffer circuitry 412, thereby allowing multiple output clock signals 414 of the same or differing qualities.[0074] In some configurations, one or more operating mode indicators 404 may additionally or alternatively be provided from circuitry other than the recipient circuitry 416. For example, controller circuitry (not illustrated in Figure 4) may dictate when an operating mode may change in addition to or alternatively from the recipient circuitry 416. 
In such a case, the controller circuitry may provide one or more operating mode indicators 404 to the mode control circuitry 402.[0075] It should be noted that a drive signal 406 may be implemented as a current or voltage. Thus, for example, a drive signal 406 strength may be increased or decreased by respectively increasing or decreasing an electrical current or voltage.[0076] Figure 5 is a diagram illustrating one example of dynamically adjusting clock buffer circuitry 512 for power conservation. In this example, clock generation circuitry 508 may provide an input clock signal 532 to the clock buffer circuitry 512. The recipient circuitry 516 (or controller circuitry 563) may determine that a high quality clock signal is required 556. For instance, the recipient circuitry 516 may be entering or anticipate entering a high quality operating mode (e.g., a highest quality operating mode in a set of operating modes). The recipient circuitry 516 (or controller circuitry 563) may provide (e.g., send) a high quality operating mode indicator 534 to mode control circuitry 502. In response, the mode control circuitry 502 may provide a drive signal strength for a high quality clock 536. The clock buffer circuitry 512 may provide a high quality clock signal 538 to the recipient circuitry 516.[0077] In this example, the recipient circuitry 516 (or controller circuitry 563) then determines that only a low quality clock signal is required 540. For example, the recipient circuitry 516 determines that a low quality clock signal may be tolerated while in a low quality operating mode (e.g., a first reduced quality operating mode). Accordingly, the recipient circuitry 516 (or controller circuitry 563) provides (e.g., sends) a low quality operating mode indicator 542 to the mode control circuitry 502. In response, the mode control circuitry 502 provides a drive signal strength for a low quality clock 544. 
For example, the mode control circuitry 502 may reduce the drive signal strength to the clock buffer circuitry 512 in order to conserve power. Accordingly, the clock buffer circuitry 512 may provide a low quality clock signal 546 to the recipient circuitry 516.[0078] Continuing the example, the recipient circuitry 516 (or controller circuitry 563) then determines that a medium quality clock signal is required 548. For example, the recipient circuitry 516 determines that a low quality clock signal is not sufficient for an anticipated "medium quality" operating mode (e.g., a second reduced quality operating mode). Accordingly, the recipient circuitry 516 (or controller circuitry 563) provides (e.g., sends) a medium quality operating mode indicator 550 to the mode control circuitry 502. In response, the mode control circuitry 502 provides a drive signal strength for a medium quality clock 552. For example, the mode control circuitry 502 may increase the drive signal strength to the clock buffer circuitry 512 in order to improve clock signal quality (from the low quality clock signal). Accordingly, the clock buffer circuitry 512 may provide a medium quality clock signal 554 to the recipient circuitry 516.[0079] It should be noted that the controller circuitry 563 may additionally or alternatively control an operating mode. For example, the controller circuitry 563 may determine an operating mode and send an operating mode indicator to the mode control circuitry. For instance, controller circuitry 563 may determine that RF circuitry (e.g., recipient circuitry 516) will require a high quality clock signal to operate while transmitting and/or receiving data. In one configuration, this determination may be based on a signal sent from the RF circuitry (e.g., recipient circuitry 516) to the controller circuitry 563 and/or a signal received from another component (e.g., processor). 
Accordingly, the controller circuitry 563 may send a high quality operating mode indicator 534 to the mode control circuitry 502. This may be in addition to or alternatively from the RF circuitry (e.g., recipient circuitry 516).[0080] Figure 6 is a block diagram illustrating one example of clock buffer circuitry 612 that may be dynamically adjusted for power conservation. For example, Figure 6 illustrates circuitry 600 configured for dynamically adjusting clock signal quality based on an operating mode for power savings. The clock buffer circuitry 612 may be coupled to mode control circuitry 602, clock generation circuitry 608 and/or recipient circuitries 616a-b. The clock generation circuitry 608 may generate an input clock signal 610. For example, the clock generation circuitry 608 may comprise a crystal and crystal oscillator circuitry used to generate the input clock signal 610. The input clock signal 610 may not have adequate peak-to-peak amplitude for some applications and/or may be subject to phase noise, jitter, frequency drift and/or temperature variation.[0081] The clock buffer circuitry 612 may be used to improve one or more aspects of the input clock signal 610. In the example illustrated in Figure 6, the clock buffer circuitry 612 includes clock buffer A 618a and clock buffer B 618b. Clock buffer A 618a may amplify the input clock signal 610, compensate for phase noise, compensate for frequency drift and/or compensate for temperature variation in the input clock signal 610. Clock buffer B 618b may be coupled to the output of clock buffer A 618a and may amplify the signal provided by clock buffer A 618a, compensate for phase noise, compensate for frequency drift and/or compensate for temperature variation in the signal provided by clock buffer A 618a.[0082] The recipient circuitries 616a-b may function according to differing operating modes.
For example, the recipient circuitries 616a-b may require a particular quality of output clock signals 614a-b while in a first operating mode (e.g., a high quality operating mode), but may not require the same quality of the output clock signals 614a-b in a second operating mode (e.g., a reduced quality operating mode). For instance, an RF chip (e.g., recipient circuitry A 616a) may require high quality output clock signal A 614a while actively transmitting and receiving payload data, but may be able to tolerate a lower quality output clock signal (with increased phase noise, frequency drift, etc., for example) while not transmitting or receiving payload data.[0083] When entering (or anticipating) the first operating mode, recipient circuitry A 616a and/or recipient circuitry B 616b may send operating mode indicators 604a-b to the mode control circuitry 602. The operating mode indicators 604a-b may indicate a high quality operating mode for the recipient circuitries 616a-b. The mode control circuitry 602 may control the drive signals 606a-b based on the operating mode indicators 604a-b. For example, the mode control circuitry 602 may provide drive signal 606a-b strengths that are sufficient to cause clock buffer A 618a and clock buffer B 618b to respectively output high quality clock signal A 614a and high quality clock signal B 614b.[0084] In one example, if an RF chip (e.g., recipient circuitry A 616a) is about to enter an active operating mode, the RF chip may send high quality mode indicator A 604a to the mode control circuitry 602 indicating an operating mode that requires high quality output clock signal A 614a. The mode control circuitry 602 may increase the strength of drive signal A 606a, thereby causing clock buffer A 618a to output high quality output clock signal A 614a.[0085] Figure 7 is a block diagram illustrating another example of clock buffer circuitry 712 that may be dynamically adjusted for power conservation. 
For example, Figure 7 illustrates circuitry 700 configured for dynamically adjusting clock signal quality based on an operating mode for power savings. For instance, the circuitry 702, 712, 716 illustrated in Figure 7 may be the circuitry 602, 612, 616 illustrated in Figure 6 that is entering (or anticipating) a reduced quality operating mode. The clock buffer circuitry 712 may be coupled to mode control circuitry 702, clock generation circuitry 708, radio frequency (RF) communication circuitry 716a and/or global positioning system (GPS) circuitry 716b. The clock generation circuitry 708 may generate an input clock signal 710. For example, the clock generation circuitry 708 may comprise a crystal and crystal oscillator circuitry used to generate the input clock signal 710. The input clock signal 710 may not have adequate peak-to-peak amplitude for some applications and/or may be subject to phase noise, jitter, frequency drift and/or temperature variation.[0086] The clock buffer circuitry 712 may be used to improve one or more aspects of the input clock signal 710. In the example illustrated in Figure 7, the clock buffer circuitry includes clock buffer A 718a and clock buffer B 718b. Clock buffer A 718a may amplify the input clock signal 710, compensate for phase noise, compensate for frequency drift and/or compensate for temperature variation in the input clock signal 710. Clock buffer B 718b may be coupled to the output of clock buffer A 718a and may amplify the signal provided by clock buffer A 718a, compensate for phase noise, compensate for frequency drift and/or compensate for temperature variation in the signal provided by clock buffer A 718a.[0087] The RF communication circuitry 716a and the GPS circuitry 716b may function according to differing operating modes. For example, the GPS circuitry 716b may tolerate a medium quality clock signal 714b while in a second operating mode (e.g., reduced quality operating mode). 
Additionally, the RF communication circuitry 716a may tolerate a low quality clock signal 714a while in a second operating mode (e.g., reduced quality operating mode). For instance, RF communication circuitry 716a may tolerate a low quality clock signal 714a while not transmitting or receiving payload data.[0088] Alternatively, the GPS circuitry 716b may always require only a medium quality clock signal 714b. In that case, the strength of drive signal B 706b may be increased while clock buffer A 718a is providing a low quality clock signal 714a for the RF communication circuitry 716a in the second operating mode. Furthermore, the strength of drive signal B 706b may be decreased (or maintained) while clock buffer A 718a is providing a high quality clock signal for the RF communication circuitry 716a in the first operating mode.[0089] When entering (or anticipating) the second operating mode, the RF communication circuitry 716a may send a low quality mode indicator 704a to the mode control circuitry 702. Additionally or alternatively, the GPS circuitry 716b may send a medium quality mode indicator 704b to the mode control circuitry 702.[0090] The low quality operating mode indicator 704a and the medium quality operating mode indicator 704b may respectively indicate a low quality operating mode for the RF communication circuitry 716a and a medium quality operating mode for the GPS circuitry 716b. The mode control circuitry 702 may control the drive signals 706a-b based on the operating mode indicators 704a-b. For example, the mode control circuitry 702 may provide a (reduced) strength for drive signal A 706a in order to cause clock buffer A 718a to produce a low quality clock signal 714a.
Additionally or alternatively, the mode control circuitry 702 may provide a (reduced) strength for drive signal B 706b in order to cause clock buffer B 718b to produce a medium quality clock signal 714b. Electrical power may be conserved by reducing the drive signal 706 strength while in operating modes that do not require a high quality output clock signal. Thus, the clock buffer circuitry 712 and the recipient circuitry 716 may operate more efficiently.[0091] Figure 8 is a block diagram illustrating one configuration of power management circuitry 858. One example of power management circuitry 858 is a power management integrated circuit (PMIC). In some configurations, the power management circuitry 858 may be included in an electronic device, such as an integrated circuit, a cellular phone, a smart phone, a computer, etc. In some configurations, the power management circuitry 858 may be one example of circuitry 100, 400, 600, 700 described above. The power management circuitry 858 may include mode control circuitry 802, crystal oscillator circuitry 868 and clock buffer circuitry 812. The clock buffer circuitry 812 may be dynamically adjusted for power conservation. The clock buffer circuitry 812 may be coupled to mode control circuitry 802, crystal oscillator circuitry 868 and/or recipient circuitries 816a-d. The crystal oscillator circuitry 868 may provide an input clock signal 810. For example, the crystal oscillator circuitry 868 may be coupled to a crystal 866 used to generate the input clock signal 810. For example, the crystal oscillator circuitry 868 may apply a voltage to the crystal 866 that causes the crystal 866 to provide an oscillating signal. In one configuration, the crystal 866 may oscillate at approximately 19.2 megahertz (MHz).[0092] In some configurations, the crystal oscillator circuitry 868 may include components used to compensate for variations in the input clock signal 810. 
For example, the crystal oscillator circuitry 868 may use a temperature indicator 864 to compensate for temperature variations in the input clock signal 810. However, the input clock signal 810 may still vary according to temperature. The input clock signal 810 may include variations and other impairments. For example, the input clock signal 810 may not have adequate peak-to-peak amplitude for some applications and/or may be subject to phase noise, jitter, frequency drift and/or temperature variation.[0093] The clock buffer circuitry 812 may be used to improve one or more aspects of the input clock signal 810. For example, the clock buffer circuitry 812 may amplify the input clock signal 810, may filter the input clock signal 810 and/or may convert the input clock signal 810 to a digital (e.g., square wave) signal. Additionally or alternatively, the clock buffer circuitry 812 may compensate for phase noise, frequency drift and/or temperature variation in the input clock signal 810.[0094] In the example illustrated in Figure 8, the clock buffer circuitry 812 includes multiple buffers 818a-e in order to provide multiple output clock signals 814a-d. For example, the clock buffer circuitry 812 may provide different output clock signals 814a-d of the same or differing qualities. For instance, each of the buffers 818a-e may provide a range of clock signal 814 qualities, including a highest quality clock signal 814 corresponding to a highest quality operating mode and one or more reduced quality clock signals 814 corresponding to one or more reduced quality operating modes. It should be noted that a highest quality clock signal 814 provided from one buffer 818 may differ from a highest quality clock signal 814 provided from another buffer 818. 
For example, each of the recipient circuitries 816a-d may require a different highest quality clock signal 814 for proper operation in highest quality operating modes.[0095] Each of the buffers 818a-e may provide differing clock signals. For example, analog buffer A 818a may provide (analog) output clock signal A 814a and analog buffer B 818b may provide (analog) output clock signal B 814b. However, digital buffer E 818e may provide a digital clock signal, digital buffer C 818c may provide (digital) output clock signal C 814c and digital buffer D 818d may provide (digital) clock signal D 814d. Each of the output clock signals 814a-d may have similar or differing qualities.[0096] Each of the output clock signals 814a-d may be provided to corresponding recipient circuitries 816a-d. For example, output clock signal A 814a may be provided to recipient circuitry A 816a, output clock signal B 814b may be provided to recipient circuitry B 816b, output clock signal C 814c may be provided to recipient circuitry C 816c and output clock signal D 814d may be provided to recipient circuitry D 816d.[0097] Each buffer 818a-e may operate based on a corresponding drive signal 806a-e. For example, each buffer 818a-e may modify the input clock signal 810 (or a derivative thereof) to produce output clock signals 814a-d based on drive signal 806a-e strengths. For instance, the buffers 818a-d may provide a "cleaner" or higher quality output clock signals 814a-d with increased drive signal 806a-e strength. Reduced drive signal 806a-e strength may provide reduced clock signal 814a-d quality.[0098] In some configurations, differing buffers 818a-e may modify the input clock signal 810 in different ways. For example, digital buffer E 818e may convert the input clock signal 810 into a digital signal, while digital buffer C 818c may reduce phase noise.[0099] The recipient circuitries 816a-d may use the output clock signals 814a-d to perform one or more operations. 
Examples of recipient circuitry 816 include processors, global positioning system (GPS) circuitry, Bluetooth circuitry, a frequency modulation (FM) receiver chip, interface circuitry (e.g., ports, etc.), signal processing circuitry (e.g., radio frequency (RF) chips), communications circuitry (e.g., modulators, demodulators, encoders, etc.) and/or timers, etc. For instance, the recipient circuitries 816a-d may use the output clock signal 814 to execute instructions, receive a signal, transmit a signal, encode a signal, decode a signal, modulate a signal, demodulate a signal, track time and/or coordinate communications, etc.[00100] One or more of the recipient circuitries 816a-d may function according to differing operating modes. For example, each of the recipient circuitries 816a-d may require particular output clock signal 814a-d qualities while in differing operating modes. One or more of the recipient circuitries 816a-d may send an operating mode indicator 804 to the mode control circuitry 802. Each operating mode indicator 804 may explicitly or implicitly indicate an operating mode for one or more recipient circuitries 816a-d. The mode control circuitry 802 may control the drive signals 806a-e based on the operating mode indicator 804.[00101] In the example illustrated in Figure 8, the mode control circuitry 802 may include mode mapping circuitry 860 and one or more registers 862. The mode mapping circuitry 860 may map the operating mode indicator(s) 804 to register bits that control drive signal 806a-e strength. For example, if an operating mode indicator 804 indicates that a high quality output clock signal 814a is required for recipient circuitry A 816a, the mode mapping circuitry 860 may produce a set of corresponding register bits. The register bits may configure the one or more registers 862 to increase the strength of drive signal A 806a in order to cause analog buffer A 818a to provide a high quality output clock signal A 814a. 
The mode control circuitry 802 may similarly decrease the strength of drive signal A 806a when an operating mode indicator 804 corresponding to recipient circuitry A 816a indicates that a lower quality output clock signal A 814a is sufficient.[00102] In some configurations, the mode control circuitry 802 may operate according to one or more sets of operating modes. Each set of operating modes may correspond to one or more recipient circuitries 816a-d. For instance, each recipient circuitry 816a-d may have a highest quality operating mode and one or more reduced quality operating modes. In one configuration, the mode control circuitry 802 may provide minimum (with some margin, for example) drive signal 806a-e strengths in order to provide clock signal 814a-d qualities that are sufficient to satisfy the current operating modes of the recipient circuitries 816a-d. However, the mode control circuitry 802 may not provide (except with some margin, for example) higher drive signal 806a-e strengths and higher output clock signal 814a-d qualities than are needed for all of the recipient circuitries 816a-d to function properly according to operating modes in one configuration. More specifically, the mode control circuitry 802 may not operate according to a higher quality operating mode if a lesser (reduced) quality operating mode is available that will still allow proper functioning of the recipient circuitries 816a-d according to their several operating modes, for instance. This approach may conserve power or reduce wasted power. The operating modes (and hence, drive signal 806a-e strengths and output clock signal 814a-d qualities) may vary in time.[00103] In some configurations, one or more operating mode indicators 804 may additionally or alternatively be provided from circuitry other than the recipient circuitries 816a-d. 
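The mode-control behavior described above, mapping operating-mode indicators to the minimum drive-signal strengths that still satisfy every recipient, can be sketched in software. The recipient names, mode names, and strength values below are illustrative assumptions for the sketch and are not taken from the disclosure.

```python
# Illustrative sketch (hypothetical names/values): each recipient circuitry
# reports an operating mode, and the mode control logic drives each buffer
# at the minimum strength that satisfies all recipients clocked by it,
# i.e. the maximum of their individual requirements.

# Required drive strength (arbitrary units) per (recipient, mode) pair.
REQUIRED_STRENGTH = {
    ("gps", "highest_quality"): 3,
    ("gps", "reduced_quality"): 1,
    ("modem", "highest_quality"): 3,
    ("modem", "standby"): 0,
}

def map_modes_to_drive_strengths(mode_indicators, buffer_of_recipient):
    """Map operating-mode indicators to per-buffer drive strengths.

    mode_indicators: dict mapping recipient -> current operating mode.
    buffer_of_recipient: dict mapping recipient -> buffer that clocks it.
    Returns a dict mapping buffer -> drive strength: the maximum
    requirement over its recipients, so no recipient is underserved and
    no buffer is driven harder than necessary.
    """
    strengths = {}
    for recipient, mode in mode_indicators.items():
        buf = buffer_of_recipient[recipient]
        need = REQUIRED_STRENGTH[(recipient, mode)]
        strengths[buf] = max(strengths.get(buf, 0), need)
    return strengths

if __name__ == "__main__":
    drive = map_modes_to_drive_strengths(
        {"gps": "reduced_quality", "modem": "highest_quality"},
        {"gps": "buffer_a", "modem": "buffer_a"},
    )
    # buffer_a is driven at the higher of the two requirements.
    print(drive)
```

In a hardware realization these strengths would be written as register bits (as with the registers 862 above) rather than returned as a dictionary; the sketch only illustrates the minimum-sufficient-strength selection.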
For example, controller circuitry (not illustrated in Figure 8) may dictate when an operating mode may change in addition to or alternatively from the recipient circuitries 816a-d. In such a case, the controller circuitry may provide one or more operating mode indicators 804 to the mode control circuitry 802.[00104] Figure 9 is a block diagram illustrating one configuration of a wireless communication device 970 in which systems and methods for dynamically adjusting clock buffer circuitry 912 for power conservation may be implemented. Examples of wireless communication devices 970 include cellular phones, smartphones, tablet devices, laptop computers, personal digital assistants (PDAs), etc. The wireless communication device 970 may include an application processor 986. The application processor 986 generally processes instructions (e.g., runs programs) to perform functions on the wireless communication device 970. The application processor 986 may be coupled to an audio coder/decoder (codec) 984.[00105] The audio codec 984 may be an electronic device (e.g., integrated circuit) used for coding and/or decoding audio signals. The audio codec 984 may be coupled to one or more speakers 972, an earpiece 974, an output jack 976 and/or one or more microphones 978. The speakers 972 may include one or more electro-acoustic transducers that convert electrical or electronic signals into acoustic signals. For example, the speakers 972 may be used to play music or output a speakerphone conversation, etc. The earpiece 974 may be another speaker or electro-acoustic transducer that can be used to output acoustic signals (e.g., speech signals) to a user. For example, the earpiece 974 may be used such that only a user may reliably hear the acoustic signal. The output jack 976 may be used for coupling other devices to the wireless communication device 970 for outputting audio, such as headphones. 
The speakers 972, earpiece 974 and/or output jack 976 may generally be used for outputting an audio signal from the audio codec 984. The one or more microphones 978 may be one or more acousto-electric transducers that convert an acoustic signal (such as a user's voice) into electrical or electronic signals that are provided to the audio codec 984.[00106] The application processor 986 may also be coupled to a power management circuit 980. One example of the power management circuit 980 is a power management integrated circuit (PMIC), which may be used to manage the electrical power consumption of the wireless communication device 970. The power management circuit 980 may be coupled to a battery 982. The battery 982 may generally provide electrical power to the wireless communication device 970.[00107] The power management circuit 980 may include mode control circuitry 902. One or more of the mode control circuitries 102, 402, 502, 602, 702, 802 described above may be examples of the mode control circuitry 902 illustrated in Figure 9. The mode control circuitry 902 may be used to perform one or more of the methods 200, 300 described above. [00108] The power management circuit 980 may additionally include clock buffer circuitry 912. One or more of the clock buffer circuitries 112, 412, 512, 612, 712, 812 described above may be examples of the clock buffer circuitry 912 illustrated in Figure 9. The clock buffer circuitry 912 may be used to perform one or more of the methods 200, 300 described above. The mode control circuitry 902 and/or the clock buffer circuitry 912 may be used to conserve battery 982 power in accordance with the systems and methods described herein.[00109] The power management circuitry 858 illustrated in Figure 8 may be one example of the power management circuit 980 illustrated in Figure 9.
As shown in Figure 9, the power management circuit 980 may be coupled to the audio codec 984, application processor 986, baseband processor 988, RF transceiver 990, input devices 996, output devices 998, application memory 901, display controller 903, display 905 and/or baseband memory 907. One or more of these elements 984, 986, 988, 990, 996, 998, 901, 903, 905, 907 may be examples of the recipient circuitries 116, 416, 516, 616, 716, 816 described above.[00110] The application processor 986 may be coupled to one or more input devices 996 for receiving input. Examples of input devices 996 include infrared sensors, image sensors, accelerometers, touch sensors, keypads, etc. The input devices 996 may allow user interaction with the wireless communication device 970. The application processor 986 may also be coupled to one or more output devices 998. Examples of output devices 998 include printers, projectors, screens, haptic devices, etc. The output devices 998 may allow the wireless communication device 970 to produce output that may be experienced by a user.[00111] The application processor 986 may be coupled to application memory 901. The application memory 901 may be any electronic device that is capable of storing electronic information. Examples of application memory 901 include double data rate synchronous dynamic random access memory (DDRAM), synchronous dynamic random access memory (SDRAM), flash memory, etc. The application memory 901 may provide storage for the application processor 986. For instance, the application memory 901 may store data and/or instructions for the functioning of programs that are run on the application processor 986. [00112] The application processor 986 may be coupled to a display controller 903, which in turn may be coupled to a display 905. The display controller 903 may be a hardware block that is used to generate images on the display 905. 
For example, the display controller 903 may translate instructions and/or data from the application processor 986 into images that can be presented on the display 905. Examples of the display 905 include liquid crystal display (LCD) panels, light emitting diode (LED) panels, cathode ray tube (CRT) displays, plasma displays, etc.[00113] The application processor 986 may be coupled to a baseband processor 988. The baseband processor 988 generally processes communication signals. For example, the baseband processor 988 may demodulate and/or decode received signals. Additionally or alternatively, the baseband processor 988 may encode and/or modulate signals in preparation for transmission.[00114] The baseband processor 988 may be coupled to baseband memory 907. The baseband memory 907 may be any electronic device capable of storing electronic information, such as SDRAM, DDRAM, flash memory, etc. The baseband processor 988 may read information (e.g., instructions and/or data) from and/or write information to the baseband memory 907. Additionally or alternatively, the baseband processor 988 may use instructions and/or data stored in the baseband memory 907 to perform communication operations.[00115] The baseband processor 988 may be coupled to a radio frequency (RF) transceiver 990. The RF transceiver 990 may be coupled to a power amplifier 992 and one or more antennas 994. The RF transceiver 990 may transmit and/or receive radio frequency signals. For example, the RF transceiver 990 may transmit an RF signal using a power amplifier 992 and one or more antennas 994. The RF transceiver 990 may also receive RF signals using the one or more antennas 994.[00116] Figure 10 illustrates various components that may be utilized in an electronic device 1009. The illustrated components may be located within the same physical structure or in separate housings or structures. 
The electronic device 1009 may include one or more of the clock buffer circuitries 112, 412, 512, 612, 712, 812, 912 and/or mode control circuitries 102, 402, 502, 602, 702, 802, 902 described previously. The electronic device 1009 includes a processor 1017. The processor 1017 may be a general purpose single- or multi-chip microprocessor (e.g., an ARM), a special purpose microprocessor (e.g., a digital signal processor (DSP)), a microcontroller, a programmable gate array, etc. The processor 1017 may be referred to as a central processing unit (CPU). Although just a single processor 1017 is shown in the electronic device 1009 of Figure 10, in an alternative configuration, a combination of processors (e.g., an ARM and DSP) could be used.[00117] The electronic device 1009 also includes memory 1011 in electronic communication with the processor 1017. That is, the processor 1017 can read information from and/or write information to the memory 1011. The memory 1011 may be any electronic component capable of storing electronic information. The memory 1011 may be random access memory (RAM), read-only memory (ROM), magnetic disk storage media, optical storage media, flash memory devices in RAM, on-board memory included with the processor, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable PROM (EEPROM), registers, and so forth, including combinations thereof.[00118] Data 1015a and instructions 1013a may be stored in the memory 1011. The instructions 1013a may include one or more programs, routines, sub-routines, functions, procedures, etc. The instructions 1013a may include a single computer-readable statement or many computer-readable statements. The instructions 1013a may be executable by the processor 1017 to implement one or more of the methods 200, 300 described above. Executing the instructions 1013a may involve the use of the data 1015a that is stored in the memory 1011.
Figure 10 shows some instructions 1013b and data 1015b being loaded into the processor 1017 (which may come from instructions 1013a and data 1015a).[00119] The electronic device 1009 may also include one or more communication interfaces 1021 for communicating with other electronic devices. The communication interfaces 1021 may be based on wired communication technology, wireless communication technology, or both. Examples of different types of communication interfaces 1021 include a serial port, a parallel port, a Universal Serial Bus (USB), an Ethernet adapter, an IEEE 1394 bus interface, a small computer system interface (SCSI) bus interface, an infrared (IR) communication port, a Bluetooth wireless communication adapter, and so forth.[00120] The electronic device 1009 may also include one or more input devices 1023 and one or more output devices 1027. Examples of different kinds of input devices 1023 include a keyboard, mouse, microphone, remote control device, button, joystick, trackball, touchpad, lightpen, etc. For instance, the electronic device 1009 may include one or more microphones 1025 for capturing acoustic signals. In one configuration, a microphone 1025 may be a transducer that converts acoustic signals (e.g., voice, speech) into electrical or electronic signals. Examples of different kinds of output devices 1027 include a speaker, printer, etc. For instance, the electronic device 1009 may include one or more speakers 1029. In one configuration, a speaker 1029 may be a transducer that converts electrical or electronic signals into acoustic signals. One specific type of output device which may be typically included in an electronic device 1009 is a display device 1031. Display devices 1031 used with configurations disclosed herein may utilize any suitable image projection technology, such as a cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), gas plasma, electroluminescence, or the like. 
A display controller 1033 may also be provided, for converting data stored in the memory 1011 into text, graphics, and/or moving images (as appropriate) shown on the display device 1031.[00121] The various components of the electronic device 1009 may be coupled together by one or more buses, which may include a power bus, a control signal bus, a status signal bus, a data bus, etc. For simplicity, the various buses are illustrated in Figure 10 as a bus system 1019. It should be noted that Figure 10 illustrates only one possible configuration of an electronic device 1009. Various other architectures and components may be utilized.[00122] Figure 11 illustrates certain components that may be included within a wireless communication device 1135. The wireless communication device 1135 may include one or more of the clock buffer circuitries 112, 412, 512, 612, 712, 812, 912 and/or mode control circuitries 102, 402, 502, 602, 702, 802, 902 described previously.[00123] The wireless communication device 1135 includes a processor 1157. The processor 1157 may be a general purpose single- or multi-chip microprocessor (e.g., an ARM), a special purpose microprocessor (e.g., a digital signal processor (DSP)), a microcontroller, a programmable gate array, etc. The processor 1157 may be referred to as a central processing unit (CPU). Although just a single processor 1157 is shown in the wireless communication device 1135 of Figure 11, in an alternative configuration, a combination of processors (e.g., an ARM and DSP) could be used.[00124] The wireless communication device 1135 also includes memory 1137 in electronic communication with the processor 1157 (i.e., the processor 1157 can read information from and/or write information to the memory 1137). The memory 1137 may be any electronic component capable of storing electronic information. 
The memory 1137 may be random access memory (RAM), read-only memory (ROM), magnetic disk storage media, optical storage media, flash memory devices in RAM, on-board memory included with the processor, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable PROM (EEPROM), registers, and so forth, including combinations thereof.[00125] Data 1139a and instructions 1141a may be stored in the memory 1137. The instructions 1141a may include one or more programs, routines, sub-routines, functions, procedures, code, etc. The instructions 1141a may include a single computer-readable statement or many computer-readable statements. The instructions 1141a may be executable by the processor 1157 to implement one or more of the methods 200, 300 described above. Executing the instructions 1141a may involve the use of the data 1139a that is stored in the memory 1137. Figure 11 shows some instructions 1141b and data 1139b being loaded into the processor 1157 (which may come from instructions 1141a and data 1139a).[00126] The wireless communication device 1135 may also include a transmitter 1153 and a receiver 1155 to allow transmission and reception of signals between the wireless communication device 1135 and a remote location (e.g., another electronic device, wireless communication device, etc.). The transmitter 1153 and receiver 1155 may be collectively referred to as a transceiver 1151. An antenna 1149 may be electrically coupled to the transceiver 1151. The wireless communication device 1135 may also include (not shown) multiple transmitters, multiple receivers, multiple transceivers and/or multiple antennas. [00127] In some configurations, the wireless communication device 1135 may include one or more microphones 1143 for capturing acoustic signals. In one configuration, a microphone 1143 may be a transducer that converts acoustic signals (e.g., voice, speech) into electrical or electronic signals.
Additionally or alternatively, the wireless communication device 1135 may include one or more speakers 1145. In one configuration, a speaker 1145 may be a transducer that converts electrical or electronic signals into acoustic signals.[00128] The various components of the wireless communication device 1135 may be coupled together by one or more buses, which may include a power bus, a control signal bus, a status signal bus, a data bus, etc. For simplicity, the various buses are illustrated in Figure 11 as a bus system 1147.[00129] In the above description, reference numbers have sometimes been used in connection with various terms. Where a term is used in connection with a reference number, this may be meant to refer to a specific element that is shown in one or more of the Figures. Where a term is used without a reference number, this may be meant to refer generally to the term without limitation to any particular Figure.[00130] The term "determining" encompasses a wide variety of actions and, therefore, "determining" can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, "determining" can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, "determining" can include resolving, selecting, choosing, establishing and the like.[00131] The phrase "based on" does not mean "based only on," unless expressly specified otherwise. In other words, the phrase "based on" describes both "based only on" and "based at least on."[00132] The functions described herein may be stored as one or more instructions on a processor-readable or computer-readable medium. The term "computer-readable medium" refers to any available medium that can be accessed by a computer or processor.
By way of example, and not limitation, such a medium may comprise RAM, ROM, EEPROM, flash memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. It should be noted that a computer-readable medium may be tangible and non-transitory. The term "computer-program product" refers to a computing device or processor in combination with code or instructions (e.g., a "program") that may be executed, processed or computed by the computing device or processor. As used herein, the term "code" may refer to software, instructions, code or data that is/are executable by a computing device or processor.[00133] Software or instructions may also be transmitted over a transmission medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of transmission medium.[00134] The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims.
In other words, unless a specific order of steps or actions is required for proper operation of the method that is being described, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.[00135] It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the systems, methods, and apparatus described herein without departing from the scope of the claims.[00136] What is claimed is: |
The invention relates to mixed metal structures in stacked semiconductor devices and associated systems and methods. The stacked semiconductor device may include a first semiconductor die and a second semiconductor die. The first semiconductor die may include a top surface, a first bonding site at the top surface, and a second bonding site at the top surface spaced apart from the first bonding site. The second semiconductor die may include a lower surface facing the top surface of the first semiconductor die, a third bonding site at the lower surface, and a fourth bonding site at the lower surface. The third bonding site includes a conductive structure bonded to the first bonding site by a metal-metal bond. The fourth bonding site at the lower surface includes a solder ball bonded to the second bonding site.
1. A stacked semiconductor device comprising: a first semiconductor die having a top surface and a bottom surface opposite the top surface, the first semiconductor die including a first bonding site at the top surface and a second bonding site at the top surface spaced apart from the first bonding site; and a second semiconductor die having a lower surface facing the top surface of the first semiconductor die and an upper surface opposite the lower surface, the second semiconductor die comprising: a third bonding site at the lower surface, wherein the third bonding site comprises a conductive structure bonded to the first bonding site by a metal-to-metal bond; and a fourth bonding site at the lower surface, wherein the fourth bonding site includes a solder ball bonded to the second bonding site. 2. The stacked semiconductor device of claim 1, wherein the conductive structure of the third bonding site is a first copper pillar, and wherein the first bonding site comprises a second copper pillar electrically connected to the first copper pillar through a copper-copper bond. 3. The stacked semiconductor device of claim 1, wherein the fourth bonding site comprises a first conductive pad bonded to the solder ball, and wherein the second bonding site comprises a second conductive pad bonded to the first conductive pad by the solder ball. 4. The stacked semiconductor device of claim 1, wherein the first bonding site at the top surface corresponds to a first through-substrate via (TSV) extending from the top surface toward the bottom surface, and wherein the third bonding site at the lower surface corresponds to a third TSV extending from the lower surface toward the upper surface. 5. The stacked semiconductor device of claim 4, wherein the first TSV and the third TSV form an electrical communication line between the first semiconductor die and the second semiconductor die. 6.
The stacked semiconductor device of claim 1, wherein the second bonding site at the top surface corresponds to a second TSV extending from the top surface toward the bottom surface, and wherein the fourth bonding site at the lower surface corresponds to a fourth TSV extending from the lower surface toward the upper surface. 7. The stacked semiconductor device of claim 1, wherein the first bonding site and the second bonding site are separated by a distance between 5 microns and 40 microns. 8. The stacked semiconductor device of claim 1, wherein the first semiconductor die and the second semiconductor die have a bond line thickness between 5 microns and 20 microns. 9. A stacked semiconductor device comprising: a first semiconductor die having a first bonding surface, a plurality of first bonding sites in a first array on the first bonding surface, and a plurality of second bonding sites in a second array on the first bonding surface; a second semiconductor die having a second bonding surface facing the first bonding surface of the first semiconductor die, a plurality of third bonding sites in the first array on the second bonding surface, and a plurality of fourth bonding sites in the second array on the second bonding surface; a plurality of solderless interconnect structures between the first semiconductor die and the second semiconductor die, wherein each solderless interconnect structure forms an electrical connection between an individual bonding site of the plurality of first bonding sites and an individual bonding site of the plurality of third bonding sites; and a plurality of solder joints between the first semiconductor die and the second semiconductor die, wherein each solder joint is coupled to an individual bonding site of the plurality of second bonding sites and an individual bonding site of the plurality of fourth bonding sites. 10.
The stacked semiconductor device of claim 9, wherein each solderless interconnect structure forms a metal-to-metal bond between the individual bonding site of the plurality of first bonding sites and the individual bonding site of the plurality of third bonding sites. 11. The stacked semiconductor device of claim 9, wherein the electrical connections between the plurality of first bonding sites and the plurality of third bonding sites establish a plurality of electrical communication channels between the first semiconductor die and the second semiconductor die. 12. The stacked semiconductor device of claim 9, wherein each of the plurality of first bonding sites is bonded to a TSV in the first semiconductor die, and wherein each of the plurality of third bonding sites is bonded to a TSV in the second semiconductor die. 13. The stacked semiconductor device of claim 9, wherein each bonding site of the plurality of second bonding sites is bonded to a thermal structure in the first semiconductor die. 14. The stacked semiconductor device of claim 12, wherein the plurality of first bonding sites comprises a plurality of first bonding pads extending to a height, and wherein the plurality of second bonding sites comprises a plurality of second bonding pads extending to the height. 15. The stacked semiconductor device of claim 9, wherein the plurality of solder joints between the plurality of second bonding sites and the plurality of fourth bonding sites establish a plurality of thermal vias between the first semiconductor die and the second semiconductor die. 16.
The stacked semiconductor device of claim 9, wherein: the second semiconductor die has a third bonding surface opposite the second bonding surface, a plurality of fifth bonding sites in the first array on the third bonding surface, and a plurality of sixth bonding sites in the second array on the third bonding surface, wherein: one or more bonding sites of the plurality of fifth bonding sites are electrically connected to corresponding bonding sites of the plurality of third bonding sites by an interconnect structure extending through the second semiconductor die, and one or more bonding sites of the plurality of sixth bonding sites are thermally connected to corresponding bonding sites of the plurality of fourth bonding sites by a thermal structure extending through the second semiconductor die. 17. The stacked semiconductor device of claim 16, further comprising: a third semiconductor die having a fourth bonding surface facing the third bonding surface of the second semiconductor die, a plurality of seventh bonding sites in the first array on the fourth bonding surface, and a plurality of eighth bonding sites in the second array on the fourth bonding surface, wherein: each bonding site of the plurality of seventh bonding sites comprises a conductive structure directly bonded to a corresponding conductive structure of the plurality of fifth bonding sites, and each bonding site of the plurality of eighth bonding sites includes a solder structure bonded to a corresponding conductive structure of the plurality of sixth bonding sites. 18.
A method for forming a stacked semiconductor device, comprising: forming a conductive pad on at least one first bonding site of a first semiconductor die; forming a solder structure on at least one second bonding site of the first semiconductor die adjacent to the at least one first bonding site; stacking the first semiconductor die on a second semiconductor die having corresponding conductive pads; and bonding the at least one first bonding site and the at least one second bonding site to the corresponding conductive pads on the second semiconductor die, wherein the bonding comprises: reflowing the solder structure over the at least one second bonding site to bond the at least one second bonding site to the corresponding conductive pad on the second semiconductor die; and annealing the conductive pad to form a metal-to-metal bond between the at least one first bonding site on the first semiconductor die and the corresponding conductive pad on the second semiconductor die. 19. The method of claim 18, wherein the at least one first bonding site of the first semiconductor die is at least two first bonding sites, and wherein forming the conductive pads on the at least two first bonding sites comprises: disposing a photoresist material over a bonding surface of the first semiconductor die; patterning the photoresist material to expose the at least two first bonding sites; depositing a conductive material into the patterned photoresist material beyond a uniform height for the conductive pads; removing the conductive material until each of the conductive pads is at the uniform height; and stripping the photoresist material from the first semiconductor die. 20.
The method of claim 19, wherein the at least one second bonding site of the first semiconductor die is at least two second bonding sites, and wherein forming the solder structures on the at least two second bonding sites includes: disposing a second photoresist material over the bonding surface of the first semiconductor die and over the conductive pads on the at least two first bonding sites; patterning the second photoresist material to expose the at least two second bonding sites; depositing a solder material into the patterned second photoresist material; stripping the second photoresist material from the first semiconductor die; and at least partially reflowing the solder material on the at least two second bonding sites.

21. The method of claim 18, wherein the conductive pad on the at least one first bonding site of the first semiconductor die and the corresponding conductive pad on the second semiconductor die are both copper pads, and wherein the annealing forms a copper-to-copper joint between the copper pads.
Mixed metal structures in stacked semiconductor devices and associated systems and methods

TECHNICAL FIELD

The present disclosure generally relates to systems and methods for stacking semiconductor devices. In particular, the present technology relates to stacked semiconductor devices in which dies are bonded by mixed metal structures.

BACKGROUND

Microelectronic devices, such as memory devices, microprocessors, and other electronic devices, typically include one or more semiconductor dies mounted to a substrate and encased in a protective covering. A semiconductor die contains functional features such as memory cells, processor circuits, interconnect circuitry, and the like. Semiconductor die manufacturers are under increasing pressure to reduce the volume occupied by semiconductor dies while increasing the capacity and/or speed of the resulting semiconductor assemblies. To meet these demands, manufacturers often stack multiple semiconductor dies vertically on top of one another to increase the capacity or performance of a microelectronic device within the limited area of the circuit board or other element to which the dies and/or assemblies are mounted. Manufacturers have also continuously reduced bond line thickness to lower the overall height of the die stack, and/or reduced the pitch between bonding features to shrink the lateral footprint of the die stack. These reductions, however, can cause bonding issues between dies. For example, conventional solder joints between stacked semiconductor dies often produce extrusions. As height requirements shrink, the dies are compressed closer together, creating more extrusion that can form thermal and/or electrical shorts between bonding features.
The reduced spacing between bonding features may also allow the extrusions to short adjacent bonding features.

SUMMARY

In one aspect, the present application provides a stacked semiconductor device comprising: a first semiconductor die having a top surface and a bottom surface opposite the top surface, the first semiconductor die including a first bonding site at the top surface and a second bonding site at the top surface spaced apart from the first bonding site; and a second semiconductor die having a lower surface facing the top surface of the first semiconductor die and an upper surface opposite the lower surface, the second semiconductor die including: a third bonding site at the lower surface, wherein the third bonding site includes a conductive structure bonded to the first bonding site by a metal-to-metal joint; and a fourth bonding site at the lower surface, wherein the fourth bonding site includes a solder ball bonded to the second bonding site.

In another aspect, the present application provides a stacked semiconductor device comprising: a first semiconductor die having a first bonding surface, a plurality of first bonding sites in a first array on the first bonding surface, and a plurality of second bonding sites in a second array on the first bonding surface; a second semiconductor die having a second bonding surface facing the first bonding surface of the first semiconductor die, a plurality of third bonding sites in the first array on the second bonding surface, and a plurality of fourth bonding sites in the second array on the second bonding surface; a plurality of solderless interconnect structures between the first semiconductor die and the second semiconductor die, wherein each solderless interconnect structure forms an electrical connection between an individual bonding site of the plurality of first bonding sites and an individual bonding site of the plurality of third bonding sites; and a plurality of solder joints between the first semiconductor die and the second semiconductor die, wherein each solder joint is coupled to an individual bonding site of the plurality of second bonding sites and an individual bonding site of the plurality of fourth bonding sites.

In yet another aspect, the present application provides a method for forming a stacked semiconductor device, comprising: forming a conductive pad on at least one first bonding site of a first semiconductor die; forming a solder structure on at least one second bonding site adjacent to the at least one first bonding site; stacking the first semiconductor die on a second semiconductor die having individual conductive pads corresponding to each of the at least one first bonding site and the at least one second bonding site; and bonding the at least one first bonding site and the at least one second bonding site to the corresponding conductive pads on the second semiconductor die, wherein the bonding includes: reflowing the solder structure on the at least one second bonding site to bond the at least one second bonding site to the corresponding conductive pad on the second semiconductor die; and annealing the conductive pad to form a metal-to-metal joint between the at least one first bonding site on the first semiconductor die and the corresponding conductive pad on the second semiconductor die.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a cross-sectional view of a stacked semiconductor device with a mixed-metal bonding structure between dies in accordance with some embodiments of the present technology.

FIGS. 2A-2J illustrate a process for producing a semiconductor die with a mixed-metal bonding structure for use in stacked semiconductor devices, in accordance with some embodiments of the present technology.

FIGS. 3A-3H illustrate a process for producing a semiconductor die with corresponding metal bonding structures for use in stacked semiconductor devices, in accordance with some embodiments of the present technology.

FIGS. 4A-4D illustrate a process for
forming a stacked semiconductor device with a mixed metal joint structure in accordance with some embodiments of the present technology.

FIGS. 5A and 5B illustrate a process for leveling metal bonding structures in accordance with some embodiments of the present technology.

FIG. 6 is a schematic diagram of a system including a semiconductor die assembly configured in accordance with some embodiments of the present technology.

The drawings are not necessarily drawn to scale. Similarly, for purposes of discussing some implementations of the present technology, some components and/or operations may be separated into different blocks or combined into a single block. Also, while the present technology is susceptible to various modifications and alternative forms, specific implementations have been shown by way of example in the drawings and are described in detail below.

DETAILED DESCRIPTION

Overview

Stacked semiconductor devices having mixed metal structures, and associated systems and methods, are disclosed herein. A stacked semiconductor device includes a first semiconductor die and a second semiconductor die. The first semiconductor die has a top surface and a bottom surface opposite the top surface. One or more first bonding sites are located at the top surface. One or more second bonding sites are positioned at the top surface, spaced apart from the first bonding sites. The second semiconductor die includes a lower surface facing the top surface of the first semiconductor die. One or more third bonding sites are positioned at the lower surface corresponding to the first bonding sites. One or more fourth bonding sites are positioned at the lower surface corresponding to the second bonding sites. The third bonding site includes a conductive structure bonded to the first bonding site by a metal-to-metal joint. The fourth bonding site includes a solder structure (e.g., a solder ball, a solder column, etc.)
bonded to the second bonding site (e.g., using conventional solder bonding techniques). That is, the stacked semiconductor device has a hybrid bonding scheme that includes both one or more metal-to-metal joints (sometimes referred to herein as "solderless" joints) and one or more solder joints between bonding sites on the stacked semiconductor dies. The hybrid bonding scheme can take advantage of the benefits of solder joints (e.g., self-alignment, cost, etc.) while minimizing their disadvantages (e.g., the risk of shorting between bonding sites). Likewise, the hybrid bonding scheme can leverage the benefits of metal-to-metal joints (e.g., high-quality connections, low risk of short circuits) while working around their limitations (e.g., alignment cost).

In some embodiments, the metal-to-metal joints form live electrical connections between the semiconductor dies, while the solder joints form thermal connections between thermal structures in the semiconductor dies. In some embodiments, each of the first through fourth bonding sites generally corresponds to a feature in the semiconductor die (e.g., a through-substrate via, a redistribution layer, a thermal feature, and/or any other suitable component). In some embodiments, the first bonding sites and the second bonding sites are substantially similar in structure. In some such embodiments, the first bonding sites and the second bonding sites are formed in the same fabrication process. For example, the first bonding sites and the second bonding sites can be formed by the same copper deposition process.
Additionally, in some embodiments, the metal-to-metal joint between the first bonding site and the third bonding site is a copper-to-copper joint.

For ease of reference, stacked semiconductor devices and components therein are sometimes described herein with reference to top and bottom, upper and lower, up and down, and/or the horizontal (x-y) plane or vertical (z) direction relative to the spatial orientation of the embodiments shown in the drawings. It should be understood, however, that stacked semiconductor devices and components therein may be moved to and used in different spatial orientations without altering the structure and/or function of the disclosed embodiments of the present technology.

FIG. 1 is a cross-sectional view of a stacked semiconductor device 100 ("device 100") with mixed metal structures between semiconductor dies in accordance with some embodiments of the present technology. In the illustrated embodiment, the device 100 includes a packaging substrate 102 having a first surface 104 (e.g., an upper surface or a die stacking surface) and a second surface 106 (e.g., a lower surface) opposite the first surface 104. Semiconductor dies 110 ("dies 110," individually referred to as first through fourth dies 110a-d) are stacked on the first surface 104 of the packaging substrate 102, and a molding compound is disposed between each of the dies 110 and between the fourth die 110d (e.g., the bottommost die) and the packaging substrate 102.

As described with reference to the first die 110a (e.g., the uppermost die), each of the dies 110 has a first surface 112 (e.g., an upper or top surface) and a second surface 114 (e.g., a lower or bottom surface). Each of the dies 110 may include a main semiconductor substrate 116 insulated by a dielectric substrate 118 at the first surface 112 and the second surface 114.
The first die 110a includes a first array of bonding sites 120a carried by the first surface 112, an array of through-substrate vias 130 ("TSVs 130") extending at least partially through the first die 110a, and a second array of bonding sites 120b carried by the second surface 114. In the illustrated embodiment, the TSVs 130 extend completely through the first die 110a; each individual bonding site in the first array of bonding sites 120a is directly coupled to an individual TSV 130; and each individual bonding site in the second array of bonding sites 120b is directly coupled to an individual TSV 130. In various other embodiments, one or more of the TSVs 130 may extend only partially through the first die 110a, one or more bonding sites in the first array of bonding sites 120a may be coupled to another structure on the first surface 112 (e.g., coupled to traces in a redistribution layer, a thermal element, or another suitable structure), and/or one or more bonding sites in the second array of bonding sites 120b may be coupled to another structure on the second surface 114.

As further illustrated in FIG. 1, the first array of bonding sites 120a includes one or more first bonding sites 122 (two are shown) and one or more second bonding sites 124 (two are shown) spaced apart from the first bonding sites 122. In the illustrated embodiment, the first bonding sites 122 and the second bonding sites 124 are generally similar in structure. For example, as illustrated, the first bonding sites 122 and the second bonding sites 124 may each have a bonding structure that includes a conductive pad 121 bonded to the first TSV 132 or the second TSV 134 at the first surface 112, a metal pad 126 carried by the conductive pad 121, and a bonding film 128 carried by the metal pad 126.

In various embodiments, the conductive pad 121, the metal pad 126, and the bonding film 128 may each be formed of a suitable conductive metal, such as copper, gold, silver, aluminum, or any other suitable conductive material, and they may be formed of the same conductive material and/or different conductive materials. For example, in some embodiments, the conductive pad 121 and the metal pad 126 are formed of copper, while the bonding film 128 is formed of gold. The copper construction of the conductive pad 121 and the metal pad 126 can help reduce manufacturing costs, while the gold construction of the bonding film 128 can help improve the bonding capability of the surface of the first bonding site 122.

The second array of bonding sites 120b includes one or more first bonding sites 142 (two are shown) and one or more second bonding sites 144 (two are shown) spaced apart from the first bonding sites 142. In the illustrated embodiment, the structure of the first bonding sites 142 is substantially different from the structure of the second bonding sites 144. As illustrated, the structure of the first bonding sites 142 is generally similar to the structure of the first bonding sites 122 discussed above. For example, the first bonding site 142 includes a conductive pad 141 bonded to the first TSV 132 at the second surface 114, a metal pad 146 carried by the conductive pad 141, and a bonding film 148 carried by the metal pad 146.
Also as discussed above, in various embodiments, the conductive pad 141, the metal pad 146, and/or the bonding film 148 may be formed of a suitable conductive metal, e.g., copper, gold, silver, aluminum, or any other suitable conductive material. For example, in some embodiments, the conductive pad 141 and the metal pad 146 are formed of copper, while the bonding film 148 is formed of gold.

In some embodiments, each of the metal pads 126, 146 is formed from a sufficiently refined metal material to allow the metal pads 126, 146 to be bonded directly to each other. For example, in some embodiments, each of the metal pads 126, 146 is formed of copper having a relatively (or entirely) defect-free bonding surface. In such embodiments, the bonding films 128, 148 may be omitted, and the copper in the metal pads 126, 146 may be bonded directly in the form of a metal-to-metal joint.

However, the structure of the second bonding sites 144 differs substantially from the structure of the second bonding sites 124. In the illustrated embodiment, the second bonding site 144 includes a conductive pad 141 bonded to the second TSV 134, a metal pad 156 carried by the conductive pad 141, and a solder structure 158 carried by the metal pad 156. The conductive pad 141 and/or the metal pad 156 may be formed of a suitable conductive metal, such as copper, gold, silver, aluminum, or any other suitable conductive material. The solder structure may be a solder ball, a column of solder material, or any other suitable structure.

As further illustrated in FIG. 1, each of the second through fourth dies 110b-d includes a first array of bonding sites 120a and a second array of bonding sites 120b. At each bonding interface in the device 100, the first array of bonding sites 120a of a relatively lower die is bonded to the second array of bonding sites 120b of a relatively higher die.
For example, as described with respect to the first die 110a and the second die 110b, the first array of bonding sites 120a on the second die 110b is bonded to the second array of bonding sites 120b on the first die 110a. Specifically, the first bonding sites 122 on the second die 110b are bonded to the first bonding sites 142 of the first die 110a by metal-to-metal joints between the bonding films 128, 148, while the second bonding sites 124 on the second die 110b are bonded to the second bonding sites 144 of the first die 110a through solder joints between the bonding films 128 and the solder structures 158.

During the bonding process, the solder bonding process between the second bonding sites 124, 144 can help align the first die 110a and the second die 110b (e.g., through a solder self-alignment process). However, the bonding process squeezes the solder structures 158, which drift outward in the x-y plane toward other bonding sites. If every joint in the device 100 were a solder joint, the bonding sites would have to be separated by at least twice the drift distance to avoid shorts between the bonding sites. As the height requirements of devices of the type illustrated in FIG. 1 shrink, the dies 110 in the stack are compressed more closely together, which can increase the average distance that the solder structures 158 drift. Additionally, it may be desirable to reduce the distance between bonding sites (e.g., reduce the pitch) to reduce the x-y footprint of the device 100 and/or to provide additional communication lines between the dies 110. Due to the reduced height and spacing, solder squeezed out between the bonding sites can create short circuits between the bonding sites, thereby compromising the performance of the device 100. A metal-to-metal joint between the first bonding sites 122, 142 does not have the same extrusion problem, and the metal-to-metal joint can provide a high-quality connection between the bonding sites.
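The relationship between stack compression and solder drift described above can be made concrete with a rough volume-conservation model: when a solder ball is squeezed to a thinner bond line, its conserved volume must spread laterally. The sketch below treats the compressed joint as an equal-volume cylinder; the dimensions and the model itself are illustrative assumptions, not values from this disclosure.

```python
import math

def extruded_radius_um(ball_diameter_um: float, bond_line_um: float) -> float:
    """Radius of a solder ball squeezed to the given bond line thickness,
    modeled (as an illustrative assumption) as a cylinder whose volume
    equals that of the original sphere. Real joints form meniscus shapes,
    but the trend -- thinner bond lines mean wider joints -- is the same."""
    r = ball_diameter_um / 2.0
    sphere_volume = (4.0 / 3.0) * math.pi * r ** 3
    return math.sqrt(sphere_volume / (math.pi * bond_line_um))

# Hypothetical 20 um ball: compressing the stack from a 10 um to a 5 um
# bond line widens each joint, so the minimum solder-only pitch (at least
# twice the drift distance, per the text above) grows as well.
pitch_10um = 2.0 * extruded_radius_um(20.0, 10.0)
pitch_5um = 2.0 * extruded_radius_um(20.0, 5.0)
```

Under this toy model, halving the bond line thickness increases the required pitch by a factor of the square root of two, mirroring the point above that compressing the dies more closely increases the drift of the solder structures 158.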
However, metal-to-metal joints are not suitable for bonding sites with a pitch greater than 5 micrometers (μm), for example due to the high cost of aligning the bonding sites.

The hybrid configuration of the first bonding sites 142 and the second bonding sites 144, and the hybrid bonding scheme in the device 100, reduce the likelihood of bridges forming between bonding sites while maintaining many of the benefits of solder joints. For example, as discussed in more detail below, the solder joints between the second bonding sites 124, 144 can help align the dies while metal-to-metal joints are formed between the first bonding sites 122, 142. Furthermore, the hybrid bonding scheme can take advantage of the metal-to-metal joints between the first bonding sites 122, 142. For example, in some embodiments, the first TSVs 132 are electrical communication paths between the dies 110 (e.g., powered TSVs), while the second TSVs 134 form thermal communication paths between the dies 110 (e.g., thermal dissipation routes). The metal-to-metal joints can help ensure a quality electrical connection between the dies 110, while the solder joints can help ensure that the dies 110 are properly aligned.

The hybrid bonding scheme in the device 100 may be especially advantageous when the pillar pitch and/or bond line thickness is sufficiently small that a pure solder bonding scheme starts to form too many shorts. In various embodiments, for example, mixed metal joint structures may be used when the pillar pitch is above about 3 μm, between about 4 μm and about 60 μm, or between about 5 μm and about 40 μm. In some embodiments, mixed metal joint structures may be used when the bond line thickness between the dies 110 is between about 1 μm and about 30 μm, between about 2 μm and about 25 μm, between about 5 μm and about 20 μm, or between about 10 μm and about 20 μm.

FIGS. 2A-2J illustrate a process for creating bonding sites on a semiconductor die 110 in accordance with some embodiments of the present technology. The process described below with respect to FIGS.
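The example design windows quoted in the preceding paragraph can be restated as a simple screen. The function below only encodes the broadest of the approximate ranges given above (the "about" qualifiers are dropped); the function name and the decision to combine the two windows with a logical AND are assumptions for illustration.

```python
def mixed_joint_candidate(pillar_pitch_um: float, bond_line_um: float) -> bool:
    """True when the geometry falls inside the broadest example windows
    stated above: pillar pitch above ~3 um (capped here at the ~60 um
    upper range) and bond line thickness between ~1 um and ~30 um."""
    pitch_ok = 3.0 < pillar_pitch_um <= 60.0
    bond_line_ok = 1.0 <= bond_line_um <= 30.0
    return pitch_ok and bond_line_ok

# A 10 um pitch with a 15 um bond line falls inside both windows; a 100 um
# pitch does not, matching the note above that metal-to-metal joints are
# less attractive at large pitches.
```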
2A-2J may, for example, be used to create the mixed conductive structures in the second array of bonding sites 120b discussed above with respect to FIG. 1. Additionally, the process described below may begin after the dielectric substrate 118 (FIG. 1) has been deposited on the second surface 114 and a conductive layer 121′ (e.g., a precursor to the conductive pads 121) has been deposited on the dielectric substrate 118.

FIG. 2A illustrates the die 110 after a photoresist material 220 is deposited on the second surface 114 of the die 110 and patterned. As illustrated, patterning the photoresist material 220 may form vias 222 in the photoresist material 220 that expose the conductive layer 121′ over one or more TSVs 130 in the die 110. In some embodiments, the vias 222 expose the TSVs 130 corresponding to live communication channels through the die 110.

FIG. 2B illustrates the die 110 after metal plating the one or more vias 222 to form one or more instances of the metal pads 146. As illustrated in FIG. 2B, each of the vias 222 may be filled by the metal plating process to a level at or near the upper surface 221 of the photoresist material 220. As further illustrated in FIG. 2B, the metal pads 146 may have dissimilar heights after the metal plating process.

FIG. 2C illustrates the die 110 after removing material from the upper surface 147 of each of the metal pads 146. As discussed in more detail below with respect to FIGS. 5A and 5B, the removal process can ensure that the metal pads 146 have a substantially uniform height and/or that the upper surfaces 147 are relatively free of defects.

FIG. 2D illustrates the die 110 after performing a metal plating process to deposit the bonding film 148 on the upper surface 147 of the metal pads 146. In some embodiments, the bonding film 148 may protect the metal pads 146 during further processing of the die 110, thereby preventing the reintroduction of impurities into the upper surfaces 147.
In some embodiments, the bonding film 148 may be a conductive metal selected based at least in part on the metal's ability to form the metal-to-metal joints discussed above with respect to FIG. 1. For example, in some embodiments, the bonding film 148 is a gold layer.

As discussed above, in some embodiments, the bonding film 148 is omitted. In such embodiments, the process for creating the bonding sites may omit the second metal plating process of FIG. 2D. For example, in some embodiments, the metal pads 146 may have sufficiently uniform upper surfaces 147 and/or be formed of a metal suitable for forming a metal-to-metal joint, such that the second deposition process can be omitted.

Furthermore, in some embodiments, the process for creating the bonding sites may omit the removal process discussed above with respect to FIG. 2C. For example, in some embodiments, the deposition process discussed above with respect to FIG. 2B can produce metal pads 146 with heights within acceptable tolerances, making a removal process unnecessary. In another example, the second deposition process of FIG. 2D can account for differences in height.

FIG. 2E illustrates the die 110 after depositing the bonding film 148 onto the metal pads 146 to complete the formation of the conductive structures at the first bonding sites 142. Once the conductive structures are complete, the photoresist material 220 may be stripped from the die 110.

FIG. 2F illustrates the die 110 after deposition and patterning of a second photoresist material 230 on the second surface 114 of the die 110. As illustrated, the second photoresist material 230 is patterned to form vias 234 that expose the conductive layer 121′ over one or more TSVs 130 in the die 110. In some embodiments, the vias 234 in the photoresist material expose the TSVs 130 corresponding to thermal vias through the die 110.

FIG. 2G illustrates the die 110 after a metal and solder plating process that sequentially deposits the metal pads 156 and the solder structures 158.
As illustrated, the resulting solder structures may have different heights, but each has a rectangular profile bonded to a metal pad 156.

FIG. 2H illustrates the die 110 after stripping the second photoresist material 230 from the die 110. FIG. 2I illustrates the die 110 after etching the conductive layer 121′ to expose the second surface 114 of the die 110 and isolate the newly formed conductive structures. That is, the etching process removes material from the conductive layer 121′ to isolate the conductive pads 121 of the first bonding sites 142 and the second bonding sites 144.

FIG. 2J illustrates the die 110 after a solder reflow process to reshape the solder structures 158. In some embodiments, the solder reflow process may improve the bond between the metal pads 156 and the solder structures 158. In some embodiments, the solder reflow process may improve the uniformity of the heights of the solder structures 158 on the die 110.

FIGS. 3A-3H illustrate a process for creating an array of substantially similar bonding sites on a semiconductor die 110 in accordance with some embodiments of the present technology. The process described below with respect to FIGS. 3A-3H may, for example, be used to generate the first array of bonding sites 120a discussed above with respect to FIG. 1. In the illustrated embodiment, the process of FIGS. 3A-3H occurs after the process discussed above with respect to FIGS. 2A-2J. In other embodiments, the process of FIGS. 3A-3H occurs before forming the mixed conductive structures of the second array of bonding sites 120b.

Referring to FIG. 3A, the process may begin by securing the die 110 on a carrier structure 302, such as a carrier wafer. In some embodiments, the carrier wafer includes a protective material 304 (e.g., molding material) at the same level as the conductive structures in the second array of bonding sites 120b. Once mounted, the untreated first surface 112′ of the die 110 may be processed to expose the TSVs 130 in the die 110.

FIG.
3B illustrates the die 110 after a bulk removal and/or thinning process on the first surface 112′, which finishes with the first surface 112″ at an elevation above the TSVs 130. In various embodiments, the bulk thinning process may include a grinding process and/or a chemical mechanical planarization (CMP) process to quickly and/or efficiently remove material of the semiconductor substrate 116 from the first surface 112″.

FIG. 3C illustrates the die 110 after a second removal and/or thinning process, resulting in the first surface 112 at an elevation level with or just below the TSVs 130. In some embodiments, the second removal process is a dry etch process to carefully remove material of the semiconductor substrate 116 while minimizing damage to and/or removal of the TSVs 130.

FIG. 3D illustrates the die 110 after depositing the dielectric layer 118 (and/or a passivation layer) on the first surface 112 of the die 110, for example by a chemical vapor deposition process. As illustrated in FIG. 3D, the deposition process may result in the dielectric layer 118 covering the exposed ends of the TSVs 130.

FIG. 3E illustrates the die 110 after an optional removal process to re-expose the TSVs 130 in the die 110 and a deposition process to deposit the conductive layer 121′. In various embodiments, the optional removal process may include a CMP process, a dry etch process, or another suitable removal process to produce a finished dielectric layer 118 on the first surface 112 of the die 110.

As further illustrated in FIG. 3E, once the TSVs 130 are exposed, a deposition process may deposit the conductive layer 121′ across the first surface 112. In some embodiments, the conductive layer 121′ is a metal seed layer. Examples of metals used in the conductive layer 121′ include copper, tin, aluminum, gold, silver, and/or any other suitable metal. In some embodiments, the conductive layer 121′ is deposited by a physical vapor deposition (PVD) process.

FIG.
3F illustrates the die 110 after deposition and patterning of a third photoresist material 320 over the conductive layer 121′. As illustrated, the patterning may produce vias 322 that expose the conductive layer 121′ in locations generally corresponding to the TSVs 130 in the die 110. In some embodiments, each of the vias 322 has a generally similar size and shape. In other embodiments, the vias 322 may have different sizes and/or shapes. For example, a first group of vias corresponding to powered communication channels through the die 110 may be sized to mate with the conductive structures on the first bonding sites 142 (FIG. 1), while a second group of vias corresponding to thermal vias through the die 110 may be sized to mate with the conductive structures on the second bonding sites 144.

FIG. 3G illustrates the die 110 after a plating process to deposit the metal pads 126 and the bonding films 128 in the vias 322, thereby forming the first bonding sites 122 and the second bonding sites 124. As illustrated in FIG. 3G, the substantially equivalent widths of the vias 322 produce substantially equivalent widths for the first bonding sites 122 and the second bonding sites 124. As discussed above, the first bonding sites 122 and the second bonding sites 124 may instead be formed with different widths corresponding to the widths of the first bonding sites 142 and the second bonding sites 144 (FIG. 1) on the second surface 114 of another semiconductor die and/or another suitable substrate. In the illustrated embodiment, the plating process also produces first bonding sites 122 and second bonding sites 124 having generally uniform heights. In other embodiments, the plating process can be adjusted to produce first and second bonding sites 122, 124 having different heights to further facilitate bonding with corresponding bonding sites. Furthermore, the plating process may omit depositing the bonding film 128 on the metal pads 126 associated with the first bonding sites 122.

FIG.
3H illustrates the die 110 after stripping the third photoresist material 320 from the die 110 to expose the conductive layer 121′. As further illustrated, the process then includes etching the exposed conductive layer 121′ to isolate the conductive pads 121 (and thus the first bonding sites 122 and the second bonding sites 124) and expose the first surface 112 of the die 110.

Although the process described above with respect to FIGS. 2A-3H describes generating the hybrid conductive structures on the semiconductor die first, it should be understood that in some embodiments the dies 110 are produced in a different order. For example, in some embodiments, the substantially similar conductive structures on the first surface 112 of the die 110 are created before the mixed conductive structures on the second surface 114 of the die 110 (e.g., the process described above with respect to FIGS. 3F-3H is performed before the process described above with respect to FIGS. 2A-2J).

FIGS. 4A-4D illustrate a process for forming a stacked semiconductor device with a mixed conductive structure in accordance with some embodiments of the present technology. The described process may be performed, for example, after the hybrid structures are produced on the dies 110 according to the embodiments discussed above with respect to FIGS. 2A-3H.

Referring to FIG. 4A, one or more dies 110 (one is shown) may be released from the carrier structure 302. For example, the die 110 may be detached from the carrier wafer substrate and/or peeled from the molding material. In some embodiments, the process at FIG. 4A includes dicing a wafer (not shown) to singulate the dies 110 from the wafer.

Referring to FIG. 4B, one or more dies 110 (two are shown) may be stacked on top of the packaging substrate 102. In the illustrated embodiment, the second die 110b is stacked on the upper surface 104 of the packaging substrate 102, and the first die 110a is stacked on top of the second die 110b.
In some embodiments, multiple dies may be stacked on various other substrates. For example, in some embodiments, one or more dies 110 may be stacked on a base die before or after the base die is attached to the packaging substrate 102 or any other suitable substrate. As illustrated in FIG. 4B, stacking the dies 110 may include substantially aligning the second array of bonding sites 120b on the first die 110a with the first array of bonding sites 120a on the second die 110b.

FIG. 4C illustrates device 100 after a solder reflow process and/or thermocompression bonding process. The solder reflow process forms joints between the second bonding sites 144 on the first die 110a and the second bonding sites 124 on the second die 110b. As illustrated, the solder reflow process involves compressing the dies 110, causing some of the solder structures 158 to extrude horizontally. In some embodiments, the solder reflow process creates thermal pathways 434 through device 100. In some embodiments, the solder reflow process further aligns the dies 110, thereby correcting any minor alignment errors from the stacking. Self-alignment during solder reflow occurs as the solder adjusts to minimize the surface area of the solder structures 158, and the position of the dies 110 may adjust accordingly.

The thermocompression bonding process (sometimes referred to herein as the "annealing process") forms metal-to-metal joints. The formed metal-to-metal joints may depend on the bonding films 128, 148 deposited on the first bonding sites 122, 142. In some embodiments, the metal-to-metal joints include copper-copper joints, silver-silver joints, gold-gold joints, and/or any other suitable metal-to-metal joints. In some embodiments, the metal-to-metal joints establish electrical pathways 432 through the device 100. In some embodiments, the thermocompression bonding process occurs concurrently with the solder reflow process.
For example, in some embodiments, the thermocompression bonding process introduces sufficient heat to reflow solder structures 158. In some embodiments, a thermocompression bonding process may be performed after the solder reflow process to improve alignment between the first bonding sites 122, 142 prior to forming the metal-to-metal joints.

FIG. 4D illustrates device 100 after depositing underfill material 160 between first die 110a and second die 110b and between second die 110b and packaging substrate 102. The underfill material 160 may be a thermosetting epoxy or other suitable material. The underfill material 160 can help reduce thermal stress on the solder structures 158 created by a mismatch in the coefficient of thermal expansion between the surface of the die 110 and the solder material. In some embodiments, underfill material 160 increases the stiffness of device 100 to help reduce debonding between the dies 110. In some embodiments, underfill material 160 is deposited by a capillary-type underfill process.

In some embodiments, one or more additional dies may be stacked on top of the first die 110a to increase the die count in device 100. For example, as illustrated in FIG. 1, two additional dies 110 may be stacked in device 100. In various embodiments, one additional die, two additional dies, five additional dies, ten additional dies, or any suitable number of additional dies may be added to the stack. In some embodiments, one or more additional dies may be stacked in the initial stacking process discussed above with respect to FIG. 4B. In some embodiments, once die stacking is complete, an encapsulant (not shown) may flow over device 100 to further insulate and protect the dies 110.
In some embodiments, once the die stacking is complete, a cover (not shown) may be attached to the packaging substrate 102 to further insulate and/or protect the dies 110.

FIGS. 5A and 5B illustrate a process for leveling bonding structures 522 on die 110 in accordance with some embodiments of the present technology. As discussed above, in some embodiments, a leveling process is used after the deposition process to improve the uniformity of the bonding structures 522. For example, a leveling process may be performed after the metal plating deposition described above with respect to FIG. 2B, resulting in the first bonding sites 122 discussed above with respect to FIG. 2C.

FIG. 5A illustrates three bonding structures 522a-c after a deposition process into vias in the second photoresist material 230. As illustrated, each of the bonding structures 522a-c may include an impurity layer 523a-c on its respective upper surface. Additionally, each of the bonding structures 522a-c may have a different height. For example, the first bonding structure 522a is taller than the second bonding structure 522b, but shorter than the third bonding structure 522c. The different heights of the impurity layers 523a-c and the bonding structures 522 may hinder and/or prevent the bonding structures 522a-c from bonding to structures on another die without further processing.

FIG. 5B illustrates the bonding structures 522 after the leveling process. As illustrated, the impurity layers 523 have been removed from each of the bonding structures 522. In addition, each of the bonding structures 522 has had additional material removed from its upper surface to create a generally uniform height across the bonding structures 522. That is, the leveling process includes a removal process that strips material from the bonding structures 522.
The removal process can be an electrical and/or chemical process (e.g., an electrolytic process, immersing the bonding structures 522 in a chemical bath, or any other suitable process) to avoid mechanical stress on the relatively thin bonding structures. Additionally, electrical and/or chemical processes may allow removal of material from the bonding structures 522 while the bonding structures 522 are supported by the second photoresist material 230. As discussed above, once the leveling process is complete, the second photoresist material 230 may be stripped from the die 110.

FIG. 6 is a schematic diagram of a system including a semiconductor die assembly configured in accordance with an embodiment of the present technology. Any of the semiconductor devices described above and/or resulting from the processes described above with reference to FIGS. 1-5B can be incorporated into the system 900 shown schematically in FIG. 6. System 900 may include memory 990 (e.g., SRAM, DRAM, Flash, and/or other memory devices), power supply 992, drives 994, processor 996, and/or other subsystems or components 998. Semiconductor devices similar to those described above with reference to FIG. 1, or semiconductor devices resulting from the processes described above with respect to FIGS. 2A-5B, may be included in any of the elements shown in FIG. 6. For example, memory 990 may include a stacked semiconductor device having mixed-metal junction structures, such as those described above with respect to FIG. 1. The resulting system 900 may be configured to perform any of a wide variety of suitable computing, processing, storage, sensing, imaging, and/or other functions.
Accordingly, representative examples of system 900 include, but are not limited to, computers and/or other data processors, such as desktop computers, laptop computers, network appliances, handheld devices (e.g., palmtop computers, wearable computers, cellular or mobile phones, personal digital assistants, music players, etc.), tablet computers, multiprocessor systems, processor-based or programmable consumer electronics, network computers, and microcomputers. Additional representative examples of system 900 include lights, cameras, vehicles, and the like. With regard to these and other examples, system 900 may be housed in a single unit or distributed over multiple interconnected units, such as through a communication network. Accordingly, the components of system 900 may include local and/or remote memory storage devices and any of a wide variety of suitable computer-readable media.

Conclusion

From the foregoing, it should be appreciated that specific embodiments of the technology have been described herein for illustrative purposes, but well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the embodiments of the technology. In the event that any material incorporated herein by reference conflicts with the present disclosure, the present disclosure controls. Where the context permits, singular or plural terms may also encompass plural or singular terms, respectively. Furthermore, unless the word "or" is expressly limited to mean only a single item exclusive of the other items in a list of two or more items, the use of "or" in such a list should be understood to include: (a) any single item in the list, (b) all of the items in the list, or (c) any combination of the items in the list. Furthermore, as used herein, the phrase "and/or" as in "A and/or B" refers to A alone, B alone, and both A and B.
Furthermore, the terms "comprising," "including," "having," and "with" are used throughout to mean including at least the recited feature(s), such that any greater number of the same feature and/or additional types of other features are not precluded.

In light of the foregoing, it should also be appreciated that various modifications can be made without departing from the disclosure or the present technology. For example, those skilled in the art will understand that various components of the present technology may be further divided into subcomponents, or that various components and functions of the present technology may be combined and integrated. Furthermore, certain aspects of the technology described in the context of particular embodiments may also be combined or eliminated in other embodiments. Furthermore, while advantages associated with certain embodiments of the present technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the present technology. Accordingly, the present disclosure and associated technology can encompass other embodiments not expressly shown or described herein.
The present disclosure includes a system and method of mapping shader variables into physical registers. In an embodiment, a graphics processing unit (GPU) and a memory coupled to the GPU are disclosed. The memory includes a processor readable data file that has a register file portion. The register file portion has a rectangular structure including a plurality of data items. At least two of the plurality of data items correspond to data elements of a shader program. The data elements have different data storage types.
1. A communication device comprising:
a graphics processing unit (GPU); and
a memory coupled to the GPU, the memory comprising a processor readable data file that includes a register file portion, the register file portion having a rectangular structure including a plurality of data items, at least two of the plurality of data items corresponding to data elements of a shader program, the data elements having different data storage types.

2. The communication device of claim 1, further comprising:
a processor coupled to the GPU and further coupled to the memory;
a transceiver coupled to the processor;
a codec coupled to the processor;
a speaker coupled to the codec; and
a display coupled to the GPU.

3. The communication device of claim 1, wherein the register file portion associates each data item of the plurality of data items to a respective portion of the register file that is identified by a register offset value, a register count value, and a component mask.

4. The communication device of claim 1, wherein the different data storage types are selected from an attribute type, a uniform type, a varying type, a built-in uniform type, a built-in input type, and a built-in output type.

5. The communication device of claim 1, wherein the processor readable data file is an object file.

6. The communication device of claim 5, wherein the object file includes a symbol table.

7. The communication device of claim 1, further comprising a compiler configured to compile source code into a format executable by the GPU.

8. The communication device of claim 7, wherein the compiler is configured to compile source code that is compliant with an OpenGL standard specification.

9. The communication device of claim 7, wherein the compiler includes a storage mapping module that is configured to map data with each of the different data storage types to a common register file format.

10. The communication device of claim 7, wherein the communication device is a portable wireless device, the device further comprising:
a display coupled to the GPU; and
a receiver configured to receive data via a wireless network.

11. A multimedia device comprising:
a display;
a graphics processing unit (GPU) coupled to the display; and
an object file accessible to the GPU, the object file indicating a respective rectangular region of a register file for each data item of the object file.

12. The multimedia device of claim 11, wherein each data item of the object file is mapped to a respective rectangular region by a register offset value, a register count value, and an offset mask.

13. The multimedia device of claim 12, wherein the register count value identifies a number of registers.

14. The multimedia device of claim 12, wherein the offset mask identifies a number of components of the register file and a starting component.

15. The multimedia device of claim 11, further comprising a shader compiler configured to generate executable code using a universal representation for all shader variables in an input stream.

16. A method comprising:
compiling a shader program to generate a compiled output file; and
providing the compiled output file to be executed by a wireless device having a graphics processing unit, wherein the compiled output file identifies a plurality of rectangular regions of a register file, and wherein each of the plurality of rectangular regions is associated with a respective data item of the compiled output file.

17. The method of claim 16, wherein providing the compiled output file comprises transmitting the compiled output file to the wireless device via a wireless transmission.

18. The method of claim 16, wherein each of the plurality of rectangular regions is defined by a starting register, a number of registers, and a number of contiguous register components.

19. The method of claim 18, further comprising mapping a plurality of data storage types of the shader program to a uniform representation at a compiler.

20. The method of claim 19, further comprising mapping the data storage types in the uniform representation to physical registers.

21. A system comprising:
means for locating a rectangular region of a register file corresponding to a data object; and
graphics processing means for executing a shader program that accesses the data object.

22. The system of claim 21, further comprising:
means for mapping a shader variable to a uniform data representation; and
means for mapping the uniform data representation of the shader variable to a physical register.

23. A processor readable medium having processor readable data to identify to a graphics processing unit a plurality of rectangular portions of a register file, each of the plurality of rectangular portions associated with a respective shader data item.

24. The processor readable medium of claim 23, wherein each of the plurality of rectangular portions is identified by a register offset value, a register count value, and a component mask.

25. The processor readable medium of claim 23, wherein the processor readable data embodies a universal representation for shader variables.
DESCRIPTION OF RELATED ART

Advances in technology have resulted in smaller and more powerful personal computing devices. For example, there currently exist a variety of portable personal computing devices, including wireless computing devices, such as portable wireless telephones, personal digital assistants (PDAs), and paging devices that are small, lightweight, and easily carried by users. More specifically, portable wireless telephones, such as cellular telephones and IP telephones, can communicate voice and data packets over wireless networks. Further, many such wireless telephones include other types of devices that are incorporated therein. For example, a wireless telephone can also include a digital still camera, a digital video camera, a digital recorder, and an audio file player. Also, such wireless telephones can process executable instructions, including software applications, such as a web browser application, that can be used to access the Internet. As such, these wireless telephones can include significant computing capabilities.

Graphics processing units (GPUs) can improve graphics processing and multimedia application performance by processing data associated with a graphics pipeline. GPUs can execute programs, commonly referred to as shaders, that may supplement or replace stages of a default graphics pipeline. Shaders may manipulate vertex data or scalar data and may be written in high-level or low-level programming languages. Shader compilers recognize and process a variety of data storage types by maintaining special rules and characteristics associated with the data storage types to produce executable code.

SUMMARY

In a particular embodiment, a communication device is disclosed. The communication device includes a graphics processing unit (GPU) and a memory coupled to the GPU. The memory includes a processor readable data file that has a register file portion. The register file portion has a rectangular structure including multiple data items.
At least two of the data items correspond to data elements of a shader program that have different data storage types.

In another particular embodiment, a multimedia device is disclosed. The multimedia device includes a display and a graphics processing unit (GPU) coupled to the display. The multimedia device also includes an object file accessible to the GPU. The object file indicates a rectangular region of a register file for each data item of the object file.

In another particular embodiment, a method is disclosed that includes receiving a shader program including a plurality of data items. Each of the plurality of data items has a data storage type. The method also includes mapping each of the data items to a universal storage representation. The method further includes generating an object file using the universal storage representation to create a register file.

In another particular embodiment, a method is disclosed that includes compiling a shader program to generate a compiled output file. The method also includes providing the compiled output file to be executed by a wireless device having a graphics processing unit. The compiled output file identifies a plurality of rectangular regions of a register file. Each of the plurality of rectangular regions is associated with a respective data item of the compiled output file.

In another particular embodiment, a system is disclosed that includes means for locating a rectangular region of a register file corresponding to a data object. The system also includes graphics processing means for executing a shader program that accesses the data object.

In another particular embodiment, a processor readable medium is disclosed. The processor readable medium stores processor readable data to identify rectangular portions of a register file to a graphics processing unit.
Each of the rectangular portions is associated with a respective shader data item.

One particular advantage provided by disclosed embodiments is a reduced compiler footprint due to a unified representation of shader variables.

Other aspects, advantages, and features of the present disclosure will become apparent after review of the entire application, including the following sections: Brief Description of the Drawings, Detailed Description, and the Claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a functional diagram of a particular illustrative embodiment of a system to map shader variables to physical registers;
FIG. 2 is a functional diagram of a second illustrative embodiment of a system to map shader variables to physical registers;
FIG. 3 is a functional diagram of a particular illustrative embodiment of a decoder that may be used in a system to map shader variables to physical registers;
FIG. 4 is a general diagram of a table to illustrate shader variable mapping input parameters;
FIG. 5 is a general diagram of an embodiment of a register file;
FIG. 6 is a flow chart of an embodiment of a method of mapping shader variables to physical registers;
FIG. 7 is a flow chart of a second embodiment of a method of mapping shader variables to physical registers;
FIG. 8 is a block diagram of a cellular phone including a GPU and a memory including an object file that maps shader variables to physical registers; and
FIG. 9 is a block diagram of a portable communication device including a GPU and a memory including an object file that maps shader variables to physical registers.

DETAILED DESCRIPTION

Referring to FIG. 1, a particular illustrative embodiment of a system to map shader variables to physical registers is disclosed and generally designated 100. The system 100 includes a shader program with multiple data storage types 102, a shader program compiler 106, and an object file 110.
In a particular embodiment, the shader program with multiple data storage types 102, the shader program compiler 106, the object file 110, or any combination thereof, are stored in a memory of a portable wireless device that has a graphics processing unit (GPU).

The shader program with multiple data storage types 102 is input to the shader program compiler 106 via an input data stream 104. The shader program compiler 106 compiles the shader program and writes via an output data stream 108 to the object file 110. The object file 110 includes a symbol table 112 indicating data elements, such as variables, of the shader program with multiple data storage types 102.

In a particular embodiment, the shader program compiler 106 maps every data element of the shader program with multiple data storage types 102 to a respective universal storage representation for processing. Using the universal storage representation, the shader program compiler 106 may map each data element to a rectangular portion of a register file, indicated by a rectangular register structure 114, in the symbol table 112. The object file 110 may be executed by a graphics processing unit that reads and writes data corresponding to the data elements to physical registers as specified by the rectangular register structure 114.

Use of a universal storage representation for all shader program data storage elements may enable the shader program compiler 106 to operate with a smaller memory footprint than compilers that are configured to support each of the multiple data storage types throughout processing. Furthermore, because the shader program compiler 106 processes data elements using a universal storage representation, the compiler 106 is more easily revised to accommodate new shader programming languages and revisions to current shader specification standards, such as OpenGL.

Referring to FIG.
2, a second illustrative embodiment of a system to map shader variables to physical registers is depicted and generally designated 200. The system 200 includes the shader program compiler 106 configured to receive the input data stream 104 and to provide the output data stream 108, as illustrated in FIG. 1. The shader program compiler 106 includes a decoder 202, a translator 204, an instruction scheduler 206, a register allocator 208, an optimizer 210, an encoder 212, and an object file generator 214.

In a particular embodiment, the decoder 202 is configured to receive data elements associated with multiple data storage types and to map the input data storage types to a universal storage representation. The universal storage representation may provide a common representation of all shader variables for further processing by the shader program compiler 106. Each of the translator 204, instruction scheduler 206, register allocator 208, optimizer 210, encoder 212, and object file generator 214 may be configured to perform its respective function using the common representation. For example, the register allocator 208 may receive information from the instruction scheduler 206 corresponding to the universal storage representation of shader variables of the input data stream 104 and may map the shader variables to physical registers or portions of physical registers using the universal storage representation.

In a particular embodiment, the shader program compiler 106 is configured to receive shader program data that specifies data storage types associated with vertex data and also data storage types associated with pixel data, such as in a high-level shader programming language. The shader program compiler 106 may also be configured to receive shader program data that specifies logical input registers and logical output registers, such as in a low-level shader programming language.
All input data storage types may be mapped to a universal storage representation at the decoder 202 for output to the translator 204. Thus, multiple data storage types may be processed at the shader program compiler 106 without implementing multiple parallel compilation paths to support each distinct data storage type throughout the compilation process.

Referring to FIG. 3, a particular embodiment of a decoder that may be used in a system to map shader variables to physical registers is depicted and generally designated 300. The decoder 300 is configured to provide shader program data with different storage types 302 to a data storage type mapping module 304 that is configured to output a representation of the shader program data using a universal storage type 306. In a particular embodiment, the decoder 300 may be used in a shader compiler, such as the shader program compiler 106 illustrated in FIGS. 1-2.

Referring to FIG. 4, a table illustrating shader variable mapping input parameters is depicted and generally designated 400. The table 400 includes columns for data elements associated with high-level shader languages, including uniform variables, attribute variables, varying variables, built-in uniform variables, built-in input variables, and built-in output variables. The table 400 also includes columns for data elements associated with low-level shader languages, including logical input registers and logical output registers. Input mapping parameters are depicted for each data element. For example, uniform variables have user-defined names, support all data types, and may include arrays. Examples of data types include basic data types such as float, vector2, vector3, vector4, matrix3, or matrix4, in an illustrative embodiment. Attribute variables have user-defined names and do not support all data types, nor do attribute variables support arrays. Varying variables have user-defined names, do not support all data types, and may include arrays.
Built-in uniform, built-in input, and built-in output variables do not have user-defined names, and instead may be identified by reserved keywords. Examples of reserved keywords include gl_Position, gl_PointSize, gl_FragCoord, gl_FrontFacing, gl_FragColor, gl_FragData, and gl_PointCoord, in an illustrative embodiment. Logical input and output registers of low-level languages also do not have user-defined names, and instead are identified by semantic identifiers. Further, logical input and output registers are described by a logical register number and component mask.

In a particular embodiment, each of the data storage types identified in the table 400 may be mapped to a universal storage representation by a shader compiler, such as by the decoder 202 of the shader program compiler 106, illustrated in FIG. 2. For example, the decoder 202 may define a structure including numeric values to identify input parameters including a name value, an array size value, and a data type value, and output values including a register offset value, a register count value, and a component mask value. The name value may enumerate keyword values and semantics, and may store an index value into a separate name array for variables having user-defined names. The array size value may indicate an array size or may store a zero value for no array. The register offset value may indicate a register number of a first register of a rectangular register footprint corresponding to the variable. The register count value may indicate a number of contiguous registers covered by the rectangular register footprint. The component mask may specify register components of the rectangular register footprint. For example, in an illustrative embodiment, each register may include four equally-sized components, and each rectangular register footprint may include from one to four contiguous components.

Referring to FIG. 5, a particular illustrative embodiment of a register file is depicted and generally designated 500.
The register file 500 includes a first rectangular region 510 and a second rectangular region 520. In a particular embodiment, the first rectangular region 510 and the second rectangular region 520 may be defined by an object file that is executable by a graphics processing unit (GPU).

The register file 500 includes N registers having one or more components. In an illustrative embodiment, N is thirty-two and each register includes four components of equal size. Variables may be mapped to rectangular footprints such as the first rectangular region 510 and the second rectangular region 520. The first rectangular region 510 spans the third and fourth components of the first through fourth registers of the register file 500, and may be defined by a register offset value of zero (using zero-based indexing to identify the first register of the register file 500), a register count value of four (indicating that the first rectangular region 510 spans four registers), and an offset mask value indicating the third and fourth components. For example, the offset mask value may be a bit pattern that reflects the components included in the first rectangular region, such as 0011, a true mask bit pattern such as 0x0000FFFF, an enumerated value that indicates the third and fourth components are included but the first and second components are excluded, or values indicating a starting component number and a number of components, as illustrative, non-limiting examples.

Similarly, the second rectangular region 520 spans three components of a single register. The second rectangular region 520 may therefore be designated by a register offset value of seven, a register count value of one, and an offset mask value of 1110, as an illustrative, non-limiting example.

In a particular embodiment, each register of the register file 500 may include thirty-two bits.
The first rectangular region 510 may therefore include 128 bits, and may correspond to a single variable having 128 bits, or an array of two 64-bit values, four 32-bit values, or eight 16-bit values, as determined by the data type of the variable that is mapped to the first rectangular region 510. The second rectangular region 520 includes 24 bits, and may correspond to an array of three 8-bit values, for example.

Although the register file 500 is depicted as having four components of equal size, any number of registers and any configuration of components, of equal size or varying sizes, may be used. In addition, variables may be mapped to any number, size, and configuration of rectangular footprints in the register file 500. For example, a shader compiler may map shader variables to rectangular regions of the register file 500 based on algorithms to improve compiler speed, to improve runtime performance, to increase register usage, to achieve other performance or design goals, or any combination thereof.

Referring to FIG. 6, a particular illustrative embodiment of a method of mapping shader variables to physical registers is depicted and generally designated 600. A shader program is received that includes a plurality of data items, each of the plurality of data items having a data storage type, at 602. The shader program may include multiple different data storage types. In a particular embodiment, the shader program includes a first data item having a first data storage type and a second data item having a second data storage type, where the second data storage type is different from the first data storage type.

Continuing to 604, each of the plurality of data items is mapped to a universal storage representation.
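A minimal sketch of one possible universal storage representation follows. The field names, the Python rendering, the helper function, and the mask bit ordering are illustrative assumptions; the description specifies only that the representation carries a name value, an array size value, a data type value, a register offset value, a register count value, and a component mask value.

```python
from dataclasses import dataclass

@dataclass
class UniversalVar:
    # Hypothetical rendering of the universal storage representation;
    # the field names are illustrative, not taken from the disclosure.
    name: str        # reserved keyword, semantic, or user-defined name
    array_size: int  # 0 indicates the variable is not an array
    data_type: str   # e.g. "float", "vector4", "matrix4"
    reg_offset: int  # first register of the rectangular footprint
    reg_count: int   # number of contiguous registers covered
    comp_mask: int   # bit mask over the register's components

def footprint_components(v: UniversalVar) -> int:
    """Total register components covered by the rectangular footprint."""
    return v.reg_count * bin(v.comp_mask).count("1")

# The two regions of FIG. 5 expressed in this representation
# (bit 0 of the mask is taken to be the first component):
region_510 = UniversalVar("a", 0, "float", 0, 4, 0b1100)  # registers 0-3, components 3-4
region_520 = UniversalVar("b", 0, "float", 7, 1, 0b1110)  # register 7, three components
```

Under this rendering, region_510 covers four registers of two components each (eight components in total) and region_520 covers three components of a single register, consistent with the footprints sketched in FIG. 5.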
In a particular embodiment, each data item may be mapped to a respective portion of the register file identified by a register offset value, a register count value, and a component mask.

Advancing to 606, an object file is generated using the universal storage representation to create a register file. In a particular embodiment, the register file may have a rectangular structure and may be accessible to a graphics processing unit (GPU). For example, the object file may include a symbol table identifying rectangular portions of the register file for each data item, as illustrated in FIG. 5.

Referring to FIG. 7, a second illustrative embodiment of a method of mapping shader variables to physical registers is depicted and generally designated 700. A shader program is compiled to generate a compiled output file, at 702. The compilation may be performed by a shader compiler, such as the shader program compiler 106 illustrated in FIG. 1.

Continuing to 704, in a particular embodiment, a plurality of data storage types of the shader program may be mapped to a uniform representation at the compiler. For example, multiple different data storage types, such as the data storage types illustrated in FIG. 4, may be mapped to a universal storage representation by a decoder of a compiler, such as the decoder 300 illustrated in FIG. 3.

Moving to 706, in a particular embodiment, the data storage types in the uniform representation may be mapped to physical registers. For example, the data storage types in the uniform representation may be mapped to rectangular regions of a register file, such as the first rectangular region 510 illustrated in FIG. 5. In a particular embodiment, each of the plurality of rectangular regions is defined by a starting register, a number of registers, and a number of contiguous register components.

Advancing to 708, the compiled output file is provided to be executed by a wireless device having a graphics processing unit (GPU).
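The mapping step at 706 requires placing each data item's rectangular footprint into the register file without overlap. The patent does not specify a placement algorithm, so the following is only a hedged sketch using a simple first-fit scan over a registers-by-components grid; the function name and grid dimensions are assumptions for illustration.

```python
def allocate(grid, reg_count, comp_count):
    """First-fit placement: find a reg_count x comp_count rectangle of free
    cells in `grid` (registers x components, True = occupied), mark it
    occupied, and return its (register_offset, component_offset)."""
    n_regs, n_comps = len(grid), len(grid[0])
    for reg in range(n_regs - reg_count + 1):
        for comp in range(n_comps - comp_count + 1):
            if all(not grid[r][c]
                   for r in range(reg, reg + reg_count)
                   for c in range(comp, comp + comp_count)):
                for r in range(reg, reg + reg_count):
                    for c in range(comp, comp + comp_count):
                        grid[r][c] = True
                return reg, comp
    raise MemoryError("register file exhausted")

grid = [[False] * 4 for _ in range(32)]  # 32 registers x 4 components
a = allocate(grid, 4, 2)  # e.g. a variable spanning 4 registers x 2 components
b = allocate(grid, 1, 3)  # e.g. an array spanning 1 register x 3 components
print(a, b)               # (0, 0) (4, 0)
```

A production compiler would likely use a more sophisticated strategy (the text mentions algorithms tuned for compiler speed, runtime performance, or register usage), but first-fit shows the essential invariant: every footprint occupies a disjoint rectangle.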
For example, the compiled output file may be stored at a memory of the wireless device that is accessible by a GPU of the portable device. The compiled output file may identify a plurality of rectangular regions of a register file, and each of the plurality of rectangular regions may be associated with a respective data item of the compiled output file.

In a particular embodiment, the compiled output file may be transmitted to the wireless device via a wireless transmission. For example, instead of running a shader compiler at a portable device, shader programs may be compiled at a remote compiler and downloaded via wireless data transmission to the portable device for execution by a GPU of the portable device.

Referring to FIG. 8, an exemplary, non-limiting embodiment of a cellular telephone is shown and is generally designated 820. As shown, the cellular telephone 820 includes an on-chip system 822 that includes a digital baseband processor 824 and an analog baseband processor 826 that are coupled together. The cellular telephone 820 also includes a graphics processing unit (GPU) 828 and a touchscreen controller 830 coupled to the digital baseband processor 824. In turn, a touchscreen display 832 external to the on-chip system 822 is coupled to the GPU 828 and the touchscreen controller 830.

In a particular illustrative embodiment, the GPU 828 may be configured to execute one or more object files 890 stored at a memory 844. The one or more object files 890 may include compiled shader programs that are executable by the GPU 828. The object files 890 may include a symbol table indicating a rectangular register structure for variables, such as the symbol table 112 illustrated in FIG. 1. In a particular embodiment, the cellular telephone 820 may include a shader compiler (not shown) configured to map shader variables to physical registers using a universal storage representation, such as the shader program compiler 106 illustrated in FIGS. 1-2.
The cellular telephone 820 may be configured to receive shader source code, compiled shader files, the one or more object files 890, or any combination thereof, via wireless transmission from one or more remote sources.

FIG. 8 further indicates that a video encoder 834, e.g., a phase alternating line (PAL) encoder, a sequential couleur a memoire (SECAM) encoder, or a national television system(s) committee (NTSC) encoder, is coupled to the digital baseband processor 824. Further, a video amplifier 836 is coupled to the video encoder 834 and the touch screen display 832. Also, a video port 838 is coupled to the video amplifier 836. As depicted in FIG. 8, a universal serial bus (USB) controller 840 is coupled to the digital baseband processor 824. Also, a USB port 842 is coupled to the USB controller 840. The memory 844 and a subscriber identity module (SIM) card 846 can also be coupled to the digital baseband processor 824. Further, as shown in FIG. 8, a digital camera 848 can be coupled to the digital baseband processor 824. In an exemplary embodiment, the digital camera 848 is a charge-coupled device (CCD) camera or a complementary metal-oxide semiconductor (CMOS) camera.

As further illustrated in FIG. 8, a stereo audio CODEC 850 can be coupled to the analog baseband processor 826. Moreover, an audio amplifier 852 can be coupled to the stereo audio CODEC 850. In an exemplary embodiment, a first stereo speaker 854 and a second stereo speaker 856 are coupled to the audio amplifier 852. FIG. 8 shows that a microphone amplifier 858 can also be coupled to the stereo audio CODEC 850. Additionally, a microphone 860 can be coupled to the microphone amplifier 858. In a particular embodiment, a frequency modulation (FM) radio tuner 862 can be coupled to the stereo audio CODEC 850. Also, an FM antenna 864 is coupled to the FM radio tuner 862. Further, stereo headphones 866 can be coupled to the stereo audio CODEC 850.

FIG.
8 further indicates that a radio frequency (RF) transceiver 868 can be coupled to the analog baseband processor 826. An RF switch 870 can be coupled to the RF transceiver 868 and an RF antenna 872. As shown in FIG. 8, a keypad 874 can be coupled to the analog baseband processor 826. Also, a mono headset with a microphone 876 can be coupled to the analog baseband processor 826. Further, a vibrator device 878 can be coupled to the analog baseband processor 826. FIG. 8 also shows that a power supply 880 can be coupled to the on-chip system 822. In a particular embodiment, the power supply 880 is a direct current (DC) power supply that provides power to the various components of the cellular telephone 820 that require power. Further, in a particular embodiment, the power supply is a rechargeable DC battery or a DC power supply that is derived from an alternating current (AC) to DC transformer that is connected to an AC power source.

In a particular embodiment, as depicted in FIG. 8, the touchscreen display 832, the video port 838, the USB port 842, the camera 848, the first stereo speaker 854, the second stereo speaker 856, the microphone 860, the FM antenna 864, the stereo headphones 866, the RF switch 870, the RF antenna 872, the keypad 874, the mono headset 876, the vibrator device 878, and the power supply 880 are external to the on-chip system 822. Moreover, in a particular embodiment, the digital baseband processor 824 can use interleaved multithreading in order to process the various program threads associated with one or more of the different components associated with the cellular telephone 820.

FIG. 9 illustrates an exemplary, non-limiting embodiment of a portable communication device that is generally designated 920. As illustrated in FIG. 9, the portable communication device includes an on-chip system 922 that includes a digital signal processor 924 and a graphics processing unit (GPU) 926.
In a particular illustrative embodiment, the GPU 926 may be configured to execute one or more object files 970 stored at a memory 932. The one or more object files 970 may include compiled shader programs that are executable by the GPU 926. The one or more object files 970 may include a symbol table indicating a rectangular register structure for variables, such as the symbol table 112 illustrated in FIG. 1. In a particular embodiment, the portable communication device 920 may include a shader compiler (not shown) configured to map shader variables to physical registers using a universal storage representation, such as the shader program compiler 106 illustrated in FIGS. 1-2. The portable communication device 920 may be configured to receive shader source code, compiled shader files, the one or more object files 970, or any combination thereof, via wireless transmission from one or more remote sources.

FIG. 9 also shows that the GPU 926 is coupled to the digital signal processor 924 and a display 928. An input device 930 and the memory 932 are also coupled to the digital signal processor 924. Additionally, a coder/decoder (CODEC) 934 can be coupled to the digital signal processor 924. A speaker 936 and a microphone 938 can be coupled to the CODEC 934.

FIG. 9 also indicates that a wireless controller 940 can be coupled to the digital signal processor 924 and a wireless antenna 942. In a particular embodiment, a power supply 944 is coupled to the on-chip system 922. Moreover, in a particular embodiment, as illustrated in FIG. 9, the display 928, the input device 930, the speaker 936, the microphone 938, the wireless antenna 942, and the power supply 944 are external to the on-chip system 922.
However, each is coupled to a component of the on-chip system 922.

In a particular embodiment, the digital signal processor 924 utilizes interleaved multithreading to process instructions associated with program threads necessary to perform the functionality and operations needed by the various components of the portable communication device 920. For example, when a wireless communication session is established via the wireless antenna 942, a user can speak into the microphone 938. Electronic signals representing the user's voice can be sent to the CODEC 934 to be encoded. The digital signal processor 924 can perform data processing for the CODEC 934 to encode the electronic signals from the microphone. Further, incoming signals received via the wireless antenna 942 can be sent to the CODEC 934 by the wireless controller 940 to be decoded and sent to the speaker 936. The digital signal processor 924 can also perform the data processing for the CODEC 934 when decoding the signal received via the wireless antenna 942.

Further, before, during, or after the wireless communication session, the digital signal processor 924 can process inputs that are received from the input device 930. For example, during the wireless communication session, a user may be using the input device 930 and the display 928 to surf the Internet via a web browser that is embedded within the memory 932 of the portable communication device 920. The digital signal processor 924 can interleave various program threads that are used by the input device 930, the GPU 926, the display 928, the CODEC 934, and the wireless controller 940, as described herein, to efficiently control the operation of the portable communication device 920 and the various components therein. Many of the instructions associated with the various program threads are executed concurrently during one or more clock cycles.
As such, the power and energy consumption due to wasted clock cycles is substantially decreased.

Those of skill would further appreciate that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, PROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a computing device or a user terminal.
In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosed embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims.
A temperature control assembly includes a TEC including a top surface and a bottom surface, and a thermally conductive layer including a top surface and a bottom surface. The top surface of the thermally conductive layer is coupled to the bottom surface of the TEC. The bottom surface of the thermally conductive layer includes a planar region. The planar region of the thermally conductive layer is to be positioned over two or more of a plurality of electronic devices of an electronic system to transfer thermal energy at the two or more electronic devices.
1. A device comprising:
a first thermoelectric component (TEC) including a top surface and a bottom surface, the first TEC configured to, based on a voltage potential applied to the first TEC, simultaneously raise the temperature of the top surface of the first TEC and lower the temperature of the bottom surface, or simultaneously lower the temperature of the top surface and raise the temperature of the bottom surface, to achieve heat transfer between the top surface and the bottom surface of the first TEC;
a heat transfer assembly including a top surface and a bottom surface, wherein the top surface of the heat transfer assembly is coupled to the bottom surface of the first TEC;
a second TEC comprising a top surface and a bottom surface, wherein the top surface of the second TEC is coupled to the bottom surface of the heat transfer assembly; and
a thermally conductive layer comprising a top surface and a bottom surface, wherein the top surface of the thermally conductive layer is coupled to the bottom surface of the second TEC, wherein the bottom surface of the thermally conductive layer includes a planar area, and wherein the planar area of the thermally conductive layer is to be positioned over two or more of a plurality of electronic devices of an electronic system to transfer the thermal energy at the two or more electronic devices.

2. The apparatus of claim 1, wherein the bottom surface of the thermally conductive layer further comprises a notched region, wherein the planar region intersects the notched region, and wherein the notched region comprises voids in the thermally conductive layer extending in a vertical direction from the planar area toward the top surface of the thermally conductive layer.

3.
The apparatus of claim 2, wherein the thermally conductive layer further comprises a front side, a rear side, a first end, and a second end, and wherein the notched region extends from the front side of the thermally conductive layer to the rear side.

4. The apparatus of claim 2, wherein the planar area at the bottom surface of the thermally conductive layer intersects the notched area to form a first planar area and a second planar area, wherein the first planar area and the second planar area are oriented parallel to a plane and at the same vertical distance from the plane.

5. The apparatus of claim 2, wherein the notched region of the thermally conductive layer is to be positioned over at least one electronic device of the plurality of electronic devices to isolate the at least one electronic device from the thermal energy transfer.

6. The apparatus of claim 1, further comprising:
a thermal pad including a top surface and a bottom surface, wherein the top surface of the thermal pad is coupled to at least the planar area of the bottom surface of the thermally conductive layer.

7.
The apparatus of claim 6, wherein the thermal pad comprises a thermally conductive, electrically insulating, and compressible material.

8. The apparatus of claim 1, further comprising:
a heat spreader including a top surface and a bottom surface, wherein the bottom surface of the heat spreader is coupled to the top surface of the first TEC to transfer the thermal energy from the first TEC to the heat spreader.

9. The apparatus of claim 8, further comprising:
a plurality of attachment features of the heat spreader for receiving a plurality of adjustable coupling features to adjustably couple the device to a thermal chamber.

10. The apparatus of claim 8, further comprising:
an electric fan positioned over the top surface of the heat spreader to transfer the thermal energy from the heat spreader to an adjacent medium.

11. A system for testing a plurality of electronic devices under a variety of thermal conditions, the system comprising:
an electronic system including the plurality of electronic devices; and
a temperature control assembly positioned over two or more of the plurality of electronic devices and transferring thermal energy at the two or more electronic devices, the temperature control assembly comprising:
a first thermoelectric component (TEC) including a top surface and a bottom surface, the first TEC configured to, based on a voltage potential applied to the first TEC, simultaneously raise the temperature of the top surface of the first TEC and lower the temperature of the bottom surface, or simultaneously lower the temperature of the top surface and raise the temperature of the bottom surface, to transfer the thermal energy between the top surface and the bottom surface of the first TEC;
a heat transfer assembly including a top surface and a bottom surface, wherein the top surface of the heat transfer assembly is coupled to the bottom surface of the first TEC;
a second TEC comprising a top surface and a bottom surface, wherein the top surface of the
second TEC is coupled to the bottom surface of the heat transfer assembly; and
a thermally conductive layer comprising a top surface and a bottom surface, wherein the top surface of the thermally conductive layer is coupled to the bottom surface of the second TEC, wherein the bottom surface of the thermally conductive layer includes a planar area, and wherein the planar area of the thermally conductive layer is to be positioned over the two or more electronic devices of the plurality of electronic devices to transfer the thermal energy at the two or more electronic devices.

12. The system of claim 11, further comprising:
a thermal chamber including a plurality of sides, wherein one of the plurality of sides includes a port exposing a chamber within the thermal chamber, wherein the port is configured to receive, within the chamber, a bottom portion of the temperature control assembly, and wherein a top portion of the temperature control assembly extends externally to the thermal chamber.

13. The system of claim 11, wherein the bottom surface of the thermally conductive layer further comprises a notched region, wherein the planar region intersects the notched region, and wherein the notched region comprises voids in the thermally conductive layer extending in a vertical direction from the planar area toward the top surface of the thermally conductive layer.

14. The system of claim 13, wherein the thermally conductive layer further comprises a front side, a rear side, a first end, and a second end, and wherein the notched region extends from the front side of the thermally conductive layer to the rear side.

15. The system of claim 13, wherein the planar area at the bottom surface of the thermally conductive layer intersects the notched area to form a first planar area and a second planar area, wherein the first planar area and the second planar area are oriented parallel to a plane and at the same vertical distance from the plane.

16.
The system of claim 13, wherein the notched region of the thermally conductive layer is to be positioned over at least one electronic device of the plurality of electronic devices to isolate the at least one electronic device from the thermal energy transfer.

17. The system of claim 13, further comprising:
a thermal pad including a top surface and a bottom surface, wherein the top surface of the thermal pad is coupled to at least the planar area of the bottom surface of the thermally conductive layer.

18. A device comprising:
a first thermoelectric component (TEC) including a top surface and a bottom surface, the first TEC configured to, based on a voltage potential applied to the first TEC, simultaneously raise the temperature of the top surface of the first TEC and lower the temperature of the bottom surface, or simultaneously lower the temperature of the top surface and raise the temperature of the bottom surface, to achieve heat transfer between the top surface and the bottom surface of the first TEC;
a heat transfer assembly including a top surface and a bottom surface, wherein the top surface of the heat transfer assembly is coupled to the bottom surface of the first TEC;
a second TEC comprising a top surface and a bottom surface, wherein the top surface of the second TEC is coupled to the bottom surface of the heat transfer assembly; and
a thermally conductive layer comprising a top surface and a bottom surface, wherein the top surface of the thermally conductive layer is coupled to the bottom surface of the second TEC, wherein the bottom surface of the thermally conductive layer includes a planar area and a notched area, wherein the planar area intersects the notched area, and wherein the notched area includes voids in the thermally conductive layer, the voids extending vertically from the planar area toward the top surface of the thermally conductive layer.

19.
The apparatus of claim 18, wherein the thermally conductive layer further comprises a front side, a rear side, a first end, and a second end, wherein the planar area of the thermally conductive layer is to be positioned over two or more of a plurality of electronic devices of an electronic system to transfer the thermal energy at the two or more electronic devices, wherein the notched region extends from the front side of the thermally conductive layer to the rear side, and wherein the notched region of the thermally conductive layer is to be positioned over at least one electronic device of the plurality of electronic devices to isolate the at least one electronic device from the transfer of the thermal energy.

20. The apparatus of claim 18, further comprising:
a thermal pad including a top surface and a bottom surface, wherein the top surface of the thermal pad is coupled to at least the planar area of the bottom surface of the thermally conductive layer.
Temperature Control Components for Electronic Systems

Technical Field

The present disclosure relates generally to a temperature control assembly, and more particularly, to a temperature control assembly for an electronic system.

Background

A memory subsystem may include one or more memory components that store data. The memory components may be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize a memory subsystem to store data at, and retrieve data from, the memory components.

Brief Description of the Drawings

The present disclosure will be more fully understood from the detailed description given below and the accompanying drawings of various embodiments of the disclosure. However, the drawings should not be construed as limiting the present disclosure to specific embodiments, but are for explanation and understanding only.

FIG. 1 illustrates an example environment in which testing resources are allocated to perform testing of electronic devices, such as memory components, in accordance with some embodiments of the present disclosure.

FIG. 2A illustrates a temperature control assembly in a collapsed view according to some embodiments of the present disclosure.

FIG. 2B shows the temperature control assembly in an enlarged view according to some embodiments of the present disclosure.

FIG. 2C illustrates an alternative temperature control assembly in a collapsed view according to some embodiments of the present disclosure.

FIG. 2D illustrates another alternative temperature control assembly in a collapsed view according to some embodiments of the present disclosure.

FIG. 3A illustrates a thermal chamber in a closed position according to an embodiment of the present disclosure.

FIG. 3B illustrates a thermal chamber in an open position according to an embodiment of the present disclosure.

FIG. 4A shows the thermal testing system in an enlarged view according to an embodiment of the present disclosure.

FIG. 4B illustrates the thermal testing system in a collapsed view according
to an embodiment of the present disclosure.

FIG. 5 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

Detailed Description

Aspects of the present disclosure relate to a temperature control assembly for an electronic system. During conventional thermal testing, the electronic device may be placed into a chamber (i.e., an oven) that tests the electronic device under various temperature conditions. For example, a single chamber can be used to test components of multiple memory subsystems at a time at a particular temperature. Hot or cold gas can be pumped into the chamber to control the temperature of the chamber and the temperature of the electronic devices therein. The testing process may dictate various operations to be performed at the electronic device at a particular temperature. Such operations may include, but are not limited to, read operations, write operations, or erase operations. The performance and behavior of electronic devices can be observed or measured while performing the test process. For example, performance characteristics (e.g., read or write latency) and reliability of data stored at memory components may be measured and recorded during the testing process. However, since the chamber can only apply a single temperature to all electronic devices at any given time, testing electronic devices at many different temperatures may require a significant amount of time, as the testing process will need to be performed for each desired temperature. Furthermore, all components of the system within the chamber are controlled to the same temperature, and in some cases only a subset of the components of the system need to be tested at a certain temperature. Additionally, the chamber can only perform a single test procedure at a time.
Thus, performing different tests on an electronic device under different operating conditions (e.g., different temperatures) can take a significant amount of time if many different conditions of the electronic device's testing process are required.

Thermoelectric components (TECs) (also known as "thermoelectric coolers") can convert electrical energy into thermal energy and vice versa. A TEC can contain two surfaces. When a voltage potential is applied to the TEC, one surface heats up while the other simultaneously cools. In some conventional systems, thermoelectric components can be applied directly to an item, such as one or more electronic devices of an electronic system, to change the temperature of the item. However, in some cases, the TEC is shaped in a manner that is not conducive to the transfer of thermal energy to one or more electronic devices. Additionally, in some cases, applying a single TEC to an electronic device does not transfer enough thermal energy to meet the temperature test range of some thermal test conditions. Additionally, an electronic system may include a plurality of electronic devices coupled to a circuit board. The multiple electronic devices may have different vertical heights, so it may not be feasible to use one or more planar TECs to contact all, or a desired subset, of the electronic devices.

In some conventional systems, to remove excess heat, two TECs of different sizes can be stacked directly on top of each other. In addition to the above challenges, stacking two TECs on top of each other can be inefficient and often insufficient to transfer enough thermal energy to meet the temperature testing range of electronic devices.

Aspects of the present disclosure address the above and other challenges by providing a temperature control assembly that implements one or more TECs coupled to a thermally conductive layer. The thermally conductive layer includes a bottom surface that includes a planar region.
The planar area of the thermally conductive layer can be positioned over one or more electronic devices of an electronic system to transfer thermal energy to the one or more electronic devices.

In some embodiments, the bottom surface of the thermally conductive layer includes one or more notched regions. The planar area intersects the notched area. The notched area contains voids or air gaps so that the notched area is not thermally coupled to the electronic device directly below the notched area. The planar and notched regions of the bottom surface of the thermally conductive layer may allow the temperature control assembly to transfer thermal energy to some of the electronic devices of the electronic system and isolate other electronic devices of the electronic system from the thermal energy transfer.

In some embodiments, the planar area of the thermally conductive layer may be coupled to a thermal pad. In some embodiments, the thermal pad is compressible and thermally conductive. Thermal pads may allow temperature control components, particularly the planar regions of thermally conductive layers, to thermally couple to electronic devices having different vertical heights.

In some embodiments, the temperature control assembly includes an upper TEC that includes a top surface and a bottom surface. The temperature control assembly also includes a heat transfer assembly including a top surface and a bottom surface. The bottom surface of the upper TEC is coupled to the top surface of the heat transfer assembly. The temperature control assembly contains a lower TEC. The top surface of the lower TEC is coupled to the bottom surface of the heat transfer assembly. The temperature control assembly includes a thermally conductive layer having a top surface and a bottom surface. The top surface of the thermally conductive layer is coupled to the bottom surface of the lower TEC.
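The coupling/isolation rule for planar versus notched regions can be modeled geometrically: a device receives thermal transfer when it sits under the conductive layer but not entirely under a notch void. The sketch below is a hypothetical one-dimensional model for illustration only; the device names, positions, and millimeter units are invented, not taken from the patent.

```python
def coupled_devices(devices, layer_span, notches):
    """Return names of devices the planar area thermally contacts: a device
    lies under the layer span and is not entirely under a notch void.
    Positions are (low, high) extents along one axis of the layer."""
    lo, hi = layer_span
    coupled = []
    for name, (d_lo, d_hi) in devices.items():
        under_layer = d_lo >= lo and d_hi <= hi
        under_notch = any(d_lo >= n_lo and d_hi <= n_hi for n_lo, n_hi in notches)
        if under_layer and not under_notch:
            coupled.append(name)
    return coupled

# Hypothetical board layout (extents in mm along the layer's length):
devices = {
    "nand_0": (0, 10),
    "controller": (12, 22),   # to be isolated by the notched region
    "nand_1": (24, 34),
}
notches = [(11, 23)]          # one notch void spanning front-to-back over the controller
print(coupled_devices(devices, (0, 34), notches))  # ['nand_0', 'nand_1']
```

This mirrors the behavior described above: the two memory components under the planar area are coupled to the thermal energy transfer, while the device under the notch's air gap is isolated from it.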
The bottom surface of the thermally conductive layer includes a planar area positioned over two or more electronic devices of the electronic system to transfer thermal energy at the electronic devices.

Advantages of the present disclosure include, but are not limited to, providing a temperature control assembly that allows for efficient thermal energy transfer between the temperature control assembly and one or more electronic devices of an electronic system. Furthermore, multiple temperature control assemblies can be implemented to independently control thermal conditions at respective electronic systems, which allows for more efficient electronic device testing under different thermal conditions. Additionally, aspects of the present disclosure may apply thermal conditions over a wider and lower temperature range than conventional test systems. Many different tests of an electronic device can be performed more quickly, and the reliability of the electronic device can also be improved, because any potential shortcomings or defects can be identified and subsequently addressed in the design or manufacture of the electronic device.

FIG. 1 illustrates an example environment in which testing resources are allocated to perform testing of electronic devices, such as memory components, in accordance with some embodiments of the present disclosure. Test platform 100 may include one or more racks 110A, 110B, and 110N. Each of racks 110A, 110B, and 110N may include multiple frames 120, where each frame 120 includes one or more thermal chambers. Test platform 100 may include any number of racks or thermal chambers.

In some embodiments, a thermal chamber may enclose an electronic system within the chamber of the thermal chamber. An electronic system may have one or more electronic devices. In some embodiments, multiple electronic devices are coupled to a circuit board to form an electronic system.
In some embodiments, the electronic device may be a discrete component housed in a package (e.g., ceramic encapsulation material). The encapsulation material may have pins, solder bumps, or terminals outside the package that connect on-chip or on-die components to off-chip or off-die components (e.g., power supplies, other components at the circuit board, etc.).

As shown, frame 120 may include one or more thermal chambers. For example, the frame 120 may include a first thermal chamber 121, a second thermal chamber 122, and a third thermal chamber 123. Although three thermal chambers are shown, frame 120 may include any number of thermal chambers. Additionally, each thermal chamber may be equipped with a temperature control assembly for applying temperature conditions to one or more of the electronic devices of the electronic system. For example, a temperature control assembly can be thermally coupled with the packaging of an electronic device of a memory subsystem to adjust the package temperature or on-die temperature to a desired temperature value within a temperature range. In some embodiments, a temperature control assembly may be used to apply a localized temperature to a corresponding electronic device of a particular electronic system that is different from the temperature applied by another temperature control assembly to a corresponding electronic device of another electronic system at the same or a different frame 120. For example, a first temperature control assembly may apply a temperature of -20 degrees Celsius to an electronic device of a particular memory subsystem, and another temperature control assembly located adjacent to the first temperature control assembly may apply a temperature of 100 degrees Celsius to an electronic device of another memory subsystem at the same frame 120.

In some embodiments, the temperature control assembly may include one or more thermoelectric components (TECs).
In some embodiments, a temperature control assembly including one or more TECs can utilize the Peltier effect to apply a heating or cooling effect to electronic devices of an electronic system coupled to the temperature control assembly. For example, the bottom portion of the temperature control assembly can be coupled to a package of an electronic device of an electronic system to transfer thermal energy to and from the electronic device. In some embodiments, the thermoelectric component may be a Peltier device. In some embodiments, a thermoelectric component may comprise an array of alternating n-type and p-type semiconductors disposed between two plates, such as two ceramic plates. The voltage applied to the thermoelectric component causes one plate to cool while the other heats up.

As shown, each test rack 110A, 110B, and 110N may include multiple frames 120. Each of the frames 120 of a particular test rack can be coupled with a local test component. For example, test racks 110A, 110B, and 110N may include local test components 111A, 111B, and 111N, respectively. Each of the local test components 111A, 111B, and 111N may receive instructions to perform a test, or a portion of a test, to be performed at a thermal chamber of the respective test rack. For example, the resource allocator component 130 can receive (e.g., from a user) the conditions of the test to be executed, and the resource allocator component 130 can determine specific thermal chambers on different frames 120 at one or more of the test racks 110A, 110B, and 110N that can be used by the test. In some embodiments, resource allocator component 130 may be provided by server 131.
In some embodiments, server 131 is a computing device or system coupled to local test components 111A, 111B, and 111N via a network.

The temperature control assemblies of the thermal chambers 121, 122, and 123 of each frame 120 may be used to apply different temperature conditions to the electronic devices of the corresponding electronic systems. Additionally, a communication channel may be formed between the server 131 and each electronic system at each thermal chamber. For example, server 131 may control each electronic system such that each electronic system performs different operations under different thermal conditions.

The resource allocator component 130 can receive test input from a user. The test input may specify conditions for tests to be performed with one or more electronic systems. For example, a test may specify particular temperature conditions to be applied to the memory components of a memory subsystem, and sequences of operations to be performed at the memory components under those temperature conditions. Resource allocator component 130 may retrieve a data structure that identifies available thermal chambers on test platform 100, along with characteristics of the available thermal chambers and the electronic systems therein. The resource allocator component 130 can then assign thermal chambers at the test platform 100 containing electronic devices (e.g., embedded memory components) that match or satisfy the test conditions. The resource allocator component 130 may then transmit the instructions to the local test component of the test rack containing the thermal chambers to be used for the test.

In some embodiments, a thermal chamber may include one or more ports. The one or more ports may expose the chamber within the thermal chamber. Electronic devices of the electronic system are accessible through the one or more ports. In some embodiments, the one or more ports are configured to receive temperature control assemblies.
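The matching step performed by the resource allocator component can be sketched in a few lines. This is a minimal illustration, not code from the disclosure: the `ThermalChamber` record, its field names, and the matching criteria (temperature range and installed device type) are assumptions chosen to mirror the description of the availability data structure.

```python
from dataclasses import dataclass

@dataclass
class ThermalChamber:
    """Hypothetical record for one chamber in the availability data structure."""
    chamber_id: str
    min_temp_c: float   # coldest temperature the chamber's TEC stack can hold
    max_temp_c: float
    device_type: str    # kind of electronic device installed, e.g. a memory component
    available: bool

def assign_chambers(chambers, device_type, target_temps_c, count):
    """Return up to `count` available chambers whose temperature range and
    installed device type match or satisfy the requested test conditions."""
    matches = [
        c for c in chambers
        if c.available
        and c.device_type == device_type
        and all(c.min_temp_c <= t <= c.max_temp_c for t in target_temps_c)
    ]
    return matches[:count]

chambers = [
    ThermalChamber("rack110A/frame120/ch121", -40, 125, "nand", True),
    ThermalChamber("rack110A/frame120/ch122", 0, 85, "nand", True),
    ThermalChamber("rack110B/frame120/ch123", -40, 125, "dram", True),
]
# A test needing -20 C and 100 C on NAND devices: only the first chamber qualifies.
picked = assign_chambers(chambers, "nand", [-20, 100], count=2)
print([c.chamber_id for c in picked])
```

Once chambers are assigned, the allocator would transmit the instructions to the local test component of the corresponding rack, as described above.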
In some embodiments, the bottom portion of the temperature control assembly extends within the chamber of the thermal chamber and is coupled to a corresponding electronic device. A top portion of the temperature control assembly, such as a heat sink, may extend above the thermal chamber. In some embodiments, the temperature control assembly can be coupled to the thermal chamber. In some embodiments, the thermal chamber may be used to hold the temperature control assembly in place. In some embodiments, the thermal chamber may align the temperature control assembly with the corresponding electronic device such that a bottom portion of the temperature control assembly can be coupled with the corresponding electronic device. Multiple temperature control assemblies can simultaneously apply different temperatures to electronic systems within the thermal chamber. The thermal chamber is further described below with respect to at least FIGS. 3A-3B and 4A-4B.

FIGS. 2A-2D illustrate a temperature control assembly according to some embodiments of the present disclosure. FIG. 2A illustrates the temperature control assembly in an assembled view according to some embodiments of the present disclosure. FIG. 2B shows the temperature control assembly in an exploded view according to some embodiments of the present disclosure. FIG. 2C illustrates an alternative temperature control assembly in an assembled view according to some embodiments of the present disclosure. FIG. 2D illustrates another alternative temperature control assembly in an assembled view according to some embodiments of the present disclosure. For purposes of illustration and not limitation, temperature control assembly 200 is shown with several elements. In other embodiments, the temperature control assembly 200 may include the same, different, fewer, or additional elements. For purposes of illustration and not limitation, temperature control assembly 200 is shown with relative positional relationships, e.g., top, bottom, front, and end.
It may be noted that assigning other positional relationships to the temperature control assembly 200 and the elements of the temperature control assembly 200 is within the scope of this disclosure.

Thermoelectric components (TECs) (also known as "thermoelectric coolers") can convert electrical energy into thermal energy and vice versa. A TEC can contain two surfaces. When a voltage potential is applied to the TEC, one surface heats up while the other, opposing surface simultaneously cools. The TEC may generate more thermal energy at one surface than the TEC removes at the opposite surface. For example, for every 1 degree Celsius of cooling at the first surface of the TEC, the opposite surface of the TEC may generate approximately 3 degrees Celsius of heating. Since the TEC generates a disproportionate amount of heat for each degree of cooling, it can be challenging to remove the larger amount of heat from one surface while cooling the electronic device with the opposing surface. The challenge is especially severe when testing electronic devices at extremely low temperatures, because the heat generated is a multiple of the heat removed. In some embodiments, the use of a single TEC is not sufficient to transfer enough thermal energy to meet the temperature test range of the electronic device. In other embodiments, the use of a single TEC may be sufficient to achieve the desired thermal test conditions. In some embodiments, one or more TECs may be implemented, and the number and location of the TECs may be determined, e.g., based on design considerations and desired thermal test conditions.

In some embodiments, the temperature control assembly 200 includes a thermoelectric component (TEC) 202. In embodiments, a TEC such as TEC 202 may act as a heat pump to deliver heat to or remove heat from a surface. The TEC 202 includes two surfaces 204, a top surface 204A and a bottom surface 204B.
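The asymmetry described above can be made concrete with a rough estimate. This is an illustrative sketch, not a formula from the disclosure: it treats the approximate 3:1 heating-to-cooling ratio mentioned in the text as a heat-flow ratio, whereas a real TEC's rejected heat equals the absorbed heat plus the electrical input power.

```python
def heat_to_reject_w(cooling_load_w, heat_ratio=3.0):
    """Rough estimate of the heat the hot surface must shed for a given
    cooling load, using the ~3:1 ratio mentioned in the text (assumption)."""
    return cooling_load_w * heat_ratio

# A 10 W cooling load at the device leaves roughly 30 W for the upper-level
# TEC and heat sink to remove -- motivating the larger second-level TEC.
print(heat_to_reject_w(10.0))  # 30.0
```

This is why the disclosure stacks a larger TEC 210 and a heat sink above TEC 202: the upper stage must move several times the heat that the lower stage pumps out of the device.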
A TEC such as TEC 202 is configured, based on the voltage potential applied to the TEC, either to simultaneously increase the temperature of the top surface (e.g., top surface 204A) and decrease the temperature of the bottom surface (e.g., bottom surface 204B), or to simultaneously decrease the temperature of the top surface and increase the temperature of the bottom surface. In some embodiments, TECs such as TEC 202 and TEC 210 contain a set of wires to couple a voltage potential to the TEC and deliver the necessary current to the TEC. The amount of heat removed from or delivered to a surface can be controlled by the surface area of the TEC and/or the power supplied to the TEC. For example, if the heat transfer capacity of TEC 210 is to be twice the heat transfer capacity of TEC 202, the surface area of TEC 210 may be twice the surface area of TEC 202, so that the heat transfer capacity of TEC 210 is at least twice that of TEC 202. Alternatively, TEC 210 may have a surface area similar to that of TEC 202 but be supplied with twice the power, and so have twice the heat transfer capacity.

In an embodiment, temperature control assembly 200 includes TEC 210. The TEC 210 may include two surfaces 212, such as a top surface 212A and a bottom surface 212B. In an embodiment, bottom surface 212B is coupled to top surface 208A of heat transfer assembly 206.

In some embodiments, the surface areas of TEC 210 and TEC 202 may have any ratio (e.g., 1:1, 2:1, 1:2, etc.). In some embodiments, TEC 210 has a larger surface area than TEC 202. In some embodiments, TEC 210 is sized to efficiently transfer heat away from TEC 202 under the desired thermal conditions.

In some embodiments, one or more of TEC 210 or TEC 202 may include one or more TECs.
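The two sizing levers described above (surface area and supplied power) can be expressed as a first-order scaling rule. This is an assumption for illustration only; the disclosure does not give a capacity formula, and real TEC capacity depends nonlinearly on current and temperature difference.

```python
def relative_capacity(area_ratio=1.0, power_ratio=1.0):
    """First-order assumption: a TEC's heat-transfer capacity scales
    linearly with its surface area and with the power supplied to it."""
    return area_ratio * power_ratio

# Two ways to give TEC 210 twice the capacity of TEC 202:
print(relative_capacity(area_ratio=2.0))   # double the surface area
print(relative_capacity(power_ratio=2.0))  # same area, double the power
```

Either lever yields the same nominal capacity increase under this linear model, which is why the text offers the two options interchangeably.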
For example, TEC 202 may include two other TECs distributed over thermally conductive layer 214 and coupled to heat transfer assembly 206. In some embodiments, a single level of TEC may be implemented. For example, in some embodiments, TEC 210 and heat transfer assembly 206 are not implemented, and TEC 202 may be coupled to surface 222B of heat sink 220. TEC 202 may include any number of TECs at a particular level.

For purposes of illustration and not limitation, TEC 202 and TEC 210 are shown with specific shapes. In some embodiments, one or more of TEC 210 or TEC 202 may be of any shape or size, such as circular TECs, rectangular TECs, square TECs, and the like. In some embodiments, the shape of one or more of the TECs may be selected based on the shape of the surface of the electronic device 250 or the shape of the electronic system 252. Electronic devices 250A-250E are generally referred to herein as "electronic devices 250". For example, if the electronic system 252 is rectangular in shape with multiple electronic devices aligned in a row, a rectangular TEC (at least for the TEC 202) shaped to couple to the electronic devices 250 of the electronic system 252 can help efficiently transfer thermal energy to and from the electronic devices 250. It may be noted that using TECs with different shapes is within the scope of this disclosure.

In some embodiments, temperature control assembly 200 includes heat transfer assembly 206. In some embodiments, the heat transfer assembly 206 efficiently conducts thermal energy from the surface of one TEC to the opposing surface of another TEC. For example, to cool the electronic device 250 under test, the bottom surface 204B of the TEC 202 removes thermal energy (e.g., heat) from the top surface of the electronic device 250.
The top surface 204A of the TEC 202 simultaneously generates thermal energy, which is transferred to the bottom surface 212B of the TEC 210 via the heat transfer assembly 206. The TEC 210 may remove the received thermal energy at the bottom surface 212B of the TEC 210. The top surface 212A of the TEC 210 may generate thermal energy, which is transferred to the heat sink 220 and dissipated into the surrounding environment.

In some embodiments, the heat transfer assembly 206 is constructed or fabricated from a thermally conductive material. Thermally conductive materials include, but are not limited to, copper, aluminum, brass, or alloys of the foregoing. It may be noted that other thermally conductive materials may be used. It may also be noted that materials with a higher thermal conductivity (k) can transfer thermal energy between TEC 202 and TEC 210 more efficiently.

In some embodiments, the heat transfer assembly 206 includes at least two surfaces 208, including a top surface 208A and a bottom surface 208B. Bottom surface 208B of heat transfer assembly 206 is coupled to top surface 204A of TEC 202. The top surface 208A of the heat transfer assembly 206 is coupled to the bottom surface 212B of the TEC 210.

In some embodiments, heat transfer assembly 206 may be coupled to the surfaces of adjacent components using a thermal interface material, such as a thermally conductive adhesive, thermal grease, a phase change material, thermal tape, a thermal pad, thermal epoxy, etc. For example, thermal interface material may be disposed between the top surface 204A of the TEC 202 and the bottom surface 208B of the heat transfer assembly 206, and between the top surface 208A of the heat transfer assembly 206 and the bottom surface 212B of the TEC 210.
In some embodiments, the thermal interface material may have a thermal conductivity of at least 150 Watts per meter-Kelvin (W/mK).

In some embodiments, the top surface 208A and the bottom surface 208B of the heat transfer assembly 206 may have any number of shapes or sizes. In some embodiments, heat transfer assembly 206 is tapered such that top surface 208A and bottom surface 208B are aligned with the surfaces of the adjacent TECs (i.e., TEC 210 and TEC 202), respectively. In some embodiments, the top surface 208A and the bottom surface 208B of the heat transfer assembly 206 are sized to match or approximate the size of the surfaces of the respective TECs. In some embodiments, the top surface 208A of the heat transfer assembly 206 may be any shape. In some embodiments, top surface 208A of heat transfer assembly 206 may be larger and/or smaller than surface 212B of TEC 210. For example, the top surface 208A of the heat transfer assembly 206 may be larger than the bottom surface 212B of the TEC 210 such that the edges of the TEC 210 do not extend over any of the edges of the top surface 208A of the heat transfer assembly 206. In another example, the top surface 208A of the heat transfer assembly 206 may be larger than the bottom surface 212B of the TEC 210 along one axis, but smaller than the bottom surface 212B of the TEC 210 along the other axis. For example, the TEC 210 may be longer but narrower than the top surface 208A of the heat transfer assembly 206. In some embodiments, the bottom surface 208B of the heat transfer assembly 206 can be any shape. In some embodiments, bottom surface 208B of heat transfer assembly 206 may be larger and/or smaller than surface 204A of TEC 202.

In some embodiments, the heat transfer assembly 206 may be stepped, as shown. In other embodiments, the heat transfer assembly 206 may have a different shape, such as a truncated pyramid that tapers from the top surface to the bottom surface.
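The conductivity floor cited above matters because each thermal interface layer adds a temperature drop in series with the TEC stack. The following sketch applies the standard conduction relation ΔT = Q·t/(k·A); the layer thickness and contact area are illustrative assumptions, not values from the disclosure.

```python
def tim_temperature_drop_c(heat_flow_w, thickness_m, area_m2, k_w_per_mk=150.0):
    """Temperature drop across a thermal-interface-material layer,
    Delta_T = Q * t / (k * A). Defaults to the 150 W/mK conductivity
    floor cited in the text; geometry values are assumptions."""
    resistance_k_per_w = thickness_m / (k_w_per_mk * area_m2)
    return heat_flow_w * resistance_k_per_w

# 30 W through a 0.1 mm TIM layer over a 40 mm x 40 mm TEC face:
print(round(tim_temperature_drop_c(30.0, 0.1e-3, 0.04 * 0.04), 4))
```

With a high-conductivity material the per-layer drop stays in the hundredths of a degree, so the stacked TEC/heat-transfer-assembly interfaces do not significantly erode the achievable temperature range.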
In some embodiments, the shape of the heat transfer assembly 206 may be based in part on the shape of the TECs that contact the surfaces of the heat transfer assembly 206. For example, in embodiments using circular TECs, the shape of the heat transfer assembly 206 may be conical or cylindrical, with the bottom and top surfaces of the heat transfer assembly 206 being circular. In some embodiments, the thickness of heat transfer assembly 206 (between surfaces 208A and 208B) is greater than or equal to the thickness of one of TEC 202 or TEC 210.

In some embodiments, the temperature control assembly 200 may include a thermally conductive layer 214. The thermally conductive layer 214 may include a top surface 216A and a bottom surface 216B. In an embodiment, the top surface 216A of the thermally conductive layer 214 is coupled to the bottom surface 204B of the TEC 202. In some embodiments, the thermally conductive layer 214 may transfer thermal energy from the bottom surface 204B of the TEC 202 to the bottom surface 216B of the thermally conductive layer 214.

In some embodiments, bottom surface 216B of thermally conductive layer 214 includes planar region 260A and planar region 260B (commonly referred to herein as "planar region 260"). Planar region 260 may be positioned over one or more electronic devices 250 of electronic system 252 (e.g., electronic devices 250A, 250B, 250C, and 250E) to transfer thermal energy to the underlying electronic devices 250. For example, the bottom surface 216B of the thermally conductive layer 214 may be positioned to couple with the top surface of the package of an electronic device 250 of the electronic system 252 such that the package temperature of the electronic device 250, or the on-chip temperature of the electronic device, is controlled to a desired temperature.

In some embodiments, bottom surface 216B of thermally conductive layer 214 may include one or more notched regions, such as notched region 262.
In some embodiments, the notched region 262 may be positioned such that one or more electronic devices of the electronic system 252 that are directly below the notched region 262 (e.g., electronic device 250D) are not coupled to the thermally conductive layer 214. Notched region 262 may allow an air gap between the underlying electronic device and thermally conductive layer 214 so that thermal energy is not transferred between thermally conductive layer 214 and the electronic device below notched region 262.

In some embodiments, planar region 260 intersects notched region 262. Notched region 262 may include a void in thermally conductive layer 214 that extends in a vertical direction from planar region 260 toward top surface 216A of thermally conductive layer 214. In some embodiments, the notched region 262 does not vertically intersect the thermally conductive layer 214 from the bottom surface 216B through the top surface 216A (e.g., it does not divide the thermally conductive layer 214 into two portions). The notched region 262 may leave a portion of the thermally conductive layer 214 over the notched region such that the thermally conductive layer 214 remains a continuous block that efficiently conducts thermal energy.

In some embodiments, the thermally conductive layer 214 includes four sides, e.g., a front side, a rear side, a first end, and a second end. The notched region 262 may extend from the front side to the rear side of the thermally conductive layer 214, as shown. In some embodiments, planar area 260 of bottom surface 216B of thermally conductive layer 214 intersects notched area 262 to form planar area 260A and planar area 260B. Planar regions 260A and 260B may be oriented parallel to a plane and positioned the same vertical distance from the plane.
In some embodiments, the notched region 262 of the thermally conductive layer 214 is positioned over at least one electronic device 250 of the electronic system 252 to isolate the corresponding electronic device 250 (e.g., electronic device 250D) from thermal energy transfer.

In some embodiments, the notched region 262 may be located anywhere along the bottom surface 216B of the thermally conductive layer 214. For example, the notched region 262 may be at an end of the thermally conductive layer 214. In some embodiments, the notched region 262 can be of any size and dimensions, and located anywhere relative to the thermally conductive layer 214. In some embodiments, one or more of the size, dimensions, and location of the notched region 262 may be determined based on the location and size of the underlying electronic devices that do not require thermal energy transfer.

In an embodiment, the thermally conductive layer 214 may be coupled to the TEC 202 using a thermal interface material, as described above. In an embodiment, the thermally conductive layer 214 is composed of or fabricated from a thermally conductive material, as described above.

In embodiments, the top surface 216A of the thermally conductive layer 214 may be approximately the same size and shape as the bottom surface 204B of the TEC 202. In some embodiments, the size and shape of the surfaces 216 of the thermally conductive layer 214 may be based on the size and shape of the top surface (e.g., the contact surface) of the electronic device 250. For example, the thermally conductive layer 214 can be shaped such that the bottom surface 216B is coupled to most, if not all, of the top surface of the electronic device 250 (and, in some cases, more than just the top surface). In some embodiments, the top surface 216A of the thermally conductive layer 214 is approximately the same size as, or larger than, the bottom surface 204B of the TEC 202.
In some embodiments, the bottom surface 216B of the thermally conductive layer 214 may have the same size and shape as the top surface 216A of the thermally conductive layer 214. For example, the thermally conductive layer 214 may be a cube or a rectangular block. In some embodiments, the thermally conductive layer 214 may taper in one direction or the other, e.g., from the top surface 216A to the bottom surface 216B, or vice versa. It may be noted that the shape of the thermally conductive layer 214 may be based, at least in part, on the shape of the TEC 202, the electronic device 250, or the electronic system 252.

In some embodiments, the thermally conductive layer 214 can be an optional element, and the TEC 202 can be coupled with the electronic device 250 to transfer thermal energy to and from the electronic device 250.

In some embodiments, temperature control assembly 200 may include one or more thermal sensing devices 218. In some embodiments, a thermal sensing device 218 may be disposed or embedded within thermally conductive layer 214. The thermal sensing device 218 may be located within the thermally conductive layer 214 such that the temperature sensing surface of the thermal sensing device 218 is very close to the bottom surface 216B of the thermally conductive layer 214. In some embodiments, multiple thermal sensing devices 218 may be distributed across thermally conductive layer 214. Thermal sensing device 218 may be used to measure the temperature applied to the package of electronic device 250, which, due to the low thermal resistance of thermally conductive layer 214, effectively represents the temperature at the package of electronic device 250. In embodiments, thermal sensing device 218 may include any temperature sensing device, such as a thermocouple, a capacitive temperature sensing device, or a resistive temperature sensing device.
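A thermal sensing device embedded near the bottom surface naturally supports closed-loop control of the TEC drive. The disclosure does not specify a control algorithm, so the following is a hypothetical proportional-only sketch: the gain, the drive limits, and the sign convention (positive output = cooling) are all illustrative assumptions.

```python
def tec_drive_fraction(measured_c, setpoint_c, gain=0.05, limit=1.0):
    """Proportional-only sketch of closing the loop between a thermal
    sensing device in the conductive layer and the TEC drive.
    Positive output = cooling drive, negative = heating drive.
    Gain and limits are illustrative assumptions, not disclosure values."""
    error = measured_c - setpoint_c          # positive means too hot
    drive = gain * error
    return max(-limit, min(limit, drive))    # clamp to the drive range

print(tec_drive_fraction(35.0, 25.0))    # device too hot: cool at 50% drive
print(tec_drive_fraction(-30.0, -20.0))  # device too cold: heat at 50% drive
```

A practical controller would typically add integral action and rate limiting, and could blend the package-level reading from sensing device 218 with the on-chip temperature sensor mentioned below.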
In an embodiment, the thermal sensing devices 218 may include a set of wires to couple each thermal sensing device 218 to a measurement unit that measures the output of the thermal sensing devices 218.

In some embodiments, the bottom surface 216B of the thermally conductive layer 214 may include a thermal interface material disposed between the thermally conductive layer 214 and the underlying electronic devices 250 of the electronic system 252. In some embodiments, the thermal interface material can be one or more of flexible, thermally conductive, compressible, electrically insulating, and reusable, and can return to its original shape (e.g., properties) after compression. An example of an interface material having one or more of the aforementioned properties is a thermal pad. In some embodiments, the vertical heights of the electronic devices 250 of the electronic system 252 may vary. To couple to electronic devices 250 having different heights, thermal pads may be positioned between thermally conductive layer 214 and electronic devices 250 to compensate for the different heights and allow for efficient transfer of thermal energy between thermally conductive layer 214 and electronic devices 250. A thermal pad can be compressed between the thermally conductive layer 214 and the package of an electronic device 250 so that physical contact is formed between the thermal pad and the underlying electronic device 250, which enables thermal coupling between the thermally conductive layer 214 and electronic devices 250 having different heights.

In some embodiments, thermal pad 264 may be applied to at least planar region 260 of thermally conductive layer 214. For example, thermal pad 264 includes thermal pad 264A and thermal pad 264B corresponding to planar area 260A and planar area 260B, respectively. In some embodiments, thermal pad 264 may include a top surface 266A and a bottom surface 266B.
The top surface 266A of the thermal pad 264 may be coupled (e.g., bonded) to at least the planar region 260 of the bottom surface 216B of the thermally conductive layer 214.

In some embodiments, electronic system 252 may have one or more electronic devices, as shown by electronic devices 250. Electronic system 252 is shown as a solid state drive in an M.2 form factor. In other embodiments, electronic system 252 may be any type of electronic system and may be of any size. In some embodiments, the electronic devices are mounted to an electronic circuit board. In some embodiments, one or more of the electronic devices 250 and the electronic circuit board are included in the electronic system 252. In some embodiments, one or more of the electronic devices 250 may include one or more temperature sensing devices, such as an on-chip temperature sensing device. The on-chip temperature may be different from the package temperature of the electronic device 250 due to the thermal resistance of the package. Temperature measurements from the on-chip temperature sensing device, the thermal sensing device 218 of the thermally conductive layer 214, or both may be used to perform thermal testing on the electronic device 250.

In some embodiments, the temperature control assembly 200 may include a heat sink 220. Heat sink 220 may include a top surface 222A and a bottom surface 222B. In embodiments, top surface 222A may comprise a larger surface area than bottom surface 222B to help facilitate thermal energy transfer from heat sink 220 to the adjacent medium. In an embodiment, the bottom surface 222B of the heat sink 220 is coupled to the top surface 212A of the TEC 210 to transfer thermal energy from the TEC 210 to the heat sink 220. In some embodiments, the heat sink 220 and the TEC 210 are coupled using a thermal interface material, as described above.
In an embodiment, the heat sink 220 is constructed of a thermally conductive material, as described above.

In some embodiments, the heat sink 220 is a passive mechanical device. In an embodiment, the top surface 222A of the heat sink 220 includes a plurality of channels, and a plurality of fins disposed on opposite sides of the channels. In other embodiments, the heat sink 220 may be another type of heat sink, such as a liquid-cooled heat sink or the like.

In some embodiments, the heat sink 220 includes one or more attachment features 224. In embodiments, the attachment features may be used to secure the temperature control assembly 200 to the thermal chamber. In some embodiments, the attachment feature 224 is configured to receive an adjustable coupling member 226 that can adjustably couple the temperature control assembly 200 to the thermal chamber. In some embodiments, the adjustable coupling member may include a spring element that allows adjustment of the vertical position of the temperature control assembly 200 mounted to the thermal chamber.

In some embodiments, temperature control assembly 200 may include a fan, such as electric fan 228. In an embodiment, the electric fan 228 is positioned over the top surface 222A of the heat sink 220 and is used to transfer thermal energy from the heat sink 220 to an adjacent medium, such as the local gaseous medium of the temperature control assembly 200. Electric fan 228 may include a set of wires coupled to a voltage potential.

For purposes of illustration and not limitation, a single heat transfer assembly 206 is shown. In other embodiments, multiple heat transfer assemblies 206 may be used. For example, an additional heat transfer assembly may be stacked on top surface 212A of TEC 210. The additional heat transfer assembly may be larger than heat transfer assembly 206.
For example, the bottom surface of the additional heat transfer assembly may be approximately the same size as the top surface 212A of the TEC 210. The additional heat transfer assembly may be tapered such that its top surface is larger than its bottom surface. In embodiments, the top surface of the additional heat transfer assembly may be coupled to a TEC larger than TEC 210 (e.g., having a larger surface area). In other embodiments, any number of additional heat transfer assemblies or TECs may be implemented. In FIG. 2C, temperature control assembly 270 uses a single level of TEC, such as TEC 272. The thermally conductive layer 274 does not contain any notches. Temperature control assembly 270 does not implement a second-level TEC or heat transfer assembly. In FIG. 2D, temperature control assembly 280 contains two TECs, e.g., TEC 282A and TEC 282B, on a single level. It may be noted that elements of temperature control assemblies 200, 270 and 280 may be mixed, matched, removed or added to form different temperature control assemblies. FIGS. 3A-3B illustrate thermal chambers according to embodiments of the present disclosure. FIG. 3A illustrates a thermal chamber in a closed position according to an embodiment of the present disclosure. FIG. 3B illustrates a thermal chamber in an open position according to an embodiment of the present disclosure. For purposes of illustration and not limitation, thermal chamber 300 is depicted in the relative positional relationship shown by three-dimensional (3D) axis 302. It is noted that it is within the scope of this disclosure to assign other relative positional relationships to thermal chamber 300. The 3D axis 302 includes an X-axis, a Y-axis, and a Z-axis. As shown, the X-axis points in the forward and rearward directions relative to thermal chamber 300. The Y-axis points in the direction of the two ends with respect to the thermal chamber 300.
The Y-axis of the 3D axis 302 corresponds to the horizontal axis 304. The Z-axis points in the direction of the top and bottom relative to the thermal chamber 300. It may be noted that thermal chamber 300 may include one or more hinges, such as hinges 338A and 338B, that allow thermal chamber 300 to transition from an open position to a closed position and vice versa. FIG. 3A shows thermal chamber 300 in a closed position. FIG. 3B shows thermal chamber 300 in an open position. The following describes the positional relationship of the various sides of the thermal chamber 300 in the closed position. It will be appreciated that some positional relationships of one or more of the plurality of sides may be changed by transitioning the thermal chamber 300 to another position shown in FIG. 3B (e.g., an open position). In an embodiment, thermal chamber 300 includes multiple sides, e.g., multiple rigid sides. The plurality of sides includes a rear side 308 oriented parallel to the horizontal axis 304, a front side 306 oriented parallel to the horizontal axis 304, an end 310A oriented perpendicular to the horizontal axis 304 (e.g., along the X-axis), and an end 310B oriented perpendicular to the horizontal axis 304 and positioned opposite end 310A. The sides of thermal chamber 300 also include a top side 312 oriented perpendicular to rear side 308, front side 306, end 310A, and end 310B. The sides of thermal chamber 300 also include a bottom side 314 oriented perpendicular to rear side 308, front side 306, end 310A, and end 310B. In an embodiment, in the closed position, the plurality of sides form a chamber 316 enclosed by the plurality of sides. In some embodiments, thermal chamber 300 is coupled to frame 348. For example, the bottom side 314 of the thermal chamber 300 may be secured to the frame 348 using one or more fasteners. In some embodiments, frame 348 may be used with a rack, as shown in FIG. 1.
Although a single thermal chamber 300 is shown secured to frame 348, in some embodiments one or more thermal chambers may be secured to a particular frame. In some embodiments, the top side 312 includes one or more ports 318 oriented along the first direction of the horizontal axis 304. It may be further noted that thermal chamber 300 shows a single port 318 aligned along horizontal axis 304 for purposes of illustration and not limitation. In other embodiments, thermal chamber 300 may include any number of ports 318 located anywhere relative to thermal chamber 300. In some embodiments, port 318 includes an open area (also referred to herein as "top side open area 320") that exposes chamber 316 within thermal chamber 300. In an embodiment, port 318 is configured to receive a temperature control assembly, such as temperature control assembly 200 described with respect to FIGS. 2A-2D. The temperature control assembly 200 may be positioned relative to the thermal chamber 300 such that the temperature control assembly 200 transfers thermal energy to and from electronic devices exposed via the chamber 316. In some embodiments, one or more of the plurality of sides are composed of a material that is one or more of a thermal insulator, a non-conductive material, or an antistatic material. In some embodiments, one or more of the plurality of sides may be composed of a phenolic material. In some embodiments, one or more of the plurality of sides are composed of a conductive material. In some embodiments, a thermal chamber 300 constructed of a conductive material may be grounded to a ground potential to help avoid electrostatic discharge damage at the electronic device under test. In some embodiments, port 318 includes at least one pair of opposing sides, such as opposing side 322A and opposing side 322B of port 318 (generally referred to herein as "opposing sides 322"). In some embodiments, port 318 may be associated with one or more securing features.
The securing features allow the temperature control assembly 200 to be secured at the top side 312 of the thermal chamber 300 and aligned to access electronic devices exposed in the chamber 316 via the top side open area 320 of the thermal chamber 300. For example, securing feature 324A is positioned adjacent to opposing side 322A of port 318. Securing feature 324B is positioned adjacent to opposing side 322B of port 318. Securing features 324A and 324B (generally referred to herein as "securing features 324") are associated with port 318 and allow a respective temperature control assembly 200 to be secured at port 318. In some embodiments, each securing feature 324 includes a hole through the top side 312 of the thermal chamber 300. In an embodiment, the securing features 324 are each configured to receive an adjustable coupling member to adjustably couple the temperature control assembly 200 to the thermal chamber 300 at the port 318. The number, shape and location of the securing features are provided for purposes of illustration and not limitation. In other embodiments, the number, shape or location of the securing features may vary. Turning to FIG. 3B, in an embodiment, thermal chamber 300 includes gas port 326. Gas port 326 may be configured to allow gas to enter chamber 316 of thermal chamber 300 from an external gas source. Gas port 326 connects the outer surface of thermal chamber 300 to the chamber of thermal chamber 300. In some embodiments, the gas port 326 includes a hole, such as a circular hole, at one of the sides. For example, gas port 326 may be located at front side 306, rear side 308, end 310A, end 310B, top side 312, or bottom side 314 of thermal chamber 300. In the illustrative example, gas port 326 is located at rear side 308 of thermal chamber 300. In some embodiments, the gas port 326 is fitted with a gas fitting 328 coupled to the gas port 326.
In some embodiments, a portion of gas fitting 328 may fit within gas port 326 and another portion of gas fitting 328 may extend outside of thermal chamber 300. In some embodiments, the portion of gas fitting 328 that extends outside of thermal chamber 300 may be coupled to a gas hose that moves gas from a gas source into the chamber of thermal chamber 300. In some embodiments, thermal chamber 300 includes one or more adjustable brackets, such as adjustable bracket 344A, adjustable bracket 344B, adjustable bracket 344C, adjustable bracket 344D, adjustable bracket 344E, and adjustable bracket 344F (generally referred to herein as "adjustable brackets 344"). In some embodiments, the adjustable brackets 344 can be coupled (e.g., mounted) to the bottom side 314 of the thermal chamber and positioned perpendicular to the bottom side 314 of the thermal chamber 300. In some embodiments, each of the adjustable brackets 344 includes two ends. The first end is coupled to the bottom side 314 of the thermal chamber 300 and the second end is coupled to the electronic circuit board 332 located above the bottom side 314 of the thermal chamber 300. Adjustable brackets 344 form a vertical distance (e.g., a space) between the bottom side 314 of the thermal chamber 300 and the electronic circuit board 332. For example, the adjustable bracket 344A includes an end 346A mounted to the bottom side 314 of the thermal chamber 300, and an end 346B extending above and perpendicular to the bottom side 314. In some embodiments, one or more of the adjustable brackets 344 may include an adjustable feature, such as the adjustable feature 330. The vertical position of the adjustable feature 330 can be adjusted. For example, the adjustable feature 330 may include one or more nuts, and the adjustable bracket 344A may include a threaded bolt.
Adjustable feature 330 may be rotated in a counterclockwise direction to move upward, or in a clockwise direction to move downward toward the bottom side 314 of thermal chamber 300. In some embodiments, the electronic circuit board 332 may be mounted to the adjustable brackets 344 over one or more adjustable features of the adjustable brackets 344. For example, the electronic circuit board 332 may rest on the adjustable features. The adjustable features can be raised or lowered so that the electronic circuit board 332 can be raised or lowered a similar distance. In some embodiments, the electronic circuit board 332 may be in electrical contact with the electronic system 252. In some embodiments, electronic circuit board 332 is not implemented, and electronic system 252 may be coupled within thermal chamber 300 in a similar manner as described with respect to electronic circuit board 332. In some embodiments, electronic circuit board 332 includes four sides, a top surface, and a bottom surface, all contained within chamber 316 of thermal chamber 300 in the closed position. The bottom surface of the electronic circuit board 332 may face the bottom side 314 of the thermal chamber 300. In some embodiments, electrical connector 336 is coupled to electronic circuit board 332. The electrical connector 336 is configured to couple the electronic system 252 with the electronic circuit board 332. In some embodiments, electrical connector 336 is above electronic circuit board 332. When electronic system 252 is plugged into electrical connector 336, electronic system 252 is positioned over electronic circuit board 332 such that a vertical space exists between the top surface of electronic circuit board 332 and the bottom surface of electronic system 252. In some embodiments, support features 334 may be located between the top surface of electronic circuit board 332 and the bottom surface of electronic system 252.
In some embodiments, the support features comprise a non-conductive material, such as rubber. In some embodiments, the support features 334 support the electronic system 252 at a fixed location above the electronic circuit board 332. For purposes of illustration and not limitation, support feature 334 is shown as a pad positioned below electronic system 252. In other embodiments, the support features may include one or more adjustable brackets mounted to the electronic circuit board 332. In some embodiments, electrical connector 342 is coupled to electronic circuit board 332. In some embodiments, electrical connector 342 is configured to couple electronic system 252 to an external electronic system (e.g., server 131 of FIG. 1) external to thermal chamber 300. In some embodiments, at least one of the multiple sides of thermal chamber 300 may include an electrical connector access port 340. For example, electrical connector access port 340 is shown at end 310B of thermal chamber 300. In some embodiments, the electrical connector access port 340 is configured to allow a first end of a cable to be coupled to the electrical connector 342 and a second end of the cable to extend through the electrical connector access port 340 to the outside of thermal chamber 300. For example, a ribbon cable can be coupled to electrical connector 342. The ribbon cable may extend outside of thermal chamber 300 and couple electronic system 252 (and electronic circuit board 332) to a server, such as server 131 of FIG. 1. The server may send and receive signals to and from the electronic system 252 via the ribbon cable. In some embodiments, one or more humidity sensors 350 may be located within chamber 316 of thermal chamber 300. Humidity sensor 350 may sense the humidity level within chamber 316. For purposes of illustration and not limitation, humidity sensor 350 is shown coupled to electronic circuit board 332.
In other embodiments, humidity sensor 350 may be located anywhere within thermal chamber 300. In some embodiments, thermal chamber 300 may include one or more hinges, such as hinge 338A and hinge 338B (generally referred to herein as "hinges 338"). The one or more hinges may be coupled to any one or more of the plurality of sides of the thermal chamber 300. For example, hinge 338A is coupled to end 310B and front side 306. Hinges 338 are configured to allow thermal chamber 300 to transition between an open position and a closed position, and vice versa. The hinges 338 are configured to rotate the top side 312 of the thermal chamber 300 about an axis of rotation. The axis of rotation may be parallel or perpendicular to the horizontal axis 304. FIGS. 4A-4B illustrate a system for testing electronic devices of an electronic system under various thermal conditions in accordance with embodiments of the present disclosure. FIG. 4A shows thermal testing system 400 in an enlarged view according to an embodiment of the present disclosure. FIG. 4B illustrates thermal testing system 400 in a collapsed view according to an embodiment of the present disclosure. It may be noted that a temperature control assembly such as temperature control assembly 200 of FIGS. 2A-2D may be used with or be part of system 400. It may also be noted that a thermal chamber, such as thermal chamber 300 of FIGS. 3A-3B, may be used with or be part of system 400. Elements of the temperature control assembly 200 of FIGS. 2A-2D and the thermal chamber 300 of FIGS. 3A-3B are used to help illustrate aspects of FIGS. 4A-4B. System 400 (also referred to herein as "thermal testing system 400") can be used to test one or more electronic devices of one or more electronic systems under various thermal conditions as described herein. In some embodiments, system 400 may include electronic circuit board 332. Electronic circuit board 332 may be coupled to one or more electronic systems 252 under test.
In some embodiments, electronic circuit board 332 may facilitate the transfer of electrical signals to and from one or more electronic devices 250, as well as to and from any additional elements or systems coupled to electronic circuit board 332. In embodiments, electronic circuit board 332 may facilitate power transfer to and from one or more electronic devices 250 and to and from any additional elements coupled to electronic circuit board 332. For example, one or more humidity sensors can be coupled to electronic circuit board 332, and electronic circuit board 332 can supply power to the one or more humidity sensors. In some embodiments, the electronic circuit board 332 may be coupled to an external system, such as a server. The external system, via electronic circuit board 332, may be used to transmit instructions to perform read, write, or erase operations at electronic device 250 of electronic system 252 during thermal testing. Additionally, the external system may be used to retrieve information or test data from the electronic device 250 during thermal testing. In some embodiments, thermal chamber 300 may include one or more ports 318. The one or more ports 318 may expose the chamber within thermal chamber 300. Electronic system 252 is coupled to electrical connectors of electronic circuit board 332. Electronic devices 250 of electronic system 252 are accessible from port 318. In some embodiments, temperature control assembly 200 is coupled at the top side of thermal chamber 300. In some embodiments, port 318 of thermal chamber 300 is configured to receive temperature control assembly 200. In some embodiments, the bottom portion of the temperature control assembly 200 extends within the chamber of the thermal chamber 300 and is coupled with at least some of the electronic devices 250 of the electronic system 252 to transfer thermal energy to and from the respective electronic devices 250.
The top portion of temperature control assembly 200 extends over the top side of thermal chamber 300. For example, a top portion of temperature control assembly 200, such as a heat sink, may extend over thermal chamber 300. The bottom portion of the temperature control assembly 200, such as the thermally conductive layer 214 and thermal pad, extends within the thermal chamber. In some embodiments, the thermal pad physically contacts the top surface of the electronic device 250. The temperature control assembly 200 can transfer thermal energy from and to the electronic device 250. For example, the temperature control assembly 200 can vary the temperature of the electronic device 250 (e.g., the package temperature or on-die temperature) within a temperature range of -40 degrees Celsius to 140 degrees Celsius. In some embodiments, temperature control assembly 200 may be coupled to thermal chamber 300. In some embodiments, thermal chamber 300 may be used to hold temperature control assembly 200 in place. In some embodiments, thermal chamber 300 may align temperature control assembly 200 with electronic devices 250 of electronic system 252 such that a bottom portion of temperature control assembly 200 may be coupled to the respective electronic devices 250. In embodiments where thermal chamber 300 includes multiple ports holding multiple temperature control assemblies 200, the adjustable coupling members of thermal chamber 300 may allow each of the temperature control assemblies 200 to apply a similar, equal, or constant pressure to each of the electronic devices of the corresponding electronic system. Multiple temperature control assemblies 200 may simultaneously apply different temperatures to electronic devices of different electronic systems within thermal chamber 300. In some embodiments, temperature control assembly 200 may include attachment features, such as attachment features 224 of FIGS. 2A-2D.
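The temperature sweep described above (e.g., varying a device package from -40 to 140 degrees Celsius) can be pictured as a feedback loop in which the TEC drive is set from the thermal-sensor reading. The following is a minimal proportional-control sketch; the gain and normalized drive limit are illustrative assumptions, and a real assembly would also manage the heat sink and fan:

```python
def tec_drive(setpoint_c: float, measured_c: float, gain: float = 0.1, limit: float = 1.0) -> float:
    """Proportional TEC drive sketch: positive output heats the device,
    negative output cools it. The output is clamped to the TEC's
    normalized drive limit."""
    error = setpoint_c - measured_c
    return max(-limit, min(limit, gain * error))

# Device at 25 C with a -40 C target: the drive saturates at full cooling.
print(tec_drive(-40.0, 25.0))  # -1.0
# Near the setpoint, the drive is proportional to the remaining error.
print(tec_drive(100.0, 98.0))  # 0.2
```

Because each temperature control assembly runs its own loop against its own sensor, different assemblies on the same thermal chamber can hold different setpoints simultaneously, as described above.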
In an embodiment, thermal chamber 300 may include securing features, such as securing features 324 of FIGS. 3A-3B. In some embodiments, an adjustable coupling member may be coupled to the attachment feature of the temperature control assembly 200 and the securing feature 324 of the thermal chamber 300 to adjustably couple the temperature control assembly 200 to the thermal chamber 300. In some embodiments, the attachment features and securing features are configured to receive adjustable coupling members that can adjustably couple the temperature control assembly 200 to the thermal chamber 300. In some embodiments, the adjustable coupling member may include a spring element that allows adjustment of the vertical position of the temperature control assembly 200 mounted to the thermal chamber 300. In some embodiments, a positive pressure environment within the chamber of thermal chamber 300 is created using gas injected into the chamber of the thermal chamber. In some embodiments, instead of hermetically sealing thermal chamber 300, thermal chamber 300 (e.g., the chamber within thermal chamber 300) may be maintained in a positive pressure environment such that the only gas entering thermal chamber 300 comes from the gas port, and the only gas that escapes the thermal chamber 300 is the gas supplied via the gas port. In some embodiments, the thermal chamber 300 may include a gas port to receive a gas, such as oil-free air (OFA), nitrogen, or clean dry air (CDA). In some embodiments, the gas may have a dew point below the expected cold temperature range in the test. In some embodiments, the gas may have less than 1 part per million (ppm) carbon dioxide and less than 0.003 ppm hydrocarbon vapor. Thermal chamber 300 may be used to control the environment near the electronic device 250 under test. In an embodiment, the gas provided to the thermal chamber 300 has a dew point below the lowest temperature at which the electronic device 250 will be tested.
This gas is provided to the thermal chamber 300 so that condensation, such as moisture or ice, does not form at the electronic device 250 during testing. For example, the packaging of the electronic device under test can be controlled within a temperature range of -25 degrees Celsius to 140 degrees Celsius. The dew point of the gas may be below -25 degrees Celsius (e.g., -90 degrees Celsius). When the temperature control assembly 200 applies a temperature of -25 degrees Celsius to the electronic device under test, condensation does not form at the electronic device because of the low dew point of the gas provided within the chamber of the thermal chamber 300. In an embodiment, instead of using hot or cold gas to alter the temperature of thermal chamber 300, temperature control assembly 200 may maintain a local temperature environment for the electronic device 250 under test. In embodiments where thermal chamber 300 includes a plurality of temperature control assemblies 200 coupled to a plurality of electronic systems 252, each of the temperature control assemblies 200 may be independently controlled to maintain a different (or the same) temperature at the electronic device of the corresponding electronic system under test, without the use of hot or cold gas. For example, a first electronic device of a first electronic system under test may be coupled to a first temperature control assembly. A second electronic device of a second electronic system under test may contact a second temperature control assembly. Both the first temperature control assembly and the second temperature control assembly may be coupled to a single thermal chamber.
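The condensation constraint above reduces to a simple acceptance check: a purge gas is suitable for a test only if its dew point lies below the coldest setpoint the temperature control assembly will apply. A minimal sketch using the example values from this description:

```python
def gas_acceptable(gas_dew_point_c: float, coldest_setpoint_c: float) -> bool:
    """A purge gas avoids condensation at the device under test only if its
    dew point is below the coldest temperature applied during the test."""
    return gas_dew_point_c < coldest_setpoint_c

# A -90 C dew point gas is fine for a -25 C cold test; a -10 C dew point is not.
print(gas_acceptable(-90.0, -25.0))  # True
print(gas_acceptable(-10.0, -25.0))  # False
```

In practice, a margin between the dew point and the coldest setpoint would typically also be required; the strict inequality here is the minimal form of the check.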
The first temperature control assembly may maintain the temperature at the first electronic device at 100 degrees Celsius, and the second temperature control assembly may maintain the temperature at the second electronic device at 0 degrees Celsius. FIG. 5 illustrates an example machine of a computer system 500 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, computer system 500 may correspond to a host or server system that includes, is coupled to, or utilizes a test platform (e.g., to perform operations corresponding to resource allocator component 130 of FIG. 1). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular phone, a network appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system 500 includes a processing device 502, main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.)
, static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 518, which communicate with each other via a bus 530. Processing device 502 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processing device 502 may also be one or more special-purpose processing devices, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein. The computer system 500 may further include a network interface device 508 to communicate over a network 520. The data storage system 518 may include a machine-readable storage medium 524 (also known as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methodologies or functions described herein. The instructions 526 may also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500, the main memory 504 and the processing device 502 also constituting machine-readable storage media. The machine-readable storage medium 524, data storage system 518, and/or main memory 504 may correspond to a memory subsystem. In one embodiment, the instructions 526 include instructions to implement functionality corresponding to a resource allocator component (e.g., resource allocator component 130 of FIG. 1).
While the machine-readable storage medium 524 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
The present disclosure may refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language.
It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc. The words "example" and/or "exemplary" are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as an "example" or "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise, or clear from context, "X includes A or B" is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then "X includes A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims may generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.
Furthermore, use of the terms "embodiment" or "one embodiment" or "example" or "one example" and the like throughout is not intended to mean the same embodiment or example unless described as such. One or more of the embodiments or examples described herein may be combined in a particular embodiment or example. The terms "first," "second," "third," "fourth," etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation. In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications may be made to the present disclosure without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
An apparatus and methods for scheduling and executing commands issued by a first processor, such as a CPU, on a second processor, such as a GPU, are disclosed. In one embodiment, a method of executing processes on a graphics processing unit (GPU) includes monitoring one or more buffers in a memory, selecting a first subset from the one or more buffers for execution on the GPU based on a workload profile of the GPU, and executing the first subset on the GPU. The GPU may also receive a priority ordering of the one or more buffers, where the selecting is further based on the received priority ordering. By performing prioritization and scheduling of commands in the GPU, system performance is enhanced. |
1. A method of processing work items on a graphics processing unit (GPU), comprising: changing a priority order of a plurality of buffers written by a central processing unit (CPU) in a memory so as to reorder an execution order of the plurality of buffers; selecting a first subset of the plurality of buffers based on the changed priority order so as to execute work items of the first subset of the plurality of buffers on the GPU, wherein the changing and the selecting are responsive to dynamically determining a workload profile of the GPU; and executing the work items of the first subset of the plurality of buffers on the GPU based on the reordered execution order.

2. The method of claim 1, further comprising: before the changing, receiving the priority order of the plurality of buffers from the CPU, wherein the selecting further comprises changing the priority order to increase utilization of processing components of the GPU.

3. The method of claim 1, wherein the executing comprises: executing a work item of a first buffer of the first subset of the plurality of buffers on the GPU; determining that a work item of a second buffer is to be executed on the GPU; preempting the execution of the work item of the first buffer; and initiating execution of the work item of the second buffer on the GPU.

4. The method of claim 3, wherein the preempting comprises: saving a context of the first buffer to a context save area in local memory.

5. The method of claim 3, wherein the determining comprises comparing an execution time of the work item of the first buffer with a predetermined time slice value.

6. The method of claim 3, wherein the determining comprises: monitoring at least one of the plurality of buffers; and detecting, in the at least one of the plurality of buffers, a work item having a higher priority than the first buffer.
7. The method of claim 6, wherein the monitoring comprises: reading at least one memory location written by a second processor; and detecting a command write event based on a value read from the at least one memory location.

8. The method of claim 1, wherein the selecting is performed by the GPU and comprises: analyzing the work items in each of the buffers; determining a priority of the work items in each of the buffers; and selecting the first subset of the plurality of buffers based on the determined priorities.

9. The method of claim 1, further comprising using, as the memory, a system memory coupled to the central processing unit (CPU) and the GPU.

10. The method of claim 1, further comprising using a ring buffer as at least one of the plurality of buffers.

11. The method of claim 1, further comprising selecting, for the first subset, one or more command buffers written by the central processing unit (CPU) in each of the buffers.

12. A system for executing work items on a graphics processing unit (GPU), the GPU being configured to: change a priority order of a plurality of buffers written by a central processing unit (CPU) in a memory so as to reorder an execution order of the plurality of buffers; select a first subset of the plurality of buffers based on the changed priority order so as to execute work items of the first subset on the GPU, wherein the changing and the selecting are responsive to dynamically determining a workload profile of the GPU; and execute the work items of the first subset of the plurality of buffers on the GPU based on the reordered execution order.

13. The system of claim 12, wherein the GPU is further configured to: before the changing, receive the priority order of the plurality of buffers from the CPU.
14. The system of claim 12, wherein the GPU is further configured to: execute a work item of a first buffer of the first subset of the plurality of buffers; determine that a work item of a second buffer is to be executed on the GPU; preempt the execution of the work item of the first buffer; and initiate execution of the work item of the second buffer on the GPU.

15. The system of claim 14, wherein the preempting comprises: saving a context of the first buffer to a context save area in local memory.

16. The system of claim 14, wherein the determining comprises: monitoring at least one of the plurality of buffers; and detecting, in the at least one of the plurality of buffers, a work item having a higher priority than the first buffer.

17. The system of claim 12, wherein at least one of the plurality of buffers is a ring buffer.

18. The system of claim 12, further comprising: the central processing unit (CPU); and the memory, coupled to the CPU and the GPU.

19. The system of claim 12, wherein the GPU comprises: local memory configured with one or more context save areas. |
Hardware-based scheduling of GPU work

Technical field

The present invention relates to scheduling commands on a processor.

Background

The processing power of the graphics processing unit (GPU) is increasing rapidly. This increase in processing power is due, at least in part, to the multiple independent processing units (e.g., SIMD (single instruction, multiple data) processors and ALUs (arithmetic logic units)) included in a graphics processing unit. In many graphics applications, the multiple independent processing units are used to perform parallel geometry calculations, vertex calculations, and/or pixel operations. For example, graphics applications are often characterized by single instruction, multiple data (SIMD) processing, in which the same sequence of instructions can be executed on multiple parallel data streams to greatly increase the speed of operation.

Another developing trend is the use of GPUs for general-purpose computations, which are not necessarily SIMD-type computations. This use of GPUs is referred to as GPGPU-style computing. In GPGPU-style computing, the CPU (central processing unit) can use the GPU to execute compute work items that were previously typically performed by the CPU.

Traditionally, the CPU has scheduled the work to be executed on the GPU, such as vertex streams and texture information, together with the instructions to process such information. Software executing on the CPU can prioritize the different work items (also referred to as "commands") and queue them in system memory buffers. The GPU asynchronously retrieves the next work item to be processed from the system memory buffers. On the GPU, the selection of the next work item to be processed is based on the priority order specified by the CPU.
In some cases, the CPU may specify a priority order for each individual work item; in other cases, the CPU may specify a priority order associated with each memory buffer, so that any work item queued in a memory buffer has the priority order associated with that buffer.

With the rapid increase in the processing power of GPUs and the increased use of GPUs for general-purpose computing, more efficient methods are needed to take full advantage of the available computing power of the GPU. Therefore, there is a need for methods and systems that can allocate GPU resources to work items more efficiently.

Summary of the invention

The present invention discloses an apparatus and methods for scheduling and executing, on a second processor such as a GPU, commands issued by a first processor such as a CPU. In an embodiment, a method of executing processes on a graphics processing unit (GPU) includes selecting, based on a workload profile of the GPU, a first subset from one or more buffers in memory in order to execute the work items of the first subset on the GPU; and executing the work items of the first subset on the GPU. The GPU can also receive a priority order for the one or more buffers, wherein the selection is further based on the received priority order.

Another embodiment of the present invention provides a system for executing work items on a GPU. The GPU is configured to select, based on a workload profile of the GPU, a first subset of buffers from one or more buffers in memory in order to execute the work items of the first subset on the GPU, and to execute the first subset on the GPU.
The GPU may also be further configured to receive a priority order for the one or more buffers, wherein the selection is further based on the received priority order.

Further embodiments, features, and advantages of the present invention, as well as the structure and operation of various embodiments of the present invention, are described in detail below.

Brief description of the drawings

The accompanying drawings, which are incorporated herein and form part of the specification, illustrate embodiments of the present invention.

FIG. 1 shows a system in accordance with an embodiment of the present invention.

FIG. 2 illustrates a ring buffer allocation in accordance with an embodiment of the present invention.

FIG. 3 is a flow chart showing the steps of a process implemented in a CPU to transfer commands to a GPU for processing, in accordance with an embodiment of the present invention.

FIG. 4 is a flow chart showing the steps of a process implemented in a GPU to process commands received from a CPU, in accordance with an embodiment of the present invention.

FIG. 5 shows a flow chart of the steps implemented in a GPU to prioritize commands and schedule them for execution, in accordance with an embodiment of the present invention.

FIG. 6 is a flow chart showing the steps of a process implemented in a GPU to execute a command, in accordance with an embodiment of the present invention.

Detailed description

Embodiments of the present invention can substantially improve the utilization of graphics processing unit (GPU) resources. Although the invention is described herein using example embodiments with specific applications, it should be understood that the invention is not limited thereto.
Other modifications, applications, and embodiments within the scope of the present invention, as well as other fields in which the present invention would be of significant utility, will be apparent to those skilled in the art.

Embodiments of the invention are applicable to any computer system or computing device having at least two processors, such as a CPU that provides work items (such as commands or command buffers) and a GPU that processes the work items provided by the CPU. By way of example and not limitation, embodiments may include notebook computers, personal computers, gaming platforms, entertainment platforms, personal digital assistants, and video platforms.

In systems having a CPU and a GPU, GPU utilization is an important factor in overall system performance. Ideally, GPU utilization should be at or near its maximum. The CPU provides the instructions and data used by the GPU. In conventional systems, the CPU provides substantially all of the instructions and data to the GPU in command buffers, and the GPU simply takes those command buffers as input and executes them (i.e., executes the commands in those command buffers). A command buffer is a data structure containing instructions or commands and related data. In conventional systems, the CPU determines any priority order of the command buffers, and the GPU simply executes the queued commands in the order specified by the CPU. Although this traditional approach is effective, there is room for improvement, because the GPU relies solely on the CPU to determine the priority of the work to be performed on the GPU. Embodiments of the present invention are directed to enabling the GPU to prioritize and schedule the commands specified by the CPU. For example, compared to the CPU, the GPU can prioritize the commands to be executed based on the availability of its local resources in a more dynamic and efficient manner.
Moreover, the GPU can determine a second level of prioritization beyond any command priority order specified by the CPU.

System for hardware-based scheduling of commands on the GPU

FIG. 1 illustrates a system in which work items (e.g., commands) are executed on a GPU, in accordance with an embodiment of the present invention. System 100 includes a CPU 101, a system memory 102, a graphics driver 103, a GPU 104, and a communication infrastructure 105. Those skilled in the art will appreciate that system 100 can include software, hardware, and firmware components in addition to, or different from, the components of the embodiment shown in FIG. 1.

The CPU 101 can be any commercially available CPU, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), or a custom processor. CPU 101 may include one or more processors coupled using a communication infrastructure, such as communication infrastructure 105. CPU 101 may also include one or more processors having multiple processing cores on the same chip, such as a multi-core processor. In the embodiment shown in FIG. 1, CPU 101 may be a dual-core processor having processing core 1 101a and core 2 101b. The CPU 101 executes an operating system (not shown) and one or more applications, and is the control processor of the system 100. The operating system executing on CPU 101 controls and facilitates access to the devices in system 100. One or more applications executing on CPU 101, including user applications, cause CPU 101 to coordinate the use of the different devices of system 100, including GPU 104 and system memory 102, to accomplish their tasks.

System memory 102 includes one or more memory devices. System memory 102 can typically be a dynamic random access memory (DRAM) or a similar memory device used for non-persistent data storage. In some embodiments, system memory 102 can include memory devices such as flash memory devices and/or static random access memory (SRAM).
In an embodiment, during operation of system 100, one or more memory buffers 110 may reside within system memory 102, through which CPU 101 transmits commands to GPU 104.

The memory buffers 110 through which the CPU 101 transmits commands to the GPU 104 may be ring buffers or other data structures suited to the efficient queuing of work items. Memory buffers 110 are also referred to below as ring buffers 110. Commands transmitted from CPU 101 to GPU 104 may include instructions and data. In some embodiments, an application and/or operating system executing on CPU 101 inputs data structures containing instructions and data into ring buffers 110. CPU 101 (or an application and/or operating system executing on CPU 101) may specify a priority order associated with one or more of the ring buffers 110. Commands can be added to a ring buffer based on the determined priority of each command. For example, the CPU 101 can define one ring buffer each for high-priority commands, low-priority commands, and low-latency commands.

A set of indirect buffers 111 can be used to hold the actual commands (i.e., the instructions and data). For example, when the CPU 101 transmits a command buffer to the GPU 104, the command buffer can be stored in an indirect buffer 111, and a pointer to that indirect buffer is inserted in the ring buffer having the corresponding priority. It should be appreciated that the indirect buffers 111 can be implemented with a single level of indirection or with multiple levels of indirection.

Ring buffer working registers 112 may be implemented in system memory 102 or in other register storage facilities of system 100. For example, the ring buffer working registers 112 provide communication between CPU 101 and GPU 104 regarding the commands in ring buffers 110.
For example, the CPU 101, which writes commands to the ring buffer 110, and the GPU 104, which reads those commands, can coordinate through a write pointer and a read pointer, which indicate, respectively, the last item added to the ring buffer 110 and the last item read from it. Other information, such as the list of available ring buffers 110 and the priority order specified by CPU 101, may also be transmitted to GPU 104 through the ring buffer working registers 112.

Graphics driver 103 can comprise software, firmware, hardware, or any combination thereof. In an embodiment, graphics driver 103 is implemented entirely in software. Graphics driver 103 software may reside in system memory 102 during operation of system 100. The graphics driver 103 provides an interface and/or an application programming interface (API) through which the CPU 101 and applications executing on the CPU 101 access the GPU 104. In general, when system 100 boots, the operating system initializes the graphics driver 103 appropriate for the particular GPU 104.

GPU 104 provides system 100 with graphics acceleration and other computing functionality. GPU 104 may include multiple processors, such as single instruction, multiple data (SIMD) processors, comprising processing components such as arithmetic logic units (ALUs). In general, having multiple SIMD processors makes GPU 104 well suited for the data-parallel tasks that are common in graphics processing. For example, when an image is rendered on a display, the same or substantially the same instructions are executed for each pixel rendered on the display. GPU 104 can also be used for tasks other than graphics operations, such as various computationally intensive tasks that benefit from parallel execution over data streams. For simplicity, the description below refers to graphics applications. However, those skilled in the art will appreciate that the teachings herein apply to numerous other tasks that can be executed on a graphics processor.
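The write-pointer/read-pointer coordination described above can be sketched as a minimal single-producer, single-consumer ring (a hypothetical Python illustration; the class and method names are not from the specification, and one slot is deliberately left empty to distinguish a full ring from an empty one):

```python
class RingBuffer:
    """Minimal SPSC ring: the CPU advances the write pointer, the GPU the read pointer."""

    def __init__(self, size):
        self.slots = [None] * size  # each slot holds a pointer into the indirect buffer area
        self.size = size
        self.write_ptr = 0          # next position to write (advanced by the producer/CPU)
        self.read_ptr = 0           # next position to read (advanced by the consumer/GPU)

    def push(self, ib_pointer):
        nxt = (self.write_ptr + 1) % self.size
        if nxt == self.read_ptr:    # full: the producer must wait for the consumer
            return False
        self.slots[self.write_ptr] = ib_pointer
        self.write_ptr = nxt
        return True

    def pop(self):
        if self.read_ptr == self.write_ptr:  # empty: nothing queued
            return None
        item = self.slots[self.read_ptr]
        self.read_ptr = (self.read_ptr + 1) % self.size
        return item
```

In this sketch the CPU-side writer calls push() with a pointer into the indirect buffer area, and the GPU-side reader drains entries with pop(); in the system described here, the two pointers would instead be published through the ring buffer working registers 112.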
Additionally, those skilled in the art will appreciate that GPU 104 may be logic embedded in another device, such as CPU 101 or a bridge chip (e.g., a northbridge, a southbridge, or a combined device).

The components of GPU 104 include GPU memory 120, a 3D/CS complex 130, a ring list controller (RLC) 140, and a command processor 150. GPU memory 120 provides local memory for use during computations on GPU 104 and may comprise DRAM or similar memory devices. In an embodiment, GPU memory 120 includes a plurality of context save areas (CSAs) 121. Each CSA 121 provides a storage area to hold the context of work items that are swapped out of execution on the GPU 104 before completion, as described below.

The 3D/CS complex 130 is the main computation component within GPU 104 and comprises multiple SIMD processors to facilitate computations, including computations on parallel data streams. The 3D/CS complex can include, for example, vertex shaders, pixel shaders, geometry shaders, unified shaders, and other components necessary for data computation in GPU 104. In the embodiments described below, the 3D/CS complex is considered to include a three-dimensional computation component, a compute shader component, and a low-latency computation component. The commands sent from the CPU 101 to the GPU 104 are executed using the 3D/CS complex.

The ring list controller (RLC) 140 includes functionality to coordinate access to the memory buffers (e.g., ring buffers 110). In an embodiment, RLC 140 determines the list of ring buffers 110 to be processed in GPU 104, receives any prioritization of the ring buffers 110 specified by CPU 101 (in particular, by a process or operating system executing on CPU 101), and determines the scheduling of the ring buffers on GPU 104 in a manner that optimizes the utilization of the processing resources in GPU 104.
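The role of the context save areas (CSAs) 121 introduced above can be illustrated with a toy model of pre-emption (hypothetical Python; the class, the names, and the integer "progress" counter are invented for illustration): a partially executed work item is saved to a CSA, a higher-priority item runs, and the first item later resumes from its saved context instead of restarting.

```python
class GPUCore:
    """Toy model of pre-emption using context save areas (CSAs)."""

    def __init__(self):
        self.csa = {}        # context save areas in local GPU memory
        self.running = None  # (name, progress) of the work item currently executing

    def start(self, name):
        self.running = (name, 0)

    def step(self):
        # One unit of execution progress for the current work item.
        name, done = self.running
        self.running = (name, done + 1)

    def preempt(self, higher_priority_name):
        # Save the partially executed context into a CSA, then switch.
        name, done = self.running
        self.csa[name] = done
        self.start(higher_priority_name)

    def resume(self, name):
        # Restore progress from the CSA instead of restarting from scratch.
        self.running = (name, self.csa.pop(name))
```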
For example, the RLC 140, together with the command processor 150, can schedule the ring buffers received from the CPU 101 in a manner that maintains maximum, or near maximum, utilization of each SIMD processor in the 3D/CS complex 130.

The command processor 150 controls the processing within GPU 104. The command processor receives the instructions to be executed from the CPU 101 and coordinates their execution on the GPU 104. In some cases, the command processor may generate one or more commands to be executed in GPU 104 that correspond to each command received from CPU 101. In an embodiment, command processor 150, together with RLC 140, implements the prioritization and scheduling of commands on GPU 104 in a manner that maximizes the utilization of GPU 104 resources. The logic implementing the functions of command processor 150 and RLC 140 may be implemented in hardware, firmware, or software, or a combination thereof. In an embodiment, the command processor 150 is implemented as a RISC engine with microcode implementing the logic, including the scheduling logic.

Communication infrastructure 105 couples the devices and components of system 100. The communication infrastructure 105 can include one or more communication buses, such as Peripheral Component Interconnect (PCI), Advanced Graphics Port (AGP), and the like.

FIG. 2 illustrates ring buffers transmitted from the CPU 101 to the GPU 104. As shown in this example, at some instant during the operation of system 100, a set of ring buffers 200 is configured within system memory 102, including ring buffer 0 through ring buffer 6 (i.e., 201, 202, 203, 204, 205, 206, 207). From the set of ring buffers 200, GPU 104 accesses a subset 210, which includes ring buffer 0 201, ring buffer 1 202, ring buffer 2 203, and ring buffer 5 206. The subset 210 may be selected based on criteria specified by the CPU 101; for example, the CPU 101 may identify the subset 210 as the buffers having commands to be executed on the GPU 104.
For example, after queuing one or more commands in each of ring buffers 201, 202, 203, and 206, the CPU 101 may update one or more memory locations, such as locations in the ring buffer working registers 112, which are read by GPU 104. In another embodiment, upon writing one or more commands to one or more ring buffers, CPU 101 can write directly to registers within GPU 104 to inform GPU 104 that command buffers are available for processing.

GPU 104 periodically monitors the ring buffers in system memory 102, the ring buffer working registers in system memory, and/or other register locations updated by CPU 101 to determine whether any ring buffer has a command buffer ready to be processed by GPU 104. Upon detecting that one or more ring buffers have command buffers ready for execution, GPU 104 can receive those command buffers for execution (i.e., execute the commands in the command buffers). In an embodiment, GPU 104 may receive the ring buffers designated by CPU 101 into GPU local memory or into a set of general purpose registers (GPRs) using direct memory access (DMA) or the like. The RLC 140 can perform the monitoring of the ring buffers and control the transfer of the ring buffers to GPU memory and/or the GPRs. After determining the set of ring buffers to be executed on the GPU 104, the RLC 140 determines the allocation of those ring buffers in the GPU, the priority order of the ring buffers, and the priority order of the command buffers within the ring buffers. In some embodiments, the determination of the priority order is performed by the RLC 140 in conjunction with the command processor 150. For example, for the received ring buffer subset 210 to be executed on GPU 104, based on the priority order determined by the CPU 101 and the priority order determined by the GPU 104, the resulting priority order may be as shown: ring buffer 0 with priority 1, ring buffer 2 with priority 2, and ring buffers 1 and 5 with priority 3.

CPU processing

FIG.
3 illustrates a flow diagram of the processing steps (e.g., steps 301 through 305) performed by a CPU, such as CPU 101, in accordance with an embodiment of the present invention. In step 301, CPU 101 initializes a set of ring buffers in system memory 102 for transferring command buffers to GPU 104. Although the ring buffer is the data structure chosen in the description herein for transmitting command buffers to GPU 104, those skilled in the art will appreciate that one or more other data structures may be used in place of the ring buffer. The initialization of step 301 can occur at system startup or at application startup. For example, at system startup, when the operating system configures GPU 104 and the associated graphics driver 103 on CPU 101 for use, one or more ring buffers may be initialized for transmitting instructions and data to GPU 104 from subsequently launched applications. In another example, when an application containing code that uses the GPU, such as DirectX code, is loaded, the ring buffers can be configured as part of the initialization of the application. In yet another example embodiment, one or more ring buffers may be initialized at system startup, and additional buffers may be added and initialized at application startup.

Initialization may include memory allocation, initialization of the data structures corresponding to the ring buffers, and the updating of one or more registers to transmit ring buffer configuration information to GPU 104. For example, initializing a ring buffer may include allocating memory for the ring buffer data structure (e.g., ring buffer 110), allocating one or more memory regions to hold the actual command buffers associated with the ring buffer elements (e.g., indirect buffer region 111), and initializing one or more registers (e.g., one or more of the ring buffer working registers 112).
The ring buffers and the indirect buffers may be initialized based on configuration parameters or based on parameters determined dynamically by the executing application. For example, the number of ring buffers, their sizes, and the size of the indirect buffer area can be configuration parameters at system startup, and/or one or more of these parameters can be determined based on application characteristics.

In an embodiment, each ring buffer 110 is a circular array. The elements of the circular array are intended to hold pointers to locations in the indirect buffer region 111. Each ring buffer data structure also has the parameters required to maintain the ring buffer structure, such as a head pointer and a tail pointer. The indirect buffer region 111 is intended to hold the data structures corresponding to the command buffers. For example, each command buffer can include one or more commands and associated data to be executed by the GPU. Storing the actual command buffers in a location separate from the ring buffer facilitates efficient use of memory. The indirect buffer region 111 can be allocated in a variety of ways, including allocating an area per command, allocating an area per ring buffer, or allocating one contiguous area for all of the ring buffers 110. The ring buffer working registers 112 may include registers and/or other memory locations. Although the ring buffer working registers 112 shown in the figures are located within system memory 102, those skilled in the art will appreciate that the ring buffer working registers may include one or more registers located outside of system memory 102. For example, ring buffer working registers 112 may include one or more registers located in GPU 104. Information related to the use of the ring buffers by the CPU 101 can be transmitted to the GPU 104 through the ring buffer working registers 112.
For example, CPU 101 may transmit to GPU 104 information such as the list of currently active ring buffers, the priority order of the active ring buffers as determined by CPU 101, and the allocation of active ring buffers to one or more GPU components. In another embodiment, the ring buffer working registers 112 can also be used to communicate information such as the current read and write pointers of each ring buffer.

In step 303, the CPU 101 notifies the GPU 104 of the configuration of the ring buffers. This step can occur at system startup, or at application startup, after the CPU 101 has initialized the ring buffers in system memory 102. In some embodiments, step 303 can be performed both at system startup and at application startup. For example, if the number of active ring buffers changes when an application is launched, the change is communicated to GPU 104. In an embodiment of the invention, the ring buffer configuration information transmitted to the GPU 104 includes the number of ring buffers, the location and size of each ring buffer, and the priority order determined by the CPU 101. In various embodiments of the invention, different and/or additional configuration information about the ring buffer organization may be transmitted. The notification of step 303 may consist of CPU 101 writing to one or more register locations monitored by GPU 104, such as the ring buffer working registers 112. In another embodiment, the notification to the GPU 104 is initiated, using the graphics driver 103, by the operating system of CPU 101 or by an application executing on CPU 101. In an embodiment of the invention, graphics driver 103 may write the information to be transmitted to GPU 104 into system memory 102.

In step 305, commands are written to the ring buffers. For example, during the execution of an application, such as a gaming application, numerous graphics-related commands are executed to perform various graphics-related tasks, including rendering images on a display.
The application code can invoke graphics commands through a graphics processing platform such as DirectX. When the application is compiled for execution on system 100, or in some cases more dynamically at runtime, it is determined which commands and associated data CPU 101 offloads for processing on GPU 104. For example, any command that invokes the DirectX API to perform a function may be selected for processing on GPU 104. The operating system, or in some embodiments the application itself, writes the commands and associated data selected for processing on GPU 104 into a ring buffer configured for transferring instructions and data to GPU 104. The commands and associated data can form a data structure commonly referred to as a command buffer. A command buffer includes one or more instructions and associated data. For example, for a "draw" command, the corresponding command buffer can include the "draw" command and the image to be drawn or rendered on the display.

As noted previously, the CPU 101 can determine a priority order for the ring buffers that transfer command buffers to GPU 104. Thus, when command buffers are written in step 305, each command buffer is queued in the ring buffer that best matches the priority of the command. For example, game applications generate numerous renderings of game character images that must be displayed almost immediately, whereas menus and other user events have lower time urgency. Thus, the command buffers corresponding to time-critical images can be queued in a higher-priority ring buffer than the command buffers for less time-critical menu and user events. Writing a command buffer to the appropriate ring buffer can include allocating a memory region to hold the command buffer in the indirect buffer region 111, and queuing a pointer to the corresponding location in the indirect buffer region 111 into the ring buffer.
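Queuing a command buffer as just described (store the actual commands in the indirect buffer area, then enqueue only a pointer in the ring whose priority best matches) might look like this as a hypothetical Python sketch; the two priority classes and all names are illustrative and not taken from the specification:

```python
# Hypothetical sketch: indirect buffer area plus priority-matched rings.
indirect_area = {}                  # index -> command buffer contents
rings = {"high": [], "low": []}     # one queue per CPU-defined priority class


def submit(command, data, time_critical):
    """CPU side: store the command buffer, then queue a pointer in the matching ring."""
    index = len(indirect_area)
    indirect_area[index] = (command, data)  # the actual command buffer
    ring = rings["high" if time_critical else "low"]
    ring.append(index)                      # the ring holds only the pointer
    return index


def fetch(priority):
    """GPU side: dereference the next pointer from the chosen ring."""
    if not rings[priority]:
        return None
    return indirect_area[rings[priority].pop(0)]
```

A time-critical "draw" submission would thus land in the high-priority ring, while a "menu" update would land in the low-priority one, with both command buffers living in the shared indirect area.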
Inserting the pointer to the indirect buffer 111 into the ring buffer further includes updating the data structure elements of the ring buffer, such as the head pointer and the tail pointer. In addition, the CPU 101 can update values and pointers that allow the CPU 101, as the writer, and the GPU 104, as the reader, to safely access the ring buffer concurrently. After writing one or more command buffers to a ring buffer, CPU 101 may update one or more registers and/or other memory locations to inform GPU 104 of the availability of the data. In some embodiments, where the GPU 104 continuously monitors each of the ring buffers, no separate notification by the CPU 101 is needed.

Process 300 is implemented on CPU 101 substantially asynchronously with respect to the processing in the GPU coupled to CPU 101. Process 300 enables an application executing on CPU 101 to continue executing while multiple commands are waiting to be processed in other processors, such as GPU 104. However, some synchronization mechanism may be implemented between CPU 101 and GPU 104, for example to ensure that the GPU is not overwhelmed by incoming command buffers. For example, CPU 101 may have appropriate techniques to detect when GPU 104 is not processing a ring buffer, in order to be able to react to slow processing. The CPU 101 may also have an appropriate mechanism to detect whether each command it queues in a ring buffer is processed by the GPU 104. For example, for each command queued to a ring buffer, the CPU 101 can write a value to a memory location in the ring buffer working registers 112. The CPU 101 can then periodically check the value at that memory location. When the GPU 104 processes the command buffer, it updates the corresponding location in the ring buffer working registers 112 with a different value.
After a timeout period, an unaltered value at a location in the ring buffer work memory 112 indicates to CPU 101 that GPU 104 is not functioning properly.

GPU processing

FIG. 4 shows a flow diagram of steps 401 through 409 of a process 400 implemented by GPU 104 in accordance with an embodiment of the present invention. In various embodiments of the invention, process 400 can be implemented in hardware, firmware, and/or software. For example, the functionality of the RLC 140 can be implemented using a combination of hardware and microcode so as to maintain high performance while retaining a high degree of flexibility.

In step 401, GPU 104 determines the configuration of the ring buffers in system memory 103 through which GPU 104 receives command buffers from CPU 101. Step 401 can be performed at system startup and/or at application startup. For example, GPU 104 may determine the configuration of the ring buffers in system memory 103 at system startup, when CPU 101 initializes. GPU 104 may also determine the configuration of the ring buffers when an application is launched or when a signal is received from CPU 101. In some embodiments, if CPU 101 initializes the ring buffers 110 at system startup and does not further add and/or delete ring buffers during system operation, GPU 104 performs step 401 only at system startup. On the other hand, if CPU 101 occasionally makes configuration changes to the ring buffers 110 at times other than system startup, such as when an application is started, then GPU 104 is required to update its ring buffer configuration information when such a change occurs. GPU 104 may determine the configuration of the ring buffers by periodically monitoring the ring buffers or associated registers or memory locations, or based on messages or signals received from CPU 101.
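The completion-tracking and timeout mechanism described above can be sketched as a sentinel scheme. This is a hypothetical illustration (the function names and sentinel value are not from the patent): the CPU writes a sentinel per queued command, the GPU overwrites it on completion, and a sentinel that survives past a deadline signals a hung GPU.

```python
# Sketch of the hang-detection scheme: unaltered sentinel after the
# timeout means the GPU never processed the command.
SENTINEL = 0xDEAD  # hypothetical "still pending" marker

def cpu_queue_command(work_memory, slot):
    work_memory[slot] = SENTINEL          # CPU marks the command as pending

def gpu_complete_command(work_memory, slot, result=0):
    work_memory[slot] = result            # GPU overwrites the sentinel

def cpu_check_hang(work_memory, slot, deadline, now):
    """Return True if the GPU appears hung for this command."""
    return now >= deadline and work_memory[slot] == SENTINEL
```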
In an embodiment, the functionality of step 401 is primarily implemented in RLC 140.

After the configuration of the ring buffers 110 has been determined, in step 403 GPU 104 monitors the ring buffers to detect which ring buffers are available for processing. For example, when a game application is executed on CPU 101, CPU 101 queues graphics processing commands to the ring buffers 110 in the form of command buffers, as described with reference to step 305 of process 300. When a command buffer is generated in accordance with the executing application and queued to a ring buffer, CPU 101 may update one or more memory locations and/or registers to indicate to GPU 104 which ring buffers are available for processing. GPU 104 can monitor such memory locations and/or registers that CPU 101 updates. In an embodiment, the functionality of step 403 is primarily implemented in RLC 140.

In step 405, GPU 104 selects a subset of the ring buffers 110 for processing and execution. Step 405 may be performed in response to detecting a command buffer queued for processing in the ring buffers 110, or in response to a message or signal received from CPU 101. The selection of a subset of ring buffers for processing and execution, such as selecting subset 210 from the available ring buffers 200 as shown in FIG. 2, may be based on one or more factors. In an embodiment, CPU 101 may maintain the ring buffers to be processed in the GPU as a list of ring buffers, from which GPU 104 selects the subset of ring buffers to process. In some embodiments, GPU 104. In another embodiment, CPU 101 simply queues command buffers to one or more ring buffers, and GPU 104 selects the one or more ring buffers that have queued command buffers awaiting execution.

In some embodiments, the subset of ring buffers selected for execution may be transferred into GPU local memory or into a GPR in preparation for processing within GPU 104.
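Step 405's subset selection can be sketched as follows. This is an assumed illustration (the dict layout and function name are not from the patent): the GPU scans the ring buffers, keeps those with queued entries awaiting execution, and takes the highest-priority ones up to a limit.

```python
# Hypothetical sketch of step 405: select the ring buffers that have
# pending command buffers, highest priority (lowest number) first.
def select_ring_subset(ring_buffers, max_subset):
    """Pick up to max_subset rings that have pending entries."""
    pending = [rb for rb in ring_buffers if rb["head"] != rb["tail"]]
    pending.sort(key=lambda rb: rb["priority"])
    return pending[:max_subset]
```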
The transfer of command buffers from system memory 103 can be controlled by a DMA process. When a command buffer is read from system memory 103, GPU 104 may update one or more memory locations to indicate which command buffers have been read and whether each of the command buffers has been processed. Such updated memory locations may be located in the ring buffer work memory 112, in the ring buffer data structure, and/or in GPU local memory or a GPR. In an embodiment, the functionality of step 405 is primarily implemented in RLC 140.

In step 407, GPU 104 selects a command buffer to execute on the GPU in accordance with priority criteria. During this step, GPU 104 determines how to assign the ring buffers selected in the previous step to one or more GPUs, and how to assign commands to resources within the GPU. For example, GPU 104 may determine the priority order in which the ring buffer subset 210 selected from system memory 103 in step 405 is processed on GPU 104, and in some embodiments may determine how the commands within each ring buffer are prioritized and scheduled during processing. In an embodiment, the functionality of step 407 is primarily implemented in RLC 140. FIG. 5 further depicts the processing of step 407.

In step 409, the selected commands are executed on GPU 104 in accordance with the priority order determined in the previous steps. In an embodiment, the selected subset of ring buffers 210 is processed according to the priority order determined on GPU 104. Within each ring buffer, the commands can be prioritized and scheduled for execution, or executed in the order in which they appear in the ring buffer. In another embodiment, GPU 104 may periodically determine the priority of all pending ring buffers by considering various factors, such as the priority order specified by CPU 101, the type of ring buffer or the type of command buffer, the availability of processing resources on GPU 104, and the like.

Execution of the commands received from CPU 101 in a command buffer may include command processor 150 generating one or more instructions for each received command and scheduling those instructions on the processing resources of GPU 104. For example, receiving a single command from CPU 101 to render an image may cause command processor 150 to subdivide the image and generate one or more instructions to process each of the subdivided portions of the image. Command processor 150 then schedules the sub-portions for execution on processing resources of GPU 104, such as a SIMD processor and/or ALU. The scheduling of commands for execution, and the execution of commands on the GPU, are primarily managed by command processor 150 in conjunction with RLC 140, which prioritizes the commands.

Execution of the commands can be performed in various ways consistent with the present invention. In an embodiment, when each command completes execution and makes processing resources available, the next command in priority order is executed on the processing resource. Embodiments may also employ other methods in which other factors are considered, in addition to the above-described prioritization, when selecting the next command to be executed. For example, pending commands can be evaluated so as to schedule as the next command the one most likely to make optimal use of the available resources. In other embodiments of the invention, after some number and/or type of commands are executed in GPU 104, GPU 104 may return to step 405 to reselect the ring buffers available for processing.

In general, during execution of commands in GPU 104, RLC 140 or another component of GPU 104 continuously monitors the ring buffers in system memory, such as ring buffers 110. This ongoing monitoring enables GPU 104 to detect, for example, when a command buffer is added to a high-priority queue.
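The subdivision behavior of command processor 150 can be sketched as tiling. This is a hypothetical illustration (the tile scheme and names are assumptions, not the patent's implementation): a single "draw image" command is split into independently schedulable per-tile sub-commands.

```python
# Sketch of command subdivision: one draw command over a width x height
# image becomes a list of tile jobs that SIMD/ALU resources can process.
def subdivide_draw(width, height, tile):
    """Split a draw command into tile-sized sub-commands."""
    jobs = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            jobs.append({
                "cmd": "draw_tile",
                "x": x, "y": y,
                "w": min(tile, width - x),   # clamp edge tiles
                "h": min(tile, height - y),
            })
    return jobs
```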
While GPU 104 is executing one or more lower-priority commands, CPU 101 may add one or more command buffers to a high-priority buffer, causing GPU 104 to preempt one or more commands so that the higher-priority commands can be executed.

FIG. 5 shows steps 501 through 505, implemented within the aforementioned step 407 in accordance with an embodiment of the present invention. Steps 501 through 505 are primarily implemented by RLC 140 and command processor 150 to enable GPU 104 to determine the priority order of the ring buffers and command buffers.

In step 501, a current workload profile for GPU 104 is determined. In an embodiment, RLC 140 and/or command processor 150 determines factors such as the available processing components, the relative processing capabilities of those components, and the current priority of the workload being processed, to create a workload profile that reflects the state of the GPU. Determining the available processing components and their respective relative processing capabilities may include consideration of independent processing components, such as SIMD components, ALU capabilities, three-dimensional processing devices, compute shader devices, and low-latency processing devices. The workload analysis of the GPU can be performed dynamically, either continuously or periodically. For example, RLC 140 and/or command processor 150 may initiate the GPU workload analysis when a command or ring buffer completes execution, or when a new subset of ring buffers is read from system memory.
In addition, a new workload profile can be generated whenever the workload needs to be determined; alternatively, the workload profile can be maintained and updated when a predetermined type of event occurs, such as a ring buffer completing execution or a ring buffer subset being read from system memory.

In step 503, GPU 104 determines the priority order of the ring buffers waiting to be executed on GPU 104. In an embodiment, RLC 140 and command processor 150 determine the priority order based on the workload profile determined in step 501 above. The priority order determined by GPU 104 may be based on the ring buffer order specified by CPU 101. The priority order specified by CPU 101 can be substantially obeyed while the GPU optimizes the actual execution order based on its dynamic workload profile. The ability to dynamically reorder the execution order enables the GPU to exercise fine-grained control over the usage of its processing components.

In step 505, GPU 104 determines the priority order of the commands associated with each of the ring buffers. For example, RLC 140 and command processor 150 may determine the order based on the workload profile determined in step 501 above and the ring buffer prioritization determined in step 503. The prioritization of commands in a ring buffer may include determining which processing component in the GPU each command is assigned to. Making this determination dynamically (for example, executing a low-priority command on a high-priority processing resource if that resource is available, executing a high-priority command on a low-priority resource when the high-priority resource is busy, or reordering commands within each ring buffer so that a command that would otherwise run on a compute shader component runs instead on an available low-latency component) enables the GPU to make better use of its resources.

FIG. 6 shows steps 601 through 609, implemented in accordance with an embodiment of the present invention to enable GPU 104 to accept higher-priority commands during execution of one or more lower-priority commands. For example, steps 601 through 609 can be implemented during step 409 of process 400.

In step 601, GPU 104 determines whether a context switch is needed to process another command. Whether a context switch is required may be determined based on one or more factors, such as the priority of the currently executing process, the priority of the process to be executed, the execution time slice value, and the remaining execution time of each currently executing process. For example, command processor 150 may include functionality that takes into account one or more of the above factors and determines whether to force a context switch.

In step 603, the command being executed and/or the ring buffer being executed is preempted. Preempting the command and/or ring buffer being executed includes saving the state of the preempted command and/or ring buffer. In an embodiment of the present invention, the state of the preempted command and/or ring buffer is saved in a context save area (CSA) configured in GPU local memory. For example, if the ring buffer currently being executed is to be preempted, the state of the ring buffer, including its pending commands, data, and execution parameters such as a program counter, is stored in an area of GPU local memory, for example the CSA 121 of GPU memory 120.

In step 605, another command and/or another ring buffer is swapped in for execution by RLC 140 and command processor 150.
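Steps 601 through 605 can be sketched as a preempt/save/restore cycle. This is an illustrative model with assumed names (`needs_context_switch`, `preempt`, `resume`, and the dict-based CSA are not the patent's implementation): a switch is forced when higher-priority work arrives or the time slice expires, and the preempted context's program counter and pending commands are saved to the context save area.

```python
# Sketch of steps 601-605: decide on a context switch, save the preempted
# ring buffer's state to a context save area (CSA), restore it later.
def needs_context_switch(current, incoming, time_slice_remaining):
    """Force a switch when higher-priority work arrives or the slice expires."""
    return incoming["priority"] < current["priority"] or time_slice_remaining <= 0

def preempt(current, csa):
    """Save the running context (program counter, pending commands) to the CSA."""
    csa[current["id"]] = {"pc": current["pc"], "pending": list(current["pending"])}

def resume(ctx_id, csa):
    """Restore a previously preempted context from the CSA."""
    return csa.pop(ctx_id)
```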
The command and/or ring buffer swapped in for execution may be one executing for the first time on GPU 104, or a command and/or ring buffer recovered from the CSA. For example, the swapped-in command and/or ring buffer may previously have been executed until the end of its time slice and saved to the CSA at that point.

In step 607, the currently executing command finishes execution. When the currently executing command finishes, the next command in the same ring buffer can be executed. In an embodiment of the invention, GPU 104 may determine the order in which the commands within a ring buffer are executed, as described with reference to step 407 of process 400. In some embodiments, when a command completes execution, GPU 104 may perform operations such as checking for higher-priority ring buffers awaiting execution, or checking for higher-priority commands in the same ring buffer, to determine the next command and/or ring buffer to execute.

In step 609, the ring buffer currently being executed completes execution of all pending commands associated with it. When a ring buffer completes execution, GPU 104 may select another ring buffer, such as the next ring buffer in priority order, for execution.

The above embodiments may be described in a hardware description language such as Verilog, RTL, netlists, and the like, which, through the generation of maskworks/photomasks, may ultimately configure a manufacturing process to produce one or more hardware devices embodying aspects of the present invention.

Conclusion

As described in the above embodiments, the present invention enables more efficient allocation of processing resources within a second processor, such as a GPU, that receives commands from a first processor, such as a CPU.
The ability to prioritize and schedule the workload based on locally determined factors, such as availability of the processing devices and the workload itself, increases the utilization of the second processor.

The Summary and Abstract sections of the specification may set forth one or more, but not all, of the embodiments of the invention.

The implementation of specific functions and relationships of the present invention has been described above with the aid of functional block diagrams. The boundaries of these functional blocks have been arbitrarily defined herein for convenience of description; alternative boundaries may be defined so long as the specified functions and relationships are appropriately performed.

The foregoing description of the specific embodiments so fully reveals the general nature of the present invention that others skilled in the art can, without undue experimentation and without departing from the general inventive concept, readily modify and/or adapt it for various applications. Therefore, such modifications and adaptations are intended to be within the meaning and range of equivalents of the embodiments disclosed herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation.

The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. |
A system may include a root port and an endpoint upstream port. The root port may include transaction layer hardware circuitry to determine, by logic circuitry at a transaction layer of a protocol stack of a device, that a packet is to traverse to a link partner on a secure stream; authenticate a receiving port of the link partner; configure a transaction layer packet (TLP) prefix to identify the TLP as a secure TLP; associate the secure TLP with the secure stream; apply integrity protection and data encryption to the secure TLP; and transmit the secure TLP across the secure stream to the link partner. |
1. A device comprising:
a transaction layer logic unit including hardware circuitry to:
associate a secure TLP with a secure stream;
encode a transaction layer packet (TLP) with integrity protection, and encrypt the data payload of the TLP with data encryption, to form the secure TLP; and
transmit the secure TLP across the secure stream to a link partner.

2. The device of claim 1, further comprising transaction layer logic circuitry to:
read an extended capability register indicating a capability to support IDE; and
determine that the device and the link partner support integrity protection and data encryption for TLP encoding.

3. The device of claim 2, further comprising transaction layer logic circuitry to:
set a control register to indicate that the device and the link partner support a secure stream using integrity protection or data encryption.

4. The device of claim 1, wherein the transaction layer logic unit encodes the secure TLP with a secure stream number, the secure stream number being unique to the secure stream that the secure TLP is to traverse.

5. The device of claim 1, further comprising an encryption engine including hardware circuitry to encrypt the TLP.

6. The device of claim 5, wherein the encryption engine uses an encryption standard based on the Advanced Encryption Standard Galois/Counter Mode (AES-GCM) encryption protocol.

7. The device of claim 1, further comprising a data integrity protection engine including hardware circuitry to implement data integrity protection on the TLP.

8. The device of claim 7, wherein the data integrity protection engine uses an integrity protocol based on the Advanced Encryption Standard Galois/Counter Mode (AES-GCM) protocol.

9. The device of claim 1, further comprising transaction layer logic circuitry to:
augment the TLP with information indicating that the TLP includes integrity protection and data encryption.

10. The device of claim 9, wherein the information is contained in one of a TLP prefix or a TLP header.

11. The device of claim 9, wherein the information includes an L bit that, when set, indicates that the TLP is the last secure TLP on the secure stream and that subsequent TLPs received on the secure stream will use a new encryption key set.

12. The device of claim 1, wherein the secure stream includes one or more secure sub-streams, the one or more secure sub-streams including secure sub-streams for posted requests, non-posted requests, or completions.

13. The device of claim 12, further comprising transaction layer logic circuitry to:
construct an initialization vector (IV) including a fixed field specific to the device and an invocation field specific to the data to be transmitted.

14. The device of claim 13, wherein the IV comprises a 96-bit IV, and wherein:
the fixed field is in bits 95:64 of the IV, where bits 95:92 include a fixed value indicating the sub-stream (encoded according to the definition above); and
the invocation field is in bits 63:0 of the IV and contains the value of a linear feedback shift register with taps at positions 64, 63, 61, and 60, initially set to the value 0000_0001h.

15. The device of claim 1, further comprising transaction layer logic circuitry to:
determine that the TLP is to be transmitted to the link partner on a selective secure stream or a link secure stream; and
selectively encode one or more TLPs in the secure stream, and/or selectively encrypt the data payload of one or more TLPs.

16. A method comprising:
determining, by logic circuitry at a transaction layer of a protocol stack of a device, that a packet is to traverse to a link partner on a secure stream;
authenticating a receiving port of the link partner;
configuring a transaction layer packet (TLP) prefix to identify the TLP as a secure TLP;
associating the secure TLP with the secure stream;
applying integrity protection and data encryption to the secure TLP; and
transmitting the secure TLP across the secure stream to the link partner.

17. The method of claim 16, further comprising:
associating the secure stream with an authentication key; and
associating the authentication key with a key identifier (key ID) that is unique to each of data encryption and integrity protection.

18. The method of claim 16, wherein associating the secure TLP with the secure stream comprises associating the secure TLP with a secure stream number encoded into the TLP prefix.

19. The method of claim 16, wherein the data encryption is performed using Advanced Encryption Standard Galois/Counter Mode (AES-GCM) encryption.

20. The method of claim 16, wherein the integrity protection is performed using Advanced Encryption Standard Galois/Counter Mode (AES-GCM) integrity protection.

21. A system comprising:
a root complex including a root port;
an endpoint device including an upstream port; and
an interconnect coupling the root port and the upstream port;
the root port including a protocol stack having a transaction layer, the transaction layer including hardware circuitry to:
encode a transaction layer packet (TLP) with a secure TLP prefix indicating that the TLP is to traverse the interconnect on a secure stream;
associate the TLP with the secure stream;
perform data encryption on the data payload of the TLP and integrity protection on the TLP; and
transmit the TLP to the endpoint device.

22. The system of claim 21, wherein the root port is directly linked to the upstream port, and wherein the secure TLP prefix includes a local TLP prefix.

23. The system of claim 22, wherein associating the TLP with the secure stream includes setting a secure stream identifier in the TLP header to zero.

24. The system of claim 21, further comprising a switch complex including a downstream switch port coupled to the upstream port and an upstream switch port coupled to the root port, the transaction layer including hardware circuitry to secure the TLP to be transmitted to the endpoint through the switch complex based on requester identifier (RID) and address association register settings.

25. The system of claim 21, wherein the secure TLP prefix includes:
a first bit indicating the last TLP in the secure stream;
a second bit indicating whether the TLP originates from a trusted environment;
a third bit indicating that the TLP includes a message authentication code (MAC); and
a counter value used to count non-posted request and completion TLPs. |
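The 96-bit IV layout recited in claim 14 can be sketched as bit packing. This is an assumption-based reading, not a normative encoding: bits 95:92 carry the sub-stream code, the remainder of bits 95:64 form the rest of the fixed field, and bits 63:0 carry the invocation value. The sub-stream codes and field split below are hypothetical.

```python
# Sketch of the claim-14 IV layout: [95:92]=sub-stream, [91:64]=remaining
# fixed field, [63:0]=invocation field (sub-stream codes are assumed).
SUBSTREAM_PR, SUBSTREAM_NPR, SUBSTREAM_CPL = 0x0, 0x1, 0x2

def build_iv(substream, fixed_low28, invocation):
    """Pack a 96-bit IV from its three fields."""
    assert substream < 16 and fixed_low28 < (1 << 28) and invocation < (1 << 64)
    return (substream << 92) | (fixed_low28 << 64) | invocation

iv = build_iv(SUBSTREAM_NPR, 0x0000001, 0x0000_0001)
```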
Integrity and Data Encryption (IDE) over computer buses

Cross-reference to related applications

Pursuant to 35 U.S.C. §119(e), this application claims the benefit of U.S. Provisional Patent Application Serial No. 62/889,948, entitled "Integrity and Data Encryption (IDE) Over Computer Buses," filed August 21, 2019, the entire contents of which are incorporated herein by reference.

Background

A computer system or platform may include many components, such as a host including a central processing unit (CPU), a memory, a chipset, and/or many other devices, coupled together through a computer bus. A computer bus is a communication system that can transfer data between devices or components within a computer, or between computers. A computing system or platform can use a wide variety of devices coupled to the computer bus. A computer bus may include related hardware components (wires, optical fibers, etc.) and software, including communication protocols. There are many types of computer buses, such as serial buses and parallel buses.

Advances in semiconductor processing and logic design have permitted an increase in the amount of logic that may be present on integrated circuit devices. As a corollary, computer system configurations have evolved from a single circuit or multiple integrated circuits in a system to multiple cores, multiple hardware threads, and multiple logical processors present on individual integrated circuits, as well as other interfaces integrated within such processors. A processor or integrated circuit typically comprises a single physical processor die, where the processor die may include any number of cores, hardware threads, logical processors, interfaces, memories, controller hubs, and so on. As processing power increases along with the number of devices in a computing system, communication between sockets and other devices becomes more critical.
Correspondingly, interconnects have grown from the more traditional multi-drop buses that primarily handled electrical communication into full-blown interconnect architectures that facilitate fast communication. Unfortunately, as the demand rises for future processors to consume data at even higher rates, corresponding demands are placed on the capabilities of existing interconnect architectures. Interconnect architectures can be based on a variety of technologies, including Peripheral Component Interconnect Express (PCIe), Universal Serial Bus, and others.

Description of the drawings

The embodiments can be readily understood by considering the following detailed description in conjunction with the drawings. For ease of description, like reference numerals designate like structural elements. The embodiments are shown by way of example, and not by way of limitation, in the figures of the drawings.

FIG. 1 shows an embodiment of a computing system including an interconnect architecture.

FIG. 2 shows an embodiment of an interconnect architecture including a layered protocol stack.

FIG. 3 shows an embodiment of a request or packet to be generated or received within the interconnect architecture.

FIG. 4 shows an embodiment of a transmitter and receiver pair for an interconnect architecture.

FIGS. 5A and 5B are simplified block diagrams of hop-by-hop encryption and end-to-end encryption, respectively, in a Peripheral Component Interconnect Express (PCIe) system architecture.

FIG. 6A is a block diagram of an exemplary connection system illustrating secure streams and secure links according to an embodiment of the present disclosure.

FIG. 6B is a block diagram showing a system with slots for a secure stream protocol according to at least one embodiment.

FIG. 6C is a simplified block diagram showing a system implementing an end-to-end secure stream protocol according to at least one embodiment.

FIG. 7 shows a secure stream state machine according to various embodiments.

FIG. 8 shows a secure TLP diagram according to various embodiments.

FIG. 9 shows a secure TLP prefix according to various embodiments.

FIG. 10 is a process flow diagram for forming a secure transaction layer packet for transmission across a secure stream according to an embodiment of the present disclosure.

FIG. 11 is an interaction diagram showing exemplary counters and keys that can be used in a secure stream protocol according to at least one embodiment.

FIG. 12 shows a possible format of a TLP secure stream prefix that can be carried by each transaction according to at least one embodiment.

FIGS. 13-15 are interaction diagrams showing possible transactions in a secure stream protocol operating in a restricted ordering mode using three streams according to at least one embodiment.

FIGS. 16A-16C are schematic diagrams showing allowed and prohibited request reordering according to an embodiment of the present disclosure.

FIG. 17 is a schematic diagram of an exemplary integrity synchronization message for a secure link according to an embodiment of the present disclosure.

FIG. 18 is a schematic diagram of an integrity synchronization message for a selective secure stream according to an embodiment of the present disclosure.

FIG. 19 is a schematic diagram of an integrity check failure message for a secure link according to an embodiment of the present disclosure.

FIG. 20 is a schematic diagram of an integrity check failure message for a selective secure stream according to an embodiment of the present disclosure.

FIG. 21 is a schematic diagram of an exemplary secure stream requester identifier (RID) association block according to an embodiment of the present disclosure.

FIG. 22 is a schematic diagram of an exemplary secure stream address association block according to an embodiment of the present disclosure.

FIG. 23 illustrates an exemplary apparatus suitable for practicing various aspects of the present disclosure in accordance with various embodiments.

FIG. 24 illustrates an exemplary computer-readable non-transitory storage medium that may store instructions that, in response to execution by a device, cause the device to practice selected aspects of the present disclosure.

FIG. 25 is a block diagram showing another embodiment of a computing system including a processor according to one or more embodiments.

FIG. 26 is a block diagram of an exemplary computer architecture according to at least one embodiment of the present disclosure.

Detailed description

The present disclosure provides various possible embodiments of systems, methods, architectures, and devices for implementing integrity and data encryption (IDE) for interconnect security (for example, Peripheral Component Interconnect Express (PCIe) encryption). For ease of understanding, the present disclosure is described in the context of an extension of the PCIe protocol for securing a PCIe link between a device or endpoint and a system on chip (SoC). However, the present disclosure is not limited to PCIe systems and may be practiced with, or applicable to, other interconnects.

The following detailed description refers to the accompanying drawings.
In different drawings, the same reference numerals may be used to identify the same or similar elements. In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular structures, architectures, interfaces, and techniques, in order to provide a thorough understanding of various aspects of the embodiments. For example, specific details may include particular types of processor and system configurations, particular hardware structures, particular architectural and microarchitectural details, particular register configurations, particular instruction types, particular system components, particular measurements/heights, particular processor pipeline stages and operations, and so on. It will be apparent, however, to those skilled in the art having the benefit of the present disclosure that aspects of the embodiments may be practiced in other examples that depart from these specific details. In some instances, descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the embodiments with unnecessary detail.

Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the illustrative embodiments; however, the order of description should not be construed as implying that these operations are necessarily order-dependent. In particular, these operations need not be performed in the order of presentation.

The phrase "A and/or B" means (A), (B), or (A and B). The phrases "A/B" and "A or B" mean (A), (B), or (A and B), similar to the phrase "A and/or B". For the purposes of this disclosure, the phrase "at least one of A and B" means at least one (A), at least one (B), or (at least one A and at least one B).
The description may use the phrases "in an embodiment", "in at least one embodiment", "in one or more embodiments", "in some embodiments", and/or "in various embodiments", each of which may refer to one or more of the same or different embodiments. Furthermore, the terms "comprising", "including", "having", and the like, as used with respect to embodiments of the present disclosure, are synonymous. An exemplary embodiment may be described as a process depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figures. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like. When a process corresponds to a function, its termination may correspond to a return of the function to the calling function and/or the main function. An embodiment may be described in the general context of computer-executable instructions (e.g., program code, software modules, and/or functional processes) being executed by one or more of the aforementioned circuits. The program code, software modules, and/or functional processes may include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular data types. The program code, software modules, and/or functional processes discussed herein may be implemented using existing hardware in existing communication networks.
For example, the program code, software modules, and/or functional processes discussed herein may be implemented using existing hardware at existing network elements or control nodes. As used herein, the term "circuit" refers to, is part of, or includes hardware components such as electronic circuits, logic circuits, processors (shared, dedicated, or group) and/or memory (shared, dedicated, or group), application specific integrated circuits (ASICs), field-programmable devices (FPDs) (for example, field-programmable gate arrays (FPGAs), programmable logic devices (PLDs), complex PLDs (CPLDs), high-capacity PLDs (HCPLDs), structured ASICs, or programmable systems on chip (SoCs)), digital signal processors (DSPs), etc., that are configured to provide the described functionality. In some embodiments, the circuit may execute one or more software or firmware programs to provide at least some of the described functionality. As used herein, the term "processor circuit" may refer to, or include, or be part of a circuit capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, and of recording, storing, and/or transferring digital data. The term "processor circuit" may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions such as program code, software modules, and/or functional processes. As used herein, the term "interface circuit" may refer to, or include, or be part of a circuit that provides for the exchange of information between two or more components or devices.
The term "interface circuit" may refer to one or more hardware interfaces (for example, a bus, an input/output (I/O) interface, a peripheral component interface, and/or a network interface card, etc.). As used herein, the terms "instantiate", "instantiation", and the like may refer to the creation of an instance, and an "instance" may refer to a concrete occurrence of an object, which may occur, for example, during execution of program code. As used herein, the term "computer device" may describe any physical hardware device capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, equipped to record/store data on a machine-readable medium, and to transmit and receive data to and from one or more other devices in a communication network. A computer device may be considered substantially synonymous with, and may occasionally be referred to hereinafter as, a computer, a computing platform, a computing device, etc. The term "computer system" may include any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the terms "computer system" and/or "system" may refer to various components of a computer that are communicatively coupled to one another. Furthermore, the terms "computer system" and/or "system" may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled to one another and configured to share computing and/or networking resources. As used herein, the term "user equipment" or "UE" may refer to a device with radio communication capabilities, such as a computer device, and may describe a remote user of network resources in a communication network.
The term "user equipment" or "UE" may be considered synonymous with, and occasionally referred to hereinafter as, the following: client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc. Examples of "computer devices", "computer systems", "UEs", etc. may include cellular phones or smartphones, feature phones, tablet personal computers, wearable computing devices, autonomous sensors, laptop computers, desktop personal computers, video game consoles, digital media players, portable messaging devices, personal digital assistants, e-book readers, augmented reality devices, server computer devices (for example, standalone, rack-mounted, blade, etc.), cloud computing services/systems, network elements, in-vehicle infotainment (IVI) systems, in-car entertainment (ICE) devices, instrument clusters (ICs), head-up display (HUD) devices, on-board diagnostic (OBD) devices, dashtop mobile equipment (DME), mobile data terminals (MDTs), electronic engine management systems (EEMS), electronic/engine control units (ECUs), electronic/engine control modules, embedded systems, microcontrollers, control modules, engine management systems (EMS), networked or "smart" appliances, machine-type communication (MTC) devices, machine-to-machine (M2M) devices, Internet of Things (IoT) devices, and/or any other similar electronic devices. In addition, the term "vehicle-embedded computer device" may refer to any computer device and/or computer system that is physically mounted on, built into, or otherwise embedded in a vehicle. A computing system or platform may use a wide variety of devices coupled to a computer bus. The computer bus may include related hardware components (for example, wires, optical fibers, etc.) and software, including communication protocols.
The Peripheral Component Interconnect (PCI) bus or PCI Express (PCIe, PCI-E) may be a computer bus based on a protocol that provides a mechanism for system software, or system drivers, to perform various operations related to the configuration of a device coupled to the PCI bus or PCIe bus. Devices or components coupled to a computer bus may also be referred to as functions. PCIe may operate in consumer, server, and industrial applications as a motherboard-level interconnect (linking peripherals mounted on the motherboard), as a passive backplane interconnect, and as an expansion card interface for add-in boards. PCIe devices communicate through logical connections called interconnects or links. A link is a point-to-point communication channel between two PCIe ports, allowing both ports to send and receive ordinary PCI requests (for example, configuration, input/output (I/O), or memory read/write) and interrupts. At the physical level, a link may be composed of one or more lanes. Low-speed peripherals (such as 802.11 Wi-Fi cards) use a single-lane (×1) link, while a graphics adapter typically uses a much wider and faster 16-lane link. Although the following embodiments may be described with reference to a secure stream protocol in an integrated circuit such as a computing platform or microprocessor, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings described herein may be applied to other types of circuits or semiconductor devices that may also benefit from a secure stream protocol. For example, the disclosed embodiments are not limited to desktop computer systems or Ultrabooks™, and may also be used in other devices, such as handheld devices, tablet computers, other thin notebooks, system on chip (SoC) devices, and embedded applications.
Some examples of portable devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and portable PCs. Embedded applications typically include microcontrollers, digital signal processors (DSPs), systems on chip, network computers (NetPCs), set-top boxes, network hubs, wide area network (WAN) switches, or any other system capable of performing the functions and operations taught below. In addition, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency. As will become readily apparent in the description below, the embodiments of the methods, apparatuses, and systems described herein (whether with reference to hardware, firmware, software, or a combination thereof) are vital to a "green technology" future balanced with performance considerations. As computing systems advance, their components become more complex. As a result, the complexity of the interconnect architecture used to couple and communicate between the components has also increased, in order to ensure that the bandwidth requirements for optimal component operation are met. Furthermore, different market segments demand different aspects of interconnect architectures to suit the market's needs. For example, servers require higher performance, while the mobile ecosystem is sometimes able to sacrifice overall performance for power savings. Yet, the singular purpose of most fabrics is to provide the highest possible performance with maximum power savings. A number of interconnects are discussed below, which would potentially benefit from aspects of the embodiments discussed herein. One type of interconnect fabric architecture includes the Peripheral Component Interconnect Express (PCIe) architecture.
A primary goal of PCIe is to enable components and devices from different vendors to interoperate in an open architecture, spanning multiple market segments: clients (desktops and mobile), servers (standard, rack-scale, and enterprise), and embedded and communication devices. PCI Express is a high-performance, general-purpose I/O interconnect defined for a wide variety of future computing and communication platforms. Some PCI attributes (such as its usage model, load-store architecture, and software interfaces) have been maintained through its revisions, whereas previous parallel bus implementations have been replaced by a highly scalable, fully serial interface. The more recent versions of PCI Express take advantage of advances in point-to-point interconnects, switch-based technology, and packetized protocols to deliver new levels of performance and features. Power management, quality of service (QoS), hot-plug/hot-swap support, data integrity, and error handling are among some of the advanced features supported by PCI Express. Referring to FIG. 1, an embodiment of a fabric composed of point-to-point links that interconnect a set of components is illustrated. The system 100 includes a processor 105 and a system memory 110 coupled to a controller hub 115. The processor 105 includes any processing element, such as a microprocessor, a host processor, an embedded processor, a coprocessor, or another processor. The processor 105 is coupled to the controller hub 115 through a front-side bus (FSB) 106. In one embodiment, the FSB 106 is a serial point-to-point interconnect as described below. In another embodiment, the link 106 includes a serial, differential interconnect architecture that is compliant with a different interconnect standard. The system memory 110 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in the system 100. The system memory 110 is coupled to the controller hub 115 through a memory interface 116.
Examples of memory interfaces include a double data rate (DDR) memory interface, a dual-channel DDR memory interface, and a dynamic RAM (DRAM) memory interface. In one embodiment, the controller hub 115 is a root hub, root complex, or root controller in a Peripheral Component Interconnect Express (PCIe or PCIE) interconnect hierarchy. Examples of the controller hub 115 include a chipset, a memory controller hub (MCH), a northbridge, an interconnect controller hub (ICH), a southbridge, and a root controller/hub. Often, the term chipset refers to two physically separate controller hubs, i.e., a memory controller hub (MCH) coupled to an interconnect controller hub (ICH). Note that current systems often include the MCH integrated with the processor 105, while the controller hub 115 communicates with I/O devices in a similar manner as described below. In some embodiments, peer-to-peer routing is optionally supported through the root complex 115. Here, the controller hub 115 is coupled to a switch/bridge 120 through a serial link 119. Input/output modules 117 and 121 (which may also be referred to as interfaces/ports 117 and 121) include/implement a layered protocol stack to provide communication between the controller hub 115 and the switch 120. In one embodiment, multiple devices are capable of being coupled to the switch 120. The switch/bridge 120 routes packets/messages from the device 125 upstream (i.e., up the hierarchy toward the root complex) to the controller hub 115, and downstream (i.e., down the hierarchy away from the root controller) from the processor 105 or system memory 110 to the device 125. In one embodiment, the switch 120 is referred to as a logical assembly of multiple virtual PCI-to-PCI bridge devices.
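The upstream/downstream routing decision described above can be sketched as a simple address-range lookup: memory-routed packets whose address falls within a downstream port's assigned window go downstream, and everything else is forwarded upstream toward the root complex. The port names and address ranges below are purely illustrative assumptions, not values taken from the specification.

```python
# Toy sketch of a switch routing decision (e.g., by switch/bridge 120):
# a memory-routed TLP is sent to the downstream port whose address
# window claims the address; unclaimed addresses are routed upstream.
def route(addr, downstream_ranges):
    """Return the downstream port owning `addr`, or 'upstream'."""
    for port, (base, limit) in downstream_ranges.items():
        if base <= addr <= limit:
            return port
    return "upstream"

# Hypothetical address windows assigned to two downstream ports.
ranges = {
    "port1": (0x9000_0000, 0x9FFF_FFFF),
    "port2": (0xA000_0000, 0xAFFF_FFFF),
}

print(route(0x9123_4567, ranges))  # claimed by port1's window
print(route(0x1000_0000, ranges))  # unclaimed: forwarded upstream
```

The same lookup, run in reverse at enumeration time, is how system software programs each bridge's base/limit registers.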
The device 125 includes any internal or external device or component to be coupled to an electronic system, such as an I/O device, a network interface controller (NIC), an add-in card, an audio processor, a network processor, a hard drive, a storage device, a CD/DVD ROM, a monitor, a printer, a mouse, a keyboard, a router, a portable storage device, a Firewire device, a universal serial bus (USB) device, a scanner, and other input/output devices. In PCIe vernacular, such a device is often referred to as an endpoint. Although not specifically shown, the device 125 may include a PCIe-to-PCI/PCI-X bridge to support legacy or other versions of PCI devices. Endpoint devices in PCIe are often classified as legacy, PCIe, or root-complex integrated endpoints. The graphics accelerator 130 is also coupled to the controller hub 115 through a serial link 132. In one embodiment, the graphics accelerator 130 is coupled to an MCH, which is coupled to an ICH. The switch 120, and accordingly the I/O device 125, is then coupled to the ICH. The I/O modules 131 and 118 also implement a layered protocol stack to communicate between the graphics accelerator 130 and the controller hub 115. Similar to the MCH discussion above, a graphics controller or the graphics accelerator 130 itself may be integrated in the processor 105. Turning to FIG. 2, an embodiment of a layered protocol stack is illustrated, which may be implemented in one or more components of a mobile computing device, such as an application processor, a baseband processor, or a modem, among other examples. The layered protocol stack 200 includes logic, implemented in hardware circuitry and/or software, to implement any form of layered communication stack, such as a QuickPath Interconnect (QPI) stack, a PCIe stack, a next-generation high-performance computing interconnect stack, or another layered stack.
Although the following discussion with reference to FIGS. 2-4 is in relation to a PCIe stack, the same concepts may be applied to other interconnect stacks, such as OpenCAPI™, Gen-Z™, UPI, Universal Serial Bus (USB), Cache Coherent Interconnect for Accelerators (CCIX™), Advanced Micro Devices™ (AMD™) Infinity™, Common Communication Interface (CCI), or Qualcomm's Centriq™ interconnect, among others. In one embodiment, the protocol stack 200 is a PCIe protocol stack including a transaction layer 205, a link layer 210 (also referred to herein as a "data link layer"), and a physical layer 220. An interface, such as the interfaces 117, 118, 121, 122, 126, and 131 in FIG. 1, may be represented as the communication protocol stack 200. Representation as a communication protocol stack may also be referred to as a module or interface implementing/including a protocol stack. PCI Express uses packets to communicate information between components. Packets are formed in the transaction layer 205 and the data link layer 210 to carry the information from the transmitting component to the receiving component. As the transmitted packets flow through the other layers, they are extended with additional information necessary to handle the packets at those layers. At the receiving side, the reverse process occurs, and the packets are transformed from their physical layer 220 representation to the data link layer 210 representation, and finally into the form that can be processed by the transaction layer 205 of the receiving device.

Transaction Layer

In one embodiment, the transaction layer 205 provides an interface between a device's processing core and the interconnect architecture (e.g., the data link layer 210 and the physical layer 220). In this regard, a primary responsibility of the transaction layer 205 is the assembly and disassembly of packets (i.e., transaction layer packets, or TLPs).
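The layer-by-layer extension of a packet described above can be sketched in miniature: the data link layer wraps the TLP with a sequence number and an error detection code, and the physical layer adds framing; the receive side unwinds the same steps in reverse. The field sizes, framing bytes, and CRC choice (`zlib.crc32`) below are illustrative stand-ins, not the actual PCIe formats.

```python
# Toy model of a TLP descending and ascending the layered stack.
import struct
import zlib

def link_layer_tx(tlp: bytes, seq: int) -> bytes:
    # Apply a packet sequence identifier (cf. 211) and a CRC (cf. 212).
    body = struct.pack(">H", seq) + tlp
    return body + struct.pack(">I", zlib.crc32(body))

def phy_tx(dllp: bytes) -> bytes:
    # Frame the packet with (toy) start/end framing symbols (cf. 223).
    return b"\x02" + dllp + b"\x03"

def phy_rx(frame: bytes) -> bytes:
    assert frame[0] == 0x02 and frame[-1] == 0x03, "bad framing"
    return frame[1:-1]

def link_layer_rx(dllp: bytes) -> bytes:
    body, crc = dllp[:-4], struct.unpack(">I", dllp[-4:])[0]
    assert zlib.crc32(body) == crc, "link-layer CRC mismatch"
    return body[2:]  # strip the sequence number, yielding the TLP

tlp = b"MRd 0x1000"
assert link_layer_rx(phy_rx(phy_tx(link_layer_tx(tlp, seq=7)))) == tlp
```

The round-trip assertion at the end mirrors the "reverse process" on the receiving side described in the text.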
The transaction layer 205 typically manages credit-based flow control for TLPs. PCIe implements split transactions, i.e., transactions with requests and responses separated by time, allowing a link to carry other traffic while the target device gathers data for the response. In addition, PCIe utilizes credit-based flow control. In this scheme, a device advertises an initial amount of credits for each of the receive buffers in the transaction layer 205. An external device at the opposite end of the link (for example, the controller hub 115 in FIG. 1) counts the number of credits consumed by each TLP. A transaction may be transmitted if the transaction does not exceed a credit limit. Upon receiving a response, an amount of credit is restored. An advantage of the credit scheme is that the latency of credit return does not affect performance, provided that the credit limit is not encountered. In one embodiment, the four transaction address spaces include a memory address space, a configuration address space, a message address space, and an input/output address space. A memory space transaction includes one or more of read requests and write requests to transfer data to/from a memory-mapped location. In one embodiment, memory space transactions are capable of using two different address formats, e.g., a short address format (such as a 32-bit address) and a long address format (such as a 64-bit address). Configuration space transactions are used to access the configuration space of the PCIe devices. Transactions to the configuration space include read requests and write requests. Message space transactions (or, simply, messages) are defined to support in-band communication between PCIe agents. Therefore, in one embodiment, the transaction layer 205 assembles the packet header/payload 206. Current packet header/payload formats may be found in the PCIe specification at the PCIe specification website. Referring quickly to FIG. 3, an embodiment of a PCIe transaction descriptor is illustrated.
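The credit-based flow control scheme described above can be sketched as a small gate: the receiver advertises an initial credit limit, the transmitter counts credits consumed by each TLP and holds transmission when the limit would be exceeded, and credits are restored when the receiver drains its buffer. The credit units are illustrative; real PCIe tracks separate header and data credits per flow control class.

```python
# Minimal sketch of transmitter-side credit accounting.
class CreditGate:
    def __init__(self, advertised: int):
        self.limit = advertised  # credits advertised by the receiver
        self.consumed = 0        # credits consumed by transmitted TLPs

    def try_send(self, cost: int) -> bool:
        if self.consumed + cost > self.limit:
            return False         # would exceed the credit limit: hold the TLP
        self.consumed += cost
        return True

    def restore(self, cost: int) -> None:
        self.consumed -= cost    # receiver freed buffer space: credit returned

gate = CreditGate(advertised=4)
assert gate.try_send(3)          # fits within the advertised credits
assert not gate.try_send(2)      # 3 + 2 would exceed the limit of 4
gate.restore(3)
assert gate.try_send(2)          # credit restored, transmission resumes
```

Note how the latency of `restore` never stalls the transmitter as long as the limit is not reached, which is the advantage the text attributes to the credit scheme.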
In one embodiment, the transaction descriptor 300 is a mechanism for carrying transaction information. In this regard, the transaction descriptor 300 supports the identification of transactions in a system. Other potential uses include tracking modifications of default transaction ordering and the association of a transaction with channels. The transaction descriptor 300 includes a global identifier field 302, an attributes field 304, and a channel identifier field 306. In the illustrated example, the global identifier field 302 is depicted as comprising a local transaction identifier field 308 and a source identifier field 310. In one embodiment, the global identifier field 302 is unique for all outstanding requests. According to one implementation, the local transaction identifier field 308 is a field generated by a requesting agent, and it is unique for all outstanding requests that require a completion for that requesting agent. Furthermore, in this example, the source identifier 310 uniquely identifies the requesting agent within a PCIe hierarchy. Accordingly, together with the source ID 310, the local transaction identifier field 308 provides global identification of a transaction within the hierarchy domain. The attributes field 304 specifies characteristics and relationships of the transaction. In this regard, the attributes field 304 is potentially used to provide additional information that allows modification of the default handling of transactions. In one embodiment, the attributes field 304 includes a priority field 312, a reserved field 314, an ordering field 316, and a no-snoop field 318. Here, the priority subfield 312 may be modified by an initiator to assign a priority to the transaction. The reserved attribute field 314 is left reserved for future use or vendor-defined usage.
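The descriptor fields introduced so far can be modeled as a small record whose global identifier is the concatenation of the source ID and the local transaction ID. The bit widths used for packing below (an 8-bit local transaction ID and a 16-bit source ID) are illustrative assumptions for the sketch, not values taken from the figure.

```python
# Sketch of the transaction descriptor 300 and its global identifier.
from dataclasses import dataclass

@dataclass
class TransactionDescriptor:
    local_txn_id: int   # field 308: unique per outstanding request of the agent
    source_id: int      # field 310: identifies the requesting agent in the hierarchy
    priority: int       # attribute subfield 312
    ordering: int       # attribute subfield 316 (0 = default rules, 1 = relaxed)
    no_snoop: bool      # attribute subfield 318
    channel_id: int     # field 306: channel the transaction is associated with

    def global_id(self) -> int:
        # Global identifier field 302: source ID together with the local
        # transaction ID yields a hierarchy-wide unique identification.
        return (self.source_id << 8) | self.local_txn_id

d = TransactionDescriptor(local_txn_id=0x2A, source_id=0x0100,
                          priority=0, ordering=1, no_snoop=False,
                          channel_id=0)
assert d.global_id() == 0x01002A
```

Two agents may reuse the same local transaction ID without ambiguity, because the source ID half of the global identifier differs.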
Possible usage models using priority or security attributes may be implemented using the reserved attribute field. In this example, the ordering attribute field 316 is used to supply optional information conveying the type of ordering that may modify the default ordering rules. According to one exemplary implementation, an ordering attribute of "0" denotes that default ordering rules are to apply, whereas an ordering attribute of "1" denotes relaxed ordering, wherein writes can pass writes in the same direction, and read completions can pass writes in the same direction. The no-snoop attribute field 318 is utilized to determine whether transactions are snooped. As shown, the channel ID field 306 identifies the channel that a transaction is associated with.

Link Layer

Referring to FIG. 2, the link layer 210 (also referred to as the data link layer 210) acts as an intermediate stage between the transaction layer 205 and the physical layer 220. In one embodiment, a responsibility of the data link layer 210 is to provide a reliable mechanism for exchanging transaction layer packets (TLPs) between the two components of a link. One side of the data link layer 210 accepts TLPs assembled by the transaction layer 205, applies a packet sequence identifier 211 (i.e., an identification number or packet number), calculates and applies an error detection code (i.e., CRC 212), and submits the modified TLPs to the physical layer 220 for transmission across a physical link to an external device.

Physical Layer

In one embodiment, the physical layer 220 includes a logical sub-block 221 and an electrical sub-block 222 to physically transmit a packet to an external device. Here, the logical sub-block 221 is responsible for the "digital" functions of the physical layer 220. In this regard, the logical sub-block includes a transmit section and a receiver section.
The former prepares outgoing information for transmission by the electrical sub-block 222, and the latter identifies and prepares received information before passing it to the link layer 210. The physical layer 220 includes a transmitter and a receiver. The transmitter is supplied with symbols by the logical sub-block 221, which the transmitter serializes and transmits to the external device. The receiver is supplied with serialized symbols from the external device and transforms the received signals into a bit-stream. The bit-stream is de-serialized and supplied to the logical sub-block 221. In one embodiment, an 8b/10b transmission code is employed, wherein ten-bit symbols are transmitted/received. In other embodiments, 128b/130b transmission coding is used, among other examples. Here, special symbols are used to frame a packet with frames 223. In addition, in one example, the receiver also provides a symbol clock recovered from the incoming serial stream. As stated above, although the transaction layer 205, the link layer 210, and the physical layer 220 are discussed in reference to a specific embodiment of a PCIe protocol stack, a layered protocol stack is not so limited. In fact, any layered protocol may be included/implemented. As an example, a port/interface that is represented as a layered protocol includes: (1) a first layer to assemble packets, i.e., a transaction layer; a second layer to sequence packets, i.e., a link layer; and a third layer to transmit the packets, i.e., a physical layer. As a specific example, a common standard interface (CSI) layered protocol is utilized. Referring next to FIG. 4, an embodiment of a PCIe serial point-to-point fabric is illustrated. Although an embodiment of a PCIe serial point-to-point link is illustrated, a serial point-to-point link is not so limited, as it includes any transmission path for transmitting serial data.
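A useful property of the 8b/10b code mentioned above is DC balance: every valid ten-bit symbol carries a ones-minus-zeros disparity of 0, +2, or -2, and the encoder tracks a running disparity across symbols. The checker below tests only that disparity property for a candidate symbol; it is a toy illustration of why ten bits are spent on eight bits of data, not an 8b/10b encoder.

```python
# Toy disparity check for candidate ten-bit 8b/10b symbols.
def disparity(symbol: int) -> int:
    # Ones minus zeros across the low ten bits of the symbol.
    ones = bin(symbol & 0x3FF).count("1")
    return ones - (10 - ones)

def plausible_8b10b(symbol: int) -> bool:
    # Valid code words are DC-balanced to within one bit pair.
    return disparity(symbol) in (-2, 0, 2)

assert plausible_8b10b(0b1010101010)      # 5 ones, 5 zeros: disparity 0
assert plausible_8b10b(0b1110101010)      # 6 ones, 4 zeros: disparity +2
assert not plausible_8b10b(0b1111111000)  # 7 ones, 3 zeros: disparity +4
```

Bounded disparity keeps the serial line free of long-term DC bias and guarantees enough transitions for the receiver's clock recovery, which is why the text notes that the symbol clock is recovered from the incoming serial stream.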
In the embodiment shown, a basic PCIe link includes two low-voltage, differentially driven signal pairs: a transmit pair 406/411 and a receive pair 412/407. Accordingly, the device 405 includes transmission logic 406 to transmit data to the device 410 and receiving logic 407 to receive data from the device 410. In other words, two transmitting paths (i.e., paths 416 and 417) and two receiving paths (i.e., paths 418 and 419) are included in a PCIe link. A transmission path refers to any path for transmitting data, such as a transmission line, a copper line, an optical line, a wireless communication channel, an infrared communication link, or another communication path. A connection between two devices (such as the device 405 and the device 410) is referred to as a link, such as the link 415. A link may support one lane, each lane representing a set of differential signal pairs (one pair for transmission, one pair for reception). To scale bandwidth, a link may aggregate multiple lanes denoted by xN, where N is any supported link width, such as 1, 2, 4, 8, 12, 16, 32, 64, or wider. A differential pair refers to two transmission paths, such as the lines 416 and 417, used to transmit differential signals. As an example, when the line 416 toggles from a low voltage level to a high voltage level (i.e., a rising edge), the line 417 is driven from a high logic level to a low logic level (i.e., a falling edge). Differential signals potentially demonstrate better electrical characteristics, such as better signal integrity, i.e., cross-coupling, voltage overshoot/undershoot, ringing, etc. This allows for a better timing window, which enables faster transmission frequencies. Integrity and Data Encryption (IDE) provides confidentiality, integrity, and replay protection for TLPs. IDE flexibly supports a variety of usage models, while providing for broad interoperability. The cryptographic mechanisms are aligned with current industry best practices.
For example, AES-CTR 256b and GMAC 96b may be used for encryption and integrity, respectively; however, the implementation of the cryptographic mechanisms can be extended as security requirements evolve. The security model considers threats posed by physical attacks on a link. Physical attacks include cases in which an adversary, using lab equipment, special-purpose interposers, or malicious extension devices, may examine data intended to be confidential, modify the contents of TLPs, and reorder and/or delete TLPs. TLP traffic can be secured as it passes through switches, extending the security model to address threats posed by reprogramming of switch routing mechanisms or by a malicious switch. IDE can be used to secure traffic within a trusted execution environment composed of multiple components; the framework for such composition is outside the scope of IDE. Components designed to operate in an interoperable manner based on this protocol support authentication and key exchange using Component Measurement and Authentication (CMA) via Data Object Exchange (DOE), but where interoperability is not required, IDE explicitly permits the use of component-specific or platform-specific mechanisms. By using CMA via DOE, system firmware/software can establish secure connections with components without device-specific knowledge. IDE establishes a secure flow between two ports. Secure flows are described in greater detail below. When there is no switch between the ports, it is possible to secure all (or selected) TLP traffic on the link. For cases with and without switches between the ports, it is possible to secure selected TLP traffic. IDE establishes three TLP sub-flows corresponding to the three flow control classes: posted requests, non-posted requests, and completions (in each flow direction).
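The three guarantees named above can be modeled in miniature: confidentiality via a counter-mode keystream, integrity via a keyed tag over the counter and ciphertext, and replay protection via a monotonically increasing counter checked at the receiver. The primitives below (a SHA-256 keystream and a truncated HMAC tag) are standard-library stand-ins for the AES-CTR and GMAC mechanisms named in the text; real IDE uses those primitives, not this sketch, and the key is an illustrative constant.

```python
# Toy model of per-TLP confidentiality, integrity, and replay protection.
import hashlib
import hmac

KEY = b"\x01" * 32  # illustrative per-stream key (never a constant in practice)

def keystream(counter: int, n: int) -> bytes:
    # Counter-mode keystream derived from the key and the TLP counter.
    out, block = b"", 0
    while len(out) < n:
        out += hashlib.sha256(KEY + counter.to_bytes(8, "big")
                              + block.to_bytes(4, "big")).digest()
        block += 1
    return out[:n]

def protect(tlp: bytes, counter: int):
    ct = bytes(a ^ b for a, b in zip(tlp, keystream(counter, len(tlp))))
    tag = hmac.new(KEY, counter.to_bytes(8, "big") + ct,
                   hashlib.sha256).digest()[:12]  # 96-bit tag, as with GMAC
    return ct, tag

def verify(ct: bytes, tag: bytes, counter: int, last_seen: int) -> bytes:
    if counter <= last_seen:
        raise ValueError("replayed or reordered TLP")       # replay protection
    expect = hmac.new(KEY, counter.to_bytes(8, "big") + ct,
                      hashlib.sha256).digest()[:12]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("integrity check failure")         # cf. FIG. 20
    return bytes(a ^ b for a, b in zip(ct, keystream(counter, len(ct))))

ct, tag = protect(b"MWr payload", counter=5)
assert verify(ct, tag, counter=5, last_seen=4) == b"MWr payload"
```

The monotone-counter check also illustrates why switch reordering must be constrained within each sub-flow: an out-of-order but otherwise valid TLP is indistinguishable from a replay at the receiver.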
Within each of these sub-flows, switch reordering is constrained so that TLPs remain in order between the two ports.

Secure Flow

In order to illustrate some example techniques for using a secure stream protocol for serial interconnects according to embodiments disclosed herein, it is important to understand the activities that may occur in a system in which link encryption is used in a trust domain environment. Accordingly, the following foundational information may be viewed as a basis from which the present disclosure may be properly explained. Some new CPU capabilities include trust domains, which provide a virtual computing environment in which the hypervisor (or virtual machine manager (VMM)) is removed from the trusted computing base (TCB) for the virtual machines managed by the trusted computing base. A virtual machine in such a trust domain can protect the confidentiality of its memory contents and its runtime central processing unit (CPU) state from other software, including the host VMM, unless the software is explicitly shared by the trust domain virtual machine itself. For example, the memory may also be protected from the VMM and other trust domains by using an encrypting memory controller. Generally, a trust domain does not allow devices connected via a serial interconnect interface to access memory protected by the trust domain. However, such connected devices often require access to protected data to perform their intended functions. An example use case of a virtual computing environment with the hypervisor removed from the trusted computing base for the virtual machines it manages includes a cloud service provider that hosts the workloads of many tenant virtual machines (VMs). Both the cloud service provider (CSP) and the cloud tenants may desire confidentiality for VM workloads.
The tenant VM may not trust the VMM or any other software in the cloud data center. Running a trust domain VM, for which the hypervisor has been removed from the trust boundary, therefore ensures that the VM cannot be attacked through operations of the VMM or through other forms of data center access by malicious users. To realize such secure VM operation, the memory and runtime CPU state must be kept confidential and integrity protected to prevent data leakage or tampering attacks. Newer CPU security capabilities can meet these security goals by using a memory controller that provides memory encryption and integrity protection, for example Multi-Key Total Memory Encryption (MK-TME). A trust domain (TD) is a type of virtual machine guest that prevents such attacks by running in a central processing unit (CPU) mode that protects the confidentiality of its memory contents and runtime CPU state from any other software, including the host VMM, unless that software is explicitly shared by the trust domain itself. The memory and runtime CPU state are isolated, making the memory opaque and generally unmodifiable; if any changes do occur, those changes can be detected. However, devices connected to the server platform in the cloud are not trusted and therefore cannot access the memory of a trust domain. For a device connected to a server platform via a serial interconnect interface (for example, Peripheral Component Interconnect Express (PCIe)), enabling the device to be directly assigned to TD memory requires that the data flowing between the TD and the device over the PCIe link be secured, to provide confidentiality, integrity, and replay protection for the data. Specifically, to allow a device to directly access memory, the TD needs: 1) the ability to establish trust in the device so that the device can be recognized as a trusted entity, and 2) the ability to secure the connection between the server and the device.
That is, the ability to keep the data flowing on the link safe; and 3) the ability to enforce the producer-consumer ordering rules for transactions. As shown in FIGS. 5A and 5B, either a hop-by-hop scheme (FIG. 5A) or an end-to-end scheme (FIG. 5B) can be used to encrypt transactions in PCIe. FIGS. 5A and 5B show the distinction between hop-by-hop encryption and end-to-end encryption in an interconnect architecture that includes exemplary devices 530 and 532 connected to a PCIe switch 520 via links 522 and 524, the PCIe switch 520 in turn being connected to a system-on-chip (SoC) 510 via a link 512. In FIG. 5A, the hop-by-hop scheme uses a different key pair for each link to achieve encryption at each sending port and decryption at each receiving port. Keys 501A and 501B serve as the key pair for link 512, keys 503A and 503B as the key pair for link 522, and keys 505A and 505B as the key pair for link 524. Consequently, data flowing through a hop-by-hop network with one or more intermediate devices (e.g., PCIe switch 520) is encrypted and decrypted several times before it reaches its destination. In the end-to-end scheme shown in FIG. 5B, a different key pair is supplied for each end-to-end connection 507 and 509, and keys are provided only at the initiator and target devices. For example, keys 506A and 506B serve as the key pair for end-to-end connection 507, and keys 508A and 508B as the key pair for end-to-end connection 509. The initiating device encrypts the data to be sent to the target device, the target device decrypts the data received from the initiating device, and any intermediate device simply routes the encrypted transaction. For example, when the SoC 510 transmits data to the device 530, the SoC 510 is the initiating device and the device 530 is the target device; conversely, when the device 530 transmits data to the SoC 510, the device 530 is the initiating device and the SoC 510 is the target device.FIG.
6A is a block diagram showing an exemplary connection system illustrating secure streams and a secure link according to an embodiment of the present disclosure. FIG. 6B is a block diagram showing a system 600b having slots for a secure stream protocol according to at least one embodiment. The interconnect architecture includes an initiator device 640a and a target device 640b. As described earlier herein, a layered protocol stack includes logic implemented in hardware circuitry and/or software to implement any form of layered communication stack, for example a QuickPath Interconnect (QPI) stack, a PCIe stack, a next-generation high-performance computing interconnect stack, or another layered stack. To facilitate discussion, FIG. 6B and subsequent drawings in this document are described mainly in connection with the PCIe stack, although similar principles may be applied to other interconnect stacks. In at least one embodiment, both the initiator device 640a and the target device 640b include a PCIe stack, for example the PCIe protocol stack 200 described with reference to FIG. 2. The PCIe stack in the initiator 640a includes a transaction layer 660a, a link layer 670a, and a physical layer 680a. The PCIe stack in the target device 640b includes a transaction layer 660b, a link layer 670b, and a physical layer 680b. In at least one embodiment, the initiating device 640a and the target device 640b can secure transaction layer packets according to the secure stream protocol. Transaction layer packets are also referred to herein as "packets" and "TLPs". As shown in FIG. 6B, the slots where secure stream processing occurs include a secure stream TLP insertion point (STX) between the transaction layer 660a and the link layer 670a of the initiator 640a, and a secure stream TLP detection point (SRX) between the transaction layer 660b and the link layer 670b of the target device 640b. In various embodiments, under the secure stream protocol (also referred to herein as "SEC-STREAM"), each transaction type (i.e., posted, non-posted, and completion) can be treated as a separate protected stream or secure sub-stream. As used herein, "protected stream" or "secure stream" is intended to mean one or more transactions of a specific transaction type (or combination of transaction types) that are given confidentiality, integrity, and replay protection on the basis of that transaction type (or combination of transaction types). The data payload of a transaction is protected for confidentiality, integrity, and replay; the metadata of the transaction (for example, the TLP secure stream prefix and the TLP header) is protected for integrity and replay. In various embodiments, an Advanced Encryption Standard Galois/Counter Mode (AES-GCM) construction with a 96-bit counter and a 96-bit message authentication code (MAC) may be used to cryptographically secure the traffic. It should be noted, however, that this scheme works equally well with similar types of security schemes and is not limited to these specific details. For example, other cryptographic constructions that provide replay protection and integrity protection can be used as alternatives; AES-CTR encryption with GMAC (and aggregated GMAC) can also be used, and in other embodiments larger AES-GCM configurations and/or larger MACs may be used. In terms of operation, in at least one embodiment, a packet is formed in the transaction layer 660a of the initiating device 640a. The packet may include, but is not necessarily limited to, a header with routing information and payload data to be communicated to the target device.
At the IDE TLP encoder 662 and the encryption engine 664, before the packet is passed to the link layer 670a, the data in the packet is encrypted, a TLP secure stream prefix can be generated and inserted into the packet, and an integrity code value (ICV), such as a MAC, can be computed over information about the packet (for example, the prefix, header, and data) and appended to the packet. In some embodiments, IDE information may be included in the TLP header. The packet is passed to the physical layer 680a and sent across the link 690. After the link layer 670b in the target device 640b processes the received transaction, the IDE TLP decoder 668 and the decryption engine 666 decrypt the data in the packet, verify the ICV, and strip the TLP secure stream prefix from the packet before further processing by the transaction layer 660b. The hardware and/or software used to perform secure stream processing at the IDE TLP encoder logic 662 and the IDE TLP decoder logic 668 can be integrated into the respective transaction layers 660a and 660b, or can be implemented separately as a sub-layer between the transaction layers 660a, 660b and the link layers 670a, 670b. The present disclosure defines two operating modes for the secure stream protocol to address issues related to loose ordering and read replay that may otherwise occur: restricted ordering mode (ROM) and explicit counter mode (ECM). FIG. 6C is a simplified block diagram of a PCIe interconnect architecture 600c including a root complex 602 connected to an endpoint 622 via a PCIe switch 612. The root complex 602 includes a root port 604 at which encryption or decryption 606 of TLP packets in an end-to-end (selective) secure stream is performed (depending on whether the root complex is a sender or a receiver).
The endpoint 622 includes an upstream port 624 at which encryption or decryption 626 of TLP packets in an end-to-end (selective) secure stream is performed (depending on whether the endpoint is a sender or a receiver). The PCIe switch 612 includes an upstream port 614 connected to the root port 604 via a link 603 and a downstream port 616 connected to the upstream port 624 of the endpoint 622 via a link 603. In the secure stream, transactions are not encrypted or decrypted at the PCIe switch 612; instead, the PCIe switch 612 uses the header data in the TLP packet to route the transaction. When the root port 604 is the initiating device, it may encrypt the data payload of the packet, and when the root port 604 is the target device, it may decrypt the data payload of the packet. Similarly, when the endpoint 622 is the initiating device, it may encrypt the data payload of the packet, and when the endpoint 622 is the target device, it may decrypt the data payload of the packet. In an embodiment, the IDE TLP encoder logic may reside at the transaction layer 660a so as to encode transaction layer packets with the TLP prefix for IDE and to provide integrity protection. Similarly, the encryption engine can reside in the transaction layer 660a to encrypt TLP data. The encryption engine and the IDE TLP encoder logic may comprise hardware circuitry and, in some embodiments, may reside in the same logic unit. FIG. 7 shows a secure stream state machine according to various embodiments. Before a secure stream is used, the operating parameters can be configured (if non-default values are to be used) and the key exchange can be completed, at which point the port is in the Ready_Insecure state.
Some or all of this configuration is allowed to be done inside the component. When the newly established secure stream is used to send or receive a secure TLP, the port transitions from Ready_Insecure to Secure. When in Secure or (if supported) Key_Refresh, if any integrity (MAC) check fails, the port transitions to Fail_Insecure. Detailed requirements for error handling are given later in this section. The secure stream association registers associated with a stream can be programmed, and they are permitted to be modified while the secure stream is in use. Modification of the value of a secure stream association register must not affect TLP transmission/reception in progress using an unrelated stream. If TLP transmission/reception is in progress using the stream whose secure stream association register is being modified, the hardware behavior is undefined; it is strongly recommended that software ensure such modifications are not made. The IDE On bit in the IDE control register is set, if it is not already set. Subsequent TLP traffic selected according to the secure stream association registers can then be securely processed. Key refresh (if needed) can be managed by system firmware/software, the specific details of which are outside the scope of this specification. For a given TLP, if a secure link (the secure stream with ID 0) is established and one or more selective secure streams are also established, the association between the TLP and a selective secure stream takes precedence, and all TLPs not associated with any selective secure stream are associated with the secure link. For the established secure streams, Table 1 defines which TLP types are allowed and how they are associated with each secure stream. In an embodiment, a selective secure stream may be a secure stream that allows the sender to selectively apply the IDE to blocks of data.
Examples of selective secure streams may include streams traversing a switch complex; examples of link streams may include links that do not need to traverse a switch complex. The determination of whether a TLP will traverse a selective secure stream or a link stream may be made based on destination information, for example address information for a memory write, or a destination ID for a completion.

Table 1 - TLP Types for Secure Streams

FIG. 8 shows a secure TLP diagram 800 according to various embodiments. Integrity protection and data encryption are enabled and configured per stream and apply to the TLPs associated with that stream; such a TLP is called a secure TLP. Encryption, when enabled, applies only to the data payload 810 (if present) and the ECRC (if present). TLP integrity, when enabled, covers all TLP content associated with the stream, and the message authentication code (also known as the MAC or integrity check value) 818 applies either to every TLP on a per-TLP basis, according to the selected operating mode, or to selected TLPs that include a cumulative MAC covering all TLP content transmitted since the previous TLP that included a MAC. All secure TLPs must use the secure TLP prefix 816. As shown in the figure, the transaction layer logic prepends the secure TLP prefix 816 in front of any other prefix 814 or the packet header 812. The present disclosure defines a new TLP secure stream prefix for the TLP, to indicate whether the TLP is part of a trusted IO session and to convey other secure stream information. FIG. 9 shows a secure TLP prefix 900 according to various embodiments. If a request is issued by a trusted entity and will be consumed by another trusted entity, the TLP can be part of a trusted IO session. Generally, trusted entities are part of a trust domain. Both the initiating device and the target device can be provided with a trusted entity and an untrusted entity.
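As a rough illustration of how such a prefix might be packed and parsed, the sketch below models a prefix carrying a secure stream number, a sub-stream code, the L/T/M bits, and the PR_Sent_Counter field (all of which are detailed later in this description). The byte layout chosen here is purely hypothetical and for illustration only; the normative field layout is the one given in Table 2 and FIG. 9, which is not reproduced here.

```python
import struct
from dataclasses import dataclass


@dataclass
class SecureTlpPrefix:
    """Illustrative (non-normative) model of a secure TLP prefix."""
    stream_id: int        # secure stream number (0 = secure link)
    substream: int        # 4-bit sub-stream code (e.g., 0000b = posted, PortX -> PortY)
    t_bit: bool           # TLP originates inside a trusted execution environment
    l_bit: bool           # last TLP in this sub-stream under the current key set
    m_bit: bool           # TLP carries a MAC
    pr_sent_counter: int  # 16-bit posted-request counter (non-posted/completions only)

    def pack(self) -> bytes:
        # Hypothetical layout: [substream|flags] [stream_id] [pr_sent_counter:16]
        flags = (self.t_bit << 2) | (self.l_bit << 1) | int(self.m_bit)
        return struct.pack(">BBH", (self.substream << 4) | flags,
                           self.stream_id & 0xFF, self.pr_sent_counter & 0xFFFF)

    @classmethod
    def unpack(cls, raw: bytes) -> "SecureTlpPrefix":
        b0, sid, ctr = struct.unpack(">BBH", raw)
        return cls(sid, b0 >> 4, bool(b0 & 4), bool(b0 & 2), bool(b0 & 1), ctr)
```

A receiver would parse this prefix before decryption, since the prefix itself is integrity-protected but transmitted in the clear so that switches can route the TLP.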
The presence of secure stream information (for example, in the prefix of the TLP, or otherwise stored in the TLP) indicates that the TLP is secured within a secure stream providing confidentiality (for example, encrypted data), integrity protection (for example, an integrity code value over the encrypted data, the secure stream information, and the TLP header), and replay protection (for example, encryption/decryption counters). It should be noted that the TLP secure stream prefix is used to ease adding this capability to existing implementations; in other variants, the TLP header could be modified, or an additional "security layer" could be added to carry the secure stream information in transactions. In an alternative embodiment, some or all of the secure stream information carried in the secure stream prefix may be embedded in the payload of the packet. Although the prefix scheme is described in detail below, it should be understood that any way of conveying the required secure stream information can provide equivalent results, albeit with different implementation and/or bandwidth overhead trade-offs. The encrypted payload is thus opaque to intermediate switches: an intermediate switch can use the metadata for buffer management and routing, but any tampering or replay can be detected. Table 2 provides an exemplary embodiment of the secure TLP prefix.

Table 2 - Secure TLP Prefix

In some embodiments, the content described above as being added to the secure TLP prefix may instead be added to the TLP header of the packet, in which case the secure TLP prefix may be omitted. FIG. 10 is a process flow diagram for forming a secure transaction layer packet for transmission across a secure stream according to an embodiment of the present disclosure. The initiation of a secure stream involves multiple steps, although some of these steps can be combined or performed in a different order than described herein.
An exemplary first step is to establish the authenticity and identity of the components containing the two ports that will be the ends of the secure stream. The second step is to provision the keys; this can be done as part of the same exchange used to establish the authenticity and identity of the components, or through any other mechanism. Third, the secure connection must be configured. Finally, the establishment of the secure connection is triggered. At the beginning, the sending device can determine that a packet is to be sent to the receiving device using the secure stream (1002). This determination may first be made by observing the capability of the two devices to support the IDE for packet transmission; user settings, priority settings, data types, the types of connected devices, or other factors can also shape the determination to send data across a secure stream. For an implementation of key-based authentication using the CMA and DOE techniques: the association between the ports to be connected via the secure stream to be established is precisely defined. For a secure link (as opposed to a selective secure stream), the two ports must have no switch between them, and for the upstream port, function 0 must be used for the purposes of establishing the authenticity and identity of the associated components, key exchange, and configuration and management of the secure link. For a selective secure stream, there is no mandated module for establishing the authenticity and identity of the associated components, key exchange, and configuration and management of the secure stream. For a CMA/DOE implementation, keys are exchanged in a cryptographically secure manner via the defined CMA/DOE mechanisms; for other embodiments, the key exchange is likewise performed in a secure manner according to the selected authentication mechanism. The TLP can be formed using the payload data (if any) and any headers required to send and route the TLP across the link (1004).
The TLP prefix can be generated and prepended to the TLP as described herein (1006). To form a secure TLP, the TLP is associated with a secure stream, either a selective secure stream or the secure link (1008). This association can be made by selecting on the requester ID and/or by configuring the TLP addresses associated with the secure stream; other association techniques are described in more detail below. The data payload can then be encrypted (1010); if data encryption is to be performed, the data can be encrypted using, for example, AES-CTR encryption. Integrity protection can also be applied to the TLP (1012), for example using GMAC; further details are described herein. Once the secure TLP is formed and a secure stream is established, the secure TLP can be sent. The secure TLP is protected by the data link layer mechanisms, so that physical link errors are detected and corrected before the received TLP is presented to the receiver's cryptographic processing mechanism.
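The flow of FIG. 10 (steps 1002-1012) can be sketched end to end as follows. This is a stdlib-only illustration, not the real mechanism: the actual scheme uses AES-CTR for encryption and GMAC for integrity, which are stood in for here by a SHA-256-derived XOR keystream and HMAC-SHA256, respectively. Only the structural points carry over: the payload alone is encrypted, while the prefix and header are sent in the clear but covered by the integrity code.

```python
import hashlib
import hmac


def _keystream(key: bytes, counter: int, n: int) -> bytes:
    # Stand-in for AES-CTR: derive n keystream bytes from (key, counter) via SHA-256.
    out, block = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(16, "big")
                              + block.to_bytes(4, "big")).digest()
        block += 1
    return out[:n]


def encrypt_payload(enc_key: bytes, counter: int, payload: bytes) -> bytes:
    ks = _keystream(enc_key, counter, len(payload))
    return bytes(a ^ b for a, b in zip(payload, ks))


def form_secure_tlp(enc_key, mac_key, counter, prefix, header, payload):
    # 1) encrypt only the data payload (prefix and header stay in the clear),
    # 2) compute the integrity code over prefix + header + ciphertext, so the
    #    metadata is integrity-protected but routable, 3) append the MAC.
    ct = encrypt_payload(enc_key, counter, payload)
    mac = hmac.new(mac_key, prefix + header + ct, hashlib.sha256).digest()
    return prefix + header + ct + mac


def receive_secure_tlp(enc_key, mac_key, counter, tlp, prefix_len, header_len):
    body, mac = tlp[:-32], tlp[-32:]
    if not hmac.compare_digest(mac, hmac.new(mac_key, body, hashlib.sha256).digest()):
        raise ValueError("MAC check failed")  # -> Fail_Insecure in the state machine
    ct = body[prefix_len + header_len:]
    # An XOR stream cipher is its own inverse, mirroring counter-mode decryption.
    return encrypt_payload(enc_key, counter, ct)
```

In hardware, the same counter value must be derived independently at both ports (see the counter-block rules below) rather than transmitted, which is what gives the scheme its replay protection.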
When integrity is enabled, all transaction layer content is integrity protected, and when encryption is enabled, all TLP data payloads (and the ECRC, if any) are encrypted. The IDE can use AES-CTR encryption as defined in, for example, NIST Special Publication 800-38A, and GMAC integrity protection as defined in, for example, NIST Special Publication 800-38D, with these additional rules:
a) The key size can be 256 bits.
b) Key generation and provisioning are done outside the IDE, and the resulting keys can be provided to the IDE hardware via implementation-specific techniques.
b.1) After this process, one port is identified as PortX and the other as PortY; for non-peer traffic, the downstream port must be PortX and the upstream port must be PortY; for peer-to-peer traffic, the choice must be made through a channel not defined here.
c) Keys can be associated with key IDs.
c.1) Each key ID can have a unique key.
c.2) The number of key IDs supported is implementation specific.
c.3) Between two ports that use a secure stream for communication, each port must have the same key associated with the secure stream, but the two ports are not required to use the same key ID for that key.
c.4) After the key exchange, an implementation-specific module must be used to provide the keys to the data path in a secure manner.
c.5) The specific requirements for maintaining key security are platform and use-case specific and are not defined here.
d) Different keys are used for AES-CTR encryption and for GMAC integrity.
d.1) A separate key ID association mechanism is provided for this purpose.
e) In the case of a post-encryption TLP, the MAC must operate independently of AES-CTR, and all inputs must be processed as additional authenticated data.
The following set of secure stream rules provides additional details for forming a secure TLP:
a) All secure TLPs must be associated with a secure stream identified via a secure stream number.
a.1) The secure link
must use secure stream number zero, and other secure streams are not allowed to use this number.
b) When only the secure link is enabled, the secure link must be used to secure all TLPs associated with secure streams, and all such TLPs must use the key and counter set established for the secure link.
c) When only selective secure streams are enabled, the secure streams must be used to secure the selected TLPs based on the RID and address association register settings, and the selected TLPs must use the corresponding key and counter set based on the key ID.
d) When both the secure link and one or more selective secure streams are enabled, the selected TLPs must be associated with secure streams based on the RID and address association register settings and must use the corresponding key and counter set based on the key ID, and all other TLPs must use the secure link and the key and counter set established for the secure link. In some embodiments, the stream number can be placed in the prefix.
e) All secure TLPs that are not associated with the secure link must be associated with a secure stream based on information contained in the TLP header.
e.1) For requests, it is permitted to use the address and/or the requester ID to associate a TLP with a specific secure stream.
e.2) For completions, it is permitted to use the completer ID and/or the requester ID to associate a TLP with a specific secure stream.
e.3) A port that supports secure streams must provide a mechanism for distinguishing the TLPs associated with each secure stream.
f) Each port associated with a specific secure stream must have a mechanism by which it knows the RID of the other port associated with that secure stream.
g) Separate VCs must use separate secure streams.
h) Each secure stream comprises sub-streams:
h.1) 0000b - posted request sent by PortX and (ultimately) received by PortY;
h.2) 0001b - non-posted request sent by PortX and (ultimately) received by
PortY;
h.3) 0010b - completion sent by PortX and (ultimately) received by PortY;
h.4) 0011b - posted request sent by PortY and (ultimately) received by PortX;
h.5) 0100b - non-posted request sent by PortY and (ultimately) received by PortX;
h.6) 0101b - completion sent by PortY and (ultimately) received by PortX;
h.7) Values 0110b-0111b are reserved;
h.8) Values 1000b-1111b are allowed to be used for other purposes not defined in this specification.
i) For each sub-stream, there must be two counter blocks, one for AES-CTR and one for GMAC. Each must consist of these fields:
i.1) Bits 127:124 contain a fixed value indicating the sub-stream (encoded according to the definitions above);
i.2) Bits 123:96 are reserved;
i.3) Bits 95:32 contain the value of an LFSR with taps at positions 64, 63, 61, and 60, advanced each time the counter block is consumed;
i.4) Bits 31:0 must be 0000_0001h.
In some embodiments, a single counter block can be used. For each sub-stream, for each [AES-GCM], there must be a 96-bit initialization vector (IV) with a deterministic structure, consisting of the following:
the fixed field in bits 95:64 of the IV, where bits 95:92 contain a fixed value indicating the sub-stream (encoded according to the definitions above) and bits 91:64 are all 0s; and
the invocation field in bits 63:0 of the IV, which contains the value of an LFSR with taps at positions 64, 63, 61, and 60, initialized to the value 0000_0001h and advanced each time an IV is consumed.
j) The secure TLP must have a secure TLP prefix, which must be prepended ahead of all other prefixes on the TLP.
j.1) On the secure link, local TLP prefixes must be included in the integrity check of the TLP.
j.2) For selective secure stream TLPs, local TLP prefixes are not permitted.
k) The secure TLP prefix includes:
k.1) L bit - when set, indicates that this is the last TLP in this sub-stream using the current key set;
k.1.1) The mechanism for establishing a new key set and managing key set transitions is not
defined herein;
k.1.2) After transmitting a TLP with the L bit set, the transmitter must wait at least [500 ns?] before sending another TLP associated with this sub-stream; all subsequent TLPs must use the new key set;
k.1.3) After receiving a TLP with the L bit set, the receiver must switch to the new key set for all subsequent TLPs associated with this sub-stream.
k.2) T bit - when set, indicates that the TLP originates from within a trusted execution environment:
k.2.1) Secure TLPs are allowed to originate from both trusted and untrusted execution environments; the rules for trusted execution environments are [not defined in this document].
l) M bit - when set, indicates that the TLP includes a MAC.
m) PR_Sent_Counter - for non-posted requests and completions, this value must be determined according to the following rules; for posted requests, PR_Sent_Counter is reserved.
The following rules apply to each secure stream:
For the transmitter, two 16-bit counters are maintained: PR_Sent_Counter-NPR and PR_Sent_Counter-CPL.
For each posted-request secure TLP sent that is associated with the secure stream, both counters are incremented.
For each non-posted secure TLP sent that is associated with the secure stream, the PR_Sent_Counter-NPR value is included in the PR_Sent_Counter field of the secure TLP prefix, and PR_Sent_Counter-NPR must then be reset to 0.
When PR_Sent_Counter-NPR exceeds 2^15, an integrity synchronization message can be sent, after which both PR_Sent_Counter-NPR and PR_Sent_Counter-CPL can be reset to 0.
In an embodiment, the integrity synchronization message is allowed to be transmitted at other times for other reasons.
For each completion secure TLP sent that is associated with the secure stream, the PR_Sent_Counter-CPL value must be included in the PR_Sent_Counter field of the secure TLP prefix, and PR_Sent_Counter-CPL must then be reset to 0.
When PR_Sent_Counter-CPL exceeds 2^15, an integrity synchronization message can be sent, after which both PR_Sent_Counter-NPR and PR_Sent_Counter-CPL must be reset to 0.
For the receiver, two 16-bit counters must be maintained: PR_Received_Counter-NPR and PR_Received_Counter-CPL.
For each posted-request secure TLP received that is associated with the secure stream, both counters are incremented.
When a non-posted request is received, the PR_Sent_Counter value carried in the secure TLP prefix can be subtracted from PR_Received_Counter-NPR, and the result used to update PR_Received_Counter-NPR.
When a completion is received, the PR_Sent_Counter value carried in the secure TLP prefix can be subtracted from PR_Received_Counter-CPL, and the result used to update PR_Received_Counter-CPL.
When an integrity synchronization message is received:
the PR_Sent_Counter-NPR value carried in the secure stream synchronization message must be subtracted from PR_Received_Counter-NPR, and the result used to update PR_Received_Counter-NPR; and
the PR_Sent_Counter-CPL value carried in the secure stream synchronization message must be subtracted from PR_Received_Counter-CPL, and the result used to update PR_Received_Counter-CPL.
When subtracting PR_Sent_Counter from a received TLP or from an integrity synchronization message, if either or both of PR_Received_Counter-NPR or PR_Received_Counter-CPL underflows, this indicates that an illegal TLP reordering has occurred.
This is a reported error associated with the receiving port.
When per-TLP GMAC is enabled, integrity must be applied to every TLP associated with the secure stream: the GMAC must be computed over all TLP content as it stands immediately after data encryption (if enabled), excluding the MAC value itself.
When aggregated GMAC is enabled, whenever triggered by a write of the trigger-integrity-check bit associated with the secure stream, and whenever selected by the sender via an implementation-specific mechanism, integrity must be applied to the selected TLPs associated with the secure stream.
For the first TLP to include a MAC, the GMAC value must be computed over all TLP content, as it stands immediately after data encryption (if enabled), of all TLPs sent since the establishment of the secure stream and associated with the secure stream, excluding the MAC value itself.
For each subsequent TLP to include a MAC, the GMAC must be computed over all TLP content, as it stands immediately after data encryption (if enabled), of all TLPs sent since the last TLP that included a MAC and associated with the secure stream, including the TLP currently being sent with the MAC, but excluding the MAC value itself.
When the integrity mode field is programmed to a supported value, an integrity check must be performed at the receiver for all TLPs that include a MAC.
Notably, the integrity check can occur after the checking and confirmation of the LCRC.
The following are the defined errors associated with a secure stream:
MAC check failed - the receiver's check of the MAC of a received TLP failed;
PR_Received_Counter-NPR/PR_Received_Counter-CPL underflow - indicates that improper reordering has been detected;
PR_Received_Counter-NPR/PR_Received_Counter-CPL overflow - indicates failure to receive a required NPR.
If one or more of these conditions is detected, the secure stream state machine for the affected secure stream must enter Fail_Insecure.
Receiving a completion with UR or UC status is not a security error and does not by itself necessarily trigger a transition to Fail_Insecure.
In Fail_Insecure, the key set used for the associated secure stream must be marked invalid.
The receiver's handling of TLPs that fail the integrity check is implementation specific; it is strongly recommended to prevent such TLPs from causing unrecoverable data corruption.
To exit Fail_Insecure, the associated secure stream must be re-established using a new key set.
In the Fail_Insecure state, private data associated with the affected secure stream must be protected in an implementation-specific manner.
At the upstream port, upon entering Fail_Insecure, an integrity check failure message indicating the key ID of the associated link/stream (which thereby identifies the associated secure stream) must be sent.
When the downstream port receives an integrity check failure message, it must immediately enter Fail_Insecure for the associated secure stream.
When any link goes down, all secure streams must transition to Fail_Insecure.
Additional rules specific to secure links:
Upon entering Fail_Insecure, the behavior of each port is determined according to the containment behavior configured in the IDE control register:
000b - Force the link down.
001b - Memory and IO requests in both directions are terminated as
UR; the received completion for memory/IO must be discarded; Cfg and Msg requests/completion continue to operate in both directions.010b—For the upstream port, it is the same as 000b; for the downstream port, it is the same as 000b, except that the Cfg request through the normal path is terminated as a UR, and the received completion is discarded; but the configuration traffic continues to pass through the system firmware intermediary (SFI) mechanism to operate, if the mechanism is available.011b—All requests in both directions are terminated as UR, and all received completions are discarded.In Fail_Insecure, for the downstream port, the configuration flow target structure in the configuration space of the port defined in this specification must continue to be accepted and completed, just as it should be done in other cases; not defined in this specification Configuring the traffic target structure (for example, VSEC) is allowed to be done as UR.In order to exit Fail_Insecure, either a basic reset (triggered by a platform-specific module) must be used, or the system firmware/software must clear the secure link at the downstream port, wait for 100μs, and thenOptionally, access the upstream port configuration register to perform error logging, and thenUse the auxiliary bus reset to issue a hot reset to the downstream components, and thenRe-enumerate/configure links and components.Safety and power management must be coordinated to maintain a safe environment. 
Referring to Table 3, the port maintains the secure state when in a state that is not underlined, and the secure state is cleared when the port is in an (underlined) state.

Table 3: Secure link state relative to D states and L states

- Downstream D0: upstream D0; interconnect L0, L0s, L1, L2/L3 Ready
- Downstream D1: upstream D0-D1; interconnect L1, L2/L3 Ready
- Downstream D2: upstream D0-D2; interconnect L1, L2/L3 Ready
- Downstream D3 (Hot): upstream D0-D3 (Hot); interconnect L1, L2/L3 Ready
- Downstream D3 (Cold): upstream D0-D3 (Cold); interconnect L2, L3

System firmware/software must be aware of PM transitions that will lose the secure state, and must take appropriate action as needed to maintain secure operation; how this is done is outside the scope of this document. In all cases, the hardware must prevent leakage of private data and integrity violations; how this is done is implementation specific.

FIG. 11 is an interaction diagram 1100 showing various possible counters and keys in a secure stream protocol that can be used to operate in restricted ordering mode (ROM) using three secure streams, according to at least one embodiment. The interaction diagram 1100 shows the initiating device 1110 and the target device 1130. Two connections 1102 and 1104 are established between the initiating device 1110 and the target device 1130. The connections 1102 and 1104 may include one or more intermediate devices (for example, switches or bridges), which are not shown for ease of illustration. The initiating device 1110 can transmit a transaction 1103 (for example, posted or non-posted) to the target device 1130 via the connection 1102. In some cases, a transaction 1105 (for example, a completion) may be transferred from the target device to the initiating device via the connection 1104. A completion is transmitted in response to a transaction that requires a response (for example, a non-posted request (NPR) transaction).
For example, an NPR transaction may include a read request or a write request that requires a response.

The counters and keys shown in FIG. 11 can be used in an implementation of a secure stream protocol in which each transaction type is treated as a separate protected stream with a separate counter and key. The three streams correspond to posted transactions, non-posted transactions, and completion transactions. One or more embodiments may implement a counter-based scheme for encryption. Exemplary counters and keys that can be used by the initiating device 1110 are shown at 1112. Exemplary counters and keys that can be used by the target device 1130 are shown at 1132. For each direction of transaction flow, the initiator and target device in that direction maintain the following counters, which can be initialized during the setup of the secure stream protocol:

- Counters for posted requests (pr_enc_counter, pr_dec_counter): these can be 64-bit counters with a 32-bit random prefix. This counter pair can be used for authenticated encryption and decryption of posted requests.
- Counters for non-posted requests (npr_enc_counter, npr_dec_counter): these can be 64-bit counters with a 32-bit random prefix. This counter pair can be used for authenticated encryption and decryption of non-posted requests.
- Counters for completions (cpl_enc_counter, cpl_dec_counter): these can be 64-bit counters with a 32-bit random prefix. This counter pair can be used for authenticated encryption and decryption of completions.
- Counter for transmitted posted requests (pr_sent_counter): this can be a 16-bit counter used to detect dropped or delayed posted requests. It holds a value indicating the number of posted requests transmitted since the last non-posted request or completion was transmitted.
This counter also acts as a check that enforces producer-consumer ordering, so that non-posted requests and completions cannot be reordered ahead of posted requests.

- Counter for received posted requests (pr_received_counter): this can be a 32-bit counter used to detect dropped or delayed posted requests. It holds a value indicating the number of posted requests received since the last non-posted request or completion was received. This counter likewise acts as a check that enforces producer-consumer ordering, so that non-posted requests and completions cannot be reordered ahead of posted requests.

In addition to the encryption and decryption counters, the encryption key and the decryption key for the secure stream protocol can be maintained at both the initiating device and the target device. The encryption key and decryption key can be initialized per session, and different key pairs can be initialized per transaction type. For example, for a posted transaction (for example, 1103) transmitted from the initiating device 1110 to the target device 1130, the initiating device 1110 may maintain a PR encryption key, identified as pr_stream_enc_key, used to encrypt PR data to be transmitted to the target device 1130, and the target device 1130 may hold the corresponding PR decryption key, identified as pr_stream_dec_key, used to decrypt the PR data received from the initiating device 1110. Encryption and decryption can be performed in combination with the PR encryption counter and the PR decryption counter, respectively. In addition, the PR encryption key and the PR encryption counter can also be used by the initiating device to generate an integrity code value (ICV) for the TLP (for example, over the TLP secure stream prefix, TLP header, and encrypted data).
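As an illustrative model only (counter names are taken from the text above; the field widths are the example widths given there, and the dataclass itself is an assumption of this sketch), the per-direction counter state might be initialized as follows:

```python
import secrets
from dataclasses import dataclass, field

def counter_with_random_prefix() -> int:
    # 64-bit counter whose upper 32 bits are a random prefix, lower 32 bits start at 0.
    return secrets.randbits(32) << 32

@dataclass
class StreamCounters:
    """Per-direction counter state for the three-stream protocol (illustrative)."""
    pr_enc_counter: int = field(default_factory=counter_with_random_prefix)
    npr_enc_counter: int = field(default_factory=counter_with_random_prefix)
    cpl_enc_counter: int = field(default_factory=counter_with_random_prefix)
    pr_sent_counter: int = 0      # 16-bit: posted requests sent since last NPR/CPL
    pr_received_counter: int = 0  # 32-bit: posted requests received since last NPR/CPL
```

The decryption-side counters (pr_dec_counter and so on) would be initialized to the same values during secure stream setup so that both ends stay in lockstep.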
The target device receiving the posted transaction can use the corresponding PR decryption key and PR decryption counter to verify the ICV of the received posted transaction.

For a non-posted transaction (for example, 1103) transmitted from the initiating device 1110 to the target device 1130, the initiating device 1110 may maintain the NPR encryption key, identified as npr_stream_enc_key, used to encrypt the NPR data to be transmitted, and the target device 1130 may maintain the corresponding NPR decryption key, identified as npr_stream_dec_key, used to decrypt the received NPR data. Encryption and decryption can be performed in combination with the NPR encryption counter and the NPR decryption counter, respectively. In addition, the NPR encryption key and the NPR encryption counter can also be used by the initiating device to generate an integrity code value (ICV) for the TLP (for example, over the TLP secure stream prefix, TLP header, and encrypted data). The target device receiving the non-posted transaction can use the corresponding NPR decryption key and NPR decryption counter to verify the ICV of the received non-posted transaction.

For a completion transaction transferred from the target device 1130 to the initiating device 1110, the target device 1130 may maintain the CPL encryption key, identified as cpl_stream_enc_key, used to encrypt the CPL data to be transferred, and the initiating device 1110 may maintain the corresponding CPL decryption key, identified as cpl_stream_dec_key, for decrypting the received CPL data. Encryption and decryption can be performed in combination with the CPL encryption counter and the CPL decryption counter, respectively. In addition, the CPL encryption key and the CPL encryption counter can also be used by the target device to generate an integrity code value (ICV) for the TLP (for example, over the TLP secure stream prefix, TLP header, and encrypted data).
The initiating device that receives the completion can use the corresponding CPL decryption key and CPL decryption counter to verify the ICV of the received completion.

In at least one embodiment, symmetric encryption can be used. In this embodiment, for each pair of keys used for a transaction type, the same key is used for both encryption and decryption. For example, pr_stream_enc_key is equivalent to pr_stream_dec_key, npr_stream_enc_key is equivalent to npr_stream_dec_key, and cpl_stream_enc_key is equivalent to cpl_stream_dec_key.

In one example, the Advanced Encryption Standard in Galois/Counter Mode (AES-GCM) can be used to provide counter-mode encryption of the data and a message authentication code over the data. Counter-mode encryption uses a symmetric-key block cipher. Generally speaking, a block cipher is an encryption algorithm that uses a symmetric key to encrypt blocks of data in a way that provides confidentiality or authenticity. The counter mode of operation turns the block cipher into a stream cipher. The block cipher uses the key to encrypt an input block formed from an initialization vector (IV) concatenated with a counter value. The output of the block cipher is used to encrypt a plaintext block (for example, through an XOR function) to generate ciphertext. Successive IV and counter values are used to encrypt successive blocks of plaintext to generate additional ciphertext blocks.

In addition to generating ciphertext from the input data, the GCM operation also computes a Galois Message Authentication Code (GMAC). A GMAC, more generally called a "tag" or "authentication tag", is a few bytes of information used to authenticate a message (or transaction). A GMAC is one example of an ICV that can be generated over a TLP (for example, the TLP secure stream prefix, TLP header, and encrypted data).
In at least one embodiment, a multiplier function is used to compute the GMAC based on the ciphertext blocks resulting from the encryption of the plaintext blocks. The GMAC can be appended to the ciphertext. Although AES-GCM is one possible type of encryption and authentication technique that can be used in one or more embodiments, it will be apparent to those skilled in the art that any other suitable type of encryption and authentication (for example, SHA-3, a hash-based message authentication code (HMAC), AES-CTR, and so on) may be used.

An exemplary algorithm for performing encryption may include an encryption algorithm that relies on an initialization vector (IV) constructed in a deterministic manner. The IV can be regarded as the concatenation of a fixed field and an invocation field. The fixed field may include a single field or multiple fields, and may identify the device or context for the instance of the authenticated encryption function. The invocation field identifies the set of inputs to the authenticated encryption function within that device or context. No two devices will share the same fixed field, and no two sets of inputs will share the same invocation field. The invocation field may include an integer counter or a linear feedback shift register driven by a polynomial that ensures the maximum cycle length. In either case, the invocation field is incremented every time the authenticated encryption function is invoked.

The IV can be used for authenticated encryption and decryption purposes. For example, for encryption, given a text P, additional authenticated data A, and an IV, the text P and data A can be processed using the deterministically constructed IV and other inputs. In this context, the text P may include the data to be transmitted across the link, and the additional authenticated data A may include the TLP header and/or prefix for integrity protection.

The IV can be used to generate a counter block, which can be incremented to drive a Galois counter function over the text P, producing the ciphertext C.
Together with the additional authenticated data A, the ciphertext C is processed by the Galois hash function to produce a single output block. That output block is then encrypted with the Galois counter function, which is also derived from the IV and counter block. Details of the AES-GCM implementation can be found in NIST Special Publication 800-38D, published by the US Department of Commerce in November 2007.

Although the embodiment described above provides one possible scheme, in which the same encryption/decryption key and counter are used both to encrypt and decrypt the data in the TLP and to verify the integrity of the TLP, it should be noted that any other appropriate encryption/decryption and integrity verification scheme can be implemented to secure the transactions in a security flow. For example, in another embodiment, different keys can be used for encryption and for ICV generation for each packet type. In other words, a first posted key can be used for encryption of the posted request payload, and a second posted key, different from the first posted key, can be used for ICV generation over the posted request payload, header, and prefix. A first non-posted key may be used for encryption of the non-posted request payload, and a second non-posted key, different from the first non-posted key, may be used for ICV generation over the non-posted request payload, header, and prefix. Encryption of the completion payload may be performed with a first completion key, and ICV generation over the completion payload, header, and prefix may be performed with a second completion key different from the first completion key. It should be noted that the ICV can be generated over the TLP secure stream prefix, the TLP header (or headers, if more than one is used), and the encrypted payload data.
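A minimal sketch of the deterministic IV construction described above, assuming a 96-bit IV split into a 32-bit fixed field and a 64-bit invocation field (these widths are illustrative choices for this sketch, not values taken from the text):

```python
def build_iv(fixed_field: bytes, invocation: int) -> bytes:
    """Deterministic IV: fixed field (device/context) || invocation field (per-call counter)."""
    if len(fixed_field) != 4:
        raise ValueError("fixed field assumed to be 32 bits in this sketch")
    return fixed_field + invocation.to_bytes(8, "big")

# The invocation field is incremented on every call of the authenticated
# encryption function, so successive IVs never repeat within a context.
fixed = b"\x00\x00\x11\x30"   # hypothetical per-device value
iv0 = build_iv(fixed, 0)
iv1 = build_iv(fixed, 1)
```

Because the fixed field is unique per device/context and the invocation field never repeats, no two encryptions use the same IV, which counter-mode schemes such as AES-GCM require.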
However, in some embodiments, other fields of the TLP may also be included in the ICV (for example, the ECRC).

FIG. 12 shows a possible format of the TLP secure stream prefix 1200 that can be carried by each transaction in a system implementing the secure stream protocol, according to at least one embodiment, using restricted ordering mode operation with two or three secure streams. The format includes a secure stream prefix indicator 1202, a secure stream prefix header 1204, and a pr_sent_counter value 1206. The pr_sent_counter value 1206 represents the number of posted transactions that have been transferred from the initiator to the target device since the last non-posted or completion transaction was transferred from the initiator to the target device. The secure stream prefix indicator 1202 indicates the type of the TLP secure stream prefix 1200. For example, the prefix indicator 1202 may indicate that the TLP secure stream prefix 1200 contains information related to the secure stream protocol.

In at least one embodiment, three bits are defined in the secure stream prefix header 1204. The first bit (for example, BIT 0) can be a trusted bit indicating whether the transaction is part of a trusted IO session. The trusted bit is used to distinguish software entities or functions at the two ends of the secure stream. A secure stream can be shared by trusted and untrusted functions/software. Accordingly, the trusted bit indicates whether the transaction was initiated by a trusted entity (for example, the initiating device) at one end and will be consumed by a trusted entity (for example, the target device) at the other end. For example, a device that is connected to a server platform and needs to directly access memory in the trust domain of the server platform may be a trusted entity.
A memory storage controller is one possible example of a trusted entity.

The second bit (for example, BIT 1) is an indication of whether the pr_sent_counter value 1206 is included in the TLP secure stream prefix 1200. In at least one embodiment, the pr_sent_counter value 1206 is included in the TLP secure stream prefix of non-posted and completion transactions, and the second bit can be set to 1 to indicate the presence of this counter in the TLP secure stream prefix.

The third bit (for example, BIT 2) can be used as an indication of whether the secure stream protocol is in restricted ordering mode (ROM) or in explicit counter mode (ECM). In one example, if the third bit is set to 0, the secure stream protocol is operating in restricted ordering mode, and if the third bit is set to 1, the secure stream protocol is operating in explicit counter mode. According to at least one embodiment, when the mode is ECM, the counters used for encryption of the data in the TLP and for integrity verification of the TLP (for example, pr_enc_counter, npr_enc_counter, cpl_enc_counter) can be carried as the first N bytes of the packet payload.

Turning to FIGS. 13-15, interaction diagrams illustrate possible transactions that can occur in an interconnect architecture implementing a secure stream protocol operating in restricted ordering mode (ROM) according to one or more embodiments. The transactions, counters, and keys shown in FIGS. 13-15 are based on a three-stream implementation of the secure stream protocol. The three streams correspond to posted transactions, non-posted transactions, and completion transactions, respectively.

FIG. 13 is an interaction diagram 1300 showing the three-stream secure protocol for a posted request 1302 transmitted from the initiating device 1110 to the target device 1130. The initiator 1110 samples its PR encryption counter (for example, pr_enc_counter) and increments the sampled value.
The initiator 1110 also increments the value of its PR transmission counter (for example, pr_sent_counter). Sampling a counter may include obtaining the value of the counter and possibly storing it for quick access. The initiating device 1110 encrypts the data used to form the transaction layer packet (TLP) of the posted request 1302. Encryption can be performed using the incremented value of the PR encryption counter and the PR encryption key (for example, pr_stream_enc_key). An integrity code value (ICV), such as a MAC, is also computed for the TLP, including the encrypted data, the TLP header, and the TLP secure stream prefix. The initiating device 1110 transmits to the target device 1130 the posted request, secured by the encrypted data and the ICV.

The target device 1130 samples the value of its PR decryption counter (for example, pr_dec_counter) and increments the sampled value. The target device 1130 also increments the value of its PR reception counter (for example, pr_received_counter). The target device 1130 uses the incremented value of the PR decryption counter and the PR decryption key (for example, pr_stream_dec_key) to decrypt the data in the received TLP of the posted request. In at least one embodiment, the encryption key and decryption key used for posted requests are the same. The target device 1130 verifies the integrity of the TLP by verifying the ICV received for the TLP. In at least one embodiment, the ICV is a MAC, such as a GMAC, which is verified using the PR decryption counter and the PR decryption key. In another embodiment, a different key and counter (for example, pr_mac_key, pr_mac_counter) may be used to generate the ICV. If the ICV verification fails, an error is raised (for example, logging an error message, generating a response to be transmitted to the initiator, reinitializing the keys, and so on). Otherwise, the target device consumes the packet.

FIG. 14 is an interaction diagram 1400 showing the secure stream protocol operation for a non-posted request 1402 (with or without data) transmitted from the initiating device 1110 to the target device 1130. The initiator 1110 samples the value of its NPR encryption counter (for example, npr_enc_counter) and increments the sampled value. The initiator 1110 also samples the value of its PR transmission counter (for example, pr_sent_counter) and resets the value in the PR transmission counter to zero. The initiating device 1110 encrypts the data used to form the TLP of the non-posted request 1402. Encryption can be performed using the incremented value of the NPR encryption counter and the NPR encryption key (for example, npr_stream_enc_key). An integrity code value (ICV), such as a MAC, is also computed for the TLP, including the encrypted data, the TLP header, and the TLP secure stream prefix. The initiating device 1110 transmits to the target device 1130 the non-posted request, secured by the encrypted data and the ICV. In addition, the TLP also carries the sampled value of the PR transmission counter to indicate how many posted requests have been transmitted by the initiator 1110 since the last non-posted or completion transaction.

The target device 1130 samples the value of its NPR decryption counter (for example, npr_dec_counter) and increments the sampled value. The target device 1130 uses the incremented value of the NPR decryption counter and the NPR decryption key (for example, npr_stream_dec_key) to decrypt the data in the received TLP of the non-posted request. In at least one embodiment, the encryption key and decryption key used for non-posted requests are the same. The target device 1130 verifies the integrity of the TLP by verifying the ICV received for the TLP. In at least one embodiment, the ICV is a MAC, which is verified using the incremented value of the NPR decryption counter and the NPR decryption key.
If the ICV verification fails, an error is raised (for example, logging an error message, generating a response to be transmitted to the initiating device, reinitializing the keys, and so on). Otherwise, the value of the PR transmission counter carried in the TLP received from the initiating device 1110 is subtracted from the value of the PR reception counter (for example, pr_received_counter) held by the target device 1130. If the resulting value of the PR reception counter is less than zero, this indicates that one or more posted requests have been dropped and/or delayed, and an error is raised (for example, logging an error message, generating a response to notify the initiating device, terminating the session, and so on). Otherwise, the target device consumes the packet. In some embodiments, the PR reception counter evaluation can occur before or in parallel with the MAC verification.

FIG. 15 is an interaction diagram 1500 showing the secure stream protocol operation for a completion 1502 transmitted from the target device 1130 to the initiating device 1110. The target device 1130 samples the value of its CPL encryption counter (for example, cpl_enc_counter) and increments the sampled value. The target device 1130 also samples the value of its own PR transmission counter (for example, the pr_sent_counter at the target device 1130) and resets the value in the PR transmission counter to zero. The target device 1130 encrypts the data used to form the TLP of the completion 1502. Encryption can be performed using the incremented value of the CPL encryption counter and the CPL encryption key (for example, cpl_stream_enc_key). An integrity code value (ICV), such as a MAC, is also computed for the TLP, including the encrypted data, the TLP header, and the TLP secure stream prefix. The target device 1130 transmits to the initiating device 1110 the completion, secured by the encrypted data and the ICV.
In addition, the TLP also carries the sampled value of the PR transmission counter to indicate how many posted requests have been transferred by the target device 1130 since the last non-posted or completion transaction was transferred from the target device 1130 to the initiating device 1110.

The initiator 1110 samples the value of its CPL decryption counter (for example, cpl_dec_counter) and increments the sampled value. The initiating device 1110 uses the incremented value of the CPL decryption counter and the CPL decryption key (for example, cpl_stream_dec_key) to decrypt the data in the received TLP of the completion. In at least one embodiment, the encryption key and decryption key used for completions are the same. The initiating device 1110 verifies the integrity of the TLP by verifying the ICV received for the TLP. In at least one embodiment, the ICV is a MAC, which is verified using the incremented value of the CPL decryption counter and the CPL decryption key. If the ICV verification fails, an error is raised (for example, logging an error message, generating a response to be transmitted to the target device, reinitializing the keys, and so on). Otherwise, the value of the PR transmission counter (for example, pr_sent_counter) received from the target device 1130 is subtracted from the value of the PR reception counter (for example, pr_received_counter) held by the initiating device 1110. If the resulting value of the PR reception counter is less than zero, this indicates that one or more posted requests have been dropped and/or delayed, and an error is raised (for example, logging an error message, generating a response to notify the target device, terminating the session, and so on). Otherwise, the initiating device 1110 consumes the packet.
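The posted-request flow of FIG. 13 can be sketched as a minimal, self-contained example. HMAC-SHA256 and a SHA-256 keystream stand in for the AES-GCM encryption and GMAC of the text, since the Python standard library has no AES-GCM; all names other than the counter names are hypothetical:

```python
import hmac
import hashlib

def _keystream(key: bytes, counter: int) -> bytes:
    # Stand-in counter-mode keystream (a single 32-byte block for this sketch).
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def send_posted(state: dict, key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """Initiator side: bump pr_enc_counter and pr_sent_counter, encrypt, compute ICV."""
    state["pr_enc_counter"] += 1
    state["pr_sent_counter"] += 1
    ct = bytes(p ^ s for p, s in zip(plaintext, _keystream(key, state["pr_enc_counter"])))
    icv = hmac.new(key, state["pr_enc_counter"].to_bytes(8, "big") + ct,
                   hashlib.sha256).digest()
    return ct, icv

def recv_posted(state: dict, key: bytes, ct: bytes, icv: bytes) -> bytes:
    """Target side: bump pr_dec_counter and pr_received_counter, verify ICV, decrypt."""
    state["pr_dec_counter"] += 1
    state["pr_received_counter"] += 1
    expected = hmac.new(key, state["pr_dec_counter"].to_bytes(8, "big") + ct,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(icv, expected):
        raise ValueError("ICV check failed")  # would drive the flow to Fail_Insecure
    return bytes(c ^ s for c, s in zip(ct, _keystream(key, state["pr_dec_counter"])))
```

The non-posted and completion flows of FIGS. 14 and 15 follow the same shape, except that the sender samples and zeroes pr_sent_counter and carries the sampled value in the TLP prefix. Payloads longer than 32 bytes would need a real counter-mode keystream rather than the single block used here.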
In some embodiments, the PR reception counter evaluation may occur before the MAC verification.

It should be noted that the operations of the initiating device 1110 and the target device 1130 have been described with reference to transmitting posted and non-posted requests from the initiating device 1110 and transmitting a completion from the target device 1130 in response to a non-posted transaction. It will be apparent, however, that the initiating device 1110 can operate as a target device, and the target device 1130 can operate as an initiating device.

Secure TLPs can be reordered to satisfy deadlock-avoidance requirements, but certain other forms of reordering are prohibited when secure TLPs are passed between ports through PCIe. The following examples illustrate selected reordering cases. Attacks based on TLP reordering (or on delays with a reordering effect) can be implemented using a variety of mechanisms, all of which result in the same observed behavior and can be detected using the mechanisms defined by IDE.

FIGS. 16A-16C are schematic diagrams showing exemplary reordering for IDE TLPs according to an embodiment of the present disclosure. FIG. 16A shows a first exemplary TLP flow 1600 through the fabric. The source port 1602 can send a set of TLPs in a predetermined order determined by the requester. In this example, the requester has issued a posted request P1, a non-posted request NP1, a posted request P2, and a non-posted request NP2. Permissible reordering includes the case where P2 bypasses NP1 and arrives at the destination port 1604 before NP1.

FIG. 16B shows a second exemplary TLP flow 1610 through the fabric, illustrating a prohibited reordering: in this example, NP1 bypasses P1, which is not allowed.

FIG. 16C shows a third exemplary TLP flow 1620 through the fabric.
In the TLP flow 1620, the reordering of NP1 and NP2 is permissible for non-secure TLPs, but is prohibited for secure TLPs.

Note that the PR_Sent_Counter value in the received TLP prefix is not required to match PR_Received_Counter, because posted requests are allowed to pass non-posted requests and completions. When this (legal) bypass occurs, PR_Received_Counter can hold a larger value than the PR_Sent_Counter value in the TLP prefix. A similar situation applies between posted requests and completions.

Note that reordering attacks may occur through retimers, switches, and any other device that can alter the flow of TLPs at any point between the source port and the destination port. Table 4 provides exemplary additions to the transaction-layer error list.

Table 4: Transaction-layer error list for secure TLPs

IDE messages

FIGS. 17-20 illustrate various exemplary integrity messages associated with secure links or selective security flows in accordance with various embodiments. These messages can be applied to the computer bus 105 shown in FIG. 1. IDE messages are used in conjunction with the optional integrity and data encryption (IDE) mechanisms.
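The receiver-side ordering checks described above, the PR_Received_Counter drop check applied when a non-posted request or completion arrives, and the per-type reordering rules of FIGS. 16A-16C, can be sketched as follows (a sketch under the reading above; labels such as "P1"/"NP1" are illustrative):

```python
def apply_pr_sent(pr_received_counter: int, pr_sent_in_prefix: int) -> int:
    """Subtract the PR_Sent_Counter carried in an NPR/CPL prefix from the local
    PR_Received_Counter. A negative result means posted requests were dropped
    or delayed; a positive result reflects legal posted-request bypass."""
    remaining = pr_received_counter - pr_sent_in_prefix
    if remaining < 0:
        raise ValueError("dropped/delayed posted request detected")
    return remaining

def reordering_allowed(sent: list[str], received: list[str], secure: bool) -> bool:
    """Posted TLPs may pass non-posted ones; a non-posted TLP may never pass a
    posted TLP sent before it; for secure TLPs, non-posted TLPs may not be
    reordered among themselves either."""
    np = lambda t: t.startswith("NP")
    if secure and [t for t in received if np(t)] != [t for t in sent if np(t)]:
        return False
    for i, t in enumerate(received):
        if np(t):
            earlier_posted = {p for p in sent[:sent.index(t)] if not np(p)}
            if not earlier_posted <= set(received[:i]):
                return False
    return True
```

With sent = ["P1", "NP1", "P2", "NP2"], the arrival order of FIG. 16A (["P1", "P2", "NP1", "NP2"]) passes, the order of FIG. 16B (["NP1", "P1", "P2", "NP2"]) fails, and the NP1/NP2 swap of FIG. 16C passes only when secure is False.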
The following rules apply to the formation of IDE messages:

· An IDE message does not include a data payload (the TLP type is Msg).
· The length field is reserved.
· The requester ID must be set to the ID of the sending port.
· An integrity synchronization message associated with a secure link must use local routing (100b); an integrity synchronization message associated with a selective security flow must use ID-based routing (010b), where the destination ID must contain the value in the partner RID base field of the associated security flow RID association register set.
· An integrity check failure message associated with a secure link must use routing to the root complex (000b); an integrity check failure message associated with a selective security flow must use ID-based routing (010b), where the destination ID must contain the value in the partner RID base field of the associated security flow RID association register set.
· IDE messages use the default traffic class (TC0). Receivers that implement IDE support are permitted to check for violations of this rule. If a receiver determines that a TLP violates this rule, it must handle the TLP as an unsupported request; this is a reported error associated with the receiving port.

Table 5 provides exemplary codes for IDE messages.

Table 5: IDE messages

FIG. 17 is a schematic diagram of an exemplary integrity synchronization message for a secure link according to an embodiment of the present disclosure. FIG. 18 is a schematic diagram of an integrity synchronization message for a selective security flow according to an embodiment of the present disclosure. FIG. 19 is a schematic diagram of an integrity check failure message for a secure link according to an embodiment of the present disclosure. FIG. 20 is a schematic diagram of an integrity check failure message for a selective security flow according to an embodiment of the present disclosure.

As shown in FIG. 17, the integrity synchronization message associated with a secure link can use local routing (100b). As shown in FIG. 18, the integrity synchronization message associated with a selective security flow can use ID-based routing (010b), where the destination ID can include the value in the partner RID base field of the associated security flow RID association register set. As shown in FIG. 19, the integrity check failure message associated with a secure link can use routing to the root complex (000b). As shown in FIG. 20, the integrity check failure message associated with a selective security flow can use ID-based routing (010b), where the destination ID can include the value in the partner RID base field of the associated security flow RID association register set.

IDE messages can use the default traffic class (TC0). Receivers that implement IDE support are permitted to check for violations of this rule. If a receiver determines that a TLP violates this rule, it can handle the TLP as an unsupported request; this is a reported error associated with the receiving port.

Switch rules for pass-through security flows

Except for the case where a switch port itself serves as an endpoint, a switch is permitted to support pass-through security flows, but not security flows terminating at the switch.

A switch that supports pass-through security flows must, when they are enabled, implement modified ordering rules for TLPs with a secure TLP prefix passing through the switch, as defined in Table 6.
Although the switch is not required to reorder TLPs with secure TLP prefixes based on relaxed ordering, such TLPs are permitted to have the RO bit set. IDO is not affected, because a secure flow always operates over a paired connection independently of other flows.
Table 6: IDE Ordering Rules for Switches (per stream)
The switch must route secure TLPs only through ports that have the pass-through secure flow enable bit set. If a secure TLP is routed to a port whose pass-through secure flow enable bit is cleared, the secure TLP must be discarded by the switch; this is a misrouted secure TLP error, which is a defined error associated with the egress port. In some embodiments, the egress port can synthesize and return a completion when discarding a non-posted request TLP.
IDE Extended Capability
All ports that implement IDE must implement the IDE Extended Capability.
Extended Capability Header (offset 00h)
The following table EC1 provides the definition of the corresponding bits in the PCI Express Extended Capability header.
Table EC1: PCIe Extended Capability Header
IDE Capability Register (offset 04h)
Table EC2: IDE Capability Register
IDE Control Register (offset 08h)
Table EC3: IDE Control Register
IDE Status Register (offset 0Ch)
Table EC4: IDE Status Register
Secure Link Control Register (offset 10h, if present)
If the Secure Link Support bit in the IDE Capability Register is set, this register must be implemented. If the Secure Link Support bit in the IDE Capability Register is cleared, this register may be absent and, instead, the first secure flow register block must immediately follow the IDE Status Register.
Table EC5: Secure Link Control Register
Secure Link Status Register (offset 14h, if present)
If the Secure Link Support bit in the IDE Capability Register is set, this register must be implemented.
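The switch routing rule for secure TLPs described above (a TLP carrying a secure TLP prefix may only be routed through an egress port whose pass-through secure flow enable bit is set; otherwise the switch discards it and logs a misrouted secure TLP error against that port) can be sketched as follows. This is a hypothetical model, with ports and TLPs represented as plain dictionaries, not an implementation of any real switch.

```python
# Hypothetical sketch of the switch egress rule for secure TLPs.
def route_secure_tlp(egress_port, tlp):
    if not tlp.get("secure_prefix"):
        return "forward"              # ordinary TLP: normal routing applies
    if egress_port.get("passthrough_enable"):
        return "forward"              # pass-through enabled: route it
    # Misrouted secure TLP: discard and record the error on the egress port.
    # (For a non-posted request, the port may synthesize a completion.)
    egress_port.setdefault("errors", []).append("misrouted_secure_tlp")
    return "drop"
```

The error is deliberately attributed to the egress port rather than the ingress port, mirroring the text's statement that the misrouted secure TLP error is associated with the egress port.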
If the Secure Link Support bit in the IDE Capability Register is cleared, this register may be absent and, instead, the first secure flow register block must immediately follow the IDE Status Register.
Table EC6: Secure Link Status Register
Secure Flow Control Register
Each secure flow must have exactly one secure flow register block, where the block consists of a Secure Flow Control Register followed by a Secure Flow Status Register, a Secure Flow RID Association Register, and one or more Secure Flow Address Association register groups. The secure stream ID associated with a secure stream register block is implied by the order in which the block appears in the IDE Extended Capability, so that the first block corresponds to secure stream ID 1 (stream ID 0 is associated with the secure link and does not use the RID or address association mechanism).
Table EC7: Secure Flow Control Register
Secure Flow Status Register
Each secure flow must have exactly one secure flow register block, which consists of a Secure Flow Control Register followed by a Secure Flow Status Register, a Secure Flow RID Association Register, and one or more Secure Flow Address Association register groups. The secure stream ID associated with a secure stream register block is implied by the order in which the block appears in the IDE Extended Capability, so that the first block corresponds to secure stream ID 1 (stream ID 0 is associated with the secure link and does not use the RID or address association mechanism).
Table EC8: Secure Flow Status Register
Secure Flow RID Association Register
Each secure flow must have exactly one secure flow register block, where the block consists of a Secure Flow Control Register followed by a Secure Flow Status Register, a Secure Flow RID Association Register, and one or more Secure Flow Address Association register groups.
The secure stream ID associated with a secure stream register block is implied by the order in which the block appears in the IDE Extended Capability, so that the first block corresponds to secure stream ID 1 (stream ID 0 is associated with the secure link and does not use the RID or address association mechanism). FIG. 21 is a schematic diagram of an exemplary secure flow requester identifier (RID) association block according to an embodiment of the present disclosure. Table EC9 provides an exemplary Secure Flow RID Association Register 1. Table EC10 provides an exemplary Secure Flow RID Association Register 2.
Table EC9: Secure Flow RID Association Register 1 (offset +00h)
Table EC10: Secure Flow RID Association Register 2 (offset +04h)
Secure Flow Address Association Register
At least one secure flow address association block must immediately follow each secure flow RID association block. For a given secure flow, the number of secure flow address association blocks is determined by the hardware implementation. System software must clear the V bits of all unused secure flow address association blocks. FIG. 22 is a schematic diagram of an exemplary secure flow address association block according to an embodiment of the present disclosure. Table EC11 provides an exemplary Secure Flow Address Association Register 1. Table EC12 provides an exemplary Secure Flow Address Association Register 2. Table EC13 provides an exemplary Secure Flow Address Association Register 3. Table EC14 provides an exemplary Secure Flow Address Association Register 4.
Table EC11: Secure Flow Address Association Register 1 (offset +00h)
Table EC12: Secure Flow Address Association Register 2 (offset +04h)
Table EC13: Secure Flow Address Association Register 3 (offset +08h)
Bits 31:0: Memory Limit Upper, corresponding to address bits [63:32]. Attribute:
RW.
Table EC14: Secure Flow Address Association Register 4 (offset +0Ch)
Bits 31:0: Memory Base Upper, corresponding to address bits [63:32]. Attribute: RW.
Figure 23 illustrates an exemplary apparatus suitable for use in practicing various aspects of the present disclosure in accordance with various embodiments. The apparatus 2300 may be used to implement the programmable aspects of the disclosed methods. As shown, the apparatus 2300 includes one or more processors 2302 (each having one or more processor cores) and/or an optional hardware accelerator 2304 (which may be an ASIC or FPGA). In alternative embodiments, the hardware accelerator 2304 may be part of the processor 2302 or integrated with it on an SoC. In addition, the apparatus 2300 may include a memory 2306 (which may be any of a number of known persistent storage media) and a data storage circuit 2308 including a module 2310. In addition, the apparatus 2300 may include an I/O interface 2322 coupled to one or more sensors 2328 and a display screen 2330. The I/O interface 2322 may include a transmitter 2326 and a receiver 2324. In addition, the apparatus 2300 may include a communication circuit 2316, and the communication circuit 2316 includes a transmitter (Tx) 2318 and a network interface controller (NIC) 2320. The components may be coupled to each other via the system bus 2336, which may represent one or more buses, for example, one or more PCIe buses. For the various PCIe embodiments, the communication circuit 2316 and the I/O interface 2322 may include the transmitter 2318 and the NIC 2320, and the transmitter 2326 and the receiver 2324, respectively. In particular, the respective transmitter 2318, NIC 2320, transmitter 2326, and receiver 2324 may incorporate the flit-based packetization technology described herein with reference to the accompanying drawings.
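Returning to the secure flow register blocks described above, the association mechanism can be illustrated with a minimal sketch. All names here are assumptions for illustration, and the register layouts are simplified to their base/limit semantics: the stream ID is implied by block position (ID 0 being reserved for the secure link), a requester matches when its RID falls within the associated RID range, and an address matches when it falls within the 64-bit window assembled from the upper 32-bit registers and the lower address bits.

```python
# Simplified, hypothetical model of secure flow RID and address association.
def stream_id_for_block(block_index):
    # Block order implies the stream ID: block 0 corresponds to stream ID 1;
    # stream ID 0 belongs to the secure link.
    return block_index + 1

def assemble64(upper32, lower32):
    # Combine an upper-32-bit register value with the lower address bits.
    return (upper32 << 32) | lower32

def rid_matches(rid, rid_base, rid_limit):
    # A requester is associated with the flow when its RID is in range.
    return rid_base <= rid <= rid_limit

def addr_matches(addr, base_upper, base_lower, limit_upper, limit_lower):
    # An address is associated with the flow when it falls in the window.
    base = assemble64(base_upper, base_lower)
    limit = assemble64(limit_upper, limit_lower)
    return base <= addr <= limit
```

In such a model, system software would program the base/limit pairs for each valid address association block and clear the V bit on the unused ones, as the text requires.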
In various embodiments, one or more of the other components, such as the processor 2302, the memory 2306, the storage 2308, and so on, may similarly include high-speed serial link interface circuitry for coupling to, and operating over, the high-speed serial bus 2336 (for example, a high-speed PCIe bus) using the secure stream technology described herein with reference to the figures. In the case of multiple buses, they can be bridged by one or more bus bridges (not shown). The device 2312 can be coupled to the system bus 2336, and the device 2332 can be coupled to the I/O bus 2338. The device 2312 may include an interface 2314, and the device 2332 may include an interface 2334.
In an embodiment, the processor 2302 (also referred to as "processor circuit 2302") may be one or more processing elements configured to perform basic arithmetic, logic, and input/output operations by executing instructions. The processor circuit 2302 may be implemented as a standalone system/device/package, or may be implemented as part of an existing system/device/package. The processor circuit 2302 may be one or more microprocessors, one or more single-core processors, one or more multi-core processors, one or more multi-threaded processors, one or more GPUs, one or more ultra-low-voltage processors, one or more embedded processors, one or more DSPs, one or more FPDs (hardware accelerators) (such as FPGAs, structured ASICs, programmable SoCs (PSoCs), etc.), and/or other processors or processing/control circuits. The processor circuit 2302 may be part of an SoC in which the processor circuit 2302 and the other components discussed herein are formed into a single IC or a single package. As examples, the processor circuit 2302 may include one or more Intel or Core processors; Advanced Micro Devices (AMD) accelerated processing units (APUs) or processors; Apple A-series, S-series, or W-series
processors; and/or Samsung processors; among others. In an embodiment, the processor circuit 2302 may include a sensor hub, which may function as a co-processor by processing data obtained from the one or more sensors 2328. The sensor hub may include circuitry configured to integrate data obtained from each of the one or more sensors 2328 by performing arithmetic, logic, and input/output operations. In an embodiment, the sensor hub may be able to time-stamp acquired sensor data, provide such data to the processor circuit 2302 in response to queries for sensor data, buffer sensor data, continuously stream sensor data to the processor circuit 2302 (including a separate stream for each of the one or more sensors 2328), report sensor data based on predefined thresholds or conditions/triggers, and/or perform other similar data-processing functions.
In an embodiment, the memory 2306 (also referred to as "memory circuit 2306," etc.) may be a circuit configured to store data or logic for operating the computer device 2300. The memory circuit 2306 can include a number of memory devices that may be used to provide a given amount of system memory. As examples, the memory circuit 2306 may be any suitable type, number, and/or combination of volatile memory devices (for example, random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), etc.) and/or non-volatile memory devices (for example, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, anti-fuse, etc.), configured in any suitable known implementation. In various embodiments, individual memory devices may be formed from any number of different package types, such as single die package (SDP), dual die package (DDP), or quad die package (QDP), dual in-line memory modules (DIMMs) (such as microDIMM or MiniDIMM), and/or any other similar memory device.
To provide persistent storage of information such as data, application programs, operating systems, and so on, the memory circuit 2306 may include one or more mass storage devices, such as solid-state disk drives (SSDDs); flash memory cards (such as SD cards, microSD cards, xD picture cards, etc.) and USB flash drives; on-die memory or registers associated with the processor circuit 2302 (for example, in low-power implementations); micro hard disk drives (HDDs); three-dimensional cross-point (3D XPoint) memory; and so on.
Where FPDs are used, the processor circuit 2302 and the memory circuit 2306 (and/or the data storage circuit 2308) may include logic blocks or logic fabric, memory cells, input/output (I/O) blocks, and other interconnect resources that can be programmed to perform the various functions of the exemplary embodiments discussed herein. The memory cells may be used to store data in look-up tables (LUTs), which may be used by the processor circuit 2302 to implement various logic functions. The memory cells may include any combination of various levels of memory/storage, including but not limited to EPROM, EEPROM, flash memory, SRAM, anti-fuse, and so on.
In an embodiment, a data storage circuit 2308 (also referred to as "storage circuit 2308," etc.) with a shared or separate controller can provide persistent storage of information such as the module 2310, the operating system, and the like. The data storage circuit 2308 may be implemented as: a solid-state drive (SSD); a solid-state disk drive (SSDD); a serial AT attachment (SATA) memory device (for example, a SATA SSD); a flash memory drive; a flash memory card (for example, an SD card, a microSD card, an xD picture card, etc.)
and a USB flash drive; a three-dimensional cross-point (3D XPoint) memory device; on-die memory or registers associated with the processor circuit 2302; a hard disk drive (HDD); a micro HDD; resistance-change memory; phase-change memory; holographic memory; or chemical memory; among others. As shown, the data storage circuit 2308 is incorporated into the computer device 2300; however, in other embodiments, the data storage circuit 2308 may be implemented as one or more devices separate from the other elements of the computer device 2300.
In some embodiments, the data storage circuit 2308 may include an operating system (OS) (not shown), which may be a general-purpose operating system or an operating system specially written for and customized to the computer device 2300. The OS may include one or more drivers, libraries, and/or application programming interfaces (APIs), which provide code and/or software components for the module 2310 and/or for controlling system configuration and/or acquiring/processing data from the one or more sensors 2328.
The module 2310 may be a software module/component for performing various functions of the computer device 2300 and/or implementing the functions of the embodiments discussed herein. In embodiments in which the processor circuit 2302 and the memory circuit 2306 include a hardware accelerator (for example, an FPGA unit, the hardware accelerator 2304) as well as processor cores, the hardware accelerator (for example, the FPGA unit) may be pre-configured with the logic (for example, by means of appropriate bitstreams, logic blocks/fabric, etc.) in lieu of employing programming instructions to be executed by the processor cores.
For example, the module 2310 may include logic for the corresponding entities discussed with respect to the display screen 2330, the on-screen input device, the on-screen input interface controller 2318, the off-screen input device, the transmitter 2326, and the receiver 2324.
The components of the computer device 2300 can communicate with each other through the system bus 2336. The system bus 2336 may include any number of technologies, such as a Local Interconnect Network (LIN); Industry Standard Architecture (ISA); Extended ISA (EISA); PCI; Extended PCI (PCIx); PCIe; an Inter-Integrated Circuit (I2C) bus; a Serial Peripheral Interface (SPI) bus; a Common Application Programming Interface (CAPI); point-to-point interfaces; a power bus; a proprietary bus, such as Ultra Path Interface (UPI), Accelerator Link (IAL), or some other proprietary bus used in SoC-based interfaces; or any number of other technologies. In some embodiments, the system bus 2336 may be a controller area network (CAN) bus system, a Time-Triggered Protocol (TTP) system, or a FlexRay system, which may allow various devices (for example, the one or more sensors 2328, etc.) to communicate with one another using messages or frames.
The communication circuit 2316 may include circuitry for communicating with a wireless network or a wired network. For example, the communication circuit 2316 may include a transceiver (Tx) 2318 and a network interface controller (NIC) 2320. The communication circuit 2316 may include one or more processors (e.g., baseband processors, modems, etc.) dedicated to a specific wireless communication protocol.
The NIC 2320 may be included to provide a wired communication link to the network and/or to other devices. The wired communication can provide an Ethernet connection and/or Ethernet-over-USB, etc., or can be based on other types of networks, such as DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others.
An additional NIC 2320 may be included to allow connection to a second network (not shown) or to other devices. For example, a first NIC 2320 may provide communication with the network 150 via Ethernet, and a second NIC 2320 may provide communication with another kind of network for communication with other devices; for example, the other type of network may be a personal area network (PAN) including a personal computer (PC) device. In some embodiments, various components of the device 2300 (e.g., the one or more sensors 2328, etc.) may be connected to the processor 2302 via the NIC 2320 as discussed above rather than via the I/O circuit 2322 as discussed below.
The Tx 2318 may include one or more radios to communicate wirelessly with the network and/or with other devices. The Tx 2318 may include a hardware device that enables communication with wired networks and/or other devices over a solid or non-solid medium using modulated electromagnetic radiation. Such hardware devices may include switches, filters, amplifiers, antenna elements, and the like to facilitate over-the-air (OTA) communications by generating or otherwise producing radio waves to transmit data to one or more other devices, and by converting received signals into usable information, such as digital data, which may be provided to one or more other components of the computer device 2300. In some embodiments, various components of the device 2300 (e.g., the one or more sensors 2328, etc.) may be connected to the device 2300 via the Tx 2318 as discussed above rather than via the I/O circuit 2322 as discussed below.
In one example, the one or more sensors 2328 may be coupled with the device 2300 via a short-range communication protocol. The Tx 2318 may include one or more radios compatible with any number of 3GPP (Third Generation Partnership Project) protocols, in particular Long Term Evolution (LTE), Long Term Evolution Advanced (LTE-A), Long Term Evolution Advanced Pro (LTE-A Pro), and fifth generation (5G) New Radio (NR). It should be noted that radios compatible with any number of other fixed, mobile, or satellite communication technologies and standards may be selected. These may include, for example, any cellular wide-area-network radio communication technology, which may include, for example, a 5G communication system, a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, or an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology.
Other Third Generation Partnership Project (3GPP) radio communication technologies that can be used include UMTS (Universal Mobile Telecommunications System), FOMA (Freedom of Multimedia Access), 3GPP LTE (Long Term Evolution), 3GPP LTE Advanced (Long Term Evolution Advanced), 3GPP LTE Advanced Pro (Long Term Evolution Advanced Pro), CDMA2000 (Code Division Multiple Access 2000), CDPD (Cellular Digital Packet Data), Mobitex, 3G (Third Generation), CSD (Circuit Switched Data), HSCSD (High Speed Circuit Switched Data), UMTS (3G) (Universal Mobile Telecommunications System (Third Generation)), W-CDMA (UMTS) (Wideband Code Division Multiple Access (Universal Mobile Telecommunications System)), HSPA (High Speed Packet Access), HSDPA (High Speed Downlink Packet Access), HSUPA (High Speed Uplink Packet Access), HSPA+ (High Speed Packet Access Plus), UMTS-TDD (Universal Mobile Telecommunications System-Time Division Duplex), TD-CDMA (Time Division-Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), 3GPP Release 8 (Pre-4G) (Third Generation Partnership Project Release 8 (Pre-4th Generation)), 3GPP Release 9 (Third Generation Partnership Project Release 9), 3GPP Release 10 (Third Generation Partnership Project Release 10), 3GPP Release 11 (Third Generation Partnership Project Release 11), 3GPP Release 12 (Third Generation Partnership Project Release 12), 3GPP Release 13 (Third Generation Partnership Project Release 13), 3GPP Release 14 (Third Generation Partnership Project Release 14), 3GPP LTE Extra, LTE Licensed-Assisted Access (LAA), UTRA (UMTS Terrestrial Radio Access), E-UTRA (Evolved UMTS Terrestrial Radio Access), LTE Advanced (4G) (Long Term Evolution Advanced (Fourth Generation)), cdmaOne (2G), CDMA2000 (3G) (Code Division Multiple Access 2000 (Third Generation)), EV-DO (Evolution-Data Optimized or Evolution-Data Only), AMPS (1G)
(Advanced Mobile Phone System (First Generation)), TACS/ETACS (Total Access Communication System/Extended Total Access Communication System), D-AMPS (2G) (Digital AMPS (Second Generation)), PTT (Push to Talk), MTS (Mobile Telephone System), IMTS (Improved Mobile Telephone System), AMTS (Advanced Mobile Telephone System), OLT (Norwegian for Offentlig Landmobil Telefoni, Public Land Mobile Telephony), MTD (Swedish abbreviation for Mobiltelefonisystem D, Mobile Telephony System D), Autotel/PALM (Public Automated Land Mobile), ARP (Autoradiopuhelin, Finnish for "car radio telephone"), NMT (Nordic Mobile Telephony), Hicap (high-capacity version of NTT (Nippon Telegraph and Telephone)), CDPD (Cellular Digital Packet Data), Mobitex, DataTAC, iDEN (Integrated Digital Enhanced Network), PDC (Personal Digital Cellular), CSD (Circuit Switched Data), PHS (Personal Handy-phone System), WiDEN (Wideband Integrated Digital Enhanced Network), iBurst, Unlicensed Mobile Access (UMA, also referred to as the 3GPP Generic Access Network, or GAN standard), Wireless Gigabit Alliance (WiGig) standards, general millimeter-wave standards (wireless systems operating at 10-90 GHz and above, such as WiGig, IEEE 802.11ad, IEEE 802.11ay), and the like. In addition to the standards listed above, any number of satellite uplink technologies can be used for the uplink transceiver, including, for example, radios compliant with standards issued by the ITU (International Telecommunication Union) or the ETSI (European Telecommunications Standards Institute), among others. The examples provided herein are thus understood to be applicable to various other communication technologies, both existing and not yet formulated.
The implementation, components, and details of the above-mentioned protocols may be those known in the art and are omitted herein for the sake of brevity.
The input/output (I/O) interface 2322 may include circuitry, such as an external expansion bus (e.g., Universal Serial Bus (USB), FireWire, Thunderbolt, PCI/PCIe/PCIx, etc.), for connecting the computer device 2300 with external components/devices, such as the one or more sensors 2328. The I/O interface circuit 2322 may include any suitable interface controllers and connectors to interconnect one or more of the processor circuit 2302, the memory circuit 2306, the data storage circuit 2308, the communication circuit 2316, and the other components of the computer device 2300. The interface controllers may include, but are not limited to, memory controllers, storage controllers (for example, redundant array of independent disks (RAID) controllers), baseboard management controllers (BMCs), input/output controllers, host controllers, and the like. The connectors may include, for example, buses (e.g., the bus 2336), ports, slots, jumpers, interconnect modules, receptacles, modular connectors, and the like. The I/O circuit 2322 may couple the device 2300 with the one or more sensors 2328, etc., via a wired connection, for example using USB, FireWire, Thunderbolt, RCA, Video Graphics Array (VGA), Digital Video Interface (DVI) and/or Mini-DVI, High-Definition Multimedia Interface (HDMI), S-Video, and/or the like.
The one or more sensors 2328 may be any device configured to detect events or environmental changes, convert the detected events into electrical signals and/or digital data, and send/transmit the signals/data to the computer device 2300. Some of the one or more sensors 2328 may be sensors used to provide computer-generated sensory input. Some of the one or more sensors 2328 may be sensors for motion and/or object detection.
Examples of such one or more sensors 2328 may include, among others, charge-coupled devices (CCD), complementary metal-oxide-semiconductor (CMOS) active-pixel sensors (APS), lensless image-capture devices/cameras, thermographic (infrared) cameras, light imaging detection and ranging (LIDAR) systems, and/or the like. In some embodiments, the one or more sensors 2328 may include a lensless image-capture mechanism comprising an array of aperture elements, where light passing through the array of aperture elements defines the pixels of an image. In motion-detection embodiments, the one or more sensors 2328 may be coupled with or associated with light-generating devices (for example, one or more infrared projectors for projecting a grid of infrared light onto a scene), where an infrared camera can record the reflected infrared light to compute depth information.
Some of the one or more sensors 2328 may be used for position and/or orientation detection, ambient/environmental condition detection, and the like. Examples of such one or more sensors 2328 may include, among others, microelectromechanical systems (MEMS) with piezoelectric, piezoresistive, and/or capacitive components, which may be used to determine environmental conditions or location information related to the computer device 2300. In an embodiment, the MEMS may include a 3-axis accelerometer, a 3-axis gyroscope, and/or a magnetometer. In some embodiments, the one or more sensors 2328 may also include one or more gravimeters, altimeters, barometers, proximity sensors (for example, infrared radiation detectors and the like), depth sensors, ambient light sensors, thermal sensors (thermometers), and/or ultrasonic transceivers, and/or the like.
Each of these elements, for example, the one or more processors 2302, the hardware accelerator 2304, the memory 2306, the data storage circuit 2308 including the module 2310, the input/output interface 2322, the one or more sensors 2328, the communication circuit 2316 including the Tx 2318 and the NIC 2320, the system bus 2336, the I/O bus 2338, the device 2312, and the device 2332, may perform its conventional functions known in the art. In addition, they can be used to store and host the execution of programming instructions implementing various operating system functions and/or application programs, in particular the operations associated with the secure stream technology described above in connection with the drawings. The various elements may be implemented by assembler instructions supported by the processor 2302 or by a high-level language (for example, C) that can be compiled into such instructions. Operations associated with the device 2300 that are not implemented in software may be implemented in hardware, for example, via the hardware accelerator 2304 and/or firmware.
The number, capability, and/or capacity of these elements 2302-2338 may vary according to the number of other devices the device 2300 is configured to support. The configuration of the elements 2302-2338 is otherwise known and accordingly will not be further described.
Those skilled in the art will appreciate that the present disclosure may be embodied as methods or computer program products. Accordingly, in addition to being embodied in hardware as described earlier, the present disclosure may take the form of an entirely software embodiment (including firmware, resident software, microcode, etc.) or a combination of software and hardware embodiments.
Such embodiments may be collectively referred to as "circuits," "modules," or "systems." Furthermore, the present disclosure may take the form of a computer program product embodied in any tangible or non-transitory medium of expression having computer-usable program code embodied in the medium. Figure 24 illustrates an exemplary computer-readable non-transitory storage medium 2400 that may be suitable for use to store instructions that, in response to their execution by a device, cause the device to practice selected aspects of the present disclosure. As shown, the non-transitory computer-readable storage medium 2402 may include a number of programming instructions 2404 (also referred to herein as "instructions"). The programming instructions 2404 may be configured to enable a device (for example, the device 2300), in response to execution of the programming instructions, to perform various programming operations associated with, for example, operating system functions and/or application programs, in particular the operations associated with the secure stream technology described above with reference to the drawings.
In alternative embodiments, the programming instructions 2404 may instead be disposed on multiple computer-readable non-transitory storage media 2402. In alternative embodiments, the programming instructions 2404 may be provided on a computer-readable transitory storage medium 2402 (e.g., signals). Any combination of one or more computer-usable or computer-readable media may be utilized. For example, a computer-usable or computer-readable medium may be, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium.
More specific examples (a non-exhaustive list) of computer-readable media would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, transmission media such as those supporting the Internet or an intranet, or a magnetic storage device. Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer-usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, and the like.
FIG. 25 is a block diagram showing another embodiment of a computing system including a processor according to one or more embodiments. In accordance with the present disclosure, for example, in the embodiments described herein, the system 2500 includes a component, such as a processor 2502, to employ execution units including logic to perform algorithms for processing data.
System 2500 is representative of processing systems based on the PENTIUM III™, PENTIUM 4™, Xeon™, Itanium, XScale™, and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes, and the like) may also be used. In one embodiment, the exemplary system 2500 executes a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (e.g., UNIX and Linux), embedded software, and/or graphical user interfaces may also be used. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware circuitry and software. Embodiments are not limited to computer systems. Alternative embodiments of the present disclosure can be used in other devices, such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications may include a microcontroller, a digital signal processor (DSP), a system on a chip (SoC), a network computer (NetPC), a set-top box, a network hub, a wide area network (WAN) switch, or any other system that can perform one or more instructions in accordance with at least one embodiment. In this exemplary embodiment, the processor 2502 includes one or more execution units 2508 to implement an algorithm that is to perform at least one instruction. One embodiment may be described in the context of a single-processor desktop or server system, but alternative embodiments may be included in a multiprocessor system. System 2500 is an example of a "hub" system architecture. The computer system 2500 includes a processor 2502 to process data signals.
As one illustrative example, the processor 2502 may be a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor. The processor 2502 is coupled to a processor bus 2510 that transmits data signals between the processor 2502 and other components in the system 2500. The elements of system 2500 (e.g., graphics accelerator 2512, memory controller hub 2516, memory 2520, I/O controller hub 2530, wireless transceiver 2526, flash BIOS 2528, network controller 2534, audio controller 2536, serial expansion port 2538, and legacy I/O controller 2540 with user input interface 2542) perform their conventional functions, which are well known to those familiar with the art. In one embodiment, the processor 2502 includes a Level 1 (L1) internal cache memory 2504. Depending on the architecture, the processor 2502 may have a single internal cache or multiple levels of internal caches. Other embodiments include a combination of both internal and external caches, depending on the particular implementation and needs. The register file 2506 stores different types of data in various registers, including integer registers, floating-point registers, vector registers, banked registers, shadow registers, checkpoint registers, status registers, and instruction pointer registers. The execution unit 2508, including logic to perform integer and floating-point operations, also resides in the processor 2502. In one embodiment, the processor 2502 includes a microcode (µcode) ROM to store microcode, which when executed is to perform algorithms for certain macroinstructions or to handle complex scenarios. Here, the microcode is potentially updatable to handle logic bugs/fixes for the processor 2502.
For one embodiment, the execution unit 2508 includes logic to handle a packed instruction set 2509. By including the packed instruction set 2509 in the instruction set of the general-purpose processor 2502, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in the general-purpose processor 2502. Thus, many multimedia applications are accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This potentially eliminates the need to transfer smaller units of data across the processor's data bus, one data element at a time, in order to perform one or more operations. Alternative embodiments of the execution unit 2508 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. The system 2500 includes a memory 2520. The memory 2520 includes a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or another memory device. The memory 2520 stores instructions and/or data, represented by data signals, that are to be executed by the processor 2502. Note that any of the aforementioned features or aspects of the embodiments described herein may be utilized on one or more of the interconnects illustrated in FIG. 25. For example, an on-die interconnect (ODI), not shown, for coupling internal units of the processor 2502 implements one or more aspects of the embodiments described above.
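The benefit of a packed instruction set described above is that one wide operation acts on several narrow data elements at once, with no carries crossing element boundaries. The toy sketch below emulates that idea in software, treating a 64-bit word as four independent 16-bit lanes; it is an illustration of the packed-data concept only, not the processor logic of execution unit 2508.

```python
# Toy illustration (not processor logic): emulate a "packed add" that
# treats one 64-bit word as four independent 16-bit lanes, the way a
# packed instruction set operates on multiple data elements at once.
MASK16 = 0xFFFF

def pack4(lanes):
    """Pack four 16-bit values into a single 64-bit word (lane 0 lowest)."""
    word = 0
    for i, v in enumerate(lanes):
        word |= (v & MASK16) << (16 * i)
    return word

def unpack4(word):
    """Split a 64-bit word back into its four 16-bit lanes."""
    return [(word >> (16 * i)) & MASK16 for i in range(4)]

def packed_add(a, b):
    """Lane-wise 16-bit add with wraparound; no carry crosses a lane."""
    return pack4([(x + y) & MASK16 for x, y in zip(unpack4(a), unpack4(b))])

a = pack4([1, 2, 3, 0xFFFF])
b = pack4([10, 20, 30, 1])
print(unpack4(packed_add(a, b)))  # lanes add independently; last lane wraps to 0
```

A hardware packed add performs all four lane additions in a single instruction; the loop here only models the lane isolation, which is the property that lets multimedia code use the full bus width.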
Alternatively, these embodiments are associated with the following: a processor bus 2510 (e.g., an Intel Quick Path Interconnect (QPI) or other known high-performance computing interconnect), a high-bandwidth memory path 2518 to memory 2520, a point-to-point link to the graphics accelerator 2512 (e.g., a Peripheral Component Interconnect Express (PCIe)-compliant fabric), a controller hub interconnect 2522, and I/O or other interconnects (e.g., USB, PCI, PCIe). Some examples of such components include the audio controller 2536, firmware hub (flash BIOS) 2528, wireless transceiver 2526, data storage 2524, legacy I/O controller 2540 containing user input and keyboard interfaces 2542, a serial expansion port such as Universal Serial Bus (USB), and a network controller 2534. The data storage device 2524 can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or another mass storage device. FIG. 26 is a block diagram of an exemplary computer architecture 2600 in accordance with at least one embodiment of the present disclosure. FIG. 26 shows a computing system 2600 arranged in a point-to-point (PtP) configuration according to an embodiment, wherein one or more interconnects implement one or more features in accordance with at least one embodiment of the present disclosure. In particular, FIG. 26 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. Generally, one or more of the computing systems or computing devices described herein may be configured in the same or similar manner as computing system 2600. Processors 2670 and 2680 may be implemented as single-core processors 2674a and 2684a or multi-core processors 2674a-2674b and 2684a-2684b. Processors 2670 and 2680 may each include a cache 2671 and 2681 used by their respective core or cores.
A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low-power mode. Processors 2670 and 2680 may also each include integrated memory controller logic (MC) 2672 and 2682 to communicate with memory elements 2632 and 2634, which may be portions of main memory locally attached to the respective processors. In alternative embodiments, memory controller logic 2672 and 2682 may be discrete logic separate from processors 2670 and 2680. Memory elements 2632 and/or 2634 may store various data to be used by processors 2670 and 2680 in achieving the operations and functionality outlined herein. Processors 2670 and 2680 may be any type of processor, such as those discussed in connection with other figures herein. Processors 2670 and 2680 may exchange data via a point-to-point (PtP) interface 2650 using point-to-point interface circuits 2678 and 2688, respectively. Processors 2670 and 2680 may each exchange data with an input/output (I/O) subsystem 2690 via individual point-to-point interfaces 2652 and 2654, using point-to-point interface circuits 2676, 2686, 2694, and 2698. The I/O subsystem 2690, which may be a chipset in at least one embodiment, may exchange data with a high-performance graphics circuit 2638 via a high-performance graphics interface 2639, using an interface circuit 2692, which could be a PtP interface circuit. In one embodiment, the high-performance graphics circuit 2638 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.
The I/O subsystem 2690 may also communicate with a display 2616 for displaying data that is viewable by a human user. In alternative embodiments, any or all of the PtP links illustrated in FIG. 26 could be implemented as a multi-drop bus rather than PtP links. The I/O subsystem 2690 may communicate with a bus 2610 via an interface circuit 2696. The bus 2610 may have one or more devices that communicate over it, such as a bus bridge 2618 and I/O devices 2614. Via the bus 2610, the bus bridge 2618 may communicate with other devices such as a user interface 2622 (such as a keyboard, mouse, touchscreen, or other input device), communication devices 2626 (such as modems, network interface devices, or other types of communication devices that may communicate through a computer network 2660), audio I/O devices 2624, and/or a data storage device 2628. The data storage device 2628 may store code and data 2630, which may be executed by processors 2670 and/or 2680. In alternative embodiments, any portion of the bus architectures could be implemented with one or more PtP links. The computer system depicted in FIG. 26 is a schematic illustration of an embodiment of a computing system that may be utilized to implement the various embodiments discussed herein. It will be appreciated that the various components of the system depicted in FIG. 26 may be combined in a system-on-a-chip (SoC) architecture or in any other suitable configuration capable of achieving the functionality and features of the examples and embodiments provided herein. The computer program code for carrying out at least some operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language (such as Java, Smalltalk, C++, or the like) and conventional procedural programming languages (such as the "C" programming language or similar programming languages).
The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowchart and/or block diagram. These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means.
The instruction means implement the functions/acts specified in one or more blocks of the flowchart and/or block diagram. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions that execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in one or more blocks of the flowchart and/or block diagram. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, apparatuses, computer-readable media, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code and/or hardware, comprising one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
"Computer-implemented method," as used herein, may refer to any method executed by a computer system having one or more processors, a mobile device such as a smartphone (which may include one or more processors), a tablet, a laptop computer, a set-top box, a gaming console, and so forth. One or more embodiments may be implemented as a computer process, a computing system, or an article of manufacture, such as a computer program product of computer-readable media. The computer program product may be a computer storage medium readable by a computer system and encoding computer program instructions for executing a computer process. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiments were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed.
Modifications and variations are possible in light of the above teachings or may be acquired from practice of various implementations. Various illustrative embodiments of the present disclosure have been described, including but not limited to the following. Example X01 may include an apparatus comprising means to manage integrity and data encryption (IDE) over a computer bus. Example X02 may include the apparatus of Example X01 and/or some other examples herein, wherein the computer bus comprises a PCI-related bus. Example X03 may include the apparatus of Example X01 and/or some other examples herein, wherein the means to manage IDE comprise a packet structure, a port-level mechanism, configuration registers, or operating rules for the computer bus. Example Z01 may include an apparatus comprising one or more elements to perform the method described in or related to any example herein, or any other method or process described herein. Example Z02 may include one or more non-transitory computer-readable media comprising instructions that, when executed by one or more processors of an electronic device, cause the electronic device to perform one or more elements of the method described in or related to any example herein, or any other method or process described herein. Example Z03 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of the method described in or related to any example herein, or any other method or process described herein. Example Z04 may include the method, technique, or process described in or related to any example herein, or portions or fragments thereof. Example Z05 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, technique, or process described in or related to any example herein, or portions thereof. Example Z06 may include a signal described in or related to any example herein, or portions or fragments thereof. Example 1 is an apparatus that includes: an encryption engine to authenticate an identity of a link partner for a secure stream transaction; and transaction layer logic comprising hardware circuitry to: encode a transaction layer packet (TLP) with integrity protection and/or encrypt a data payload of the TLP with data encryption to form a secure TLP; associate the secure TLP with a secure stream; and send the secure TLP across the secure stream to the link partner. Example 2 may include the subject matter of Example 1, and may also include transaction layer logic circuitry to read an extended capability register and to determine that the apparatus and the link partner support integrity protection and data encryption for TLP encoding. Example 3 may include the subject matter of Example 2, and may also include transaction layer logic circuitry to set a bit in a control register to indicate that the apparatus and the link partner support a secure stream using integrity protection or data encryption. Example 4 may include the subject matter of any of Examples 1-3, wherein the transaction layer logic encodes the secure TLP with a secure stream number, the secure stream number being unique to the secure stream that the secure TLP is to traverse. Example 5 may include the subject matter of any of Examples 1-4, and may also include an encryption engine comprising hardware circuitry to encrypt the TLP. Example 6 may include the subject matter of Example 5, wherein the encryption engine uses an encryption standard based on the Advanced Encryption Standard Counter mode (AES-CTR) encryption protocol. Example 7 includes the subject matter of any of Examples 1-6, and may also include a data integrity protection engine comprising hardware circuitry to implement data integrity protection for the TLP. Example 8 includes the subject matter of Example 7, wherein the data integrity protection engine uses an integrity protocol based on the Galois Message Authentication Code (GMAC) protocol. Example 9 includes the subject matter of any of Examples 1-8, and may also include transaction layer logic circuitry to encode the TLP with a prefix indicating that the TLP includes one or both of integrity protection or data encryption. Example 10 includes the subject matter of Example 9, wherein the prefix includes an L bit that, when set, indicates that the TLP is the last secure TLP on the secure stream and that subsequent TLPs received on the secure stream will use a new encryption key set. Example 11 may include the subject matter of any of Examples 9-10, wherein the prefix includes an M bit that, when set, indicates that the TLP includes a message authentication code (MAC). Example 12 may include the subject matter of any of Examples 1-11, wherein the secure stream includes one or more sub-streams, the one or more secure sub-streams including secure sub-streams for posted requests, non-posted requests, or completions. Example 13 may include the subject matter of Example 12, and further include: a transaction layer
logic circuitry to provide a counter block for data encryption and a counter block for integrity protection for each secure sub-stream in the secure stream. Example 14 may include the subject matter of any of Examples 1-13, and may also include transaction layer logic circuitry to determine that TLPs will transit a switch complex to the link partner, and to encode each TLP of the secure stream with integrity protection and/or encrypt the data payload of each TLP of the secure stream. Example 15 may include the subject matter of any of Examples 1-14, and may also include transaction layer logic circuitry to determine that TLPs will be sent to the link partner without traversing a switch complex, and to selectively encode one or more TLPs in the secure stream and/or selectively encrypt the data payload of one or more TLPs. Example 16 is a method that includes: determining, by logic circuitry at a transaction layer of a protocol stack of a device, that a packet is to traverse a secure stream to a link partner; authenticating a receiving port of the link partner; configuring a transaction layer packet (TLP) prefix to identify the TLP as a secure TLP; associating the secure TLP with the secure stream; applying integrity protection and/or data encryption to the secure TLP; and sending the secure TLP across the secure stream to the link partner. Example 17 may include the subject matter of Example 16, and may also include associating the secure stream with an authentication key, and associating the authentication key with a key identifier (key ID) that is specific to each of data encryption and integrity protection. Example 18 may include the subject matter of any of Examples 16-17, wherein associating the secure TLP with the secure stream includes associating the secure TLP with a secure stream number, the secure stream number being encoded into the TLP prefix. Example 19 may include the subject matter of any of Examples 16-18, wherein data encryption is performed using Advanced Encryption Standard Counter mode (AES-CTR) encryption. Example 20 may include the subject matter of any of Examples 16-19, wherein integrity protection is performed using a Galois Message Authentication Code (GMAC). Example 21 is a system that includes: a root complex including a root port; an endpoint device including an upstream port; and an interconnect coupling the root port with the upstream port. The root port may include a protocol stack having a transaction layer, the transaction layer including hardware circuitry to: encode a transaction layer packet (TLP) with a secure TLP prefix, the secure TLP prefix indicating that the TLP is to transit the interconnect on a secure stream; associate the TLP with the secure stream; perform one or both of data encryption of the TLP's data payload or integrity protection of the TLP; and send the TLP to the endpoint device. Example 22 may include the subject matter of Example 21, wherein the root port is directly linked to the upstream port, and wherein the secure TLP prefix includes a local TLP prefix. Example 23 may include the subject matter of Example 22, wherein associating the TLP with the secure stream includes setting a secure stream identifier in the TLP header to zero. Example 24 may include the subject matter of Example 21, and further include a switch complex comprising a downstream switch port coupled to the upstream port and an upstream switch port coupled to the root port, the transaction layer including hardware circuitry to secure the TLP to be transmitted through the switch complex to the endpoint based on requester identifier (RID) and address association register settings. Example 25 may include the subject matter of Example 21, wherein the secure TLP prefix may include a first bit indicating the last TLP in the secure stream; a second bit indicating whether the TLP originates from a trusted environment; a third bit indicating that the TLP includes a message authentication code (MAC); and a counter value indicating a count of non-posted request and completion TLPs. Example 26 is an apparatus that includes means for: encoding a transaction layer packet (TLP) with integrity protection and/or encrypting a data payload of the TLP with data encryption to form a secure TLP; associating the secure TLP with a secure stream; and sending the secure TLP across the secure stream to a link partner. Example 27 may include the subject matter of Example 26, and may also include means for reading an extended capability register and determining that the apparatus and the link partner support integrity protection and data encryption for TLP encoding. Example 28 may include the subject matter of Example 27, and may also include means for setting a bit in a control register to indicate that the apparatus and the link partner support a secure stream using integrity protection or data encryption. Example 29 may include the subject matter of any of Examples 26-28, wherein the transaction layer logic encodes the secure TLP with a secure stream number, the secure stream number being unique to the secure stream that the secure TLP is to traverse. Example 30 may include the subject matter of any of Examples 26-29, and may also include an encryption engine having hardware circuitry to encrypt the TLP. Example 31 is a non-transitory computer-readable medium storing instructions that, when executed, cause a hardware processor to perform operations including: determining, by logic circuitry at a transaction layer of a protocol stack of a device, that a packet is to traverse a secure stream to a link partner; authenticating a receiving port of the link partner; configuring a transaction layer packet (TLP) prefix to identify the TLP as a secure TLP; associating the secure TLP with the secure stream; applying integrity protection and data encryption to the secure TLP; and sending the secure TLP across the secure stream to the link partner. Example 32 may include the subject matter of Example 31, and may also include associating the secure stream with an authentication key, and associating the authentication key with a key identifier (key ID) that is specific to each of data encryption and integrity protection. Example 33 may include the subject matter of Example 31, wherein associating the secure TLP with the secure stream includes associating the secure TLP with a secure stream number, the secure stream number being encoded into the TLP prefix. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed.
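The examples above describe several prefix fields (an L bit marking the last TLP under a key set, an M bit signaling an attached MAC, a trusted-environment bit, and a secure stream number encoded into the prefix). As a rough illustration of how such a prefix could be packed and parsed, the sketch below uses a hypothetical bit layout chosen for this example only; the actual field positions and widths of an IDE TLP prefix are defined by the PCIe specification and are not reproduced here.

```python
# Hypothetical (illustrative) layout for a 32-bit secure TLP prefix:
#   bit 31    L  - last secure TLP before a key set change
#   bit 30    M  - a MAC is attached to this TLP
#   bit 29    T  - TLP originates from a trusted environment
#   bits 0-7  stream_id - secure stream number
# A real IDE prefix uses spec-defined positions; this is NOT that layout.

def encode_prefix(last, has_mac, trusted, stream_id):
    """Pack the illustrative prefix fields into one 32-bit value."""
    if not 0 <= stream_id <= 0xFF:
        raise ValueError("stream_id must fit in 8 bits")
    return (last << 31) | (has_mac << 30) | (trusted << 29) | stream_id

def decode_prefix(prefix):
    """Recover the illustrative prefix fields from a 32-bit value."""
    return {
        "last": (prefix >> 31) & 1,
        "has_mac": (prefix >> 30) & 1,
        "trusted": (prefix >> 29) & 1,
        "stream_id": prefix & 0xFF,
    }

p = encode_prefix(last=0, has_mac=1, trusted=1, stream_id=5)
print(hex(p), decode_prefix(p))
```

A receiver that decodes such a prefix would, per the examples, route the TLP to the sub-stream named by `stream_id` and, if the M bit is set, verify the attached MAC before consuming the payload.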
Various systems and methods for caching and tiering in cloud storage are described herein. A system for managing storage allocation comprises a storage device management system to: maintain an access history of a plurality of storage blocks of solid state drives (SSDs) managed by the storage device management system; and automatically configure each of the plurality of storage blocks to operate in cache mode or tier mode, wherein a ratio of storage blocks operating in cache mode to storage blocks operating in tier mode is based on the access history.
1. A system for managing storage allocation, the system comprising: a storage device management system to: maintain an access history of a plurality of storage blocks of solid state drives (SSDs) managed by the storage device management system; and automatically configure each of the plurality of storage blocks to operate in a cache mode or a tier mode, wherein a ratio of storage blocks operating in the cache mode to storage blocks operating in the tier mode is based on the access history.

2. The system of claim 1, wherein to maintain the access history, the storage device management system is to: determine an average access frequency of each of the plurality of storage blocks; and determine an access consistency of each of the plurality of storage blocks.

3. The system of claim 2, wherein to automatically configure each of the plurality of storage blocks, the storage device management system is to: configure storage blocks having a relatively high average access frequency and a relatively high access consistency to operate in the tier mode; and configure storage blocks having a relatively low average access frequency and a relatively low access consistency to operate in the cache mode.

4. The system of claim 2, wherein the storage device management system is to sort the plurality of storage blocks based on the access consistency.

5. The system of claim 4, wherein the storage device management system is to identify an access consistency threshold, and wherein to automatically configure each of the plurality of storage blocks, the storage device management system is to configure storage blocks having an access consistency exceeding the access consistency threshold to operate in the tier mode.

6. The system of claim 5, wherein to automatically configure each of the plurality of storage blocks, the storage device management system is to configure storage blocks having an access consistency that does not exceed the access consistency threshold to operate in the cache mode.

7. The system of claim 5, wherein the storage device management system is to adjust the access consistency threshold to maximize a hit rate of the plurality of storage blocks stored on the SSDs.

8. The system of claim 5, wherein the storage device management system is to adjust the access consistency threshold based on a weighted function of the access consistency, weighted by the average access frequency of each of the plurality of storage blocks per time period.

9. A method of managing storage allocation, the method comprising: maintaining, at a storage device management system, an access history of a plurality of storage blocks of solid state drives (SSDs) managed by the storage device management system; and automatically configuring, by the storage device management system, each of the plurality of storage blocks to operate in a cache mode or a tier mode, wherein a ratio of storage blocks operating in the cache mode to storage blocks operating in the tier mode is based on the access history.

10. The method of claim 9, wherein maintaining the access history comprises: determining an average access frequency of each of the plurality of storage blocks; and determining an access consistency of each of the plurality of storage blocks.

11. The method of claim 10, wherein automatically configuring each of the plurality of storage blocks comprises: configuring storage blocks having a relatively high average access frequency and a relatively high access consistency to operate in the tier mode; and configuring storage blocks having a relatively low average access frequency and a relatively low access consistency to operate in the cache mode.

12. The method of claim 10, further comprising sorting the plurality of storage blocks based on the access consistency.

13. The method of claim 12, further comprising identifying an access consistency threshold, and wherein automatically configuring each of the plurality of storage blocks comprises configuring storage blocks having an access consistency exceeding the access consistency threshold to operate in the tier mode.

14. The method of claim 13, wherein automatically configuring each of the plurality of storage blocks comprises configuring storage blocks having an access consistency that does not exceed the access consistency threshold to operate in the cache mode.

15. The method of claim 13, further comprising adjusting the access consistency threshold to maximize a hit rate of the plurality of storage blocks stored on the SSDs.

16. The method of claim 13, further comprising adjusting the access consistency threshold based on a weighted function of the access consistency, weighted by the average access frequency of each of the plurality of storage blocks per time period.

17. The method of claim 10, wherein the access consistency of each of the plurality of storage blocks is a ratio of a standard deviation of accesses to an average access frequency corresponding to the respective one of the plurality of storage blocks, wherein the standard deviation of accesses is calculated based on a number of accesses in a given period, and the average access frequency is calculated over the given period.

18. The method of claim 10, wherein determining the access consistency of the plurality of storage blocks comprises using a weighted average of the access consistency of respective ones of the plurality of storage blocks.

19. The method of claim 9, further comprising periodically repeating the automatic configuring of claim 9.

20. The method of claim 9, wherein the SSDs managed by the storage device management system are not shared across device pools.

21. The method of claim 9, wherein the storage device management system manages a plurality of storage devices organized into a plurality of tiers, and wherein the method further comprises: identifying a new drive to be incorporated into the plurality of storage devices; identifying a data transfer metric of the new drive; and incorporating the new drive into the plurality of tiers based on the data transfer metric.

22. The method of claim 21, wherein the data transfer metric comprises input/output operations per second (IOPS) per gigabyte.

23. The method of claim 22, wherein the IOPS of the new drive is obtained from datasheet information corresponding to the new drive.

24. A machine-readable medium including instructions that, when executed by a machine, cause the machine to perform the operations of the method of any one of claims 9 to 23.

25. An apparatus for managing storage allocation, the apparatus comprising: means for maintaining, at a storage device management system, an access history of a plurality of storage blocks of solid state drives (SSDs) managed by the storage device management system; and means for automatically configuring, by the storage device management system, each of the plurality of storage blocks to operate in a cache mode or a tier mode, wherein a ratio of storage blocks operating in the cache mode to storage blocks operating in the tier mode is based on the access history.
Cache operations and tiering operations for cloud storage

Technical Field

Embodiments described herein relate generally to storage device management, and in particular to caching and tiering for cloud storage.

Background

A solid state drive (SSD) is a data storage device that uses integrated circuit assemblies as memory to store data persistently. SSDs use an interface compatible with traditional block input/output hard disk drives (HDDs), which provides backward compatibility and simple replacement in a variety of applications. Most SSDs use NAND-based flash memory, which retains data without power. SSDs have been incorporated into storage arrays as a caching mechanism. SSDs can also be used for storage. Current implementations of SSDs for storage and caching involve managing the two independently. SSDs can be allocated from device pools; thus, there is a limited number of SSDs available for storage or caching.

Brief Description of the Drawings

In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components.
Some embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which:

FIG. 1 is a schematic diagram showing a computing environment for cache operations and tiering operations in cloud storage, according to an embodiment;

FIG. 2 is a chart showing time-varying SSD storage pool partitioning based on data access metrics, according to an embodiment;

FIG. 3 is a block diagram showing a plurality of storage pools, according to an embodiment;

FIG. 4 is a block diagram showing a storage pool with a flexible tiering design, according to an embodiment;

FIG. 5 is a block diagram showing a system for managing storage allocation, according to an embodiment;

FIG. 6 is a flowchart showing a method of managing storage allocation, according to an embodiment; and

FIG. 7 is a block diagram showing an example machine upon which any one or more of the techniques (e.g., methodologies) discussed herein may be performed, according to an example embodiment.

Detailed Description

The systems and methods described herein provide cache operations and tiering operations for cloud storage. In cloud storage, a combination of HDDs and SSDs can be used. For top-tier services, SSDs can be used as the primary storage mechanism, while lower-tier services provide HDDs for storage. In addition, SSDs can be used as caches in front of SSD-based top-tier services or HDD-based lower-tier services.

Current implementations of SSDs for storage and caching involve managing the two independently. SSDs can be allocated from device pools; thus, there is a limited number of SSDs available for storage or caching. An administrator typically guesses the expected I/O pattern based on stored content, user base, or other aspects, and then adds some margin of error when allocating SSD pools for storage or caching. Once the allocation is committed, there is no easy way to reallocate solid state capacity between storage and caching operations.
Instead, reallocation typically involves reconfiguring the storage subsystem, which can result in downtime, negotiation with consumers, and consumption of administrator resources. As a result, the storage service owner will size the overall SSD pool more conservatively to meet the expected demand for input/output operations per second (IOPS), resulting in higher overall solution costs. In addition, current storage designs tend to share federated cache resources across several storage pools, resulting in I/O contention across these pools.

A storage pool typically includes multiple tiers of devices. A device tier refers to a collection of similar or identical devices or device types that provide substantially equivalent performance. Device tiers are organized roughly by device description, such as by disk size (e.g., 500 GB, 1 TB, and 2 TB drives organized into separate tiers), by disk rotational speed (e.g., 7.2K rpm and 10K rpm drives organized into separate tiers), or by some combination of drive capacity and rotational speed.

The mechanisms described herein provide adaptive, integrated solid state cache operations and tiering operations that dynamically and automatically allocate SSD capacity between a cache mode and a tiered mode (storage mode). This allows the storage capacity manager to optimize the overall solid state storage pool size, resulting in reduced acquisition costs and configuration management overhead. In addition, the manager can allocate overall solid state storage capacity for both tiering and caching on a per-pool basis, thereby eliminating cross-pool contention and the associated sizing complexity. The drives can also be organized into dynamic tiers such that when a drive is added to the storage pool, the drive is added to an existing tier or to a newly created tier based on the IOPS density of the drive.
This type of management results in lower storage acquisition and ongoing support costs, simplified configuration management, and increased storage system performance.

FIG. 1 is a schematic diagram showing a computing environment 100 for cache operations and tiering operations in cloud storage, according to an embodiment. Computing environment 100 includes a plurality of hosts 102 and a cloud storage system 104 that are communicatively coupled via a network 106. A host 102 can be a device such as a smartphone, a cellular phone, a mobile phone, a laptop computer, a tablet computer, a music player, a wearable device (e.g., a watch, a glasses-based device, etc.), a desktop computer, a hybrid device, an in-wall device, or another networked device.

Network 106 may include a local area network (LAN), a wide area network (WAN), wireless variants of these networks (e.g., a wireless LAN (WLAN), such as a network compliant with the IEEE 802.11 family of standards, or a wireless WAN, such as a cellular network), a public switched telephone network (PSTN), an ad hoc network, a personal area network (e.g., Bluetooth), or another combination or arrangement of network protocols and network types. Network 106 may include a single LAN or WAN, or a combination of LANs or WANs, such as the Internet. The various devices in FIG. 1 (e.g., hosts 102) can be coupled to network 106 via one or more wired or wireless connections.

The cloud storage system 104 includes: cloud storage operating software 106 that manages a random access memory (RAM) cache 108; an SSD storage pool 110 that includes SSDs operating as an SSD cache 112 and SSDs operating as an SSD tier 114; and a hard disk tier 116. Disk tiers are organized roughly based on disk performance (e.g., SSD tier 114 and hard disk tier 116).
In many implementations, the SSD tier is at the top of the tiering hierarchy, and conventional disk drives occupy the middle and lower tiers of the hierarchy. The middle tier is conventionally referred to as a performance tier and may include serial attached SCSI (SAS) drives. The lower tier is conventionally referred to as a capacity tier and may include near-line SAS (NL-SAS) or Serial ATA (SATA) drives, which are slower drives with larger capacities. SAS drives have largely replaced older SCSI disks and are regarded as the standard in enterprise storage. Among these three types of disks (SAS, NL-SAS, and SATA), SAS is the most reliable, maintaining its performance better than NL-SAS and SATA disks. SAS disks have been tested to perform reliably at a near 100% duty cycle, while NL-SAS and SATA disks have been designed and tested to perform at much lower duty cycles.

At the top of the tiering hierarchy is the SSD tier 114, which includes disks from the SSD storage pool 110. SSD storage pool 110 is partitioned into the SSD cache 112 and the SSD tier 114. Conventionally, the size of the SSD cache 112 is configured manually by an administrator, who makes a rough estimate of the expected input/output (I/O) patterns and requirements. Once the size is determined, there is no easy way to redistribute the solid state capacity. In the embodiment shown in FIG. 1, cloud storage operating software 106 can dynamically partition solid state storage capacity between cache operations and tiered operation functions, which provides a significantly more cost-effective storage pool configuration than manual sizing.

Cloud storage operating software 106 can be configured to monitor the use of SSD storage pool 110.
Blocks that are accessed on a consistent and continuous basis at a frequency above an access consistency threshold may be marked as tiered, in which case the primary copy is migrated to reside in the solid state tier 114. In this way, these tiered blocks no longer need to be periodically flushed to the much slower hard disk based storage tier 116, thereby conserving the relatively scarce IOPS and bandwidth of the much slower hard disks. Conversely, a block whose access pattern falls below the access consistency threshold can be marked as being in cache mode and can be stored in SSD cache 112, with the primary copy retained on the hard disk tier 116. A cached block is readily evicted or overwritten in response to changing access patterns, based on standard caching algorithms. These caching algorithms allow the capacity of the SSD cache 112 to be serially shared among blocks with significantly time-varying access patterns, at the expense of periodically updating the copies on the hard disk tier 116.

The relative sizes of the SSD cache 112 and SSD tier 114 capacities can be dynamically adjusted using a machine learning algorithm, based on summary information about block access patterns. Specifically, for each block, the average accesses (reads and writes) for each day are tracked and a measure of the variability of the access pattern is determined. A block with relatively high variability indicates burst or intermittent access, which is suitable for cache operations. A block with relatively low variability indicates a stable access pattern, which is better suited for tiered operations. To achieve greater time domain resolution, variability can be measured at smaller intervals, such as every hour or less. In an embodiment, to measure the access pattern variability of a block, the ratio of the standard deviation of accesses to the average number of accesses may be used.
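A minimal sketch of one such variability measure, using the standard deviation scaled by the average number of accesses (the orientation used in the per-day sampling example and in the claims); the function name and sample data are illustrative assumptions, not part of the specification:

```python
from statistics import mean, pstdev

def access_consistency_metric(samples):
    """Variability of a block's access pattern: std deviation scaled by mean.

    `samples` holds per-interval access counts (e.g., 1440 per-minute counts
    in a day). A low value indicates a steady pattern (a tiering candidate);
    a high value indicates a bursty pattern (a caching candidate).
    """
    avg = mean(samples)
    if avg == 0:
        return float("inf")  # block was never accessed in the period
    return pstdev(samples) / avg

# A steady block vs. a bursty block with the same total number of accesses.
steady = [10] * 10        # 10 accesses in every interval
bursty = [100] + [0] * 9  # all 100 accesses land in one interval

print(access_consistency_metric(steady))  # 0.0 (perfectly consistent)
print(access_consistency_metric(bursty))  # 3.0 (highly bursty)
```

Because the standard deviation is divided by the mean, two blocks with very different absolute access counts can still be compared on the same consistency scale.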
In another embodiment, the time between accesses can be used to determine the variability of the access pattern.

For example, in a given period (e.g., a day), the number of accesses to a block may be sampled multiple times (e.g., every minute). The results are then averaged over the period to provide an average number of accesses. The standard deviation for the period can also be calculated (e.g., over the 1440 samples in a day). A relatively low standard deviation can represent a relatively consistent access pattern, and a relatively high standard deviation can represent a relatively volatile or bursty access pattern. The standard deviation can be scaled by the average, and the result can be used as an access consistency metric.

As another example, the time between accesses may be measured during a given day, and the average time between accesses may be calculated for multiple sub-periods (e.g., per minute). The standard deviation of the per-minute average time between accesses within one day can then be calculated over the 1440 samples. Similar to the previous example, a relatively low standard deviation can represent a consistent access pattern, and vice versa.

An access consistency metric can be calculated for each data block in cloud storage system 104. In this manner, if a block of data is initially stored in the hard disk tier 116 and then moved to the SSD cache 112 when requested by a host 102, accesses to the block are tracked. If the data block is later flushed from the SSD cache 112 because it is not being accessed and the SSD cache 112 space is needed for another data block, the access history of this data block is stored and maintained, so that in the future, if the data block is again requested and moved to the SSD cache 112, its history can be properly taken into account in the access consistency metric.

Based on access frequency, the blocks may be ordered in descending order, and a cutoff determined for the SSD storage pool 110.
This cutoff can be a small multiple of the available overall SSD capacity. For this subset, an iterative approach, starting from a heuristic-based threshold on the variability measure, is used to derive the dynamic partitioning between the cache and tiered operation functions. Blocks with a stable access history are designated as tiered, and the remaining blocks are designated as cached. The access consistency threshold may be iteratively shifted up or down toward the overall goal of maximizing the hit rate (e.g., the SSD hit rate) weighted by the total number of accesses per time period. In this way, among blocks with similar hit ratios, blocks with a higher number of accesses are preferred over blocks with fewer accesses.

FIG. 2 is a chart 200 showing time-varying SSD storage pool partitioning operations based on data access metrics, according to an embodiment. SSD storage pool 110 may be partitioned between the SSD cache 112 and the SSD tier 114. The x-axis of chart 200 is the access consistency of a block, while the y-axis of chart 200 is the average access frequency of a block. Note that the lower end of the y-axis is a medium-to-high access frequency. This is because only blocks with at least a medium-to-high access frequency will be stored in SSD storage; blocks with lower access frequencies may be stored in capacity storage (e.g., hard disk tier 116). A block whose access consistency changes significantly over time is stored in the SSD cache 112. Blocks that are frequently used and have relatively consistent I/O patterns (e.g., high access consistency) are stored in the SSD tier 114. The SSD storage capacity allocated to the cache mode or the tiered mode is determined by the access consistency threshold, and the capacity for either caching or tiering may vary over time.

FIG. 3 is a block diagram showing a plurality of storage pools, according to an embodiment.
Each of pool A 302 and pool B 304 includes an extreme performance tier, a performance tier, and a capacity tier. As discussed above, an extreme performance tier typically includes SSDs; a performance tier typically includes fast, high-performance, reliable SAS drives; and a capacity tier typically includes lower-performance, high-capacity NL-SAS drives. An NL-SAS drive can be a SAS drive with lower specifications (e.g., a lower rotational speed or a lower mean time between failures (MTBF)). An NL-SAS drive can also be another drive type (e.g., SATA) with an interposer or bridge for translating between SAS commands and native drive commands (e.g., SATA commands).

In contrast to conventional SSD cache operations that share an SSD cache between pools, in the configuration shown in FIG. 3, each pool (302 and 304) includes its own solid-state storage pool for adaptive and dynamic partitioning into cache and tiers. By not sharing a single logical SSD cache across pools, contention is eliminated and the performance of each pool is increased.

In conventional implementations, the pools (302 and 304) are organized with tiers based on the type of drive (e.g., SSD, SAS, NL-SAS). In these conventional implementations, a single drive type can be specified by the administrator for each tier. For example, when configuring a SAS tier, the administrator can be presented with a list of drive types and capacities, so that the administrator can specify a 300 GB 15K RPM SAS drive, or a 600 GB 10K RPM drive, or a 900 GB 10K drive, but not a mix of these drive types/capacities.

A drawback of these types of limitations is that cloud storage administrators are forced to add pools based on automated storage tiering with the initially selected drive types.
To be able to use newer drive types, cloud storage administrators are forced to begin configuring new pools, which can result in fragmentation of storage capacity across a larger number of pools, each of which has much lower capacity than a single unified storage pool. This fragmentation of storage capacity leads to multiple inefficiencies that collectively drive a higher overall total cost of ownership and associated competitive concerns.

To overcome these limitations, storage tiering can be designed and implemented based on drive performance metrics rather than device classification alone (e.g., SSD, SAS, NL-SAS). In an embodiment, two separate metrics, drive capacity and drive IOPS, are used together in a composite metric called IOPS density. The IOPS density is IOPS divided by capacity. The tiers can be arranged in order of IOPS density.

An example tiering hierarchy based on IOPS density is:

Top tier: SSD (highest IOPS density)

SAS tiers: 10K RPM–300GB; 10K RPM–600GB; 10K RPM–900GB; 10K RPM–1.2TB

NL-SAS tiers: 7.2K RPM–2TB; 7.2K RPM–3TB; 7.2K RPM–4TB (lowest IOPS density)

The data placement algorithm does not need to change in principle; it is only adapted to accommodate a larger number of tiers. Typically, the method begins by first filling the highest-performing tier (up to a specified threshold), then migrating the least active data blocks to the next lower-performance storage tier until it is populated to its specified threshold, and so on, rippling down to the lowest-performance tier. This provides the best overall performance by directing the maximum amount of I/O traffic to the best-performing tiers of the storage pool.

FIG. 4 is a block diagram showing a storage pool with a flexible tiering design, according to an embodiment. In contrast to the pools described in FIG. 3, the pool shown in FIG. 4 includes several sub-tiers within each of performance tier 402 and capacity tier 404.
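The IOPS-density ordering of such a hierarchy can be sketched as follows; the drive capacities and IOPS figures here are illustrative assumptions, not vendor specifications:

```python
def iops_density(iops, capacity_gb):
    """IOPS density, as defined above: IOPS divided by capacity."""
    return iops / capacity_gb

# (label, capacity in GB, rated IOPS) -- illustrative numbers only.
drives = [
    ("SAS 10K RPM - 300GB",    300,   150),
    ("SSD",                    400, 40000),
    ("NL-SAS 7.2K RPM - 4TB", 4000,    80),
    ("SAS 10K RPM - 900GB",    900,   150),
    ("NL-SAS 7.2K RPM - 2TB", 2000,    80),
]

# Arrange the hierarchy in descending order of IOPS density: SSDs rise to
# the top, large slow NL-SAS drives sink to the bottom, regardless of the
# device-class label.
hierarchy = sorted(drives, key=lambda d: iops_density(d[2], d[1]), reverse=True)
for label, cap, iops in hierarchy:
    print(f"{label}: {iops_density(iops, cap):.3f} IOPS/GB")
```

Note that the two SAS drives end up in different positions even though they share a device class, which is the point of ordering by measured density rather than by classification.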
These sub-tiers are organized according to IOPS density. Although only a few sub-tiers are shown, it should be understood that any number of sub-tiers can be implemented. Moreover, although only one SSD tier is shown, it should be understood that additional SSD tiers can be implemented in accordance with the same principles described above. Furthermore, although basic tiers (e.g., SSD, SAS, NL-SAS) are shown, it should be understood that the tier names may be removed; drives may be organized by IOPS density alone and treated as "performance" or "capacity" drives based on various IOPS density thresholds.

FIG. 5 is a block diagram showing a system 500 for managing storage allocation, according to an embodiment. System 500 can include a storage device management system 502. Storage device management system 502 can be implemented in whole or in part by cloud storage operating software 106.

The storage device management system 502 can be configured to: maintain an access history of a plurality of storage blocks of a solid state drive (SSD) managed by the storage device management system; and automatically configure each of the plurality of storage blocks to operate in a cache mode or a tiered mode, wherein the ratio of storage blocks operating in cache mode to storage blocks operating in tiered mode is based on the access history. In an embodiment, storage device management system 502 is configured to perform these operations cyclically (e.g., daily, hourly, every minute, etc.).

In an embodiment, to maintain the access history, the storage device management system is configured to determine an average access frequency of each of the plurality of storage blocks and to determine an access consistency of each of the plurality of storage blocks.
In a further embodiment, to automatically configure each of the plurality of storage blocks, the storage device management system is configured to: configure blocks having a relatively high average access frequency and relatively high access consistency to operate in tiered mode; and configure blocks having a relatively low average access frequency and relatively low access consistency to operate in cache mode.

In another embodiment, storage device management system 502 is configured to order the plurality of storage blocks based on access consistency. In a further embodiment, the storage device management system 502 is configured to identify an access consistency threshold and, to automatically configure each of the plurality of storage blocks, to configure storage blocks having an access consistency exceeding the access consistency threshold to operate in tiered mode. In a further embodiment, to automatically configure each of the plurality of storage blocks, the storage device management system 502 is configured to configure storage blocks having an access consistency that does not exceed the access consistency threshold to operate in cache mode.

In an embodiment, the storage device management system 502 is configured to adjust the access consistency threshold to maximize the hit rate of the plurality of storage blocks stored on the SSD.

In an embodiment, storage device management system 502 is configured to adjust the access consistency threshold based on a weighting function of the access consistency weighted by the average access frequency of each of the plurality of storage blocks within a time period.

In an embodiment, the access consistency of each of the plurality of storage blocks is the ratio of the standard deviation of accesses to the average access frequency of the respective block.

In an embodiment, to determine the access consistency of the plurality of storage blocks, the storage device management system is configured to use a weighted average of the access consistency of respective blocks of the plurality of blocks.

In an embodiment, SSDs managed by storage device management system 502 are not shared across device pools. As discussed above, maintaining a separate SSD cache for each pool reduces cross-pool contention and increases performance.

In an embodiment, the storage device management system 502 manages a plurality of storage devices organized in a plurality of tiers, and the storage device management system is configured to: identify a new drive to be merged into the plurality of storage devices; identify a data transfer metric for the new drive; and merge the new drive into the plurality of tiers based on the data transfer metric. In a further embodiment, the data transfer metric includes input/output operations per second (IOPS) per gigabyte. In an embodiment, the IOPS of the new drive is obtained from data sheet information corresponding to the new drive.
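The drive-merging operation might be sketched as follows, assuming each tier covers a small range of IOPS densities; the tolerance-based grouping and the data structure are illustrative assumptions, not the specification's own design:

```python
def merge_drive(tiers, drive_iops, drive_capacity_gb, tolerance=1.0):
    """Place a new drive into an existing tier or create a new tier.

    `tiers` is a list of (representative_density, drives) entries kept in
    descending density order. If the new drive's IOPS density (IOPS/GB) is
    within `tolerance` of an existing tier's density, it joins that tier;
    otherwise a new tier is created for it.
    """
    density = drive_iops / drive_capacity_gb
    for rep, drives in tiers:
        if abs(rep - density) <= tolerance:
            drives.append((drive_iops, drive_capacity_gb))
            return tiers
    tiers.append((density, [(drive_iops, drive_capacity_gb)]))
    tiers.sort(key=lambda t: t[0], reverse=True)  # fastest tier first
    return tiers

# Two similar SAS drives land in the same tier (densities 72.5 and ~73.3);
# a much denser SSD creates a new top tier.
tiers = merge_drive([], 145, 2)
tiers = merge_drive(tiers, 220, 3)
tiers = merge_drive(tiers, 40000, 400)
print([rep for rep, _ in tiers])  # [100.0, 72.5]
```

The tolerance plays the role of the density range that defines a tier; in practice it could be configured per tier rather than fixed globally.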
In another embodiment, to obtain the IOPS of a new drive, the storage device management system is configured to: monitor the new drive during operation of the plurality of storage devices; and measure the average IOPS of the new drive based on the monitoring.

In an embodiment, to merge the new drive into the plurality of tiers, the storage device management system is configured to: identify a new tier for the new drive; and merge the new tier into the plurality of tiers.

In an embodiment, the plurality of tiers are organized from faster to slower based on the data transfer metrics of the plurality of storage devices. A tier can be composed of a range of IOPS densities to account for small variations in drive performance. For example, two SAS drives, one a 2 GB 10K RPM drive with 145 IOPS and the other a 3 GB 15K RPM drive with 220 IOPS, can be placed in the same tier because the 2 GB drive has an IOPS density of 72.5 and the 3 GB drive has an IOPS density of 73.3. Such a tier could, for example, consist of drives having IOPS densities ranging from 72.0 to 74.0.

FIG. 6 is a flow diagram showing a method 600 of managing storage allocation, according to an embodiment. At block 602, at the storage device management system, an access history of a plurality of storage blocks of a solid state drive (SSD) managed by the storage device management system is maintained. In an embodiment, method 600 includes performing the automatic configuration cyclically.

In an embodiment, maintaining the access history comprises: determining an average access frequency of each of the plurality of storage blocks; and determining an access consistency of each of the plurality of storage blocks.
In a further embodiment, automatically configuring each of the plurality of storage blocks comprises: configuring blocks having a relatively high average access frequency and relatively high access consistency to operate in tiered mode; and configuring blocks having a relatively low average access frequency and relatively low access consistency to operate in cache mode.

In a further embodiment, method 600 includes ordering the plurality of storage blocks based on access consistency. In a further embodiment, method 600 includes identifying an access consistency threshold, and automatically configuring each of the plurality of storage blocks comprises configuring storage blocks having an access consistency exceeding the access consistency threshold to operate in tiered mode.

At block 604, each of the plurality of storage blocks is automatically configured by the storage device management system to operate in a cache mode or a tiered mode, wherein the ratio of storage blocks operating in cache mode to storage blocks operating in tiered mode is based on the access history.

In an embodiment, automatically configuring each of the plurality of storage blocks comprises configuring storage blocks having an access consistency that does not exceed the access consistency threshold to operate in cache mode.

In an embodiment, method 600 includes adjusting the access consistency threshold to maximize the hit rate of the plurality of storage blocks stored on the SSD.

In an embodiment, method 600 includes adjusting the access consistency threshold based on a weighting function that weights access consistency by the average access frequency of each of the plurality of storage blocks over a time period.

In an embodiment, the access consistency of each of the plurality of storage blocks is the ratio of the standard deviation of accesses to the average access frequency of the respective block.

In an embodiment, determining the access consistency of the plurality of storage blocks comprises using a weighted average of the access consistency of respective ones of the plurality of blocks.

In an embodiment, SSDs managed by the storage device management system are not shared across device pools.

In an embodiment, the storage device management system manages a plurality of storage devices organized in a plurality of tiers, and method 600 includes: identifying a new drive to be merged into the plurality of storage devices; identifying a data transfer metric for the new drive; and merging the new drive into the plurality of tiers based on the data transfer metric. In a further embodiment, the data transfer metric includes input/output operations per second (IOPS) per gigabyte. In an embodiment, the IOPS of the new drive is obtained from data sheet information corresponding to the new drive. In an embodiment, the IOPS of the new drive is obtained by monitoring the new drive during operation of the plurality of storage devices and measuring the average IOPS of the new drive based on the monitoring. The monitoring can be performed as part of an initial test, configuration, or installation process when a new drive is introduced into the pool.

In an embodiment, merging the new drive into the plurality of tiers comprises: identifying a new tier for the new drive; and merging the new tier into the plurality of tiers.

In an embodiment, the plurality of tiers are organized from faster to slower based on the data transfer metrics of the plurality of storage devices.

Embodiments can be implemented in one or a combination of hardware, firmware, and software.
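The mode-assignment and threshold-adjustment steps of method 600 can be sketched compactly, using the standard-deviation-to-mean variability measure (low variability corresponds to high access consistency). The hit-rate estimator is a caller-supplied assumption here, since the description leaves the estimation mechanism open:

```python
def partition_blocks(blocks, cutoff):
    """Assign each block to tiered or cache mode.

    `blocks` maps block id -> (avg_access_frequency, variability), where
    variability is std-deviation/mean of per-interval access counts, so
    LOW variability means HIGH access consistency. Consistent blocks are
    tiered; bursty blocks are cached.
    """
    tiered = {b for b, (_, var) in blocks.items() if var <= cutoff}
    return tiered, set(blocks) - tiered

def tune_cutoff(blocks, candidates, hit_rate):
    """Pick the cutoff maximizing the access-frequency-weighted hit rate.

    `hit_rate(block_id, mode)` is a caller-supplied estimate (e.g., from
    replaying the access history) of the SSD hit rate for the block when
    operated in "tier" or "cache" mode.
    """
    def score(cutoff):
        tiered, _ = partition_blocks(blocks, cutoff)
        return sum(freq * hit_rate(b, "tier" if b in tiered else "cache")
                   for b, (freq, _) in blocks.items())
    return max(candidates, key=score)

# Illustrative data: block id -> (avg accesses per interval, variability).
blocks = {"a": (100, 0.1), "b": (50, 2.5), "c": (80, 0.3)}

# Toy estimator: a block hits best in the mode that matches its behavior.
def hit_rate(b, mode):
    consistent = blocks[b][1] <= 0.5
    return 1.0 if (mode == "tier") == consistent else 0.5

best = tune_cutoff(blocks, [0.05, 0.5, 3.0], hit_rate)
tiered, cached = partition_blocks(blocks, best)
print(best, sorted(tiered), sorted(cached))  # 0.5 ['a', 'c'] ['b']
```

Weighting the score by access frequency captures the preference, described above, for blocks with more accesses when hit ratios are otherwise similar.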
Embodiments can also be implemented as instructions stored on a computer-readable storage device, which may be read and executed by at least one processor to perform the operations described herein. A computer-readable storage device can include any non-transitory mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a computer-readable storage device may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and other storage devices and media.

As described herein, examples may include, or may operate on, multiple components, modules, or mechanisms. A module can be hardware, software, or firmware communicatively coupled to one or more processors in order to carry out the operations described herein. A module may be a hardware module, and as such a module may be considered a tangible entity capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits can be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client, or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on a machine-readable medium. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations. Accordingly, the term "hardware module" is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time. A module can also be a software or firmware module that operates to perform the methodologies described herein.

FIG. 7 is a block diagram illustrating a machine in the example form of a computer system 700, within which a set or sequence of instructions may be executed to cause the machine to perform any one of the methodologies discussed herein, according to an example embodiment. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of either a server or a client machine in server-client network environments, or it may act as a peer machine in peer-to-peer (or distributed) network environments. The machine may be an onboard vehicle system, a set-top box, a wearable device, a personal computer (PC), a tablet PC, a hybrid tablet, a personal digital assistant (PDA), a mobile telephone, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
Similarly, the term "processor-based system" shall be taken to include any set of one or more machines that are controlled by or operated by a processor (e.g., a computer) to individually or jointly execute instructions to perform any one or more of the methodologies discussed herein. The example computer system 700 includes at least one processor 702 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both, processor cores, compute nodes, etc.), a main memory 704, and a static memory 706, which communicate with each other via a link 708 (e.g., a bus). The computer system 700 can further include a video display unit 710, an alphanumeric input device 712 (e.g., a keyboard), and a user interface (UI) navigation device 714 (e.g., a mouse). In one embodiment, the video display unit 710, input device 712, and UI navigation device 714 are incorporated into a touch-screen display. The computer system 700 can additionally include a storage device 716 (e.g., a drive unit), a signal generation device 718 (e.g., a speaker), a network interface device 720, and one or more sensors (not shown), such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The storage device 716 includes a machine-readable medium 722 on which is stored one or more sets of data structures and instructions 724 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein.
The instructions 724 may also reside, completely or at least partially, within the main memory 704, the static memory 706, and/or the processor 702 during execution thereof by the computer system 700, with the main memory 704, the static memory 706, and the processor 702 also constituting machine-readable media. While the machine-readable medium 722 is illustrated in an example embodiment as a single medium, the term "machine-readable medium" can include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 724. The term "machine-readable medium" shall also be taken to include any tangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and that causes the machine to perform any one or more of the methodologies of the present disclosure, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such instructions. The term "machine-readable medium" shall accordingly be taken to include, but not be limited to, solid-state memories and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including, by way of example and not limitation, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash-memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The instructions 724 can further be transmitted or received over a communications network 726 using a transmission medium via the network interface device 720 utilizing any one of a number of well-known transfer protocols (e.g., HTTP).
Examples of communication networks include local area networks (LANs), wide area networks (WANs), the Internet, mobile telephone networks, plain old telephone service (POTS) networks, and wireless data networks (e.g., Wi-Fi, 3G, and 4G LTE/LTE-A or WiMAX networks). The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.

Additional Notes and Examples: Example 1 includes subject matter (such as a device, apparatus, or machine) for managing storage allocation comprising: a storage device management system to: maintain an access history of a plurality of memory blocks of a solid-state drive (SSD) managed by the storage device management system; and automatically configure each of the plurality of memory blocks to operate in a cache mode or in a tiered mode, wherein a ratio of memory blocks operating in cache mode to memory blocks operating in tiered mode is based on the access history. In Example 2, the subject matter of Example 1 can include wherein, to maintain the access history, the storage device management system is to: determine an average access frequency of each of the plurality of memory blocks; and determine an access consistency of each of the plurality of memory blocks. In Example 3, the subject matter of any one of Examples 1 to 2 can include wherein, to automatically configure each of the plurality of memory blocks, the storage device management system is to: configure blocks having a relatively high average access frequency and relatively high access consistency to operate in a tiered mode; and configure blocks having a relatively low average access frequency and relatively low access consistency to operate in a cache mode. In Example 4, the subject matter of any one of Examples 1 to 3 can include wherein the storage device management system is to sort the plurality of memory blocks based on the access consistency. In Example 5, the subject matter of any one of Examples 1 to 4 can include wherein the storage device management system is to identify an access consistency threshold, and wherein, to automatically configure each of the plurality of memory blocks, the storage device management system is to configure memory blocks having access consistency exceeding the access consistency threshold to operate in a tiered mode. In Example 6, the subject matter of any one of Examples 1 to 5 can include wherein, to automatically configure each of the plurality of memory blocks, the storage device management system is to configure memory blocks having access consistency that does not exceed the access consistency threshold to operate in a cache mode. In Example 7, the subject matter of any one of Examples 1 to 6 can include wherein the storage device management system is to adjust the access consistency threshold to maximize a hit rate of the plurality of memory blocks stored on the SSD. In Example 8, the subject matter of any one of Examples 1 to 7 can include wherein the storage device management system is to adjust the access consistency threshold based on a weighting function that weights the access consistency by the average access frequency of each of the plurality of memory blocks over a time period. In Example 9, the subject matter of any one of Examples 1 to 8 can include wherein the access consistency of each of the plurality of memory blocks is a ratio of a standard deviation of the accesses to the average access frequency of the respective block. In Example 10, the subject matter
of any one of Examples 1 to 9 can include wherein, to determine the access consistency of the plurality of memory blocks, the storage device management system is to use a weighted average of the access consistency of respective ones of the plurality of blocks. In Example 11, the subject matter of any one of Examples 1 to 10 can include wherein the storage device management system is to cyclically perform the automatic configuring of Example 1. In Example 12, the subject matter of any one of Examples 1 to 11 can include wherein the SSDs managed by the storage device management system are not shared across a device pool. In Example 13, the subject matter of any one of Examples 1 to 12 can include wherein the storage device management system manages a plurality of storage devices organized in a plurality of tiers, and wherein the storage device management system is to: identify a new drive to be merged into the plurality of storage devices; identify a data transfer metric of the new drive; and merge the new drive into the plurality of tiers based on the data transfer metric. In Example 14, the subject matter of any one of Examples 1 to 13 can include wherein the data transfer metric comprises input/output operations per second (IOPS) per gigabyte. In Example 15, the subject matter of any one of Examples 1 to 14 can include wherein the IOPS of the new drive is obtained from data table information corresponding to the new drive. In Example 16, the subject matter of any one of Examples 1 to 15 can include wherein, to obtain the IOPS of the new drive, the storage device management system is to: monitor the new drive during operation of the plurality of storage devices; and measure an average IOPS of the new drive based on the monitoring. In Example 17, the subject matter of any one of Examples 1 to 16 can include wherein, to merge the new drive into the plurality of tiers, the storage device management system is to: identify a new tier for the new drive; and merge the new tier into the plurality of tiers. In Example 18, the subject matter of any one of Examples 1 to 17 can include wherein the plurality of tiers are organized from faster operation to slower operation based on the data transfer metrics of the plurality of storage devices. Example 19 includes subject matter for managing storage allocation (such as a method, means for performing acts, a machine-readable medium including instructions that, when performed by a machine, cause the machine to perform acts, or an apparatus to perform) comprising: maintaining, at a storage device management system, an access history of a plurality of memory blocks of a solid-state drive (SSD) managed by the storage device management system; and automatically configuring, by the storage device management system, each of the plurality of memory blocks to operate in a cache mode or in a tiered mode, wherein a ratio of memory blocks operating in cache mode to memory blocks operating in tiered mode is based on the access history. In Example 20, the subject matter of Example 19 can include wherein the step of maintaining the access history comprises the steps of: determining an average access frequency of each of the plurality of memory blocks; and determining an access consistency of each of the plurality of memory blocks. In Example 21, the subject matter of any one of Examples 19 to 20 can include wherein the step of automatically configuring each of the plurality of memory blocks comprises the steps of: configuring blocks having a relatively high average access frequency and relatively high access consistency to operate in a tiered mode; and blocks having a relatively low average access frequency and relatively low access consistency are configured
to operate in a cache mode. In Example 22, the subject matter of any one of Examples 19 to 21 can include the step of sorting the plurality of memory blocks based on the access consistency. In Example 23, the subject matter of any one of Examples 19 to 22 can include the step of identifying an access consistency threshold, and wherein the step of automatically configuring each of the plurality of memory blocks comprises the step of configuring memory blocks having access consistency exceeding the access consistency threshold to operate in a tiered mode. In Example 24, the subject matter of any one of Examples 19 to 23 can include wherein the step of automatically configuring each of the plurality of memory blocks comprises the step of configuring memory blocks having access consistency that does not exceed the access consistency threshold to operate in a cache mode. In Example 25, the subject matter of any one of Examples 19 to 24 can include the step of adjusting the access consistency threshold to maximize a hit rate of the plurality of memory blocks stored on the SSD. In Example 26, the subject matter of any one of Examples 19 to 25 can include the step of adjusting the access consistency threshold based on a weighting function that weights the access consistency by the average access frequency of each of the plurality of memory blocks over a time period. In Example 27, the subject matter of any one of Examples 19 to 26 can include wherein the access consistency of each of the plurality of memory blocks is a ratio of a standard deviation of the accesses to the average access frequency of the respective block. In Example 28, the subject matter of any one of Examples 19 to 27 can include wherein the step of determining the access consistency of the plurality of memory blocks comprises the step of using a weighted average of the access consistency of respective ones of the plurality of blocks. In Example 29, the subject matter of any one of Examples 19 to 28 can include the step of cyclically performing the automatic configuring step of Example 19. In Example 30, the subject matter of any one of Examples 19 to 29 can include wherein the SSD managed by the storage device management system is not shared across a device pool. In Example 31, the subject matter of any one of Examples 19 to 30 can include wherein the storage device management system manages a plurality of storage devices organized in a plurality of tiers, and wherein the method further comprises the steps of: identifying a new drive to be merged into the plurality of storage devices; identifying a data transfer metric of the new drive; and merging the new drive into the plurality of tiers based on the data transfer metric. In Example 32, the subject matter of any one of Examples 19 to 31 can include wherein the data transfer metric comprises input/output operations per second (IOPS) per gigabyte. In Example 33, the subject matter of any one of Examples 19 to 32 can include wherein the IOPS of the new drive is obtained from data table information corresponding to the new drive. In Example 34, the subject matter of any one of Examples 19 to 33 can include wherein the IOPS of the new drive is obtained by: monitoring the new drive during operation of the plurality of storage devices; and measuring an average IOPS of the new drive based on the monitoring. In Example 35, the subject matter of any one of Examples 19 to 34 can include wherein the step of merging the new drive into the plurality of tiers comprises the steps of: identifying a new tier for the new drive; and merging the new tier into the plurality of tiers. In Example 36, the subject matter of any one of Examples 19 to 35 can include wherein the plurality of tiers are organized from faster operation to slower operation based on the data transfer metrics of the plurality of storage devices. Example 37 includes at least one machine-readable medium comprising instructions that, when executed by a machine, cause the machine to perform the operations of any one of Examples 19 to 36. Example 38 includes an apparatus comprising means for performing any one of Examples 19 to 36. Example 39 includes subject matter (such as a device, apparatus, or machine) for managing storage allocation comprising: means for maintaining, at a storage device management system, an access history of a plurality of memory blocks of a solid-state drive (SSD) managed by the storage device management system; and means for automatically configuring, by the storage device management system, each of the plurality of memory blocks to operate in a cache mode or in a tiered mode, wherein a ratio of memory blocks operating in cache mode to memory blocks operating in tiered mode is based on the access history. In Example 40, the subject matter of Example 39 can include wherein the means for maintaining the access history comprises: means for determining an average access frequency of each of the plurality of memory blocks; and means for determining an access consistency of each of the plurality of memory blocks. In Example 41, the subject matter of any one of Examples 39 to 40 can include wherein the means for automatically configuring each of the plurality of memory blocks comprises: means for configuring blocks having a relatively high average access frequency and relatively high access consistency to operate in a tiered mode; and means for configuring blocks having a relatively low average access frequency and relatively low access consistency to operate in a cache mode. In Example 42, the subject matter of any one of Examples 39 to 41 can include means for sorting the plurality of memory blocks based on the access consistency. In Example 43, the subject matter of any one of Examples 39 to 42 can include means for identifying an access consistency threshold, and wherein the means for automatically configuring each of the plurality of memory blocks comprises means for configuring memory blocks having access consistency exceeding the access consistency threshold to operate in a tiered mode. In Example 44, the subject matter of any one of Examples 39 to 43 can include wherein the means for automatically configuring each of the plurality of memory blocks comprises means for configuring memory blocks having access consistency that does not exceed the access consistency threshold to operate in a cache mode. In Example 45, the subject matter of any one of Examples 39 to 44 can include means for adjusting the access consistency threshold to maximize a hit rate of the plurality of memory blocks stored on the SSD. In Example 46, the subject matter of any one of Examples 39 to 45 can include means for adjusting the access consistency threshold based on a weighting function that weights the access consistency by the average access frequency of each of the plurality of memory blocks over a time period. In Example 47, the subject matter of any one of Examples 39 to 46 can include wherein the access consistency of each of the plurality of memory blocks is a ratio of a standard deviation of the accesses to the average access frequency of the respective block. In Example 48, the subject matter of any one of Examples 39 to 47 can include wherein the means for determining the access consistency of the plurality of memory blocks comprises means for using a weighted average of the access consistency of respective ones of the plurality of blocks. In Example 49, the subject matter of any one of Examples 39 to 48 can include means for cyclically performing the automatic configuring of Example 39. In Example 50, the subject matter of any one of Examples 39 to 49 can include wherein the SSDs managed by the storage device management system are not shared across a device pool. In Example 51, the subject matter of any one of Examples 39 to 50 can include wherein the storage device management system manages a plurality of storage devices organized in a plurality of tiers, and wherein the apparatus further comprises: means for identifying a new drive to be merged into the plurality of storage devices; means for identifying a data transfer metric of the new drive; and means for merging the new drive into the plurality of tiers based on the data transfer metric. In Example 52, the subject matter of any one of Examples 39 to 51 can include wherein the data transfer metric comprises input/output operations per second (IOPS) per gigabyte. In Example 53, the subject matter of any one of Examples 39 to 52 can include wherein the IOPS of the new drive is obtained from data table information corresponding to the new drive. In Example 54, the subject matter of any one of Examples 39 to 53 can include wherein the IOPS of the new drive is obtained by: monitoring the new drive during operation of the plurality of storage devices; and measuring an average IOPS of the new drive based on the monitoring. In Example 55, the subject matter of any one of Examples 39 to 54 can include wherein the means for merging the new drive into the plurality of tiers comprises: means for identifying a new tier for the new drive; and means for merging the new tier into the plurality of tiers. In Example 56, the subject matter of any one of Examples 39 to 55 can include wherein the plurality of tiers are organized from faster operation to slower operation based on the data transfer metrics of the plurality of storage devices.

The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that may be practiced.
These embodiments are also referred to herein as "examples." Such examples may include elements in addition to those shown or described. However, examples that include the elements shown or described are also contemplated. Moreover, examples using any combination or permutation of those elements shown or described (or one or more aspects thereof) are also contemplated, either with respect to a particular example (or one or more aspects thereof) or with respect to other examples (or one or more aspects thereof) shown or described herein. Publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated references is supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls. In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels and are not intended to impose numerical requirements on their objects. The above description is intended to be illustrative, and not restrictive.
For example, the above-described examples (or one or more aspects thereof) may be used in combination with other examples. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein, as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the embodiments disclosed herein is to be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Barrier structures are included within the packaging material of a packaged semiconductor device, such barrier structures including barrier bodies which overlie the die-die pad assembly of the device on either side thereof. The barrier bodies act as baffles which limit diffusion of moisture through the packaging material into the area of the die-die pad assembly of the device, the barrier bodies including apertures therethrough which control such diffusion in a manner that avoids delamination problems in the area of the die-die pad assembly, meanwhile also avoiding undesirable trapping of gas within the packaging material. |
What is claimed is: 1. A semiconductor device comprising:a lead frame having a die pad and a tie bar structure extending from the die pad; a die on the die pad to form a die-die pad assembly; a barrier structure having a barrier body on one side of and spaced from the die-die pad assembly, the barrier structure being secured to the tie bar structure; wherein the barrier body generally overlies the die-die pad assembly. 2. A semiconductor device comprising:a lead frame having a die pad and a tie bar structure extending from the die pad; a die on the die pad to form a die-die pad assembly; a barrier structure having a barrier body on one side of and spaced from the die-die pad assembly, the barrier structure being secured to the tie bar structure; wherein the barrier body is on the die side of the die-die pad assembly. 3. The semiconductor device of claim 1 wherein the barrier body is on the die pad side of the die-die pad assembly.4. A barrier structure for resisting moisture flow to a die-die pad assembly of a semiconductor device, wherein the die pad has a tie bar structure extending therefrom, the device comprising a barrier body, the barrier structure being mountable to the tie bar structure so that the barrier body is spaced from the die-die pad assembly, wherein the barrier structure further comprises a protrusion extending from the barrier body for securing the barrier structure to a tie bar structure.5. The device of claim 4 wherein the protrusion is a stamped protrusion.6.
A method of fabricating a semiconductor device comprising:providing a lead frame assembly having leads, a die pad, a die on the die pad, and a tie bar structure extending from the die pad; providing a first barrier structure secured to the tie bar structure and having a first barrier body positioned on the die pad side of and spaced from the die-die pad assembly; wire bonding the die to the leads of the lead frame; providing a second barrier structure secured to the tie bar structure and having a second barrier body on the die side of and spaced from the die-die pad assembly; and providing molding compound around the die-die pad assembly, wire bonding, and first and second barrier structures. 7. The method of claim 6 and further comprising the step of providing that the first and second barrier structures comprise first and second barrier bodies each having a plurality of apertures therethrough.8. The method of claim 6 and further comprising the step of providing first and second additional barrier structures secured to the tie bar structure and comprising respective first and second additional barrier bodies positioned respectively on the die and die pad sides of the die-die pad assembly, the second barrier body positioned generally between the die-die pad assembly and the first additional barrier body, the first barrier body positioned generally between the die-die pad assembly and the second additional barrier body.9. The device of claim 4 wherein the barrier body has a plurality of apertures therethrough.10. The semiconductor device of claim 1 and further comprising a second barrier structure having a second barrier body on the other side of and spaced from the die-die pad assembly, the barrier structures each being secured to the tie bar structure.11. The semiconductor device of claim 1 wherein the barrier body has a plurality of apertures therethrough.12.
A semiconductor device comprising:a lead frame having a die pad and a tie bar structure extending from the die pad; a die on the die pad to form a die-die pad assembly; a barrier structure having a barrier body on one side of and spaced from the die-die pad assembly, the barrier structure being secured to the tie bar structure; and further comprising an additional barrier structure having an additional barrier body on the one side of the die-die pad assembly, the additional barrier structure being secured to the tie bar structure, the first-mentioned barrier body being positioned generally between the additional barrier body and the die-die pad assembly. 13. The semiconductor device of claim 12 wherein the first-mentioned and additional barrier bodies each define a plurality of apertures therethrough, the apertures defined by the first-mentioned barrier body being generally non-aligned with the apertures defined by the additional barrier body.14. The semiconductor device of claim 1 wherein the tie bar structure comprises a plurality of tie bars extending from the die pad, the barrier structure being secured to the plurality of tie bars.15. A semiconductor device comprising:a lead frame having a die pad and a tie bar structure extending from the die pad; a die on the die pad to form a die-die pad assembly; a barrier structure having a barrier body on one side of and spaced from the die-die pad assembly, the barrier structure being secured to the tie bar structure; wherein the barrier structure further comprises a protrusion extending from the barrier body and secured to the tie bar structure. 16. The semiconductor device of claim 1 and further comprising molding compound surrounding the die-die pad assembly and barrier structure.17. The semiconductor device of claim 12 wherein the first-mentioned and additional barrier bodies generally overlie the die-die pad assembly. 
BACKGROUND OF THE INVENTION1. Field of the InventionThis invention relates generally to semiconductor packaging, and more particularly, to a package of a semiconductor device having increased ability to limit moisture flow through the packaging material and into the area of the die and die pad.2. Discussion of the Related ArtReference is made to FIGS. 1-3, wherein a typical packaged semiconductor structure 10 is shown. As is well known, a die pad 12 has a die 14 epoxied thereon and tie bars 16, 18 extending therefrom. Bonding wires 20 connect the die 14 to the inner ends of leads 22. The die pad 12, die 14, wire bonding 20 and inner ends of the leads 22 are packaged in epoxy 24, commonly referred to as plastic packaging, with the outer ends of the leads 22 extending from the epoxy 24. These leads 22 may be plugged into a printed circuit board 26, as shown in FIG. 3.Over a period of time, moisture 28 diffuses through the plastic package 24 and into the area of the die 14, die pad 12, and wire bonding 20. Through the operation of the device 10, heat buildup can cause moisture in those critical areas to vaporize, creating large amounts of steam pressure in pockets adjacent to these critical areas. In the case of moisture near the lower side of the die pad 12, blisters of sufficiently large size may form to cause connection problems between the printed circuit board 26 and the device 10. In the case of moisture near the upper side of the die 14, wire bond failure, die face damage, and/or die fracture can occur. Obviously, situations where such delaminations can occur are highly undesirable and can result in severe reliability problems.Current practice is to measure how well each packaged device performs under JEDEC stress testing (a specified temperature and humidity environment for a specified time for each Level). Manufacturers of semiconductor devices strive to reach Level 1, as that is the only level for which neither device baking nor hermetic bagging is required. 
This "bake and bag" process involves additional expense, and would not be necessary if the package were inherently resistant to moisture flow therethrough to the area of the die 14, die pad 12 and wire bonding 20.FIG. 4 shows a packaged semiconductor device 29 which utilizes an anodized aluminum heat spreader 30 in contact with the die pad 32 for dissipation of heat buildup during the operation of the device 29. In the fabrication of the device 29, the heat spreader 30 is placed in the well of a transfer molding machine, with legs 34 extending from the heat spreader 30 contacting the bottom of the well, and a lead frame 36 having a die 38 associated therewith and wire bonded thereto is placed over the heat spreader 30 with the die pad 32 in contact with, so as to be an interference fit with, the heat spreader 30. As molding compound 40 is forced into the area of the die 38 and die pad 32, air can be trapped above the heat spreader 30 and beneath the die pad 32 to form an air void therebetween. The heat spreader 30 includes apertures 42 therethrough to allow this trapped air to pass through the apertures 42 and away from the area of the die pad 32, thereby relieving pressure from this area. While this configuration is effective for this purpose, the problem of limiting moisture ingress into the area of the die pad 32, die 38, and wire bonding is not addressed. Indeed, the apertures 42 in the heat spreader 30 will readily allow moisture therethrough directly into the area of the die pad 32 in contact with the heat spreader 30. 
Furthermore, the problem of moisture ingress into the area of the die 38 and wire bonding through the packaging material thereabove is not addressed.Therefore, it would be highly desirable to provide a system for limiting moisture diffusion through the plastic packaging material to the critical areas of the die, die pad, and wire bonding, so that the delamination problems described above are avoided.SUMMARY OF THE INVENTIONIn the present invention, in the environment of a semiconductor die epoxied to a die pad, with tie bars extending from the die pad, barrier structures are secured to the tie bars so as to have barrier bodies positioned on opposite sides of and overlying the die-die pad assembly. Each barrier body is spaced from the die-die pad assembly, and has a plurality of small apertures therethrough. As moisture diffuses through the plastic package material, the barrier bodies act as baffles to limit such diffusion in the direction of the die-die pad assembly to only that passing through the apertures. From these apertures the moisture diffuses in a spreading manner. Additional barrier structures may be provided which have barrier bodies overlying the above-described barrier bodies, the additional barrier bodies having apertures therethrough which are non-aligned with the apertures of the first-described barrier bodies to provide an even longer moisture flow path to the die-die pad area.The present invention is better understood upon consideration of the detailed description below, in conjunction with the accompanying drawings. As will become readily apparent to those skilled in the art from the following description, there are shown and described embodiments of this invention simply by way of illustration of the best mode to carry out the invention. As will be realized, the invention is capable of other embodiments and its several details are capable of modification in various obvious respects, all without departing from the scope of the invention. 
Accordingly, the drawings and detailed description will be regarded as illustrative in nature and not as restrictive.BRIEF DESCRIPTION OF THE DRAWINGSThe novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, and further objects and advantages thereof, will best be understood by reference to the following detailed description of illustrative embodiments when read in conjunction with the accompanying drawings, wherein:FIGS. 1-4 are sectional views of prior art semiconductor devices as described above;FIG. 5 is a sectional view of a semiconductor device incorporating the present invention;FIG. 6 is a sectional view of the device of FIG. 5, taken along the line 6-6 of FIG. 5;FIG. 7 is a sectional view of the device of FIG. 5, taken along the line 7-7 of FIG. 5;FIG. 8 is a plan view of a preferred embodiment of barrier structure of the present invention;FIG. 9 is a side view of the barrier structure of FIG. 8;FIG. 10 is an enlarged view of a portion of the structure of FIG. 5, showing the barrier structures limiting moisture flow;FIG. 11 is a plan view of a typical lead frame for use with the present invention;FIG. 12 is a sectional view of a portion of the lead frame of FIG. 11, showing the barrier structure mounted thereto, and the step of wire bonding;FIG. 13 is a plan view of a portion of an alternative embodiment of lead frame;FIG. 14 is a plan view of an alternative embodiment of barrier structure; andFIG. 15 is a sectional view showing the combination of the structures of FIGS. 13 and 14.DETAILED DESCRIPTIONReference is now made in detail to specific embodiments of the present invention which illustrate the best mode presently contemplated by the inventors for practicing the invention.As shown in FIGS. 5, 6 and 7, a die pad 50 has tie bars 52, 54 extending therefrom, and has epoxied thereon a silicon chip or die 56. The embodiment of FIGS. 
5-7 includes barrier structures 58, 60, 62, 64, one of which is shown in detail in FIGS. 8 and 9. The barrier structure 58 is of very thin material, for example, Cu(Be) foil, and includes a rectangular barrier body 66 defining a plurality of apertures 67 therethrough and protrusions 68, 70 extending therefrom. The barrier structure 58 may be formed by a stamping process to define the rectangular overall configuration and to punch the plurality of apertures 67 through the barrier body 66, and also to form the protrusions 68, 70 extending from the barrier body 66. The barrier structure 58 may alternatively be fabricated of, for example, Alloy 42 (42% Ni + 58% Fe), or copper alloy.Again with reference to FIGS. 5-7, the barrier body 66 of the barrier structure 58 is positioned on the die side of the die 56-die pad 50 assembly, with the protrusions 68, 70 of the barrier structure 58 being secured to the tie bars 52, 54 by welding or glue. In such state, the barrier body 66 generally overlies and is spaced from the die 56-die pad 50 assembly, so as to be spaced from the die 56 and wire bonding 72, which connects the die 56 to leads 73. The barrier body 74 of the barrier structure 62 (similar in configuration to barrier structure 58) is positioned on the die pad side of the die 56-die pad 50 assembly, with the protrusions 78, 80 of the barrier structure 62 likewise being secured to the tie bars 52, 54 by welding or glue. The barrier body 74 generally overlies and is spaced from the die 56-die pad 50 assembly, so as to be spaced from the die pad 50. Additional barrier structure 60 overlies the barrier structure 58, and is spaced therefrom with the barrier body 66 of the barrier structure 58 positioned between the barrier body 82 of the barrier structure 60 and the die 56-die pad 50 assembly. 
This barrier structure 60 includes protrusions 84, 86 which are longer than the protrusions 68, 70 of the barrier structure 58, and is dimensioned so that the protrusions 84, 86 lie outward of the ends of the barrier structure 58. These protrusions 84, 86 of the barrier structure 60 are also welded or glued to the tie bars 52, 54. A similar structure 64 is provided on the die pad 50 side of the die 56-die pad 50 assembly. That is, additional barrier structure 64 has its body 88 overlying and spaced from the body 74 of the barrier structure 62, and has protrusions 90, 92 secured to the tie bars 52, 54.As will be seen from the Figures, the apertures 67 in the barrier body 66 are non-aligned with the apertures 94 in the barrier body 82, i.e., they are in staggered relationship. Similarly, the apertures 96 in the barrier body 74 are non-aligned with the apertures 98 in the barrier body 88. As shown in FIG. 10, as moisture 99 flows through the packaging material 100 and toward the barrier body 82, this moisture 99 is blocked to a substantial extent by the barrier body 82 from passing into the area between the barrier bodies 82, 66. However, the moisture 99 which does pass through the apertures 94 of the barrier body 82, it will be seen, diffuses downwardly and outwardly as shown. Thus, a very limited amount of moisture passes through the apertures 94 of the barrier body 82, from which it is spread and diffused. It is only this very limited amount of moisture which may reach the barrier body 66. Thus, a very minimal amount of moisture will pass through the apertures 67 of the barrier body 66 and toward the die 56-die pad 50 assembly.The barrier structures 62, 64 on the underside of the die 56-die pad 50 assembly limit moisture flow to that area in the same manner.The spacing of the barrier bodies from the die 56-die pad 50 assembly and from each other is significant in achieving the proper limitation of moisture flow into the critical areas. 
This spacing between elements allows the moisture to diffuse into relatively large areas so as to aid in the protection of critical structures.While not essential, it is preferable that the barrier bodies not be continuous, but include apertures such as those shown and described. Providing such apertures avoids the possible problem of molding compound intruding during transfer molding to trap bubbles of gas, creating voids. If such voids were present, local tensile stresses could increase and thermal impedance could be degraded. The apertures provided in the barrier bodies avoid such a potential problem by allowing such air to be vented away from the critical areas of the die, die pad and wire bonding.The apertures in the barrier body may be spaced more widely in the least critical areas, those adjacent the junction of the die-die pad assembly, i.e., near the center of the barrier body, so that venting would move progressively inward from the periphery due to molding compound ingression. The effect is to provide the greatest limitation to moisture flow where the need is greatest, where the differential CTE (Coefficient of Thermal Expansion) is greatest, far from the die center.The apertures in each barrier body may, for example, be 0.2 mm in diameter with a pitch of 1 mm.In furtherance of describing the assembly of a semiconductor device incorporating the present invention, with reference to FIG. 11, a typical lead frame 110 is shown therein. As is well-known, a pair of rails 112, 114 are connected by dam bars 116, 118, with tie bars 120, 122 extending outward from a die pad 124 between a pair of dam bars 116, 118, the tie bars 120, 122 being connected to the rails 112, 114. Leads 126 are connected to the dam bars 116, 118, as is also well-known.A portion of this lead frame 110 is shown in the sectional view of FIG. 12, with the barrier structure 62 and barrier structure 64 being mounted thereto as described above. 
The die pad 124 has a die 128 epoxied thereto, and the barrier structures 62, 64 are secured to the tie bars 120, 122 on the die pad side of the die 128-die pad 124 assembly. The structure of FIG. 12 is now placed in a wire bonding machine 130 with a cavity 132 sufficiently deep to receive the barrier structures 62, 64. Then, wire bonding is undertaken, after which the barrier structures 58, 60 are added, being secured to the tie bars 120, 122 as described above, on the die side of the die 128-die pad 124 assembly. Then, transfer molding is undertaken to form the plastic package of the device, and the rails 112, 114 and portions of the dam bars 116, 118 are removed, as is well known, so that the leads 126 of the device are defined and extend from the package.It is to be noted that spot welding of the protrusions to the tie bars 120, 122 takes place with the dam bars 116, 118, rails 112, 114 and tie bars 120, 122 in place, that is, prior to the removal of the rails 112, 114 and portions of the dam bars 116, 118. Thus, the entire structure is shorted out, and there will not be any voltage imposed on a pin of the device relative to any other pin due to the spot welding operation.FIG. 13 shows an alternative embodiment of lead frame 150, wherein each tie bar 152, 154 is bifurcated in the respective area 152A, 154A where it extends to connect to a rail 156, 158. In this configuration, each barrier structure 160 has four protrusions 162, 164, 166, 168 stamped therein (FIG. 14), a protrusion being welded to each extended leg of a tie bar. The mounting of the barrier structures 160 to the bifurcated ends 152A, 154A by welding or glue as described above is shown in FIG. 15. 
This embodiment can be used when increased mounting stability of the barrier structures is desired.As yet another embodiment, the barrier structures may have protrusions which match up with the four diagonal tie bars of a quad flat pack lead frame configuration.It will therefore be seen that the present invention provides for highly effective limitation of moisture flow through a package to the critical die-die pad area of a semiconductor device. This resistance to moisture in these critical areas avoids problems of delamination as described above, so that high device reliability is achieved.The foregoing description of the embodiment of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Other modifications or variations are possible in light of the above teachings. The embodiments were chosen and described to provide the best illustration of the principles of the invention and its practical application to thereby enable one of ordinary skill in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the invention as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly, legally and equitably entitled. 
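As an illustrative aside (not part of the patent disclosure), the example aperture geometry given above, apertures of 0.2 mm diameter on a 1 mm pitch, can be checked with a short calculation. Assuming a square-grid aperture layout (the text states only the diameter and pitch, not the grid geometry), only about 3% of each barrier body is open, consistent with the bodies acting primarily as baffles to moisture diffusion:

```python
import math

# Illustrative sketch only: the text gives 0.2 mm aperture diameter and
# 1 mm pitch; a square-grid layout is an assumption made here.
def open_area_fraction(aperture_diameter_mm, pitch_mm):
    """Open area of one aperture divided by the area of one square pitch cell."""
    aperture_area = math.pi * (aperture_diameter_mm / 2.0) ** 2
    return aperture_area / pitch_mm ** 2

fraction = open_area_fraction(0.2, 1.0)
print(f"open area fraction: {fraction:.2%}")  # about 3.14% of the body is open
```

The small open fraction helps explain why the staggered, spaced barrier bodies described above can sharply limit the moisture reaching the die-die pad assembly while still venting trapped air during transfer molding.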
A leadframe 110 has a peripheral frame 112. A die attach pad (DAP) 130 is positioned inwardly and downwardly of the peripheral frame 112. Two spaced apart parallel arms 174, 180 engage one side of the DAP 130. In one embodiment the arms 174, 180 are portions of a U-shaped strap 170. 
1. A lead frame comprising:a peripheral frame; a die attach pad (DAP) positioned inwardly and downwardly from the peripheral frame; and a generally U-shaped strap that connects the peripheral frame and the DAP. 2. The lead frame of claim 1:wherein the peripheral frame includes a first side rail and a second side rail, the second side rail being positioned opposite and substantially parallel to the first side rail; wherein the DAP has a first side with opposite longitudinal ends; and wherein the generally U-shaped strap includes:an elongated central body portion having opposite first and second ends; a first arm portion having a proximal end and a distal end, the proximal end being connected to the first end of the central body portion; and a second arm portion having a proximal end and a distal end, the proximal end being connected to the second end of the central body portion. 3. The lead frame of claim 2, wherein the distal ends of the first and second arm portions are connected to the first side of the DAP.4. The lead frame of claim 3, wherein a portion of the DAP extends laterally outwardly between the first arm portion and the second arm portion.5. The lead frame of claim 4, wherein said elongated central body portion of said generally U-shaped strap is connected to said first side rail of said peripheral frame.6. The lead frame of claim 1, wherein the peripheral frame is positioned substantially parallel to the DAP.7. The lead frame of claim 2, wherein a portion of said DAP extends laterally outwardly between said first arm portion and said second arm portion of said generally U-shaped strap, and wherein the portion of said DAP that extends laterally outwardly beyond where the first arm portion and the second arm portion are attached is longitudinally recessed.8. The lead frame of claim 7, further comprising a die mounted on the DAP and extending laterally outwardly between the first arm portion and the second arm portion.9. The lead frame of claim 1, wherein the peripheral frame includes oppositely positioned first and second transverse rails, each of said first and second transverse rails having a plurality of leads extending inwardly therefrom.10. The lead frame of claim 9, wherein the first and second transverse rails are parallel rails.11. The lead frame of claim 9, wherein the first rail and the second rail are laterally extending rails.12. The lead frame of claim 2, wherein said elongated central body portion of said U-shaped strap is positioned in a substantially coplanar relationship with said peripheral frame.13. The lead frame of claim 12, wherein the central body portion of the U-shaped strap is connected at a central portion thereof to the first side rail of the peripheral frame.14. A lead frame assembly comprising:a peripheral frame including a first rail; a die attach pad (DAP) positioned downwardly and inwardly from the peripheral frame, the DAP having lateral sides; and first and second downwardly and inwardly extending parallel arms positioned in spaced relationship and operable to support the DAP, the first and second arms being attached at first ends thereof adjacent opposite ends of the first rail and at second ends thereof adjacent opposite ends of one of the lateral sides of the DAP. 15. The lead frame assembly of claim 14, further comprising a first die mounted on the DAP, wherein a portion of the die is positioned between the two spaced apart arms.16. The lead frame assembly of claim 15, further comprising a second die mounted on the DAP in laterally spaced relation to the first die.17. The lead frame assembly of claim 14, wherein the first ends of the parallel arms engage the first rail.18. The lead frame assembly of claim 14, wherein the first ends of the parallel arms are attached to the first rail by a cross member engaging the first arm and the second arm.19. A lead frame comprising:a peripheral frame; a die attach pad (DAP) positioned inwardly and downwardly from the peripheral frame; and two spaced apart parallel arms that engage one side of the DAP. 20. The lead frame of claim 19, wherein the two spaced apart parallel arms are portions of a U-shaped strap. 
LEAD FRAMEBACKGROUNDIntegrated circuit packages typically include one or more integrated circuit dies and a metal lead frame encapsulated within a protective coating of a plastic molding compound. Prior to encapsulation, the die(s) are mounted on and electrically connected to the metal lead frame. The lead frame has leads that remain exposed from the molding compound after encapsulation occurs. The lead frame allows the die(s) to be electrically connected to electrical components external to the integrated circuit package.SUMMARY OF THE INVENTIONThe lead frame has a peripheral frame. A die attach pad (DAP) is positioned inwardly and downwardly from the peripheral frame. A generally U-shaped strap connects the peripheral frame and the DAP.The lead frame has a peripheral frame. A die attach pad (DAP) is positioned inwardly and downwardly from the peripheral frame. Two spaced apart parallel arms engage one side of the DAP.The lead frame has a peripheral frame with a first rail. A die attach pad (DAP) is positioned downwardly and inwardly from the peripheral frame, the DAP having lateral sides. First and second downwardly and inwardly extending parallel arms are positioned in a spaced relationship and are operable to support the DAP, the arms being attached at their first ends adjacent opposite ends of the first rail and at their second ends adjacent opposite ends of a lateral side of the DAP.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 
1 is a top isometric view of a first lead frame assembly.FIG. 2 is a top isometric view of another lead frame assembly.FIG. 3 is a top isometric view of another lead frame assembly.FIG. 4 is a top plan view of the lead frame assembly of FIG. 3.FIG. 5 is a top plan view of an alternative embodiment of the lead frame assembly of FIG. 3.FIG. 6 is a flow diagram of a method of reconfiguring a lead frame to accommodate a larger die without increasing the lead frame footprint.DETAILED DESCRIPTIONFIG. 1 is a top isometric view of a lead frame assembly 8. The assembly 8 includes a lead frame 10 having a top surface 11 and a bottom surface 13. The lead frame 10 has a generally rectangular peripheral frame 12 that includes a first longitudinal end rail 14, a second longitudinal end rail 16 opposite the first end rail 14, a first lateral side rail 18, and a second lateral side rail 20 opposite the first side rail 18. (It will be understood from the previous sentence that, in order to establish a reference frame for describing the lead frame assembly 8 of FIG. 1, the direction extending generally between the bottom and the top of the sheet is arbitrarily designated as "longitudinal" and the direction generally transverse to the longitudinal direction is designated as "transverse.") The first end rail 14 has a first plurality of leads 22 extending inwardly therefrom, and the second end rail 16 has a second plurality of leads 28 extending inwardly therefrom; the second plurality of leads 28 may be in a mirror image relationship with the first plurality of leads 22. The peripheral edge 15 of the generally rectangular peripheral frame 12 defines a lead frame footprint.With continued reference to FIG. 1, a die attach pad (DAP) 30 is positioned inwardly and downwardly with respect to the peripheral frame 12. The DAP 30 has a top surface 31 and a bottom surface 33. 
(For purposes of describing the relative positions and orientations of various objects and/or portions thereof, terms such as "lateral" and "longitudinal" as discussed above, and terms such as "upper," "lower," "above," "below," "horizontal," "vertical," etc., are used herein in a relative sense. Referring to a surface of the die pad as the top of the die pad therefore means only that it is the top surface in the reference frame shown and described in the specification and drawings; the present disclosure does not use such terminology to establish an absolute frame of reference relative to the Earth's gravitational field or another gravity system. Thus, as used herein, the top surface of the DAP 30 refers to the surface to which the die 52 is attached, regardless of how that surface may be oriented with respect to any particular gravitational field.) The DAP 30 also has a first longitudinal end 34, a second longitudinal end 36, a first lateral side 38, and a second lateral side 40. A central longitudinally extending slot 42 divides the DAP into a first lateral segment 37 and a second lateral segment 39, each of which is adapted to support an integrated circuit die of a predetermined size. A first downwardly and inwardly extending DAP strap 44 attaches the first lateral side 38 of the DAP 30 to the lead frame first side rail 18, and a second downwardly and inwardly extending strap 46 attaches the second lateral side 40 of the DAP 30 to the lead frame second side rail 20. The DAP straps 44, 46 are positioned on the lead frame 10 in a mirror image relationship. The distance between the two lateral side edges of the DAP 30 is indicated by "g" in FIG. 1. The longitudinal distance between the two longitudinal end edges 34, 36 of the DAP is indicated by "h".The first die 52 is mounted on the first lateral section 37 of the DAP 30, and the second die 54 is mounted on the second lateral section 39 of the DAP 30. 
The central longitudinally extending slot 42 and a predetermined minimum separation distance "s" between the two dies 52, 54 combine to prevent electrical arcing. In one embodiment, the distance g is 3.63 mm, the distance h is 3.20 mm, and the distance "s" is 0.17 mm. A first layer of die attach paste 56 and a second layer of die attach paste 58, after curing, hold the die 52, the die 54, and the DAP 30 in a fixed relationship.FIG. 2 is a top isometric view of the lead frame assembly of FIG. 1 in which the left die 52 has been replaced with a larger die 52A. Except for the marks with the added "A", all of the structures and reference numerals in FIG. 2 correspond to those in FIG. 1. The problem illustrated by FIG. 2 is that the new, wider die 52A does not have enough space on the first lateral section 37 of the DAP to be properly attached. Die attach paste 56A flows out over the edge of the DAP 30, and the outer lateral portion of the die 52A hangs over the edge of the DAP 30. The edge of the die 52A also rides on the strap 44, causing the bottom surface of the die 52A to be positioned in a non-parallel relationship with the top surface 31 of the DAP 30. Because the gap distance "s" must be maintained to prevent potential arcing, moving the die 52A closer to the die 54 will not solve the die mounting problem. In some cases, it may be possible to replace the larger die 52A with a smaller, more dense die that provides the same circuit capabilities as the full-size new die 52A. However, designing and manufacturing such a die (assuming it is feasible) would take a considerable period of time and would significantly increase the cost of the associated integrated circuit package.Applicants have discovered that the above problems can be eliminated by using a different DAP strap assembly. 
As described in detail below, this different strap assembly enables the DAP to be laterally expanded, which in turn allows the DAP to fully support the outer lateral portion of the new, wider die. As a result, the gap distance "s" between the two dies on the DAP can remain unchanged, and the lead frame footprint can remain unchanged. Also, the leads of the new lead frame may have the same lead pattern as the leads of the old lead frame.FIG. 3 is a top isometric view of a lead frame assembly 108 that is a modified version of the lead frame assembly 8 of FIG. 1. This reconfiguration enables the use of a die 152 that is laterally wider than the die 52 without changing the first lead frame footprint. In one embodiment, the lead configuration in the second lead frame assembly 108 is the same as the lead configuration of the first lead frame assembly 8. The lead frame assembly 108 includes a lead frame 110 having a top surface 111 and a bottom surface 113. The lead frame 110 has a generally rectangular peripheral frame 112, and the peripheral frame 112 includes first and second longitudinal end rails 114, 116, a first lateral side rail 118, and a second lateral side rail 120. The first end rail 114 has a first plurality of leads 122 extending inwardly therefrom, and the second end rail 116 has a second plurality of leads 128 extending inwardly therefrom. The first plurality of leads and the second plurality of leads may be positioned in a mirror image relationship. The peripheral edge 115 of the generally rectangular peripheral frame 112 defines a lead frame footprint.As further illustrated by FIG. 3, a die attach pad (DAP) 130 is positioned inwardly from the peripheral frame 112 of the lead frame 110. The DAP 130 has a top surface 131 and a bottom surface 133. The DAP 130 also has a first longitudinal end 134, a second longitudinal end 136, a first lateral side 138, and a second lateral side 140. 
A central longitudinally extending slot (aperture) 142 divides the DAP 130 into a first lateral section 137 and a second lateral section 139, each of which is adapted to support an integrated circuit die 152, 154. The peripheral frame 112 and the DAP 130 may be parallel.

A first DAP strap assembly 170 (FIG. 3) having two longitudinally spaced, downwardly and inwardly extending arm portions 174, 180 attaches the opposite ends of the first lateral side 138 of the DAP 130 to the lead frame first side rail 118. The first DAP strap assembly 170 in the embodiment of FIG. 3 is a U-shaped strap assembly. (As used herein, the term "U-shaped" refers to a shape resembling the letter "U" irrespective of its orientation. In other words, the "U" opening may face up, down, left, right, or along some intermediate direction.) The second lateral side 140 of the DAP 130 may be attached to the lead frame second side rail 120 by a downwardly and inwardly extending strap 146, like the strap 46 of the lead frame 10. The lateral distance between the outer edges of the DAP lateral sides 138, 140, i.e., the width of the DAP 130, is indicated in FIG. 3 by "j". The width "j" is greater than the width "g" described with reference to FIG. 1. The longitudinal dimension measured between the two longitudinal ends 134, 136 of the DAP 130 is indicated by "k" in FIG. 3. This dimension "k" is the same as the dimension "h" of the DAP 30 of FIG. 1. In an example embodiment, "j" is 3.83 mm and "k" is 3.20 mm.

With continued reference to FIG. 3, the first die 152 is mounted on the first lateral section 137, and the second die 154 is mounted on the second lateral section 139 of the DAP 130. The width and length of the central slot 142 in the DAP 130 and the distance "s" between the two dies 152, 154 may be the same as the corresponding features shown in FIG. 1. 
A first layer of die attach paste 156 and a second layer of die attach paste 158 attach the dies 152, 154 to the associated lateral sections 137, 139 of the DAP 130 at the indicated separation distance "s". In one embodiment, the separation distance "s" is the same in the lead frame assembly 8 of FIG. 1 and the lead frame assembly 108 of FIG. 3. As mentioned above, the width "j" of the DAP 130 is greater than the width "g" of the first DAP 30 of FIG. 1. The enlarged-width DAP 130 includes a portion extending laterally between the two arms 174, 180. The enlarged width of the DAP 130 enables full support of the die 152 on the DAP 130 of FIG. 3, which is wider than the die 52 supported by the first DAP 30. In one example embodiment, the die 52 has a width of 1.34 mm and the die 152 has a width of 1.54 mm.

As can be seen in FIG. 3, a portion of the die 152 extends laterally between the two arm portions 174, 180 of the U-shaped strap 170, such that the die 152 is supported over its entire width by the first (left) lateral section 137 of the DAP 130. Similarly, the die attach paste 156 that attaches the die 152 to the DAP 130 extends laterally between the two arm portions 174, 180, and extends laterally slightly further than the die 152 itself. Thus, in this embodiment, the lead frame assembly 108 is substantially the same as the lead frame assembly 8, except that the DAP 130 is laterally expanded and one lateral side of the DAP 130 is attached to the lead frame rail 118 by the U-shaped strap 170. More specifically, the U-shaped ("longhorn-shaped") strap 170 has a central body portion 172 and two downwardly and inwardly extending arm portions 174, 180 integrally attached at either end of the central body portion 172. 
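The dimensional relationships above can be checked with simple arithmetic. The sketch below (values taken directly from the example embodiments in this description) verifies that the DAP is widened by exactly the amount the die widens, so the wider die is fully supported without changing the spacing "s":

```python
# Sketch checking the dimensional relationships stated in the description
# (all values in millimetres, taken from the example embodiments).
g = 3.63          # width of the original DAP 30
j = 3.83          # width of the enlarged DAP 130
s = 0.17          # minimum die-to-die separation, unchanged in both assemblies
w_die_old = 1.34  # width of die 52
w_die_new = 1.54  # width of die 152

# The lateral growth of the DAP matches the growth of the die it must support,
# so the wider die 152 is fully supported without moving it closer to die 154.
dap_growth = round(j - g, 2)
die_growth = round(w_die_new - w_die_old, 2)
assert dap_growth == 0.20 and die_growth == 0.20
assert dap_growth >= die_growth  # enlarged DAP fully covers the wider die

print(f"DAP grows by {dap_growth} mm, die grows by {die_growth} mm, "
      f"separation 's' stays at {s} mm")
```

This makes explicit why the lead frame footprint and the lead pattern need not change: all of the extra width is absorbed between the two arms of the U-shaped strap.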
Each arm portion 174, 180 has a proximal end 176, 182 attached to the central body portion 172 and a distal end 178, 184 attached to an opposite longitudinal end of the left lateral section 137 of the DAP. In some embodiments, the arms 174, 180 are generally parallel. The central body portion 172 may be attached at its middle portion 173 to the middle portion of the first lead frame side rail 118. In addition, the DAP 130 is laterally larger than the DAP 30 of FIG. 1. This extra lateral width is provided without changing the spacing between the dies. Thus, the spacing "s" between the die 152 and the die 154 is the same as the spacing "s" between the die 52 and the die 54. As illustrated in the embodiment of FIG. 3, the first plurality of leads 122 and the second plurality of leads 128 of the lead frame 110 may be configured the same as the first plurality of leads 22 and the second plurality of leads 28 of the lead frame 10.

FIG. 4 is a top plan view of the DAP 130 attached to the lead frame rail 118 by the U-shaped strap 170 as described with reference to FIG. 3, where like parts are designated by like reference numerals.

FIG. 5 is a top plan view of an alternative embodiment of the DAP support assembly shown in FIGS. 3 and 4. In the embodiment of FIG. 5, the DAP 130 is supported by downwardly and laterally extending arms 174A and 180A. The arm 180A is connected directly to the lead frame rail 118 at its proximal end 182A and to the DAP 130 at its distal end 184A. Similarly, the arm 174A is connected directly to the lead frame rail 118 at its proximal end 176A and to the DAP 130 at its distal end 178A. Thus, in FIG. 5, the transverse central body portion of the U-shaped strap 170 has been dispensed with by using two slightly longer arms 174A, 180A, but the functions performed are substantially the same. As can be seen from FIGS. 
4 and 5, in these embodiments the laterally outwardly extending section of the DAP may be recessed longitudinally inward from the arms 174, 180 or the arms 174A, 180A.

FIG. 6 illustrates a method of reconfiguring a lead frame having a die attach pad (DAP) supported on one of its lateral sides by a single-arm DAP strap. As shown at block 201, the method includes replacing the single-arm DAP strap with a DAP strap assembly having two generally parallel arms. This allows the DAP of the lead frame to expand laterally between the two arms to accommodate a larger die without changing the footprint of the lead frame.

A lead frame assembly and a method of reconfiguring a lead frame have been described in detail herein, which enable a larger die to be supported on the die attach pad of a lead frame without expanding the lead frame footprint. After reading this disclosure, alternative embodiments of the disclosed lead frame assemblies and methods will occur to those skilled in the art. It is intended that the language of the claims be interpreted broadly to cover such alternative embodiments, except as limited by the prior art. |
A die assembly is disclosed. The die assembly includes a die, one or more die pads on a first surface of the die and a die attach film on the die where the die attach film includes one or more openings that expose the one or more die pads and that extend to one or more edges of the die. |
1. A die assembly comprising:
a die;
one or more die pads on a first surface of the die; and
a die attach film on the die, wherein the die attach film includes one or more openings that expose the one or more die pads and extend to one or more edges of the die.
2. The die assembly of claim 1, further comprising one or more die pads attached to a second surface of the die.
3. The die assembly of claim 1 or 2, wherein the one or more openings include one or more channels containing underfill material.
4. The die assembly of claim 1 or 2, wherein the one or more openings include one or more channels containing an underfill material that is different from the material of the die attach film.
5. The die assembly of claim 1 or 2, wherein the one or more die pads are through silicon via (TSV) backside die pads.
6. The die assembly of claim 1 or 2, wherein the die assembly is on a first package substrate, and the one or more die pads are connected to a single die, first and second dies, a second package substrate, or an interposer.
7. The die assembly of claim 1 or 2, wherein the die assembly is surrounded by mold material.
8. A system comprising:
one or more memory components; and
one or more integrated circuit dies including a die assembly, the die assembly including:
a die in a die mounting space;
one or more die pads attached to a first surface of the die; and
a die attach film on the die, wherein the die attach film includes one or more openings that expose the one or more die pads and extend to one or more edges of the die.
9. The system of claim 8, further comprising one or more die pads attached to a second surface of the die.
10. The system of claim 8 or 9, wherein the one or more openings comprise one or more channels containing underfill material.
11. The system of claim 8 or 9, wherein the one or more openings include one or more channels containing an underfill material that is different from the material of the die attach film.
12. The system of claim 8 or 9, wherein the one or more die pads are through silicon via (TSV) backside die pads.
13. The system of claim 8 or 9, wherein the die assembly is on a first package substrate, and the one or more die pads are connected to a single die, first and second dies, a second package substrate, or an interposer.
14. The system of claim 8 or 9, wherein the die assembly is surrounded by mold material.
15. A method comprising:
forming a wafer having a first surface and a second surface;
forming die pads on the first surface and the second surface;
forming a laminate covering the die pads on the first surface of the wafer;
flipping the wafer so that the second surface faces upward;
forming a die attach film on the second surface;
patterning the die attach film;
dividing the wafer to form dies;
mounting a die on a substrate;
encapsulating the die on the substrate with an encapsulation material; and
planarizing the encapsulation material.
16. The method of claim 15, wherein the patterning comprises photolithographic patterning.
17. The method of claim 15 or 16, wherein the patterning comprises laser patterning.
18. The method of claim 15 or 16, wherein the patterning comprises mask etching patterning.
19. A method comprising:
forming a die;
forming one or more die pads on a first surface of the die; and
forming a die attach film on the die, wherein one or more openings that expose the one or more die pads and extend to one or more edges of the die are formed in the die attach film.
20. The method of claim 19, further comprising mounting the die on a die mount package.
21. The method of claim 19 or 20, wherein forming the one or more openings includes photolithographic patterning, laser patterning, or mask etching patterning.
22. The method of claim 19 or 20, wherein forming the one or more openings includes forming one or more trenches and filling the one or more trenches with an underfill material.
23. The method of claim 19 or 20, wherein forming the one or more openings includes forming one or more channels and filling the one or more channels with an underfill material that is different from the material of the die attach film. |
Patternable die attach material and process for patterning

Technical field

The embodiments of the present disclosure relate to die attach films (DAFs), and in particular, to a patternable die attach film and a process for patterning it.

Background

Embedded multi-die interconnect bridge (EMIB) technology includes a semiconductor bridge with an ultra-fine line-space structure for die-to-die (D2D) interconnect communication. EMIB technology is useful in heterogeneous chip integration applications. For in-package, high-density interconnection of heterogeneous chips, EMIB packaging technology is an advanced, cost-effective approach. It provides extremely high I/O density and well-controlled electrical interconnection paths between multiple dies.

Die attach film material is used to attach a die (e.g., an EMIB die) to a package structure. In practice, the die attach film (DAF) material is designed to absorb the mechanical stress caused by the coefficient of thermal expansion (CTE) mismatch between the semiconductor die formed on the DAF material and the organic substrate under the DAF material. The die attach film protects the package from warpage and reliability failures. In package architectures involving vertical D2D connection and through silicon via (TSV) die embedding, it is necessary to test the die before embedding in order to save significant cost, allowing only known good dies (KGD) to continue through the process flow to the end of the line.

However, conventional DAF material is not patternable and must be removed by a wet chemical process or dry etching in order to expose the backside copper pads for vertical connection in downstream processing steps. Therefore, die architectures using such DAFs are not suitable for vertical D2D and embedded-die TSV approaches.

Description of the drawings

FIG. 1A shows a semiconductor package substrate having an embedded multi-die interconnect bridge (EMIB) architecture and including an EMIB device.

FIG. 
1B shows a way of communicatively coupling or connecting a first die and a second die using an EMIB device.

FIG. 1C shows the use of a die attach film (DAF) to attach the EMIB device to the package substrate.

FIG. 2 shows a structure of a package substrate according to an embodiment, the package substrate having a cavity that can be used to mount a through silicon via (TSV) die.

FIG. 3 shows a structure of a package substrate according to an embodiment, the package substrate having a cavity that can be used to mount an EMIB bridge die.

FIG. 4A shows a cross-section of a die assembly including a patterned DAF layer according to an embodiment.

FIG. 4B shows a bottom view of the die assembly shown in FIG. 4A according to an embodiment.

FIGS. 5A-5L show cross-sections of a die assembly and a package substrate during a process of forming a package structure including a photolithographically patterned DAF according to an embodiment.

FIGS. 6A-6K show cross-sections of a die assembly and a package substrate during a process of forming a package structure including a laser-drill-patterned DAF according to an embodiment.

FIGS. 7A-7K show cross-sections of a die assembly and a package substrate during a process of forming a package structure including a mask-patterned DAF according to an embodiment.

FIGS. 8A-8L show cross-sections of a die assembly and a package substrate during a process of forming a package structure including a substrate-patterned DAF according to an embodiment.

FIG. 9 shows a flowchart of a method for patterning a die attach film on a die according to an embodiment.

FIG. 10 shows a flowchart of a method for patterning a die attach film on a substrate and attaching a die according to an embodiment.

FIG. 11 shows a computer system according to an implementation of an embodiment.

Detailed description

A patternable die attach film (DAF) is described. 
It should be appreciated that although the embodiments described herein are described with reference to exemplary patternable die attach film embodiments, the present disclosure is applicable to patternable die attach films more generally, as well as to other types of patternable die attach film embodiments. In the following description, many specific details, such as specific integration and material systems, are described in order to provide a thorough understanding of the embodiments of the present disclosure. It will be obvious to those skilled in the art that the embodiments of the present disclosure can be practiced without these specific details. In other instances, well-known features such as integrated circuit design layouts are not described in detail in order to avoid unnecessarily obscuring the embodiments of the present disclosure. In addition, it should be appreciated that the various embodiments shown in the drawings are illustrative representations and are not necessarily drawn to scale.

The following description uses certain terms for reference purposes only, and these terms are therefore not intended to be limiting. For example, terms such as "upper", "lower", "above", and "below" refer to directions in the drawings to which reference is made. Terms such as "front", "rear", "back", and "side" describe the orientation and/or position of parts of a component within a consistent but arbitrary frame of reference, which can be clearly understood by reference to the text and the associated drawings describing the component in question. Such terms may include the words specifically mentioned above, derivatives thereof, and words of similar import.

In the semiconductor packaging industry, tiling and/or partitioning of packaged components is increasingly used, because it enables heterogeneous die integration, miniaturization of form factors, improved yields, and higher performance (smaller packaged components and higher-yield circuits can be produced).

In emerging architectures, in order to increase communication bandwidth and reduce the semiconductor area of the logic chip, it is necessary to reduce the embedded multi-die interconnect bridge (EMIB) die bump pitch (and thus the line-space structure), which can support cutting-edge packaging architectures for die-tile splicing and heterogeneous integration applications. In addition to requiring better die bonding accuracy and tighter registration tolerances, the scaling requirements of EMIB technology are to: (1) create a mechanically reliable, gap-free interface between the backside of the die and the cavity material surface, (2) control the warpage of the die area after die bonding and after encapsulation of the dielectric material, and (3) create a backside opening of the die to realize a through-silicon-via D2D vertical connection for 3D packaging applications.

Current die bonding processes rely on bonding the die to the cavity interface material through a flush die attach film (DAF). In a package architecture involving vertical D2D connection and TSV die embedding, it is necessary to test the die before embedding in order to significantly save costs, allowing only known good dies (KGD) to continue through the process flow to the end of the line. However, the DAF material may not be patternable, and the DAF material must be removed by a wet chemical process or dry etching in order to expose the backside copper pads used for vertical connection in downstream process steps. Therefore, a die structure using a flush, non-patternable DAF is not suitable for vertical D2D and embedded-die TSV approaches.

This disclosure describes an approach that addresses the shortcomings of the previous approaches. 
For example, as part of the disclosed process, the DAF can be patterned on the surface of the die for emerging heterogeneous integration applications that require die embedding, patching, tiled splicing, and the like. Several methods for patterning DAF materials are described. In an embodiment, using any of these methods, the backside pads of a TSV die can be exposed for easy access, which enables electrical testing for binning. In addition, the manufacturing process can be significantly simplified to reduce the exposure risk of the wet chemical process, thereby improving product reliability.

In embodiments, a low-cost, easy-to-implement, high-yield approach that overcomes the defects of previous approaches is provided. In embodiments, the examples described herein may be used to provide high-density die-to-die interconnections. In an embodiment, by exposing the backside copper pads for vertical interconnection in downstream process steps, EMIB and other heterogeneous integration methods for interconnecting modular dies can be more fully utilized. Thus, a DAF that facilitates electrical testing and is suitable for the embedded-die TSV approach is provided.

FIG. 1A shows a package substrate 100 having an EMIB architecture and including an embedded EMIB device. In FIG. 1A, the package substrate 100 includes an embedded EMIB device 101, a metal layer 103, a dielectric layer 105, and a pad 107.

FIG. 1B shows a way of communicatively coupling or connecting a first die 109 and a second die 111 using the EMIB device 101. Referring to FIG. 1B, the embedded EMIB device 101 is connected to the first die 109 and the second die 111 through metal interconnects, which extend upward through vias in the package substrate 100 from top surface contact portions of the embedded EMIB device 101. 
The first die 109 and the second die 111 are attached to the top surface of the package substrate 100.

FIG. 1C shows the attachment of an EMIB device using a die attach film. Referring to FIG. 1C, the EMIB device 101 is mounted to the package substrate 100 through a die attach film (DAF) 113. It should be recognized that the EMIB device 101 does not include backside pads, which are required for cutting-edge package architectures that support heterogeneous integration applications and die-tile splicing and that may involve connections to a die under the EMIB device 101. In addition, for such cutting-edge packaging architectures, which can utilize connections to a die under the EMIB device 101 for heterogeneous integration applications and die-tile splicing, a non-patternable die attach film such as the DAF 113 shown in FIG. 1C may be problematic.

Many substrate and interposer processes require embedded dies. The die mounting technology is key to maximizing the function of the substrate or interposer and ensuring its reliability. Exemplary die mounting architectures (bridge die and TSV die) are shown in FIGS. 2 and 3 below.

FIG. 2 shows a structure of a package substrate according to an embodiment, the package substrate having a cavity that can be used to mount TSV dies. Referring to FIG. 2, the package substrate includes a substrate 201, a metal layer 203, a dielectric layer 205, and interconnects 207. The package substrate includes a cavity 209 in its top portion. In an embodiment, a TSV die may be installed in the cavity 209 and configured to enable electrical connection to the TSV die from both above and below the TSV die.

FIG. 3 shows a structure of an EMIB packaging substrate according to an embodiment, the EMIB packaging substrate having a cavity that can be used to mount EMIB devices. Referring to FIG. 
3, the package substrate includes a plurality of dielectric layers 301, a plurality of wiring layers 303, a package core 305, and a cavity 307. The cavity 307 is formed in the top portion of the package substrate and extends from the surface of the package substrate into the package substrate. In an embodiment, an EMIB die may be installed in the cavity 307 and configured to enable electrical connection to the EMIB die from above.

FIG. 4A shows a cross-section of a die assembly 400 including a patterned DAF layer according to an embodiment, and FIG. 4B shows a bottom view of the die assembly 400. Referring to FIGS. 4A and 4B, the die assembly 400 includes a die 401, a dielectric material 403, top pads 405, a DAF 407, bottom pads 409, and DAF channels 411.

A bottom view of the die is shown in FIG. 4B, which depicts an exemplary patterning of the DAF channels 411 formed in the DAF 407. Openings of the DAF channels 411 in the non-pad areas (for example, the non-Cu-pad areas) enable underfilling of the DAF channels 411. In an embodiment, these openings extend to the edge of the die assembly 400. In an embodiment, after the die is installed, the openings may be used to fill the DAF channels 411 with epoxy or molding compound.

FIGS. 5A-5L show cross-sections of a die and a package substrate during a process of forming a package structure according to an embodiment. In particular, FIGS. 5A-5L are cross-sections of the package structure 500 during a process including photolithographic patterning of the DAF formed on the die, according to an embodiment.

Referring to FIG. 5A, after one or more operations, a die assembly including a die 501, a dielectric laminate 503, die pads 505, and die pads 507 is formed. In an embodiment, the dielectric laminate 503 is formed on the first surface of the die 501 and covers the die pads 505. In an embodiment, the die pads 507 are formed on the second surface of the die 501.

Referring to FIG. 
5B, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 5A, the die assembly including the die 501, the dielectric laminate 503, the die pads 505, and the die pads 507 is flipped (after dielectric encapsulation on the silicon interconnect bump (SiB) side).

Referring to FIG. 5C, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 5B, a photoresist 509 is formed on the die 501, covering the die pads 507.

Referring to FIG. 5D, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 5C, the photoresist 509 is patterned and developed. In an embodiment, photolithographic exposure may be used to form the desired pattern.

Referring to FIG. 5E, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 5D, a DAF 511 is formed on the die (for example, the wafer is coated with a material for forming the DAF).

Referring to FIG. 5F, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 5E, the photoresist 509 is removed. In an embodiment, a wet or dry etching process is used to strip the photoresist. In other embodiments, the photoresist may be removed by other suitable methods.

Referring to FIG. 5G, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 5F, the wafer is divided into the individual dies shown in FIG. 5H and tested. Referring to FIG. 5I, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 5G, the die assembly is flipped.

Referring to FIG. 5J, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 5H, the die assembly (if it is determined to be good through testing) is attached to the package substrate (the die assembly is mounted). 
In an embodiment, the package substrate includes a substrate 513, a metal layer 515, a dielectric layer 517, and interconnects 519.

Referring to FIG. 5K, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 5J, epoxy resin/mold 521 is used to encapsulate the top portion of the package structure including the die assembly and to underfill the openings formed in the DAF 511. In other embodiments, other materials may be used to encapsulate the top portion of the package structure including the die assembly and to underfill the openings. Referring to FIG. 5L, after one or more operations to obtain the cross-section of the package structure shown in FIG. 5K, the package structure including the mold material 521 is planarized.

In an embodiment, the die attach formulation may be composed of resin, filler, hardener, catalyst, diluent, and thixotropic agent. The photoresist may be composed of resin, filler, sensitizer, and the like. Both formulations can share similar resins, such as acrylates, polyimides, and epoxy resins, and fillers such as silica. Thus, patternable polyimide and acrylate resin components can be used in the die attach formulation in order to provide patternable die attach capability.

In an embodiment, the DAF may include patternable components (resins, monomers, and sensitizers), epoxy resins/fillers (for mechanical properties), and other additives to control the thixotropic nature of the patternable die attach material for die backside connection applications. An exemplary formulation includes an acrylic oligomer with COOH groups (~10-30%) (which can be dissolved in an alkaline solution during development to create the desired pattern), an epoxy resin (20-50%) used to give the desired mechanical strength and adhesion, 10-30% filler (SiO2 or other), and other additives (0-20%) for controlling the thixotropic properties of the film. 
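As a quick consistency check on the exemplary component ranges above, the short sketch below confirms that a composition can be chosen inside every stated range while summing to 100%. The specific percentages used are a hypothetical illustration, not a disclosed recipe:

```python
# Stated component ranges (in weight %) from the exemplary DAF formulation.
ranges = {
    "acrylic oligomer (COOH groups)": (10, 30),  # soluble in alkaline developer
    "epoxy resin":                    (20, 50),  # mechanical strength / adhesion
    "filler (SiO2 or other)":         (10, 30),
    "other additives":                (0, 20),   # thixotropy control
}

# Hypothetical example composition chosen inside every stated range.
example = {
    "acrylic oligomer (COOH groups)": 25,
    "epoxy resin": 45,
    "filler (SiO2 or other)": 20,
    "other additives": 10,
}

for name, pct in example.items():
    lo, hi = ranges[name]
    assert lo <= pct <= hi, f"{name} outside stated range"
assert sum(example.values()) == 100  # components account for the whole film

print("example composition is consistent with the stated ranges:", example)
```

The check simply shows the ranges are mutually compatible; actual formulations would be tuned for the modulus and thixotropy requirements discussed in the text.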
In other embodiments, other formulations can be used. It should be recognized that conventional dry film photoresists cannot provide the desired mechanical strength. In addition, solder resists and photoimageable resists cannot meet the thixotropy and modulus requirements.

FIGS. 6A-6K show cross-sections of a die assembly and a package substrate during a process of forming a package structure according to an embodiment. In particular, FIGS. 6A-6K are cross-sections of the package structure during a process including laser-drill patterning of the DAF formed on the die assembly, according to an embodiment.

Referring to FIG. 6A, after one or more operations, a die assembly including a die 601, a dielectric laminate 603, die pads 605, and die pads 607 is formed. In an embodiment, the dielectric laminate 603 is formed on the first surface of the die 601 and covers the die pads 605.

Referring to FIG. 6B, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 6A, the die assembly is flipped (after dielectric encapsulation of the die pads 605 on the SiB side of the die). Referring to FIG. 6C, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 6B, a DAF 609 is formed on the die assembly.

Referring to FIG. 6D, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 6C, the desired DAF pattern is formed on the DAF 609 by laser patterning 611. The resulting structure is shown in FIG. 6E. Referring to FIG. 6F, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 6E, the die assembly wafer is divided 613 into the individual die assemblies shown in FIG. 6G and tested.

Referring to FIG. 6H, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 6G, the die assembly is flipped. Referring to FIG. 6I, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 
6H, the die assembly (if determined to be functioning properly) is attached to the package substrate assembly. In an embodiment, the package substrate assembly includes a substrate 615, a metal layer 617, a dielectric layer 619, and interconnects 621.

Referring to FIG. 6J, after one or more operations to obtain the cross-section of the package structure shown in FIG. 6I (after mounting the die assembly), epoxy/mold 623 is used to encapsulate the package structure including the die assembly and to underfill the openings formed in the DAF 609. Referring to FIG. 6K, after one or more operations to obtain the cross-section of the package structure shown in FIG. 6J, the package structure is planarized.

In an embodiment, instead of photolithographic patterning, the laser-drill-patterned DAF process described with reference to FIGS. 6A-6K uses a laser to drill directly into the DAF 609 to create the desired pattern. In an embodiment, the process starts at the wafer level. As part of this process, a flush DAF layer is provided on the package side bump (PSB) side. The DAF is then patterned by using a laser to drill openings in it. In an embodiment, after these initial operations, the remaining process flow may be the same as the flow following the patterning operation of the photolithographically patterned DAF process described with reference to FIGS. 5A-5L. In an embodiment, the DAF may be covered with a polyethylene terephthalate (PET) film, and laser drilling through the PET film is used to create the pattern. In embodiments, the benefit of using a PET film is that it protects the DAF during testing and transfer. In an embodiment, the PET film may be removed (e.g., peeled off) immediately before die mounting.

FIGS. 7A-7K show cross-sections of a die and a package substrate during a process of forming a package structure according to an embodiment. In particular, FIGS. 
7A-7K are cross-sections of the package structure during a process including mask etching patterning of the DAF formed on the die, according to an embodiment.

Referring to FIG. 7A, after a plurality of operations, a die assembly including a die 701, a dielectric laminate 703, a die pad 705, and a die pad 707 is formed. In an embodiment, the dielectric laminate 703 is formed on the die 701. Referring to FIG. 7B, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 7A, the die assembly is turned over (after dielectric encapsulation of the die pad 705 on the SiB side of the die wafer).

Referring to FIG. 7C, after one or more operations to obtain the cross-section of the die structure shown in FIG. 7B, a DAF layer 709 is formed on the die assembly. Referring to FIG. 7D, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 7C, mask etching is performed by using a mask 711 to pattern the DAF using radiation 713, thereby forming a desired DAF pattern. The resulting structure is shown in FIG. 7E.

Referring to FIG. 7F, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 7E, the die assembly wafer is divided into the individual die assemblies shown in FIG. 7G and tested. Referring to FIG. 7H, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 7G, the die assembly is turned over.

Referring to FIG. 7I, after one or more operations to obtain the cross-section of the die assembly shown in FIG. 7H, the die assembly (if determined to be operating properly) is attached to the package substrate. In an embodiment, the package substrate includes a substrate 717, a metal layer 719, a dielectric layer 721, and interconnects 723.

Referring to FIG. 7J, after one or more operations to obtain the cross-section of the package structure shown in FIG.
7I (after the die assembly is mounted), epoxy/mold 725 is used to encapsulate the die assembly and to underfill the openings in the DAF. Referring to FIG. 7K, after one or more operations to obtain the cross-section of the package structure shown in FIG. 7J, the mold material 725 is planarized.

In an embodiment, the mask etching patterning DAF process described with reference to FIGS. 7A-7K is started at the wafer level. For example, after laminating the DAF on the die, the DAF is etched through a hard mask to create the desired patterning on the DAF. After the DAF has been patterned, the process flow is the same as the preceding photolithography patterning DAF process. In an embodiment, the DAF may be covered with a protective film during the mask etching, and the protective film may remain until the die is mounted in order to protect the DAF.

FIGS. 8A-8L show cross-sections of a package structure during a process of forming the package structure according to an embodiment. In particular, FIGS. 8A-8L are cross-sections of a package structure during a process for forming a package structure including patterning the DAF on a package substrate according to an embodiment.

Referring to FIG. 8A, after one or more operations, a package substrate structure including a substrate 801, a metal layer 803, a dielectric layer 805, and interconnects 807 is formed.

Referring to FIG. 8B, after one or more operations to obtain the cross-section of the package substrate structure shown in FIG. 8A, DAF lamination is performed. In an embodiment, a DAF material 809 is formed on the surface of the package substrate structure. In an embodiment, the surface is formed by the dielectric layer 805 and the interconnects 807.

Referring to FIG. 8C, after one or more operations to obtain the cross-section of the package substrate structure shown in FIG. 8B, the DAF material 809 is patterned. In the embodiment of FIG. 8C, the DAF material 809 is patterned by performing laser patterning using a laser 811. Referring to FIG.
8D, after one or more operations to obtain the cross-section of the package substrate structure shown in FIG. 8C, a desired DAF pattern 810 is formed.

Referring to FIG. 8E, after one or more operations to obtain the cross-section of the package substrate structure shown in FIG. 8D, a seed layer 812 is formed on the surface of the package substrate structure. In particular, the seed layer 812 is formed on the surface of the remaining DAF material 809 of the package substrate structure and on other exposed surfaces.

Referring to FIG. 8F, after one or more operations to obtain the cross-section of the package substrate structure shown in FIG. 8E, a laminate material 813 is formed to cover/encapsulate the surface of the package substrate structure. In an embodiment, as part of the encapsulation, the surface of the package substrate structure and the exposed structures thereon are covered with the laminate material 813.

Referring to FIG. 8G, after one or more operations to obtain the cross-section of the package substrate structure shown in FIG. 8F, the laminate material 813 is patterned. In an embodiment, the laminate material 813 is patterned to form conductive pillars for electrical connection. Referring to FIG. 8H, after one or more operations to obtain the cross-section of the package substrate structure shown in FIG. 8G, conductive pillars 815 are formed.

Referring to FIG. 8I, after one or more operations to obtain the cross-section of the structure shown in FIG. 8H, the portion of the seed layer 812 above the remaining DAF material 809 is removed. Referring to FIG. 8J, after one or more operations to obtain the cross-section of the structure shown in FIG. 8I, the die assembly 816 is mounted.

Referring to FIG. 8K, after one or more operations to obtain the cross-section of the structure shown in FIG. 8J, epoxy/mold 817 is used to encapsulate the die assembly and also to underfill the DAF openings. Referring to FIG.
8L, after one or more operations to obtain the cross-section of the structure shown in FIG. 8K, the mold material 817 is planarized.

In an embodiment, instead of patterning the DAF on the die, in the DAF-first, pattern-on-substrate process described with reference to FIGS. 8A-8L, the patterning is completed directly on the substrate as part of the DAF-first process flow. In this way, the DAF is laminated on a substrate manufactured separately from the die. The DAF can be patterned on the substrate using laser drilling or mask etching. Thereafter, a seed layer can be deposited and a dry film resist (DFR) can be used to cover the DAF in the die area during pillar plating. The DFR and seed layer are removed, and then the normal die mounting process can be performed. The patterned DAF is filled while encapsulating the die, and planarization is performed to expose the pillars and silicon interconnect bumps.

In an embodiment, the patterned DAF enables the epoxy/mold fill to extend under the die. In the conventional flush DAF method, no epoxy/mold filling can be formed under the die. In an embodiment, the mixed DAF and epoxy/mold filling under the die can reduce CTE problems.

FIG. 9 shows a flowchart of a method for patterning a die attach film according to an embodiment. The method for patterning the die attach film includes forming a die at 901. At 903, one or more die pads are formed on the first surface of the die. At 905, a die attach film is formed on the die. At 907, one or more openings exposing the one or more die pads and extending to one or more edges of the die are formed. At 909, an underfill material is formed in the one or more openings. In an embodiment, the formation of the one or more openings is performed by photolithography patterning, laser patterning, or mask etching patterning. In an embodiment, the formation of the one or more openings includes forming one or more trenches and filling the one or more trenches with an underfill material.
In an embodiment, the formation of the one or more openings includes forming one or more channels and filling the one or more channels with an underfill material different from the die attach film material.

FIG. 10 shows a flowchart 1000 of a method for patterning a die attach film according to an embodiment. The method includes, at 1001, forming a component including a plurality of wiring elements of a package substrate. At 1003, a die attach film is formed on the component of the package substrate. At 1005, one or more openings are formed in the die attach film. At 1007, the die is placed on the die attach film, with one or more die pads of the die extending through the one or more openings and contacting one or more of the plurality of wiring elements.

FIG. 11 is a schematic diagram of a computer system 1100 according to an embodiment of the present invention. According to any of the several disclosed embodiments and their equivalents set forth in this disclosure, the computer system 1100 (also referred to as the electronic system 1100) as shown in the figure can embody a die assembly including a patterned DAF layer (e.g., 400 in FIG. 4A). The computer system 1100 may be a mobile device such as a netbook computer. The computer system 1100 may be a mobile device such as a wireless smart phone. The computer system 1100 may be a desktop computer. The computer system 1100 may be a handheld reader. The computer system 1100 may be a server system. The computer system 1100 may be a supercomputer or a high-performance computing system.

In an embodiment, the electronic system 1100 is a computer system that includes a system bus 1120 to electrically couple the various components of the electronic system 1100. According to various embodiments, the system bus 1120 is a single bus or any combination of buses. The electronic system 1100 includes a voltage source 1130 for powering the integrated circuit 1110.
In some embodiments, the voltage source 1130 supplies current to the integrated circuit 1110 through the system bus 1120.

According to an embodiment, the integrated circuit 1110 is electrically coupled to the system bus 1120 and includes any circuit or combination of circuits. In an embodiment, the integrated circuit 1110 includes a processor 1112, which may be of any type. As used herein, the processor 1112 may mean any type of circuit, such as, but not limited to, a microprocessor, a microcontroller, a graphics processor, a digital signal processor, or another processor. In an embodiment, as disclosed herein, the processor 1112 includes or is coupled to a die assembly (e.g., 400 in FIG. 4A). In an embodiment, an SRAM embodiment resides in the memory cache of the processor. Other types of circuits that can be included in the integrated circuit 1110 are a custom circuit or an application-specific integrated circuit (ASIC), such as a communication circuit 1114 for use in wireless devices such as cellular phones, smart phones, pagers, portable computers, two-way radios, and similar electronic systems, or a communication circuit for use in a server. In an embodiment, the integrated circuit 1110 includes on-die memory 1116, such as static random access memory (SRAM). In an embodiment, the integrated circuit 1110 includes embedded on-die memory 1116, such as embedded dynamic random access memory (eDRAM).

In an embodiment, the integrated circuit 1110 is supplemented with a subsequent integrated circuit 1111. Useful examples include a dual processor 1113, a dual communication circuit 1115, and dual on-die memory 1117, such as SRAM.
In an embodiment, the dual integrated circuit 1111 includes embedded on-die memory 1117, such as eDRAM.

In an embodiment, the electronic system 1100 further includes an external memory 1140, which in turn may include one or more memory elements suitable for a particular application, such as a main memory 1142 in the form of RAM, one or more hard drives 1144, and/or one or more drives that handle removable media 1146, such as floppy disks, compact disks (CDs), digital versatile disks (DVDs), flash memory drives, and other removable media known in the art. According to an embodiment, the external memory 1140 may also be embedded memory 1148, such as the first die in a die stack.

In an embodiment, the electronic system 1100 further includes a display device 1150 and an audio output 1160. In an embodiment, the electronic system 1100 includes an input device 1170 such as a controller, which can be a keyboard, a mouse, a trackball, a game controller, a microphone, a voice recognition device, or any other input device that inputs information into the electronic system 1100. In an embodiment, the input device 1170 is a camera. In an embodiment, the input device 1170 is a digital recorder. In an embodiment, the input device 1170 is a camera and a digital audio recorder.

As shown herein, the integrated circuit 1110 can be implemented in several different embodiments, including a package substrate embodying a die assembly (e.g., 400 in FIG. 4A) according to any of the several disclosed embodiments and their equivalents, electronic systems, computer systems, one or more methods of manufacturing an integrated circuit, and one or more methods of manufacturing an electronic assembly that includes a package substrate with a die assembly (e.g., 400 in FIG. 4A) according to any of the several disclosed embodiments as described herein in the various embodiments and their art-recognized equivalents.
According to any of the several disclosed package substrate embodiments with a die assembly (e.g., 400 in FIG. 4A) and their equivalents, the components, materials, geometries, dimensions, and order of operations can be varied to suit particular I/O coupling requirements, including the number of array contacts and the array contact structure for a microelectronic die embedded in a processor mounting substrate. As indicated by the dashed line in FIG. 11, a base substrate may be included. As also depicted in FIG. 11, passive devices may also be included.

Although specific embodiments have been described above, even if only a single embodiment is described with respect to a particular feature, these embodiments are not intended to limit the scope of the present disclosure. Unless otherwise stated, the examples of features provided in this disclosure are intended to be illustrative and not restrictive. The above description is intended to cover those alternatives, modifications, and equivalents that will be obvious to those skilled in the art having the benefit of the present disclosure.

The scope of the present disclosure includes any feature or combination of features disclosed herein (explicitly or implicitly), or any generalization thereof, regardless of whether it mitigates any or all of the problems addressed herein. Accordingly, during the prosecution of this application (or of an application claiming priority thereto), new claims may be formulated for any such combination of features. In particular, with reference to the appended claims, features of the dependent claims may be combined with features of the independent claims, and features of the respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.

The following examples refer to other embodiments.
Various features of the different embodiments can be combined in various ways, with some features included and others excluded, to suit a variety of different applications.

Example embodiment 1: A die assembly including a die, one or more die pads attached to a first surface of the die, and a die attach film on the die, wherein the die attach film includes one or more openings exposing the one or more die pads and extending to one or more edges of the die.

Example embodiment 2: The die assembly according to example embodiment 1, further including one or more die pads attached to a second surface of the die.

Example embodiment 3: The die assembly of example embodiment 1 or 2, wherein the one or more openings include one or more trenches containing an underfill material.

Example embodiment 4: The die assembly of example embodiments 1, 2, or 3, wherein the one or more openings include one or more channels containing an underfill material that is different from the die attach film material.

Example embodiment 5: The die assembly of example embodiments 1, 2, 3, or 4, wherein one or more of the die pads are through silicon via (TSV) backside die pads.

Example embodiment 6: The die assembly of example embodiments 1, 2, 3, 4, or 5, wherein the die assembly is on a first package substrate, and one or more die pads are connected to a single die, a first die and a second die, a second package substrate, or an interposer.

Example embodiment 7: The die assembly of example embodiments 1, 2, 3, 4, 5, or 6, wherein the die structure is surrounded by a mold compound.

Example embodiment 8: A system including one or more data storage components and one or more integrated circuit dies including a die assembly, the die assembly including a die mount package, the die mount package including one or more wiring layers, one or more dielectric layers, and a die mounting space.
The die assembly includes a die in the die mounting space, one or more die pads attached to the first surface of the die, and a die attach film on the die, wherein the die attach film includes one or more openings exposing the one or more die pads and extending to one or more edges of the die.

Example embodiment 9: The system according to example embodiment 8, further comprising one or more die pads attached to the second surface of the die.

Example embodiment 10: The system of example embodiment 8 or 9, wherein the one or more openings include one or more channels containing an underfill material.

Example embodiment 11: The system of example embodiment 8, 9, or 10, wherein the one or more openings include one or more channels containing an underfill material that is different from the die attach film material.

Example embodiment 12: The system of example embodiment 8, 9, 10, or 11, wherein the one or more die pads are through silicon via (TSV) backside die pads.

Example embodiment 13: The system of example embodiment 8, 9, 10, 11, or 12, wherein the die assembly is on a first package substrate, and one or more die pads are connected to a single die, a first die and a second die, a second package substrate, or an interposer.

Example embodiment 14: The system of example embodiment 8, 9, 10, 11, 12, or 13, wherein the die assembly is surrounded by a mold compound.

Example embodiment 15: A method comprising: forming a wafer having first and second surfaces, forming die pads on the first and second surfaces, forming a dielectric laminate covering the first surface of the wafer, turning the wafer so that the second surface faces upward, forming a die attach film on the second surface, patterning the die attach film, dividing the wafer to form dies, mounting a die on a substrate, encapsulating the die on the substrate with an encapsulating material, and planarizing the encapsulating material.

Example embodiment 16: The method of example embodiment 15,
wherein patterning comprises photolithographic patterning.

Example embodiment 17: The method of example embodiment 15, wherein patterning comprises laser patterning.

Example embodiment 18: The method of example embodiment 15, wherein patterning comprises mask etch patterning.

Example embodiment 19: A method comprising: forming a die, forming one or more die pads on the first surface of the die, and forming a die attach film on the die, wherein forming the die attach film on the die includes forming one or more openings exposing the one or more die pads and extending to one or more edges of the die.

Example embodiment 20: The method according to example embodiment 19, further comprising mounting the die on a die mounting package.

Example embodiment 21: The method of example embodiment 19 or 20, wherein forming the one or more openings includes photolithography patterning, laser patterning, or mask etching patterning.

Example embodiment 22: The method of example embodiment 19, 20, or 21, wherein forming the one or more openings includes forming one or more trenches and filling the one or more trenches with an underfill material.

Example embodiment 23: The method of example embodiment 19, 20, 21, or 22, wherein forming the one or more openings includes forming one or more channels and filling the one or more channels with an underfill material different from the die attach film material.

Example embodiment 24: A method including forming a component including a plurality of wiring elements of a package substrate, forming a die attach film on the component of the package substrate, forming one or more openings in the die attach film, and placing a die over the die attach film, wherein one or more die pads of the die extend through the one or more openings and contact one or more of the plurality of wiring elements.
A system enables memory device specific self-refresh entry and exit commands. When memory devices on a shared control bus (such as all memory devices in a rank) are in self-refresh, a memory controller can issue a device specific command with a self-refresh exit command and a unique memory device identifier to the memory device. The controller sends the command over the shared control bus, and only the selected, identified memory device will exit self-refresh while the other devices will ignore the command and remain in self-refresh. The controller can then execute data access over a shared data bus with the specific memory device while the other memory devices are in self-refresh. |
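The abstract above describes a broadcast command on a shared control bus that only one identified device acts on. The behavior can be illustrated with a minimal Python sketch; the class and command names here are illustrative inventions for the sketch, not from the patent or any real memory-controller API, and the JEDEC-level electrical details (CKE timing, mode registers) are deliberately omitted.

```python
# Sketch of device-specific self-refresh exit: every device in the rank
# sees the same command on the shared control bus, but only the device
# whose unique identifier matches the command payload changes state.

from dataclasses import dataclass

@dataclass
class Command:
    opcode: str      # "SRX" = self-refresh exit, "SRE" = self-refresh enter
    device_id: int   # unique per-device identifier carried in the command

class MemoryDevice:
    def __init__(self, device_id):
        self.device_id = device_id
        self.in_self_refresh = True  # whole rank starts in self-refresh

    def on_control_bus(self, cmd):
        # Non-matching devices ignore the broadcast and stay in self-refresh.
        if cmd.device_id != self.device_id:
            return
        if cmd.opcode == "SRX":
            self.in_self_refresh = False
        elif cmd.opcode == "SRE":
            self.in_self_refresh = True

rank = [MemoryDevice(i) for i in range(4)]

# Controller broadcasts a device-specific exit for device 2 only.
for dev in rank:
    dev.on_control_bus(Command("SRX", device_id=2))

print([dev.in_self_refresh for dev in rank])  # → [True, True, False, True]
```

With device 2 awake and the other three still in self-refresh, the shared data bus has a single active device on it, which is what makes contention-free per-device access possible.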
CLAIMS

What is claimed is:

1. A buffer circuit in a memory subsystem, comprising: an interface to a control bus, the control bus to be coupled to multiple memory devices; an interface to a data bus, the data bus to be coupled to the multiple memory devices; control logic to send a device specific self-refresh exit command over the control bus when the multiple memory devices are in self-refresh, the command including a unique memory device identifier to cause only an identified memory device to exit self-refresh while the other memory devices remain in self-refresh, and the control logic to perform data access over the data bus for the memory device caused to exit self-refresh.

2. The buffer circuit of claim 1, wherein the control logic is further to select a subset of the multiple memory devices, and send device specific self-refresh exit commands to each of the selected memory devices of the subset.

3. The buffer circuit of any of claims 1 to 2, wherein the self-refresh exit command includes a CKE (clock enable) signal.

4. The buffer circuit of any of claims 1 to 3, wherein the control logic is further to select the memory devices in turn to cause serial memory access to all of the memory devices.

5. The buffer circuit of any of claims 1 to 4, wherein the buffer circuit comprises a registered clock driver (RCD) of an NVDIMM (nonvolatile dual inline memory module), wherein the control logic is further to transfer self-refresh commands to all memory devices to place the memory devices in self-refresh as part of a backup transfer process to transfer memory contents to a persistent storage upon detection of a power failure.

6. The buffer circuit of claim 5, wherein the interface to the data bus comprises an interface to an alternate data bus parallel to a primary data bus used by the memory devices in active operation, and wherein the control logic is to cause the memory devices to transfer memory contents via the alternate data bus as part of the backup transfer process.

7.
The buffer circuit of claim 5, wherein the persistent storage comprises a storage device disposed on the NVDIMM.

8. The buffer circuit of claim 6, wherein the alternate data bus is to couple to a persistent storage device located external to the NVDIMM.

9. The buffer circuit of any of claims 1 to 8, wherein the buffer circuit comprises a backup controller of a registered DIMM (RDIMM).

10. The buffer circuit of any of claims 1 to 9, wherein after the performance of data access with a selected memory device, the control logic is further to send a device specific self-refresh command including a self-refresh enter command and the unique memory device identifier over the control bus to cause the selected memory device to re-enter self-refresh.

11. The buffer circuit of any of claims 1 to 10, wherein the memory devices include dual data rate version 4 synchronous dynamic random access memory devices (DDR4-SDRAMs).

12. The buffer circuit of any of claims 1 to 11, wherein the memory devices are part of a same memory rank, and the control bus comprises a command/address bus for the memory rank.

13.
A nonvolatile dual inline memory module (NVDIMM), comprising: a first data bus; a second data bus; multiple volatile memory devices coupled to a common control line shared by the memory devices, the memory devices further to couple to a nonvolatile storage via the second data bus; and control logic coupled to the memory devices via the first data bus and via the common control line, the control logic to send a device specific self-refresh exit command over the control line when the multiple memory devices are in self-refresh, the command including a unique memory device identifier to cause only an identified memory device to exit self-refresh while the other memory devices remain in self-refresh, and the control logic to cause the identified memory device to transfer memory contents via the second data bus while the other memory devices remain in self-refresh.

14. A method for memory management, comprising: selecting for data access one of multiple memory devices that share a control bus, wherein the memory devices are in self-refresh; sending a device specific self-refresh exit command including a self-refresh exit command and a unique memory device identifier over the shared control bus to cause only the selected memory device to exit self-refresh while the others remain in self-refresh; and performing data access over a shared data bus for the memory device not in self-refresh.

15. The method of claim 14, wherein selecting comprises selecting each memory device individually to cause serial memory access to the memory devices.

16. The method of any of claims 14 to 15, wherein sending the self-refresh exit command comprises sending a CKE (clock enable) signal.

17. The method of any of claims 14 to 16, wherein the memory devices comprise memory devices of a registered DIMM (RDIMM).

18.
The method of any of claims 14 to 17, further comprising: after performing the data access with the selected memory device, sending a device specific self-refresh command including a self-refresh command and the unique memory device identifier over the shared control bus to cause the selected memory device to reenter self-refresh.

19. The method of any of claims 14 to 18, wherein sending the device specific self-refresh command comprises sending a command from a registered clock driver (RCD) of an NVDIMM (nonvolatile dual inline memory module).

20. The method of claim 19, wherein performing data access further comprises transferring data contents as part of a backup transfer process to transfer memory contents to a persistent storage upon detection of a power failure.

21. The method of claim 19, wherein performing the data access further comprises performing the data access on an alternate data bus parallel to a primary data bus, wherein the primary data bus is to be used by the memory devices in active operation, and wherein the alternate data bus is to be used by the memory devices as part of the backup transfer process.

22. The method of claim 21, wherein the persistent storage comprises a storage device located external to the NVDIMM.

23. The method of any of claims 14 to 22, wherein the memory devices share the control bus as part of a memory rank that shares a command/address bus.

24. An apparatus for memory management, comprising means for performing operations to execute a method in accordance with any of claims 14 to 23.
MEMORY DEVICE SPECIFIC SELF-REFRESH ENTRY AND EXIT

RELATED APPLICATIONS

[0001] The present patent application is a nonprovisional based on, and claims the benefit of priority of, U.S. Provisional Patent Application No. 62/168,513, filed May 29, 2015. The provisional application is hereby incorporated by reference.

[0002] The present patent application is related to the following patent application: Patent Application No. 14/998,141, entitled "POWER PROTECTED MEMORY WITH CENTRALIZED STORAGE," filed concurrently herewith.

FIELD

[0003] Descriptions herein are generally related to memory subsystems, and more specific descriptions are related to memory device self-refresh commands.

COPYRIGHT NOTICE/PERMISSION

[0004] Portions of the disclosure of this patent document may contain material that is subject to copyright protection. The copyright owner has no objection to the reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The copyright notice applies to all data as described below, and in the accompanying drawings hereto, as well as to any software described below: Copyright © 2015, Intel Corporation, All Rights Reserved.

BACKGROUND

[0005] Memory subsystems store code and data for use by the processor to execute the functions of a computing device. Memory subsystems are traditionally composed of volatile memory resources, which are memory devices whose state is indefinite or indeterminate if power is interrupted to the device. Thus, volatile memory is contrasted with persistent or nonvolatile storage, which has a determinate state even if power is interrupted to the device. The storage technology used to implement the memory device determines whether it is volatile or nonvolatile. Typically, volatile memory resources have faster access times and denser (bits per unit area) capacities.
While there are emerging technologies that may eventually provide persistent storage having capacities and access speeds comparable with current volatile memory, the cost and familiarity of current volatile memories are very attractive features.

[0006] The primary downside of volatile memory is that its data is lost when power is interrupted. There are systems that provide battery-backed memory to continue to refresh the volatile memory from battery power to prevent it from losing state if primary power is interrupted. There are also systems in which memory devices are placed on one side of a DIMM (dual inline memory module), and persistent storage is placed on the other side of the DIMM. The system can be powered by a super capacitor or battery that holds enough charge to enable the system to transfer the contents of the volatile memory devices to the persistent storage device(s) if power is interrupted to the memory subsystem. While such systems can prevent or at least reduce loss of data in the event of a loss of power, they take up a lot of system space and cut the DIMM capacity in half. Thus, such systems are impractical in computing devices with more stringent space constraints. Additionally, lost memory capacity results in either having less memory, or costly solutions to add more hardware.

[0007] Currently available memory protection includes Type 1 NVDIMM (nonvolatile DIMM), which is also referred to in industry as NVDIMM-N. Such systems are energy-backed, byte-accessible persistent memory. Traditional designs contain DRAM (dynamic random access memory) devices on one side of the DIMM and one or more NAND flash devices on the other side of the DIMM. Such NVDIMMs are attached to a super capacitor through a pigtail connector, and the computing platform supplies 12V to the super capacitor to charge it during normal operation.
When the platform power goes down, the capacitor supplies power to the DIMM and the DIMM controller to allow it to save the DRAM contents to the NAND device on the back of the DIMM. In a traditional system, each super capacitor takes one SATA (serial advanced technology attachment) drive bay of real estate.

[0008] Traditionally, RDIMMs (registered DIMMs) cannot be used to implement an NVDIMM solution, because there is no buffer between the devices and the nonvolatile storage on the data bus to steer the data between the host and the storage. Thus, more expensive LRDIMMs (load reduced DIMMs), which have buffers on the data bus, are traditionally used for NVDIMM. On a typical DRAM DIMM the devices are organized as ranks, where each rank is composed of multiple DRAMs. The self-refresh exit command or signal (CKE) is common across all DRAMs in the rank; thus, all devices respond to the command simultaneously. Given this simultaneous response, accessing data from an individual DRAM over a common data bus is not traditionally possible, since the DRAMs would contend for the data bus. Thus, when DRAMs share a common command/address (C/A) or control bus, they cannot also share a data bus. DRAMs that share a C/A or control bus traditionally have dedicated data paths to the host memory controller. However, on an NVDIMM, a dedicated data bus or dedicated C/A bus is not practical due to pin count and power constraints.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The following description includes discussion of figures having illustrations given by way of example of implementations of embodiments of the invention. The drawings should be understood by way of example, and not by way of limitation. As used herein, references to one or more "embodiments" are to be understood as describing a particular feature, structure, and/or characteristic included in at least one implementation of the invention.
Thus, phrases such as "in one embodiment" or "in an alternate embodiment" appearing herein describe various embodiments and implementations of the invention, and do not necessarily all refer to the same embodiment. However, they are also not necessarily mutually exclusive.

[0010] Figure 1 is a block diagram of an embodiment of a system with a controller that can execute device specific self-refresh commands.

[0011] Figure 2 is a block diagram of an embodiment of a DIMM (dual inline memory module) for a power protected memory system with centralized storage in which data is transferred via device specific self-refresh commands.

[0012] Figure 3 is a block diagram of an embodiment of a DIMM (dual inline memory module) for a power protected memory system with centralized storage in which data is transferred via device specific self-refresh commands.

[0013] Figure 4 is a block diagram of an embodiment of a power protected memory system with consolidated storage not on the NVDIMM (nonvolatile DIMM) in which a controller uses device specific self-refresh commands.

[0014] Figure 5 is a block diagram of an embodiment of a power protected memory system with centralized storage that uses device specific self-refresh commands to perform data transfer.
[0015] Figure 6 is a flow diagram of an embodiment of a process for using device specific self-refresh commands for nonvolatile backup of volatile memory.

[0016] Figure 7A is a block diagram of an embodiment of a register that enables a per device self-refresh mode.

[0017] Figure 7B is a block diagram of an embodiment of a register that stores a per device identifier for per device self-refresh mode.

[0018] Figure 8 is a timing diagram of an embodiment of per device backup to persistent storage.

[0019] Figure 9 is a block diagram of an embodiment of a system in which per memory device self-refresh commands can be implemented.

[0020] Figure 10 is a block diagram of an embodiment of a computing system in which a device specific self-refresh command can be implemented.

[0021] Figure 11 is a block diagram of an embodiment of a mobile device in which a device specific self-refresh command can be implemented.

[0022] Descriptions of certain details and implementations follow, including a description of the figures, which may depict some or all of the embodiments described below, as well as discussing other potential embodiments or implementations of the inventive concepts presented herein.

DETAILED DESCRIPTION

[0023] As described herein, a system enables memory device specific self-refresh entry and exit commands. When all memory devices on a shared control bus (such as all memory devices in a rank) that also share a data bus are in self-refresh, a memory controller can issue a device specific command with a self-refresh exit command and a unique memory device identifier to the memory device. The controller sends the command over the shared control bus, but only the selected, identified memory device will exit self-refresh while the other devices will ignore the command and remain in self-refresh.
The controller can then execute data access over the shared data bus with the specific memory device while the other memory devices are in self-refresh.

[0024] Reference to memory devices can apply to different memory types. Memory devices generally refer to volatile memory technologies. Volatile memory is memory whose state (and therefore the data stored on it) is indeterminate if power is interrupted to the device. Nonvolatile memory refers to memory whose state is determinate even if power is interrupted to the device. Dynamic volatile memory requires refreshing the data stored in the device to maintain state. One example of dynamic volatile memory includes DRAM (dynamic random access memory), or some variant such as synchronous DRAM (SDRAM). A memory subsystem as described herein may be compatible with a number of memory technologies, such as DDR3 (double data rate version 3, original release by JEDEC (Joint Electron Device Engineering Council) on June 27, 2007, currently on release 21), DDR4 (DDR version 4, initial specification published in September 2012 by JEDEC), DDR4E (DDR version 4, extended, currently in discussion by JEDEC), LPDDR3 (low power DDR version 3, JESD209-3B, August 2013 by JEDEC), LPDDR4 (LOW POWER DOUBLE DATA RATE (LPDDR) version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2 (WideIO2), JESD229-2, originally published by JEDEC in August 2014), HBM (HIGH BANDWIDTH MEMORY DRAM, JESD235, originally published by JEDEC in October 2013), DDR5 (DDR version 5, currently in discussion by JEDEC), LPDDR5 (currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), and/or others, and technologies based on derivatives or extensions of such specifications.

[0025] Descriptions herein referring to a "DRAM" can apply to any memory device that allows random access.
The memory device or DRAM can refer to the die itself and/or to a packaged memory product.

[0026] A system that enables device specific self-refresh exit (or per device exit from self-refresh) provides more possibilities for NVDIMM (nonvolatile dual inline memory module) implementations. While descriptions below provide examples with respect to DIMMs, it will be understood that similar functionality can be implemented in whatever type of system includes memory devices that share a control bus and a data bus. Thus, the use of a specific "memory module" is not necessary. In one embodiment, device specific exit from self-refresh enables a controller to cause a single DRAM to exit from self-refresh at a time from a common control bus.

[0027] Traditional DIMMs include RDIMMs (registered DIMMs) and LRDIMMs (load reduced DIMMs) to try to reduce the loading of the DIMM on a computing platform. The reduced loading can improve signal integrity of memory access and enable higher bandwidth transfers. On an LRDIMM, the data bus and control bus (e.g., command/address (C/A) signal lines) are fully buffered, where the buffers re-time and re-drive the memory bus to and from the host (e.g., an associated memory controller). The buffers isolate the internal buses of the memory device from the host. On an RDIMM, the data bus connects directly to the host memory controller. The control bus (e.g., the C/A bus) is re-timed and re-driven. Thus, the inputs are considered to be registered on the clock edge. In place of a data buffer, RDIMMs traditionally use passive multiplexers to isolate the internal bus on the memory devices from the host controller.

[0028] In contrast to traditional systems, with per device self-refresh commands, an RDIMM can be used for an NVDIMM implementation. Traditional DIMM implementations have a 72-pin data bus interface, which causes too much loading to implement an NVDIMM. LRDIMMs are traditionally used because they buffer the bus.
But by allowing only a selected DRAM or DRAMs to exit self-refresh while the other DRAMs remain in self-refresh, the interface can be serialized and the loading significantly reduced on the host. Thus, in one embodiment, an RDIMM can be employed as an NVDIMM.

[0029] Figure 1 is a block diagram of an embodiment of a system with a controller that can execute device specific self-refresh commands. System 100 illustrates one embodiment of a system with memory devices 120 that share a control bus (C/A (command/address) bus 112) and a data bus (data bus 114A shared among DRAMs 120 with addresses 0000:0111 and data bus 114B shared among DRAMs 120 with addresses 1000:1111). Memory devices 120 can be individually accessed with device specific self-refresh commands; thus, device specific self-refresh commands can be applied to individual DRAMs 120 and/or to groups of selected DRAMs 120. System 100 illustrates sixteen memory devices (0000:0111 on port A, and 1000:1111 on port B). In one embodiment, DRAMs 120 represent memory devices on a DIMM.

[0030] It will be understood that different implementations can have different numbers of memory devices (either more or fewer). In one embodiment, each memory device 120 of system 100 has a unique identifier (ID) or device ID (DID). In one embodiment, each memory device 120 coupled to a separate data bus has a unique DID, which can be the same as a DID of another memory device on a parallel or different memory bus. For example, memory devices 120 coupled to port B of RCD 110 and to data bus 114B could be numbered 0000:0111, similar to memory devices 120 of data bus 114A. As long as each memory device 120 on a common command and address bus (or control line) and data bus has a unique ID assigned to it, the system can generate device specific self-refresh commands.
With the 4-bit IDs illustrated, there are 16 possible unique IDs, which is one example; more or fewer bits can be used to address each device, depending on the implementation.

[0031] RCD 110 represents a controller for system 100. It will be understood that the controller represented by RCD 110 is different from a host controller or memory controller (not specifically shown) of a computing device in which system 100 is incorporated. Likewise, the controller of RCD 110 is different from an on-chip or on-die controller that is included on the memory devices 120. In one embodiment, RCD 110 is a registered clock driver (which can also be referred to as a registering clock driver). The registered clock driver receives information from the host (such as a memory controller) and buffers the signals from the host to the various memory devices 120. If all memory devices 120 were directly connected to the host, the loading on the signal lines would degrade high speed signaling capability. By buffering the input signals from the host, the host only sees the load of RCD 110, which can then control the timing and signaling to the memory devices 120. In one embodiment, RCD 110 is a controller on a DIMM to control signaling to the various memory devices.

[0032] RCD 110 includes interface circuitry to couple to the host and to memory devices 120. While not shown in specific detail, the hardware interface can include drivers, impedance termination circuitry, and logic to control operation of the drivers and impedance termination. The interfaces can include circuitry such as interfaces described below with respect to an interface between a memory device and a memory controller. The interface circuitry provides interfaces to the various buses described with respect to system 100.

[0033] In one embodiment, RCD 110 has independent data ports A and B.
For example, the memory devices may access independent channels, enabling the parallel communication of data on two different data buses 114. In one embodiment, all memory devices 120 in system 100 share the same data bus 114. In one embodiment, memory devices 120 are coupled to parallel data buses for purposes of signaling and loading. For example, a first data bus (e.g., data bus 114) can be the data bus coupled to RCD 110, which provides data from the host. A second data bus (e.g., data bus 116) can be the data bus coupled to a storage device. In one embodiment, the second data bus can be coupled directly to the host. Where data bus 116 is coupled directly to the host, it can provide reduced loading via multiplexers or other circuitry that enables serialization of the data from memory devices 120.

[0034] Memory devices 120 are illustrated having an H port coupled to the RCD, which can be a command and/or control driver. Memory devices 120 are also illustrated having an L port coupled for device specific control. The device specific control can serialize the data output, seeing that memory devices 120 can be activated one at a time. In one embodiment, memory devices 120 are activated one at a time by RCD 110. In one embodiment, RCD 110 activates one memory device 120 per shared control bus and data bus. Thus, to the extent system 100 includes multiple different data buses, multiple memory devices 120 can be activated, with an individual memory device 120 activated on each data bus.

[0035] In one embodiment, memory devices 120 include a register (not specifically shown in system 100) to store the DID. For example, memory devices 120 can store DID information in an MPR (multipurpose register), mode register, or other register. In one embodiment, system 100 assigns a unique ID to each memory device during initialization using PDA (per DRAM addressability) mode.
In one embodiment, a BIOS (basic input/output system) generates and assigns unique IDs during system initialization. In one embodiment, each memory device 120 of system 100 can be configured and enabled for a new mode, which is the device specific self-refresh control mode. In such a mode, each memory device 120 can match its unique DID to respond to self-refresh commands (such as a self-refresh exit signal (CKE)). In one embodiment, memory devices 120 are configured by the associated host via a mode register for a device specific self-refresh command mode. In such a mode, only the memory device with the matching ID will exit self-refresh, and the others will ignore the command and remain in self-refresh.

[0036] For example, consider that all memory devices 120 have been placed in self-refresh. RCD 110 can send a device specific SRX (self-refresh exit) command to DRAM 0000. Because C/A bus 112 is shared among memory devices 120, all memory devices sharing the bus will receive the SRX command. However, if they are enabled for device specific self-refresh commands, DRAMs 0001:1111 will ignore the command and remain in self-refresh, while only DRAM 0000 wakes from self-refresh. In one embodiment, C/A bus 112 is a single bus shared among all memory devices 120. In one embodiment, C/A bus 112 is separated as C/A bus 112A and C/A bus 112B corresponding to the separation of data bus 114. In one embodiment, C/A bus 112 can be a single bus whether data bus 114 is a single bus or separated into A and B ports.

[0037] In one embodiment, system 100 includes a common bidirectional 4-bit source synchronous data bus 114 (4 bits of data and a matched strobe pair) from RCD 110 to memory devices 120. In one embodiment, system 100 includes multiple common buses to mitigate loading, such as data bus 114A and data bus 114B. System 100 specifically illustrates two buses (A and B) as an example.
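The selective-exit behavior in the example above can be illustrated with a short simulation. This sketch is not taken from the described embodiments; all class and method names are hypothetical, and the DRAM state machine is reduced to a single self-refresh flag:

```python
# Minimal simulation of a device specific self-refresh exit (SRX) on a shared
# control bus. Class and method names are illustrative, not from the patent.

class Dram:
    def __init__(self, did):
        self.did = did               # unique device ID assigned at init
        self.in_self_refresh = True  # all devices start parked in self-refresh
        self.pda_sr_enabled = True   # device specific self-refresh mode enabled

    def on_control_bus(self, command, did):
        """Every device on the shared C/A bus sees every command."""
        if self.pda_sr_enabled and did != self.did:
            return                   # non-matching devices ignore the command
        if command == "SRX":
            self.in_self_refresh = False
        elif command == "SRE":
            self.in_self_refresh = True

class SharedControlBus:
    def __init__(self, devices):
        self.devices = devices

    def broadcast(self, command, did):
        for d in self.devices:       # shared bus: all devices receive it
            d.on_control_bus(command, did)

rank = [Dram(did) for did in range(8)]  # e.g. DIDs 0000..0111 on one data bus
bus = SharedControlBus(rank)
bus.broadcast("SRX", did=3)             # device specific SRX targeting DID 0011
awake = [d.did for d in rank if not d.in_self_refresh]
print(awake)                            # only the matching device exits
```

A subsequent `bus.broadcast("SRE", did=3)` would return the single awake device to self-refresh, leaving the rank fully parked again.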
In one embodiment, data buses 114 are terminated at either end of the bus segment to avoid signal reflections. In one embodiment, RCD 110 is a controller and a command issuer. In one embodiment, RCD 110 functions as a C/A register. RCD 110 can forward commands from the host. In one embodiment, RCD 110 can initiate sending of device specific self-refresh commands, without a direct command from the host.

[0038] In one embodiment, RCD 110 will drive a unique 4-bit ID on C/A bus 112 while issuing a self-refresh command. In one embodiment, RCD 110 will drive a unique 4-bit ID on data bus 114 while issuing a self-refresh command on C/A bus 112. It will be understood that for data transfer to/from a nonvolatile memory (e.g., "storage" as illustrated in system 100), the self-refresh command is a self-refresh exit to select a memory device for data access. Once the transfer is complete, RCD 110 can place the memory device back into self-refresh with a device specific self-refresh enter command (e.g., a self-refresh command with a DID). RCD 110 could alternatively place the memory device back into self-refresh with a general self-refresh enter command. In one embodiment, RCD 110 can retrieve the data to transfer to/from the nonvolatile storage for each volatile memory device 120 in succession by applying unique IDs while placing the memory devices with completed transactions back into self-refresh.

[0039] In one embodiment, when system 100 is implemented as an NVDIMM, the operation flow can occur in accordance with the following. In one embodiment, during platform initialization, BIOS code programs the unique DIDs into each memory device using PDA (per DRAM addressability) mode commands.
In one embodiment, to save data in response to detection of a power supply interruption, a memory controller (e.g., an integrated memory controller (iMC)) of the host can issue commands to cause the memory devices to flush I/O buffers into memory arrays of the memory device, and place all memory devices in self-refresh. An iMC is a memory controller that is integrated onto the same substrate as the host processor or CPU (central processing unit).

[0040] In one embodiment, RCD 110 selects an LDQ nibble of the memory device (e.g., a segment of data or DQ bits via the L port), and programs a per device self-refresh exit mode (which can be via command, via a mode register, or via other operation). In one embodiment, RCD 110 issues a self-refresh exit command with a target DID on the LDQ nibble. Only the memory device with the matching DID will exit self-refresh, and all other memory devices 120 on the same data bus 114 will remain in self-refresh. In one embodiment, RCD 110 issues read and/or write commands to the selected memory device 120 to execute the data transfer for the data access operation. In response to a detection of power failure, the operations will primarily be read operations to read data from memory devices 120 to write to storage. When power is restored, the operations may be primarily write operations to restore the data from storage to memory devices 120.

[0041] In one embodiment, when the read or write transaction(s) are complete or finished, RCD 110 places the selected memory device 120 back into self-refresh. RCD 110 can then repeat the process of selecting a specific memory device, causing it to exit from self-refresh, executing the data access operation(s), and putting the device back into self-refresh, until all data transfers are complete.
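The save flow just described (exit one device from self-refresh, transfer its contents, re-enter self-refresh, repeat) can be sketched as a simple loop. The helper names and callbacks below are illustrative assumptions, not part of the described hardware:

```python
# Sketch of the serialized save flow: wake one DRAM at a time, read its
# contents over the shared data bus, write them to nonvolatile storage, and
# park it back in self-refresh before moving to the next device.

def backup_rank(devices, read_dram, write_nv):
    """devices: DIDs sharing one control bus and one data bus.
    read_dram(did) -> bytes and write_nv(did, data) are supplied callbacks."""
    in_self_refresh = {did: True for did in devices}  # all parked initially
    for did in devices:
        in_self_refresh[did] = False  # device specific SRX to one DID
        # at most one device is awake, so the shared data bus has no contention
        assert sum(not v for v in in_self_refresh.values()) == 1
        data = read_dram(did)         # reads serialize over the shared bus
        write_nv(did, data)           # stored per DID so it can be restored
        in_self_refresh[did] = True   # device specific SRE: back to self-refresh
    return in_self_refresh

store = {}
result = backup_rank(
    devices=[0, 1, 2, 3],
    read_dram=lambda did: bytes([did] * 4),  # stand-in for DRAM contents
    write_nv=lambda did, data: store.__setitem__(did, data),
)
print(sorted(store))         # every device backed up exactly once
print(all(result.values()))  # and all devices end parked in self-refresh
```

The restore flow on power-up would be the mirror image: wake each device in turn and issue writes from the saved image instead of reads.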
Thus, the per device self-refresh control can enable NVDIMMs with native interfaces to have a pin, component count, and power efficient multi-drop bus to move data from memory devices 120 to nonvolatile memory or nonvolatile storage.

[0042] Traditionally, only LRDIMMs can be used as NVDIMMs. DIMMs presently are designed with a 72-bit data bus. Connecting the 72-bit data bus to a single nonvolatile storage interface is very inefficient and not practical due to pin count and loading. Thus, RDIMMs, which are not buffered, are impractical for traditional NVDIMM implementations. In contrast, in an LRDIMM the bus goes through the buffer, and the buffer can gate the data transfer to and/or from the host, which reduces loading, and can enable a narrower interface. Alternatively, the buffer can serialize the data transfer or I/O (input/output) onto an independent bus connecting to a nonvolatile storage subsystem. Traditionally, during a power failure the 72-bit memory data bus is isolated from the system and connected to the nonvolatile storage (which can also be referred to as a nonvolatile memory (NVM)) subsystem.

[0043] In accordance with system 100, RDIMMs can provide a sub-bus such as data buses 114 and 116 where the devices can be addressed and accessed serially via device specific commands. The ability to selectively, device by device, cause memory devices 120 to enter and exit self-refresh allows the use of a serialized bus interface to storage from memory devices 120. Such a sub-bus is more pin efficient than trying to route each bit of the 72-bit data bus. Once the data is serialized, it can be transferred to nonvolatile storage, with functionality that is not generally distinguishable between an RDIMM or LRDIMM NVDIMM implementation.

[0044] Thus, as described herein, NVDIMMs can have a shared local data bus, where the data is accessed from each memory device (e.g., DRAM (dynamic random access memory)) individually.
Addressing each device in sequence serializes the data on the data bus, which allows efficient storing and restoring of the contents of the volatile memory devices to/from the nonvolatile storage media. In one embodiment, device specific self-refresh control allows individual control over memory devices on a DIMM, which allows data access operations (e.g., read, write) to be targeted to a single memory device, while keeping the other memory devices in a self-refresh state to avoid data contention on the data bus. Additionally, because all memory devices are in a low power state except the one or ones transferring data to/from the nonvolatile storage, such an implementation improves power savings.

[0045] In one embodiment, the device specific self-refresh control leverages existing PDA mode commands available in certain memory technology implementations. Such PDA modes are not necessarily required. The memory devices can be addressed in another way, such as preconfiguring the devices or setting a DID based on location in the memory module. In one embodiment, the computing platform (e.g., via BIOS or other control) can assign a unique identifier (e.g., a unique device identifier or DID) to each memory device. In one embodiment, self-refresh commands (e.g., SRE (self-refresh entry), SRX (self-refresh exit)) can be issued with a specific DID. In one embodiment, such commands can be considered PDA SR (per DRAM addressability self-refresh) commands. When the memory devices are configured in PDA mode, they will only execute on commands with their specific DID. Thus, only the memory device that matches the unique DID will respond to the self-refresh entry/exit command/signal, and the other devices will remain in self-refresh. With a single device per bus active, the controller can control the exchange of data with nonvolatile storage while avoiding contention on the shared data bus.
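The description does not define a wire format for PDA SR commands. Purely as a hypothetical illustration, a command word could carry a command code alongside the 4-bit DID, with each device comparing the DID field against its own identifier:

```python
# Hypothetical encoding of a per-device self-refresh command: a command code
# in the upper bits and a 4-bit DID in the lower bits. The patent does not
# specify a wire format; this only illustrates the DID-matching rule.

SRE, SRX = 0x1, 0x2  # illustrative command codes, not from any specification

def encode(cmd, did):
    assert 0 <= did < 16       # 4-bit DID: 16 possible unique IDs
    return (cmd << 4) | did

def matches(word, my_did):
    """A device executes the command only if the DID field matches its own."""
    return (word & 0xF) == my_did

word = encode(SRX, 0b0101)
print([did for did in range(16) if matches(word, did)])  # only DID 5 responds
```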
[0046] On a typical DRAM DIMM implementation of system 100, memory devices 120 would be organized as ranks, where each rank includes multiple DRAMs 120. Traditionally, each rank shares a control bus and a data bus. Thus, self-refresh exit commands or signals (e.g., CKE) are common across all the memory devices 120 in the rank, and all memory devices 120 will respond to the command simultaneously. Given this simultaneous response, accessing data from an individual DRAM over a common data bus is not traditionally possible due to bus contention. However, in accordance with system 100, memory devices 120 can be organized in a traditional implementation, but the individual DRAMs can be accessed one at a time without bus contention.

[0047] Figure 2 is a block diagram of an embodiment of a DIMM (dual inline memory module) for a power protected memory system with centralized storage in which data is transferred via device specific self-refresh commands. System 200 provides one example of an NVDIMM in accordance with an embodiment of system 100. In one embodiment, NVDIMM side 204 is a "front" side of NVDIMM 202, and NVDIMM side 206 is a "back" side of NVDIMM 202. In one embodiment, front side 204 includes multiple DRAM devices 220. It will be understood that the layout is for illustration only, and is not necessarily representative of an actual implementation. In one embodiment, back side 206 includes NAND storage device 230 to provide nonvolatile storage for backing up DRAMs 220, and FPGA (field programmable gate array) 240 to control transfer of data for backup to nonvolatile storage 230. In one embodiment, NVDIMM 202 is an LRDIMM (buffers not specifically illustrated). In one embodiment, NVDIMM 202 is an RDIMM.

[0048] In one embodiment, NVDIMM 202 includes controller 222, which can be or include an RCD in accordance with RCD 110 of system 100. In one embodiment, FPGA 240 can be programmed to perform at least some of the functions of an RCD in accordance with system 100.
FPGA 240 primarily implements data transfer logic for NVDIMM 202. In one embodiment, with an RDIMM, the transfer logic can serially transfer the contents of DRAMs 220 to backup NAND 230. Back side 206 of NVDIMM 202 illustrates battery connector 250 to interface with a super capacitor or battery to remain powered when power supply power is interrupted. The external supply can provide sufficient time to transfer data from DRAMs 220 to NAND 230 and/or to maintain the DRAMs powered in self-refresh when power to NVDIMM 202 is interrupted.

[0049] NVDIMM 202 includes connector 210 to couple to a host. For example, NVDIMM 202 can interface through a memory expansion slot that matches with connector 210. Connector 210 can have specific spacing of pins to match with an interface on a computing device motherboard. While not specifically shown, it will be understood that NVDIMM 202 includes signal lines routed from connector 210 to DRAMs 220 and controller 222 to interconnect controller 222 and DRAMs 220 to the host.

[0050] NVDIMM 202 can include multiple parallel data buses as illustrated in system 100. DRAMs 220 share a control line and data bus. DRAMs 220 couple to NAND 230 via at least one data bus, to enable transfer of memory contents. Controller 222 couples to the control line and shared data bus. In one embodiment, controller 222 and/or FPGA 240 includes logic or circuitry to send device specific self-refresh commands, such as an SRX command, including a command and a device specific identifier. The device specific self-refresh command causes only a specified DRAM 220 to respond to the command, while the other DRAMs ignore the command. System 200 specifically illustrates an embodiment wherein nonvolatile storage is disposed on or located directly on the NVDIMM. In response to detection of power interruption, in one embodiment, controller 222 serially selects DRAMs 220 in turn to transfer data to NAND 230.
Controller 222 can place DRAMs 220 in self-refresh and individually wake them from refresh in turn with device specific refresh commands.

[0051] Figure 3 is a block diagram of an embodiment of a DIMM (dual inline memory module) for a power protected memory system with centralized storage in which data is transferred via device specific self-refresh commands. System 300 provides one example of an NVDIMM in accordance with an embodiment of system 100. In one embodiment, NVDIMM side 304 is a "front" side of NVDIMM 302 and NVDIMM side 306 is a "back" side of NVDIMM 302. Front side 304 is illustrated to include multiple DRAM devices 320. Back side 306 also includes DRAM devices 320, in contrast to traditional protection systems such as illustrated in the configuration of system 200.

[0052] NVDIMM 302 can be an LRDIMM (buffers not specifically illustrated) or an RDIMM. By removing the persistent storage from NVDIMM 302 itself, and centralizing the storage device in centralized storage 350, system 300 enables the backing storage media or storage device 350 to be shared across multiple NVDIMMs. It will be understood that centralized storage 350 for backup can be any nonvolatile media. One common medium in use is NAND flash, which can be contained on the platform or stored as a drive in a drive bay, for example.

[0053] As shown in system 300, side 306 includes an I/O (input/output) initiator 330, which can represent a microcontroller and/or other logic on NVDIMM 302. In one embodiment, I/O initiator 330 manages I/O to transfer the contents of DRAM devices 320 from NVDIMM 302 to centralized storage 350. Side 306 also illustrates connector 340 to interface with super capacitor 344 to remain powered by the super-cap when power supply power is interrupted.

[0054] Connector 310 of NVDIMM 302 represents a connector to enable NVDIMM 302 to connect to a system platform, such as a DIMM slot.
In one embodiment, centralized storage 350 includes connector 352, which enables the centralized storage to connect to one or more I/O interfaces or I/O buses that connect to DRAMs 320. More particularly, centralized storage 350 can include interfaces to one or more data buses coupled to DRAMs 320 of NVDIMM 302. Thus, DRAMs 320 can transfer their contents to centralized storage 350 on detection of a power failure. In one embodiment, super-cap 344 includes connector 342 to interface super-cap 344 to connector 340 of NVDIMM 302 and any other PPM (power protected memory) DIMMs in system 300. In one embodiment, I/O initiator 330 is control logic on NVDIMM 302 that coordinates the transfer of data from DRAMs 320 to centralized storage 350 in conjunction with operation by a microcontroller. In one embodiment, I/O initiator 330 is incorporated in one or more of controllers 322 or 324.

[0055] Controllers 322 and 324 represent examples of logic or circuitry to manage the transfer of data between DRAMs 320 and centralized storage 350. In one embodiment, NVDIMM 302 only includes a single controller 322. In one embodiment, memory devices 320 on front side 304 are controlled by controller 322, and memory devices 320 on back side 306 are controlled by controller 324. Controllers 322 and 324 can represent RCDs. In an embodiment where multiple controllers 322 and 324 are used, each DRAM side can have multiple parallel data paths to centralized storage 350. It will be understood that fewer paths involve less cost and less routing and other hardware, while more paths can increase the bandwidth and/or throughput capacity of NVDIMM 302, such as enabling faster transfer from memory devices 320 in the event of a power failure.

[0056] NVDIMM 302 can include multiple parallel data buses as illustrated in system 100. DRAMs 320 share a control line and data bus.
DRAMs 320 couple to external centralized storage 350 via at least one data bus, to enable transfer of memory contents to nonvolatile storage. Controllers 322 and/or 324 couple to the control line and shared data bus of DRAMs 320. In one embodiment, controller 322 and/or controller 324 includes logic or circuitry to send device specific self-refresh commands, such as an SRX command, including a command and a device specific identifier. The device specific self-refresh command causes only a specified DRAM 320 to respond to the command, while the other DRAMs ignore the command. System 300 specifically illustrates an embodiment wherein nonvolatile storage is disposed or located off the NVDIMM. In response to detection of power interruption, in one embodiment, controller 322 and/or controller 324 serially selects DRAMs 320 in turn to transfer data to centralized storage 350. Controller 322 and/or controller 324 can place DRAMs 320 in self-refresh and individually wake them from refresh in turn with device specific refresh commands.

[0057] Figure 4 is a block diagram of an embodiment of a power protected memory system with consolidated storage not on the NVDIMM (nonvolatile DIMM) in which a controller uses device specific self-refresh commands. System 400 provides one example of a system in accordance with system 100, and can use NVDIMMs in accordance with an embodiment of systems 200 and/or 300. System 400 includes centralized or consolidated storage 450. By moving the storage media off the NVDIMM (e.g., DIMMs 422 and 424), multiple NVDIMMs can share storage capacity, which lowers the overall cost of the NVDIMM solution.

[0058] In one embodiment, DIMMs 422 and 424 are NVDIMMs, or DIMMs selected for power protection. DIMMs 422 and 424 include SATA ports 432 to couple to mux 442 for transferring contents to storage 450 in the event of a power failure.
In one embodiment, SATA ports 432 couple to data buses on the DIMMs that are shared among multiple memory devices in accordance with what is described above. In one embodiment, SATA ports 432 also enable storage 450 to restore the image on DIMMs 422 and 424 when power is restored. In one embodiment, system 400 includes SPC (storage and power controller) 440 to control the copying of contents from NVDIMMs 422 and 424 to storage 450 on power failure, and to control the copying of contents from storage 450 back to NVDIMMs 422 and 424 upon restoration of power. In one embodiment, SPC 440 can represent a storage controller with storage media behind it to act as off-NVDIMM storage.

[0059] SPC 440 includes mux controller 444 and mux 442 to provide selective access by the NVDIMMs to storage 450 for purposes of backup and restoration of the backup. In one embodiment, SPC 440 is implemented on DIMMs 422 and 424. In one embodiment, SPC 440 is or includes an RCD or comparable control logic (not specifically shown) to enable the use of device specific self-refresh commands to individual memory devices on DIMMs 422 and 424. It will be understood that the pathway to transfer the data from DIMMs 422 and 424 to storage 450 can be a separate connection from the connection typically used on the platform to access the storage in the event of a page fault at a memory device. In one embodiment, the pathway is a separate, parallel pathway. In one embodiment, the memory can be restored when power is returned via the standard pathway. In one embodiment, the memory is restored from storage by the same pathway used to back the memory up. For example, CPU 410 represents a processor for system 400, which accesses memory of DIMMs 422 and 424 for normal operation via DDR (double data rate) interfaces 412. Under normal operating conditions, a page fault over DDR 412 would result in CPU 410 accessing data from system nonvolatile storage, which can be the same or different storage from storage 450.
The pathway to access the system storage can be the same or different from the pathway from DIMMs 422 and 424 to storage 450 for backup.

[0060] System 400 includes super-cap 460 or comparable energy storage device to provide temporary power when system power is lost. Super-cap 460 can be capable of holding an amount of energy that will enable the system to hold a supply voltage at a sufficient level for a sufficient period of time to allow the transfer of contents from the volatile memory on a system power loss condition. The size will thus be dependent on system configuration and system usage. System 400 includes centralized storage 450, which is powered by super-cap 460 for backup.

[0061] In one embodiment, mux 442 of SPC 440 is multiplexing logic to connect multiple different channels of data to storage 450. In one embodiment, the selection of mux 442 operates in parallel to the device specific ID of each memory device, and can thus select each memory device that has been awoken from self-refresh to provide access to the shared data bus for transfer while the other memory devices remain in self-refresh. In one embodiment, mux controller 444 includes a sequencer or sequencing logic that allows multiple DIMMs 422 and 424 to share the storage media. In one embodiment, sequencing logic in an SPC controller ensures that only one DIMM is able to write to the storage media at a given time.

[0062] In one embodiment, on system power failure, SPC 440 receives a signal indicating power failure, such as via a SAV signal. In response to the SAV signal or power failure indication, in one embodiment, SPC 440 arbitrates requests from I/O initiator circuitry on the DIMMs to gain access to the storage controller to start a save operation to transfer memory contents to storage 450. In one embodiment, sequencing logic of mux controller 444 provides access to one DIMM at a time.
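The one-DIMM-at-a-time sequencing of mux controller 444 can be sketched in software as follows. This is a hedged illustration only: the class and method names are hypothetical, and the actual sequencer would be hardware logic rather than Python.

```python
# Hypothetical sketch of one-DIMM-at-a-time sequencing: each DIMM is granted
# the storage path in turn, so only one DIMM can write to the storage media
# at any given time. All names here are illustrative, not from the patent.

class MuxSequencer:
    """Grants exclusive storage access to one DIMM at a time."""
    def __init__(self, dimm_ids):
        self.pending = list(dimm_ids)  # DIMMs still waiting to save
        self.active = None             # DIMM currently connected through the mux

    def grant_next(self):
        # Connect the next pending DIMM; only it may write to storage.
        self.active = self.pending.pop(0) if self.pending else None
        return self.active

    def save_complete(self):
        # The active DIMM relinquishes the mux, allowing the next grant.
        self.active = None

# Example: three DIMMs save strictly in sequence.
seq = MuxSequencer(["DIMM0", "DIMM1", "DIMM2"])
order = []
while (dimm := seq.grant_next()) is not None:
    order.append(dimm)   # the DIMM's transfer to storage would occur here
    seq.save_complete()
```

The loop stands in for the arbitration and save operations; the property being illustrated is that grants are strictly serialized.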
Where arbitration is used, the DIMM that wins arbitration starts its save operation.

[0063] In one embodiment, once a DIMM completes its save, it relinquishes access to mux 442, which allows a subsequent DIMM to win its arbitration. Super-cap 460 provides sufficient power to allow all provisioned DIMMs 422 and 424 to complete their save operations. In one embodiment, each DIMM save operation is tagged with metadata that allows SPC 440 to associate the saved image with the corresponding DIMM. In one embodiment, on platform power on, DIMMs 422 and 424 can again arbitrate for access to storage 450 to restore their respective saved images. The flow of transferring the data from DIMMs 422 and 424 can be in accordance with an embodiment of what is described above with respect to system 100. Namely, each memory device of the DIMM can be individually awoken from self-refresh to perform data access over a shared data bus, and then put back into self-refresh. With device specific self-refresh control, the controller can serialize the data from the memory devices to the nonvolatile storage media.

[0064] The centralized storage with the controller enables Type 1 compliant NVDIMM (nonvolatile dual inline memory module) designs (energy backed byte accessible persistent memory) with standard DIMM capacity, and reduced footprint on the computing system platform. It will be understood that super capacitor (which may be referred to herein as a "super-cap") footprint does not increase linearly with increased energy storage capacity. Thus, doubling the capacitor capacity does not double the capacitor size. Therefore, a protection system with a centralized larger capacity super-cap can provide an overall reduction in protection system size. Additionally, centralized persistent storage can allow the DIMMs to have standard memory device (such as DRAM (dynamic random access memory)) configurations, which can allow for NVDIMMs that have standard DIMM capacities.
In one embodiment, the centralized storage can be implemented in SATA storage that would already be present in the system (e.g., by setting aside a protection partition equal to the size of volatile memory desired to be backed up). The amount of memory to be backed up can then be programmable.

[0065] When power supply power goes down or is lost or interrupted, a protection controller can selectively connect the memory portion(s) selected for backup, and transfer their contents while the super-cap powers the memory subsystem (and the storage used for persistent storage of the memory contents) during the data transfer. In one embodiment, the backup storage is a dedicated SATA SSD (solid state storage) on the platform. In one embodiment, the backup storage is part of SATA storage already available on the platform.

[0066] In one embodiment, the controller is a controller on each DIMM. In one embodiment, the controller is coupled to a programmable SATA multiplexer, which can selectively connect multiple DRAMs or other memory devices to one or more SATA storage devices (e.g., there can be more than one storage pathway available to transfer data). In one embodiment, the controller couples to each memory device via an I2C (inter-integrated circuit) interface. The controller is coupled to the central super-cap logic to receive indication of when power supply power is interrupted. The controller includes logic to control a programming interface to implement the power protected memory functionality. The programming interface can couple to the memory devices to select them for transfer. In one embodiment, the programming interface enables the controller to cause the memory devices to select a backup port for communication. In one embodiment, the programming interface connects to the programmable SATA multiplexer to select how and when each memory device connects.
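The power-failure save flow just described can be sketched as follows. This is a minimal illustration under stated assumptions: the function and callback names are hypothetical, and real transfers would pass through the programmable SATA multiplexer to the storage device while the super-cap supplies power.

```python
# Hedged sketch of the save flow on a power-fail indication: the controller
# selects each protected memory region in turn, connects it through the mux,
# and transfers its contents to backup storage. All names are hypothetical.

def save_on_power_failure(regions, mux_select, transfer_to_storage):
    """regions: ids of memory portions selected for backup.
    mux_select(region): connect the region's port through the multiplexer.
    transfer_to_storage(region): copy contents; returns bytes moved."""
    saved = {}
    for region in regions:  # regions share one storage path, so go in turn
        mux_select(region)
        saved[region] = transfer_to_storage(region)
    return saved

# Example with stub callbacks standing in for real mux and SATA hardware.
selected = []
result = save_on_power_failure(
    ["DIMM-A", "DIMM-B"],
    mux_select=selected.append,
    transfer_to_storage=lambda r: 4096,  # pretend each region moved 4 KiB
)
```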
The controller can be referred to as a PPM-SPC (power protected memory storage and power controller).

[0067] Figure 5 is a block diagram of an embodiment of a power protected memory system with centralized storage that uses device specific self-refresh commands to perform data transfer. In one embodiment, system 500 illustrates a controller architecture to provide NVDIMM functionality or an equivalent or derivative of NVDIMM. For purposes of simplicity herein, NVDIMM functionality refers to the capability to back up volatile memory devices. Controller 510 represents an SPC or PPM-SPC. In one embodiment, controller 510 implements PDA self-refresh control to individual DRAMs of power protected DIMMs.

[0068] In one embodiment, controller 510 includes microcontroller 512, programmable multiplexer (mux) logic 514, super capacitor charging and charging level check logic 520, regulator 516, and I2C controllers or other communication controllers (which can be part of microcontroller 512). System 500 includes centralized super capacitor (super-cap) 522 to provide power when platform power from a power supply is interrupted. The power supply is illustrated as the line coming into controller 510 that is labeled "power supply 12V." Controller 510 can charge super-cap 522 from the power supply while the power supply power is available. It will be understood that while shown as a 12V power supply, it is one example illustration and the power supply can provide any voltage level appropriate for charging a backup energy source. Logic 520 enables controller 510 to charge super-cap 522 and monitor its charge level. Logic 520 can detect when there is an interruption in power supply power, and allow energy from super-cap 522 to flow to regulator 516. Thus, super-cap 522 provides power in place of the power supply when power is interrupted to system 500.

[0069] Regulator 516 can provide power to controller 510 and to the connected DIMMs.
Regulator 516 can provide such power based on power supply power when available, and based on energy from super-cap 522 when power supply power is not available, or falls below a threshold input used for regulation. The power supply power is power provided by a hardware platform in which system 500 is incorporated. As illustrated, regulator 516 provides power to microcontroller 512 (and to the rest of controller 510), as well as providing auxiliary power to DIMMs. In one embodiment, the auxiliary power to the DIMMs is only used by the DIMMs when power supply power is interrupted. While not specifically shown in system 500, SATA drives 532 and 534 can likewise be powered from power supply power when available, and are powered from super-cap 522 when power supply power is interrupted. In one embodiment, SATA drives 532 and 534 are powered directly from super-cap 522, and not through regulator 516. In one embodiment, regulator 516 powers the SATA drives.

[0070] When the hardware platform of which system 500 is a part provides power via power supply 12V, controller 510 and microcontroller 512 can be powered by the platform. In one embodiment, microcontroller 512 monitors the charging level of super-cap 522. In one embodiment, the platform BIOS (basic input/output system) can check the super capacitor charge level by reading microcontroller 512 through an I2C bus or other suitable communication connection. In one embodiment, the BIOS can check the charging level and report to the host OS (operating system) that controls the platform operation. The BIOS can report to the host OS through an ACPI (advanced configuration and power interface) mechanism to indicate to the OS if the NVDIMM has enough charge to save the data on power failure.

[0071] In one embodiment, the controller system of system 500 can be implemented in accordance with RCD 110 of system 100. For example, microcontroller 512 can implement the RCD functionality.
SATA mux 514 can be connected to the RCD to provide access to SATA SSDs 532 and 534 from the memory devices. Microcontroller 512 can send device specific self-refresh commands in one embodiment.

[0072] In one embodiment, the system platform for system 500 provides a power supply monitoring mechanism, by which controller 510 receives an indication of whether the power supply power is available. Microcontroller 512 can control the operation of logic 520 based on whether there is system power. In one embodiment, microcontroller 512 receives a SAV# signal asserted from the host platform when power supply power fails. In one embodiment, if the platform generates a SAV# signal assertion, the PPM DIMMs that receive the signal can enter self-refresh mode. In one embodiment, when controller 510 (e.g., a PPM-SPC) receives the SAV# assertion, microcontroller 512 can select a DIMM port (e.g., P[1:7]) in SATA mux 514. Microcontroller 512 can also inform the selected PPM DIMM through I2C (e.g., C[1:3]) to start saving its memory contents. In one embodiment, controller 510 includes one I2C port per memory channel (e.g., C1, C2, C3). Other configurations are possible with different numbers of I2C ports, different numbers of channels, or a combination. In one embodiment, controller 510 includes an LBA (logical block address) number of an SSD to store to. In one embodiment, the PPM DIMM saves the memory contents to a SATA drive, e.g., SATA SSD 532 or SATA SSD 534, connected to S1 and S2, respectively, of SATA mux 514. In one embodiment, controller 510 polls the PPM DIMM to determine if the transfer is completed.

[0073] In one embodiment, programmable SATA mux 514 allows mapping of DIMM channels to SATA drives 532 and 534 in a flexible way. When SATA mux 514 includes flexible mux logic, it can be programmed or configured based on how much data there is to transfer from the volatile memory, and how much time it will take to transfer.
Additionally, in one embodiment, microcontroller 512 can control the operation of SATA mux 514 based on how much time is left to transfer (e.g., based on determining the count of a timer started when power supply power was detected as interrupted). Thus, mux 514 can select DIMMs based on how much data there is to transfer and how much time there is to transfer it. As illustrated, SATA mux 514 includes 7 channels. There can be multiple DIMMs per channel. The size of the bus can determine how many devices can transfer concurrently. While SATA storage devices 532 and 534 are illustrated, in general there can be a single storage device, or two or more devices. In one embodiment, SATA storage devices 532 and 534 include storage resources that are dedicated to memory backup, such as being configured as part of a PPM system.

[0074] SATA storage devices 532 and 534 include centralized storage resources, rather than a storage resource available for only a single DIMM. Wherever located, multiple DIMMs can store data to the same storage resources in system 500. In one embodiment, SATA storage devices 532 and 534 include storage resources that are part of general purpose storage in the computing system or hardware platform in which system 500 is incorporated. In one embodiment, SATA storage devices 532 and 534 include nonvolatile storage resources built into a memory subsystem. In one embodiment, SATA storage devices 532 and 534 include nonvolatile storage resources outside of the memory subsystem.

[0075] Additional flexibility can be provided through the use of device specific self-refresh commands to individual DRAMs or memory devices on a DIMM or other memory module. With device specific commands, system 500 can cause memory devices to exit self-refresh while other devices remain in self-refresh. In addition to controlling data bus collisions, such an operation keeps all memory devices in a low power self-refresh state unless they are transferring data.
Thus, the data transfer is more power efficient because only selected memory device(s) will be active at a time. The waking and transfer operations can be in accordance with any embodiment described herein.

[0076] Once the transfer is completed from volatile memory to nonvolatile storage, in one embodiment, controller 510 informs the selected power protected DIMM(s) to power down. In one embodiment, only one PPM DIMM is powered up at a time, and controller 510 can select each DIMM in sequence to start saving its contents. The process can continue until all PPM DIMM contents are saved. In one embodiment, microcontroller 512 can be programmed during boot as to which DIMMs to power protect and which DIMMs will not be saved. Thus, the system can provide flexibility to allow for optimizing the storage as well as the power and time spent transferring contents. Programming in the host OS can save more critical elements to the DIMMs selected for backup, assuming not all memory resources will be backed up.

[0077] As illustrated in system 500, a PPM memory system can include super-cap 522 as a backup energy source coupled in parallel with the platform power supply. Super-cap 522 can provide a temporary source of energy when power from the platform power supply is interrupted. In one embodiment, super-cap 522 is a centralized energy resource, which can provide backup power to multiple DIMMs, instead of being dedicated to a single DIMM. System 500 includes one or more SATA storage devices (such as 532 and 534). Controller 510 interfaces with a memory network of volatile memory devices. Controller 510 can detect that the platform power supply is interrupted, which would otherwise power the memory devices.
In response to detection of the power interruption, controller 510 can selectively connect the memory devices to storage devices 532 and/or 534 to transfer contents of selected memory devices to the nonvolatile storage.

[0078] In one embodiment, SATA mux 514 can enable controller 510 to selectively connect memory devices in turn to SATA storage devices 532 and 534. Thus, for example, each memory device may be provided a window of time dedicated to transferring its contents to the centralized storage. In one embodiment, the order of selection is predetermined based on system configuration. For example, the system can be configured beforehand to identify which memory resources hold the most critical data to back up, and order the backup based on such a configuration. Each memory device may be selectively able to enter and exit self-refresh with device specific commands. Such a configuration allows the host OS to store data in different memory locations based on whether it will be backed up or not.

[0079] Figure 6 is a flow diagram of an embodiment of a process for using device specific self-refresh commands for nonvolatile backup of volatile memory. Process 600 illustrates operations for providing device specific self-refresh control, and can be in accordance with embodiments of systems described above. In one embodiment, a system includes an RCD or controller or other control logic to provide device specific commands to the memory devices.

[0080] In one embodiment, during initialization of a memory subsystem on a computing platform, the computing platform assigns a unique device ID to memory devices that share a control bus and a data bus, 602. The assignment of the unique device ID enables device specific self-refresh commands to the device. In one embodiment, the unique device ID can be in accordance with an ID assigned for other PDA operations. A computing system detects a loss of system power supplied from a power supply, 604.
Without power, the system will shut down. In one embodiment, the loss of system power causes a controller on the computing system platform to initiate a timer and power down platform subsystems. In one embodiment, a controller places all memory devices in self-refresh, 606. In one embodiment, in conjunction with the placing of all memory devices in self-refresh, the controller can place the memory devices in PDA mode. In one embodiment, the system flushes I/O buffers of the memory devices back to the memory core, 608.

[0081] In one embodiment, a controller selects a memory device port that has a common data bus connected to the memory devices to use for transferring data from the volatile memory devices to nonvolatile storage, 610. The controller identifies a memory device for nonvolatile storage transfer, 612. In the illustrated example, when system power loss is detected, the transfer reads out data contents to be written to nonvolatile storage. It will be understood that upon detection of restoring system power, a similar process can be executed to write data contents back to the volatile memory device from nonvolatile storage. In one embodiment, the controller selects the memory devices in order of device ID. Other orders can be used. In one embodiment, identifying the memory device for nonvolatile storage transfer can include selecting a subset of memory devices, such as devices on different data buses. In one embodiment, the same controller controls operations on multiple parallel buses. In one embodiment, different controllers control operations on separate parallel buses.

[0082] The controller sends a device specific ID and a self-refresh command on a shared bus, 614. The selected memory device identifies its device ID and exits self-refresh, while the other memory devices remain in self-refresh, 616. The controller manages the transfer of data contents between the selected volatile memory device and nonvolatile storage, 618.
In one embodiment, when the data access transfer operation(s) are complete, the controller can place the selected memory device back in self-refresh, 620. In one embodiment, placing the selected memory device back in self-refresh includes sending a general self-refresh command to the memory devices. In one embodiment, placing the selected memory device back in self-refresh includes sending a device specific self-refresh entry command to the selected memory device.

[0083] When the data access operation transfer is complete, the controller can determine if there are additional memory devices to back up or restore, 622. If there are more devices, 624 YES branch, the controller selects the next memory device and repeats the process. The controller can select through every device to transfer contents in turn. If there are no more devices, 624 NO branch, the controller can power down the memory subsystem in the case of power loss, 626, or restore standard operation in the case of restoring data contents. In one embodiment, the operations of process 600 occur in parallel on parallel data buses.

[0084] Figure 7A is a block diagram of an embodiment of a register that enables a per device self-refresh mode. Register 710 illustrates one example of a mode register (MRx) or a multipurpose register (MPRy) to store a setting that enables per device self-refresh commands. Thus, address Az represents one or more bits to set to enable the per device self-refresh commands. In one embodiment, Az represents a bit that enables per DRAM addressability (PDA). Thus, a system can leverage existing PDA configuration to also enable PDA mode self-refresh, with different IDs assigned to memory devices that share a data bus and control bus. When not enabled (e.g., Az=0), all memory devices can respond to self-refresh commands.
When enabled (e.g., Az=1), only the memory device identified by an ID will respond to the self-refresh command(s), and other memory devices will ignore the commands.

[0085] While shown as a register setting, it will be understood that in one embodiment, per device self-refresh can be accomplished with command encoding, such as by providing address information with the command. A self-refresh command (e.g., SRE and SRX for DDR DRAMs) may not include address information. However, a control bit enabled with the self-refresh command can trigger a memory device to decode address information to determine if it is selected for the command or not.

[0086] Figure 7B is a block diagram of an embodiment of a register that stores a per device identifier for per device self-refresh mode. Register 720 illustrates one example of a mode register (MRx) or a multipurpose register (MPRy) to store a device specific ID (DID). The DID can enable per device self-refresh commands. Thus, address bits for Az (illustrated as bits Az[3:0]) can represent bits to store an address for the memory device. In one embodiment, addresses can be assigned in the range of [0000:1111]. Other numbers of bits and address ranges can be used, depending on the configuration of the system. In one embodiment, a memory device tests a DID received with a self-refresh command against the identifier stored in register 720 to determine whether the self-refresh command applies to the memory device or not. The memory device can ignore commands that have an identifier different from what is stored in register 720.

[0087] Figure 8 is a timing diagram of an embodiment of per device backup to persistent storage. Timing diagram 800 provides one example illustration of a possible flow of operation. Diagram 800 is to be understood as a general example, and is not necessarily representative of a real system. It will also be understood that a clock signal is intentionally left off from diagram 800.
The timing diagram is intended to show a relationship between operations, more than specific or relative timing of operations or events. The transfer times will be understood to be much longer than the command timings. Also, it will be understood that data transfers will correspond to commands, which are not specifically shown.

[0088] Power signal 810 represents system power to the memory subsystem. At some point in time, power is interrupted, and a detection signal, detect 820, can be triggered. In one embodiment, detect 820 is set as a pulse. In another embodiment, detect 820 can be asserted for as long as the power is interrupted and before the system is powered down. In response to detecting the interruption of power 810, backup power can be provided (not specifically shown).

[0089] C/A signal 830 represents a command/address signal line or bus. DRAM 000 signal 840 represents the operation of DRAM 000. DRAM 001 signal 850 represents the operation of DRAM 001. DRAM 010:111 signal 860 represents the operation of the other DRAMs 010:111. Data signal 870 represents activity on a data bus shared among DRAMs 000:111. It will be understood that while only 8 DRAMs are represented in diagram 800, more or fewer DRAMs could share a data bus. For all of signals 830, 840, 850, 860, and 870, the state of the signal lines is not considered relevant to the discussion of device specific self-refresh commands, and is illustrated as a Don't Care. There may or may not be activity on the signal lines, but when power 810 is interrupted, the operations will change to a backup state.

[0090] In one embodiment, at some point after detect 820 indicates the power loss, a controller (e.g., an RCD or other controller) can send a self-refresh entry (SRE) command to the DRAMs. In response to the SRE command, all DRAMs are illustrated as entering self-refresh, as shown in signals 840, 850, and 860.
The controller may or may not perform other backup operations, and the state of the signal line is illustrated as Don't Care. In one embodiment, the controller will wake one DRAM at a time when the memory devices are in self-refresh. For purposes of example, it will be assumed that DRAMs will be caused to exit from self-refresh in order of unique ID.

[0091] Thus, in one embodiment, C/A signal 830 includes a self-refresh exit (SRX) command for DRAM 000. In response to the SRX command, DRAM 000 exits self-refresh, as illustrated in signal 840. In response to the SRX command, DRAMs 001:111 remain in self-refresh. With DRAM 000 out of self-refresh, C/A signal 830 provides commands related to data transfer for DRAM 000, and DRAM 000 performs data transfer in response to the commands. In one embodiment, C/A signal 830 illustrates that the controller places DRAM 000 back in self-refresh after the data transfer with an SRE (self-refresh entry) command for DRAM 000. In one embodiment, the command is a device specific self-refresh command. In response to the SRE command, DRAM 000 goes back into self-refresh as illustrated in signal 840.

[0092] After some period of time, which may be immediately after placing DRAM 000 back in self-refresh, C/A signal 830 illustrates an SRX command for DRAM 001. In response to the command, DRAM 001 exits self-refresh, while DRAMs 000 and 010:111 remain in self-refresh. With DRAM 001 out of self-refresh, C/A signal 830 provides commands related to data transfer for DRAM 001, and DRAM 001 performs data transfer in response to the commands. In one embodiment, C/A signal 830 illustrates that the controller places DRAM 001 back in self-refresh after the data transfer with an SRE (self-refresh entry) command for DRAM 001. In response to the SRE command, DRAM 001 goes back into self-refresh as illustrated in signal 850. The process can be repeated for the other DRAMs.
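The per device wake, transfer, and re-entry sequence of diagram 800 can be sketched as follows. The command strings and function name are hypothetical stand-ins for the SRE/SRX commands and data transfers described above, not the patent's implementation.

```python
# Hypothetical sketch of the sequence in timing diagram 800: all DRAMs enter
# self-refresh, then each one is woken with a device specific SRX, transfers
# its data on the shared bus, and is returned to self-refresh with SRE before
# the next device is woken, so the shared data bus never sees a collision.

def backup_sequence(dram_ids):
    commands = []                 # commands issued on the shared C/A bus
    in_self_refresh = set()
    commands.append("SRE ALL")    # place every DRAM in self-refresh
    in_self_refresh.update(dram_ids)
    for dram in dram_ids:         # wake devices one at a time, in ID order
        commands.append(f"SRX {dram}")
        in_self_refresh.discard(dram)
        # invariant: exactly one device is awake during its transfer
        assert len(dram_ids) - len(in_self_refresh) == 1
        commands.append(f"XFER {dram}")  # data transfer for the awake device
        commands.append(f"SRE {dram}")   # device specific self-refresh entry
        in_self_refresh.add(dram)
    return commands

cmds = backup_sequence(["000", "001"])
```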
It will be seen that shared data bus 870 will first transfer data for DRAM 000, then for DRAM 001, and so forth until all data transfer operations are completed. It will be understood that in this way there are no collisions on the data bus.

[0093] Figure 9 is a block diagram of an embodiment of a system in which per memory device self-refresh commands can be implemented. System 900 includes elements of a memory subsystem in a computing device. Processor 910 represents a processing unit of a host computing platform that executes an operating system (OS) and applications, which can collectively be referred to as a "host" for the memory. The OS and applications execute operations that result in memory accesses. Processor 910 can include one or more separate processors. Each separate processor can include a single-core and/or a multicore processing unit. The processing unit can be a primary processor such as a CPU (central processing unit) and/or a peripheral processor such as a GPU (graphics processing unit). System 900 can be implemented as an SOC, or be implemented with standalone components.

[0094] Memory controller 920 represents one or more memory controller circuits or devices for system 900. Memory controller 920 represents control logic that generates memory access commands in response to the execution of operations by processor 910. Memory controller 920 accesses one or more memory devices 940. Memory devices 940 can be DRAMs in accordance with any referred to above. In one embodiment, memory devices 940 are organized and managed as different channels, where each channel couples to buses and signal lines that couple to multiple memory devices in parallel. Each channel is independently operable. Thus, each channel is independently accessed and controlled, and the timing, data transfer, command and address exchanges, and other operations are separate for each channel.
In one embodiment, settings for each channel are controlled by separate mode register or other register settings. In one embodiment, each memory controller 920 manages a separate memory channel, although system 900 can be configured to have multiple channels managed by a single controller, or to have multiple controllers on a single channel. In one embodiment, memory controller 920 is part of host processor 910, such as logic implemented on the same die or implemented in the same package space as the processor.

[0095] Memory controller 920 includes I/O interface logic 922 to couple to a system bus. I/O interface logic 922 (as well as I/O 942 of memory device 940) can include pins, connectors, signal lines, and/or other hardware to connect the devices. I/O interface logic 922 can include a hardware interface. As illustrated, I/O interface logic 922 includes at least drivers/transceivers for signal lines. Typically, wires within an integrated circuit interface with a pad or connector to interface to signal lines or traces between devices. I/O interface logic 922 can include drivers, receivers, transceivers, termination, and/or other circuitry to send and/or receive signals on the signal lines between the devices. The system bus can be implemented as multiple signal lines coupling memory controller 920 to memory devices 940. In one embodiment, the system bus includes clock (CLK) 932, command/address (CMD) 934, data (DQ) 936, and other signal lines 938. The signal lines for CMD 934 can be referred to as a "C/A bus" (or ADD/CMD bus, or some other designation indicating the transfer of commands and address information) and the signal lines for DQ 936 can be referred to as a "data bus." In one embodiment, independent channels have different clock signals, C/A buses, data buses, and other signal lines. Thus, system 900 can be considered to have multiple "system buses," in the sense that an independent interface path can be considered a separate system bus.
It will be understood that in addition to the lines explicitly shown, a system bus can include strobe signaling lines, alert lines, auxiliary lines, and other signal lines. In one embodiment, one CMD bus 934 can be shared among devices having multiple DQ buses 936.

[0096] It will be understood that the system bus includes a data bus (DQ 936) configured to operate at a bandwidth. Based on design and/or implementation of system 900, DQ 936 can have more or less bandwidth per memory device 940. For example, DQ 936 can support memory devices that have either a x32 interface, a x16 interface, a x8 interface, a x4 interface, or other interface. The convention "xN," where N is a binary integer, refers to an interface size of memory device 940, which represents a number of signal lines of DQ 936 that exchange data with memory controller 920. The interface size of the memory devices is a controlling factor on how many memory devices can be used concurrently per channel in system 900 or coupled in parallel to the same signal lines.

[0097] Memory devices 940 represent memory resources for system 900. In one embodiment, each memory device 940 is a separate memory die, which can include multiple (e.g., 2) channels per die. Each memory device 940 includes I/O interface logic 942, which has a bandwidth determined by the implementation of the device (e.g., x16 or x8 or some other interface bandwidth), and enables the memory devices to interface with memory controller 920. I/O interface logic 942 can include a hardware interface, and can be in accordance with I/O 922 of memory controller 920, but at the memory device end. In one embodiment, multiple memory devices 940 are connected in parallel to the same data buses. For example, system 900 can be configured with multiple memory devices 940 coupled in parallel, with each memory device responding to a command, and accessing memory resources 960 internal to each.
For a Write operation, an individual memory device 940 can write a portion of the overall data word, and for a Read operation, an individual memory device 940 can fetch a portion of the overall data word.[0098] In one embodiment, memory devices 940 are disposed directly on a motherboard or host system platform (e.g., a PCB (printed circuit board) on which processor 910 is disposed) of a computing device. In one embodiment, memory devices 940 can be organized into memory modules 930. In one embodiment, memory modules 930 represent dual inline memory modules (DIMMs). In one embodiment, memory modules 930 represent other organization of multiple memory devices to share at least a portion of access or control circuitry, which can be a separate circuit, a separate device, or a separate board from the host system platform. Memory modules 930 can include multiple memory devices 940, and the memory modules can include support for multiple separate channels to the included memory devices disposed on them.[0099] Memory devices 940 each include memory resources 960. Memory resources 960 represent individual arrays of memory locations or storage locations for data. Typically, memory resources 960 are managed as rows of data, accessed via wordline (rows) and bitline (individual bits within a row) control. Memory resources 960 can be organized as separate channels, ranks, and banks of memory. Channels are independent control paths to storage locations within memory devices 940. Ranks refer to common locations across multiple memory devices (e.g., same row addresses within different devices). Banks refer to arrays of memory locations within a memory device 940. In one embodiment, banks of memory are divided into sub-banks with at least a portion of shared circuitry for the sub-banks.[00100] In one embodiment, memory devices 940 include one or more registers 944.
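The channel/rank/bank organization described above can be sketched as a decomposition of a flat physical address into hierarchical fields. The field widths below are purely illustrative assumptions; real address mappings are implementation-specific and often interleaved differently.

```python
# Illustrative decomposition of a physical address into the
# channel / rank / bank / row / column hierarchy described above.
# Field widths are assumptions for this sketch, not a real mapping.

FIELDS = [("column", 10), ("row", 15), ("bank", 2), ("rank", 1), ("channel", 1)]

def decode_address(addr: int) -> dict:
    """Split a flat address into named fields, least-significant first."""
    out = {}
    for name, bits in FIELDS:
        out[name] = addr & ((1 << bits) - 1)
        addr >>= bits
    return out
```

The sketch shows why channels are independent control paths (the channel field selects a wholly separate bus) while ranks select the same row address across multiple devices in parallel.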
Registers 944 represent storage devices or storage locations that provide configuration or settings for the operation of the memory device. In one embodiment, registers 944 can provide a storage location for memory device 940 to store data for access by memory controller 920 as part of a control or management operation. In one embodiment, registers 944 include Mode Registers. In one embodiment, registers 944 include multipurpose registers. The configuration of locations within register 944 can configure memory device 940 to operate in different "modes," where command and/or address information or signal lines can trigger different operations within memory device 940 depending on the mode. Settings of register 944 can indicate configuration for I/O settings (e.g., timing, termination or ODT (on-die termination), driver configuration, self-refresh settings, and/or other I/O settings).[00101] In one embodiment, memory device 940 includes ODT 946 as part of the interface hardware associated with I/O 942. ODT 946 can be configured as mentioned above, and provide settings for impedance to be applied to the interface to specified signal lines. The ODT settings can be changed based on whether a memory device is a selected target of an access operation or a non-target device. ODT 946 settings can affect the timing and reflections of signaling on the terminated lines. Careful control over ODT 946 can enable higher-speed operation with improved matching of applied impedance and loading.[00102] Memory device 940 includes controller 950, which represents control logic within the memory device to control internal operations within the memory device. For example, controller 950 decodes commands sent by memory controller 920 and generates internal operations to execute or satisfy the commands. Controller 950 can be referred to as an internal controller.
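The mode-register behavior described above, where a register write changes how the device subsequently operates, can be sketched as a small behavioral model. The register index, the placement of the ODT field in "MR1", and the impedance values are all hypothetical choices for illustration; they do not come from this document.

```python
# Minimal behavioral model of registers 944 and ODT 946 as described
# above: writing a mode register changes the device's operating mode.
# Register index, field layout, and ohm values are hypothetical.

class MemoryDeviceModel:
    def __init__(self):
        self.registers = {}   # registers 944: index -> stored value
        self.odt_ohms = None  # ODT 946: currently applied termination

    def mode_register_write(self, index: int, value: int) -> None:
        self.registers[index] = value
        if index == 1:  # assume MR1 carries a 2-bit ODT field
            # map the field to a termination impedance (illustrative)
            self.odt_ohms = {0: None, 1: 60, 2: 120, 3: 40}[value & 0x3]
```

The point of the sketch is the dependency direction: command decoding stays the same, but the configured register state (here, the ODT impedance) alters the device's electrical behavior on the terminated lines.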
Controller 950 can determine what mode is selected based on register 944, and configure the access and/or execution of operations for memory resources 960 based on the selected mode. Controller 950 generates control signals to control the routing of bits within memory device 940 to provide a proper interface for the selected mode and direct a command to the proper memory locations or addresses.[00103] Referring again to memory controller 920, memory controller 920 includes command (CMD) logic 924, which represents logic or circuitry to generate commands to send to memory devices 940. Typically, the signaling in memory subsystems includes address information within or accompanying the command to indicate or select one or more memory locations where the memory devices should execute the command. In one embodiment, controller 950 of memory device 940 includes command logic 952 to receive and decode command and address information received via I/O 942 from memory controller 920. Based on the received command and address information, controller 950 can control the timing of operations of the logic and circuitry within memory device 940 to execute the commands. Controller 950 is responsible for compliance with standards or specifications.[00104] In one embodiment, memory controller 920 includes refresh (REF) logic 926. Refresh logic 926 can be used where memory devices 940 are volatile and need to be refreshed to retain a deterministic state. In one embodiment, refresh logic 926 indicates a location for refresh, and a type of refresh to perform. Refresh logic 926 can trigger self- refresh within memory device 940, and/or execute external refreshes by sending refresh commands. For example, in one embodiment, system 900 supports all bank refreshes as well as per bank refreshes, or other all bank and per bank commands. All bank commands cause an operation of a selected bank within all memory devices 940 coupled in parallel. 
Per bank commands cause the operation of a specified bank within a specified memory device 940. In one embodiment, refresh logic 926 and/or logic in controller 932 on memory module 930 supports the sending of a per device self-refresh exit command. In one embodiment, system 900 supports the sending of a per device self-refresh enter command. In one embodiment, controller 950 within memory device 940 includes refresh logic 954 to apply refresh within memory device 940. In one embodiment, refresh logic 954 generates internal operations to perform refresh in accordance with an external refresh received from memory controller 920. Refresh logic 954 can determine if a refresh is directed to memory device 940, and what memory resources 960 to refresh in response to the command.[00105] In one embodiment, memory module 930 includes controller 932, which can represent an RCD or other controller in accordance with an embodiment described herein. In accordance with what is described, system 900 supports an operation where individual memory devices 940 can be selectively caused to enter and exit self-refresh, independent of whether other memory devices 940 are entering or exiting self-refresh. Such operations can enable system 900 to place all memory devices 940 in low power self-refresh state, and individually bring a memory device 940 out of self-refresh to perform access operations, while other memory devices 940 remain in self-refresh. Such operation can be useful to allow memory devices 940 to share a common data bus.[00106] Figure 10 is a block diagram of an embodiment of a computing system in which a power protected memory system can be implemented. System 1000 represents a computing device in accordance with any embodiment described herein, and can be a laptop computer, a desktop computer, a server, a gaming or entertainment control system, a scanner, copier, printer, routing or switching device, or other electronic device.
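The all-bank versus per-bank distinction described above can be sketched as a fan-out rule: an all-bank command addresses the selected bank in every device coupled in parallel, while a per-bank (per-device) command targets one device. The function below is illustrative only; no real command encoding is implied.

```python
# Sketch of all-bank vs. per-bank command fan-out as described above.
# An all-bank command operates on the selected bank in every device
# coupled in parallel; a per-device command targets a single device.

def refresh_targets(devices, bank, device_id=None):
    """Return (device, bank) pairs a refresh command would address."""
    if device_id is None:              # all-bank command: every device
        return [(d, bank) for d in devices]
    return [(device_id, bank)]         # per-bank / per-device command
```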
System 1000 includes processor 1020, which provides processing, operation management, and execution of instructions for system 1000. Processor 1020 can include any type of microprocessor, central processing unit (CPU), processing core, or other processing hardware to provide processing for system 1000. Processor 1020 controls the overall operation of system 1000, and can be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices. [00107] Memory subsystem 1030 represents the main memory of system 1000, and provides temporary storage for code to be executed by processor 1020, or data values to be used in executing a routine. Memory subsystem 1030 can include one or more memory devices such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM), or other memory devices, or a combination of such devices. Memory subsystem 1030 stores and hosts, among other things, operating system (OS) 1036 to provide a software platform for execution of instructions in system 1000. Additionally, other instructions 1038 are stored and executed from memory subsystem 1030 to provide the logic and the processing of system 1000. OS 1036 and instructions 1038 are executed by processor 1020. Memory subsystem 1030 includes memory device 1032 where it stores data, instructions, programs, or other items. In one embodiment, memory subsystem includes memory controller 1034, which is a memory controller to generate and issue commands to memory device 1032. It will be understood that memory controller 1034 could be a physical part of processor 1020.[00108] Processor 1020 and memory subsystem 1030 are coupled to bus/bus system 1010.
Bus 1010 is an abstraction that represents any one or more separate physical buses, communication lines/interfaces, and/or point-to-point connections, connected by appropriate bridges, adapters, and/or controllers. Therefore, bus 1010 can include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (commonly referred to as "Firewire"). The buses of bus 1010 can also correspond to interfaces in network interface 1050.[00109] System 1000 also includes one or more input/output (I/O) interface(s) 1040, network interface 1050, one or more internal mass storage device(s) 1060, and peripheral interface 1070 coupled to bus 1010. I/O interface 1040 can include one or more interface components through which a user interacts with system 1000 (e.g., video, audio, and/or alphanumeric interfacing). Network interface 1050 provides system 1000 the ability to communicate with remote devices (e.g., servers, other computing devices) over one or more networks. Network interface 1050 can include an Ethernet adapter, wireless interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces. [00110] Storage 1060 can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, or optical based disks, or a combination. Storage 1060 holds code or instructions and data 1062 in a persistent state (i.e., the value is retained despite interruption of power to system 1000). 
Storage 1060 can be generically considered to be a "memory," although memory 1030 is the executing or operating memory to provide instructions to processor 1020. Whereas storage 1060 is nonvolatile, memory 1030 can include volatile memory (i.e., the value or state of the data is indeterminate if power is interrupted to system 1000).[00111] Peripheral interface 1070 can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system 1000. A dependent connection is one where system 1000 provides the software and/or hardware platform on which operation executes, and with which a user interacts.[00112] In one embodiment, memory subsystem 1030 includes self-refresh (SR) control 1080, which can be control within memory controller 1034 and/or memory 1032 and/or can be control logic on a memory module. SR control 1080 enables system 1000 to individually address specific memory devices 1032 for self-refresh. The device specific SR control enables memory subsystem 1030 to individually address and cause a specific memory device (such as a single DRAM) to enter and/or exit self-refresh. It will be understood that a "single DRAM" can refer to memory resources that are independently addressable to interface with a data bus, and therefore certain memory die can include multiple memory devices. SR control 1080 can enable memory subsystem 1030 to implement an NVDIMM implementation for memory devices that share a control bus and a data bus, in accordance with any embodiment described herein.[00113] Figure 11 is a block diagram of an embodiment of a mobile device in which a power protected memory system can be implemented. Device 1100 represents a mobile computing device, such as a computing tablet, a mobile phone or smartphone, a wireless-enabled e-reader, wearable computing device, or other mobile device.
It will be understood that certain of the components are shown generally, and not all components of such a device are shown in device 1100.[00114] Device 1100 includes processor 1110, which performs the primary processing operations of device 1100. Processor 1110 can include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, or other processing means. The processing operations performed by processor 1110 include the execution of an operating platform or operating system on which applications and/or device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, and/or operations related to connecting device 1100 to another device. The processing operations can also include operations related to audio I/O and/or display I/O.[00115] In one embodiment, device 1100 includes audio subsystem 1120, which represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device. Audio functions can include speaker and/or headphone output, as well as microphone input. Devices for such functions can be integrated into device 1100, or connected to device 1100. In one embodiment, a user interacts with device 1100 by providing audio commands that are received and processed by processor 1110.[00116] Display subsystem 1130 represents hardware (e.g., display devices) and software (e.g., drivers) components that provide a visual and/or tactile display for a user to interact with the computing device. Display subsystem 1130 includes display interface 1132, which includes the particular screen or hardware device used to provide a display to a user.
In one embodiment, display interface 1132 includes logic separate from processor 1110 to perform at least some processing related to the display. In one embodiment, display subsystem 1130 includes a touchscreen device that provides both output and input to a user. In one embodiment, display subsystem 1130 includes a high definition (HD) display that provides an output to a user. High definition can refer to a display having a pixel density of approximately 100 PPI (pixels per inch) or greater, and can include formats such as full HD (e.g., 1080p), retina displays, 4K (ultra high definition or UHD), or others.[00117] I/O controller 1140 represents hardware devices and software components related to interaction with a user. I/O controller 1140 can operate to manage hardware that is part of audio subsystem 1120 and/or display subsystem 1130. Additionally, I/O controller 1140 illustrates a connection point for additional devices that connect to device 1100 through which a user might interact with the system. For example, devices that can be attached to device 1100 might include microphone devices, speaker or stereo systems, video systems or other display device, keyboard or keypad devices, or other I/O devices for use with specific applications such as card readers or other devices. [00118] As mentioned above, I/O controller 1140 can interact with audio subsystem 1120 and/or display subsystem 1130. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of device 1100. Additionally, audio output can be provided instead of or in addition to display output. In another example, if display subsystem includes a touchscreen, the display device also acts as an input device, which can be at least partially managed by I/O controller 1140.
There can also be additional buttons or switches on device 1100 to provide I/O functions managed by I/O controller 1140.[00119] In one embodiment, I/O controller 1140 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, gyroscopes, global positioning system (GPS), or other hardware that can be included in device 1100. The input can be part of direct user interaction, as well as providing environmental input to the system to influence its operations (such as filtering for noise, adjusting displays for brightness detection, applying a flash for a camera, or other features). In one embodiment, device 1100 includes power management 1150 that manages battery power usage, charging of the battery, and features related to power saving operation.[00120] Memory subsystem 1160 includes memory device(s) 1162 for storing information in device 1100. Memory subsystem 1160 can include nonvolatile (state does not change if power to the memory device is interrupted) and/or volatile (state is indeterminate if power to the memory device is interrupted) memory devices. Memory 1160 can store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of system 1100. In one embodiment, memory subsystem 1160 includes memory controller 1164 (which could also be considered part of the control of system 1100, and could potentially be considered part of processor 1110). Memory controller 1164 includes a scheduler to generate and issue commands to memory device 1162.[00121] Connectivity 1170 includes hardware devices (e.g., wireless and/or wired connectors and communication hardware) and software components (e.g., drivers, protocol stacks) to enable device 1100 to communicate with external devices.
The external device could be separate devices, such as other computing devices, wireless access points or base stations, as well as peripherals such as headsets, printers, or other devices. [00122] Connectivity 1170 can include multiple different types of connectivity. To generalize, device 1100 is illustrated with cellular connectivity 1172 and wireless connectivity 1174. Cellular connectivity 1172 refers generally to cellular network connectivity provided by wireless carriers, such as provided via GSM (global system for mobile communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, LTE (long term evolution - also referred to as "4G"), or other cellular service standards. Wireless connectivity 1174 refers to wireless connectivity that is not cellular, and can include personal area networks (such as Bluetooth), local area networks (such as WiFi), and/or wide area networks (such as WiMax), or other wireless communication. Wireless communication refers to transfer of data through the use of modulated electromagnetic radiation through a non-solid medium. Wired communication occurs through a solid communication medium.[00123] Peripheral connections 1180 include hardware interfaces and connectors, as well as software components (e.g., drivers, protocol stacks) to make peripheral connections. It will be understood that device 1100 could both be a peripheral device ("to" 1182) to other computing devices, as well as have peripheral devices ("from" 1184) connected to it. Device 1100 commonly has a "docking" connector to connect to other computing devices for purposes such as managing (e.g., downloading and/or uploading, changing, synchronizing) content on device 1100. 
Additionally, a docking connector can allow device 1100 to connect to certain peripherals that allow device 1100 to control content output, for example, to audiovisual or other systems.[00124] In addition to a proprietary docking connector or other proprietary connection hardware, device 1100 can make peripheral connections 1180 via common or standards-based connectors. Common types can include a Universal Serial Bus (USB) connector (which can include any of a number of different hardware interfaces), DisplayPort including MiniDisplayPort (MDP), High Definition Multimedia Interface (HDMI), Firewire, or other type.[00125] In one embodiment, memory subsystem 1160 includes self-refresh (SR) control 1190, which can be control within memory controller 1164 and/or memory 1162 and/or can be control logic on a memory module. SR control 1190 enables system 1100 to individually address specific memory devices 1162 for self-refresh. The device specific SR control enables memory subsystem 1160 to individually address and cause a specific memory device (such as a single DRAM) to enter and/or exit self-refresh. It will be understood that a "single DRAM" can refer to memory resources that are independently addressable to interface with a data bus, and therefore certain memory die can include multiple memory devices.
SR control 1190 can enable memory subsystem 1160 to implement an NVDIMM implementation for memory devices that share a control bus and a data bus, in accordance with any embodiment described herein.[00126] In one aspect, a buffer circuit in a memory subsystem includes: an interface to a control bus, the control bus to be coupled to multiple memory devices; an interface to a data bus, the data bus to be coupled to the multiple memory devices; control logic to send a device specific self-refresh exit command over the control bus when the multiple memory devices are in self-refresh, the command including a unique memory device identifier to cause only an identified memory device to exit self-refresh while the other memory devices remain in self-refresh, and the control logic to perform data access over the data bus for the memory device caused to exit self-refresh.[00127] In one embodiment, the control logic is further to select a subset of the multiple memory devices, and send device specific self-refresh exit commands to each of the selected memory devices of the subset. In one embodiment, the self-refresh exit command includes a CKE (clock enable) signal. In one embodiment, the control logic is further to select the memory devices in turn to cause serial memory access to all of the memory devices. In one embodiment, the buffer circuit comprises a registered clock driver (RCD) of an NVDIMM (nonvolatile dual inline memory module), wherein the control logic is further to transfer self-refresh commands to all memory devices to place the memory devices in self-refresh as part of a backup transfer process to transfer memory contents to a persistent storage upon detection of a power failure.
In one embodiment, the interface to the data bus comprises an interface to an alternate data bus parallel to a primary data bus used by the memory devices in active operation, and wherein the control logic is to cause the memory devices to transfer memory contents via the alternate data bus as part of the backup transfer process. In one embodiment, the persistent storage comprises a storage device disposed on the NVDIMM. In one embodiment, the second data bus is to couple to a persistent storage device located external to the NVDIMM. In one embodiment, the buffer circuit comprises a backup controller of a registered DIMM (RDIMM). In one embodiment, after the performance of data access with a selected memory device, the control logic further to send a device specific self-refresh command including a self-refresh enter command and the unique memory device identifier over the control bus to cause the selected memory device to re-enter self-refresh. In one embodiment, the memory devices include dual data rate version 4 synchronous dynamic random access memory devices (DDR4-SDRAMs).
In one embodiment, the memory devices are part of a same memory rank, and the control line comprises a command/address bus for the memory rank.[00128] In one aspect, a nonvolatile dual inline memory module (NVDIMM) includes: a first data bus; a second data bus; multiple volatile memory devices coupled to a common control line shared by the memory devices, the memory devices further to couple to a nonvolatile storage via the second data bus; and control logic coupled to the memory devices via the first data bus and via the common control line, the control logic to send a device specific self-refresh exit command over the control line when the multiple memory devices are in self-refresh, the command including a unique memory device identifier to cause only an identified memory device to exit self-refresh while the other memory devices remain in self-refresh, and the control logic to cause the identified memory device to transfer memory contents via the second data bus while the other memory devices remain in self-refresh.[00129] In one embodiment, the memory devices include dual data rate version 4 synchronous dynamic random access memory devices (DDR4-SDRAMs). In one embodiment, the nonvolatile storage comprises a storage device disposed on the NVDIMM. In one embodiment, the second data bus is to couple to a nonvolatile storage device located external to the NVDIMM. In one embodiment, the control logic is further to selectively cause one memory device at a time to exit self-refresh, transfer memory contents to the nonvolatile storage, and then return to self-refresh, repeating for all memory devices in turn in response to detection of a power failure.
In one embodiment, after the performance of data access with a selected memory device, the control logic further to send a device specific self-refresh command including a self-refresh enter command and the unique memory device identifier over the control bus to cause the selected memory device to re-enter self-refresh. In one embodiment, the memory devices are part of a same memory rank, and the control line comprises a command/address bus for the memory rank. In one embodiment, the control logic comprises a registered clock driver (RCD). In one embodiment, the buffer circuit comprises a backup controller of a registered DIMM (RDIMM). In one embodiment, the control logic is further to select a subset of the multiple memory devices, and send device specific self-refresh exit commands to each of the selected memory devices of the subset. In one embodiment, the self-refresh exit command includes a CKE (clock enable) signal.[00130] In one aspect, a method for memory management includes: selecting for data access one of multiple memory devices that share a control bus, wherein the memory devices are in self-refresh; sending a device specific self-refresh exit command including a self-refresh exit command and a unique memory device identifier over the shared control bus to cause only the selected memory device to exit self-refresh while the others remain in self-refresh; and performing data access over a shared data bus for the memory device not in self-refresh.[00131] In one embodiment, selecting comprises selecting a subset of memory devices, and sending the device specific self-refresh exit command comprises sending device specific commands to each memory device of the selected subset. In one embodiment, selecting comprises selecting each memory device individually to cause serial memory access to the memory devices. In one embodiment, sending the self-refresh exit command comprises sending a CKE (clock enable) signal.
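The backup-transfer sequence described in these aspects, in which each device in turn is brought out of self-refresh, has its contents transferred, and re-enters self-refresh, can be sketched as below. The command names "SR_ENTER" and "SR_EXIT" are illustrative stand-ins for the device-specific self-refresh enter/exit commands described; no real command encoding is implied.

```python
# Sketch of the per-device backup sequence described above: on power
# failure, place all devices in self-refresh, then serially bring each
# device out, transfer its contents to persistent storage, and return
# it to self-refresh. Command names are illustrative only.

def backup_on_power_fail(devices, transfer):
    """Return the command log for a serialized backup transfer."""
    log = []
    for dev in devices:
        log.append(("SR_ENTER", dev))   # all devices into self-refresh
    for dev in devices:                 # then one device at a time:
        log.append(("SR_EXIT", dev))    # device-specific exit command
        transfer(dev)                   # copy contents (e.g., over the
                                        # alternate data bus)
        log.append(("SR_ENTER", dev))   # re-enter self-refresh
    return log
```

The serialization is what lets the devices share a common data bus: at most one device is out of self-refresh and driving the bus at any time.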
In one embodiment, the memory devices comprise memory devices of a registered DIMM (RDIMM). In one embodiment, further comprising: after performing the data access with the selected memory device, sending a device specific self-refresh command including a self-refresh command and the unique memory device identifier over the shared control bus to cause the selected memory device to re-enter self-refresh. In one embodiment, the sending the device specific self-refresh command comprises sending a command from a registered clock driver (RCD) of an NVDIMM (nonvolatile dual inline memory module). In one embodiment, performing data access further comprises transferring data contents as part of a backup transfer process to transfer memory contents to a persistent storage upon detection of a power failure. In one embodiment, performing the data access further comprises performing the data access on an alternate data bus parallel to a primary data bus, wherein the primary data bus is to be used by the memory devices in active operation, and wherein the alternate data bus is to be used by the memory devices as part of the backup transfer process. In one embodiment, the persistent storage comprises a storage device disposed on the NVDIMM. In one embodiment, the persistent storage comprises a storage device located external to the NVDIMM. In one embodiment, the memory devices share the control bus as part of a memory rank that shares a command/address bus. In one embodiment, the memory devices include dual data rate version 4 synchronous dynamic random access memory devices (DDR4-SDRAMs).[00132] Flow diagrams as illustrated herein provide examples of sequences of various process actions. The flow diagrams can indicate operations to be executed by a software or firmware routine, as well as physical operations. In one embodiment, a flow diagram can illustrate the state of a finite state machine (FSM), which can be implemented in hardware and/or software.
Although shown in a particular sequence or order, unless otherwise specified, the order of the actions can be modified. Thus, the illustrated embodiments should be understood only as an example, and the process can be performed in a different order, and some actions can be performed in parallel. Additionally, one or more actions can be omitted in various embodiments; thus, not all actions are required in every embodiment. Other process flows are possible.[00133] To the extent various operations or functions are described herein, they can be described or defined as software code, instructions, configuration, and/or data. The content can be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). The software content of the embodiments described herein can be provided via an article of manufacture with the content stored thereon, or via a method of operating a communication interface to send data via the communication interface. A machine readable storage medium can cause a machine to perform the functions or operations described, and includes any mechanism that stores information in a form accessible by a machine (e.g., computing device, electronic system, etc.), such as recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces to any of a hardwired, wireless, optical, etc., medium to communicate to another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, etc. The communication interface can be configured by providing configuration parameters and/or sending signals to prepare the communication interface to provide a data signal describing the software content. The communication interface can be accessed via one or more commands or signals sent to the communication interface. 
[00134] Various components described herein can be a means for performing the operations or functions described. Each component described herein includes software, hardware, or a combination of these. The components can be implemented as software modules, hardware modules, special-purpose hardware (e.g., application specific hardware, application specific integrated circuits (ASICs), digital signal processors (DSPs), etc.), embedded controllers, hardwired circuitry, etc.[00135] Besides what is described herein, various modifications can be made to the disclosed embodiments and implementations of the invention without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive sense. The scope of the invention should be measured solely by reference to the claims that follow. |
Techniques directed to realizing and verifying a logic model design are provided by first dividing the logic model design into two or more logic portions. The various model portions can then be realized to form various realized logic portions. A first realized logic portion can then be wrapped and formally verified against its respective model portion. The wrapper can then be verified by applying the same wrapper to a second logic model portion and formally verifying the second wrapped logic model portion against a second realized logic portion. The resulting output can then prove wrapper correctness. |
1. A method for realizing a logic model design, comprising:
determining a plurality of logic model portions from a logic model design (120) and dividing the logic model design into two or more logic model portions (310, 320, 330);
performing a realization step on a first logic model portion (320) to produce a first realized logic portion (320');
applying a first wrapper (530) to the first realized logic portion; and
verifying the functionality of the first wrapped realized logic portion (520).
2. The method of claim 1, wherein verifying the functionality of the first realized logic portion (320') includes performing a verification operation on the first wrapped realized logic portion (520) to produce a first realized output.
3. The method of claim 1, wherein verifying the functionality of the first realized logic portion includes:
performing a verification operation on the first wrapped realized logic portion (520) to produce a first realized output;
performing a verification operation on the first logic model portion (320) to produce a first model output; and
comparing the first realized output to the first model output.
4. The method of claim 3, further comprising:
performing a realization step on a second logic model portion (330) to produce a second realized logic portion (330'); and
performing a verification operation on the second realized logic portion to produce a second realized output.
5. The method of claim 4, further comprising:
applying the first wrapper (530) to the second logic model portion (330);
performing a verification operation on the second wrapped logic model portion to produce a second model output; and
comparing the second realized output to the second model output.
6. The method of claim 3, further comprising verifying the functionality of the first wrapper (530) using an associativity-based technique.
7. The method of claim 6, wherein verifying the functionality of the first wrapper (530) 
includes:
applying the first wrapper (530) to a second logic model portion (330), and
performing a verification operation on the second wrapped logic model portion to produce a second model output.
8. The method of claim 7, wherein verifying the functionality of the first wrapper (530) further includes:
performing a verification operation on the second realized logic portion (330') to produce a second realized output, and
comparing the second realized output to the second model output.
9. The method of claim 7, wherein the first wrapper (530) is applied to the output of the first realized logic portion (320') and further applied to the input of the second logic model portion (330).
10. The method of claim 8, wherein an input of the second logic model portion (330) is logically linked to an output of the first logic model portion (320).
11. A machine-readable medium including instructions for realizing a logic model design and being arranged to cause a machine to perform the steps of:
determining a plurality of logic model portions (1004) from a logic model design and dividing (1006) the logic model design into two or more logic model portions;
performing a realization step (1008, 1010) on a first logic model portion to produce a first realized logic portion;
applying (1012) a first wrapper to the first realized logic portion; and
verifying the functionality (1014, 1016, 1018, 1020) of the first wrapped realized logic portion.
12. The machine-readable medium of claim 11, wherein verifying the functionality of the first realized logic portion includes performing a verification operation (1014, 1020) on the first wrapped realized logic portion to produce a first realized output.
13. The machine-readable medium of claim 11, wherein verifying the functionality of the first realized logic portion includes:
performing a verification operation (1014, 1020) on the first wrapped realized logic portion to produce a first realized output;
performing a verification operation (1016, 1020) on the first logic model 
portion to produce a first model output; and
comparing (1018) the first realized output to the first model output.
14. The machine-readable medium of claim 13, further including the steps of:
performing a realization (1040) on a second logic model portion to produce a second realized logic portion; and
performing a verification operation (1044, 1050) on the second realized logic portion to produce a second realized output.
15. The machine-readable medium of claim 14, further including the steps of:
applying (1042) the first wrapper to the second logic model portion;
performing a verification operation (1046, 1050) on the second wrapped logic model portion to produce a second model output; and
comparing (1048) the second realized output to the second model output.
16. The machine-readable medium of claim 13, further including the step of verifying the functionality of the first wrapper (530) using an associativity-based technique.
17. The machine-readable medium of claim 16, wherein verifying the functionality of the first wrapper (530) includes:
applying the first wrapper to a second logic model portion (330), and
performing a verification operation on the second wrapped logic model portion to produce a second model output.
18. The machine-readable medium of claim 17, wherein verifying the functionality of the first wrapper (530) further includes:
performing a verification operation on the second realized logic portion (330') to produce a second realized output, and
comparing the second realized output to the second model output.
19. The machine-readable medium of claim 18, wherein the first wrapper (530) is applied to the output of the first realized logic portion (320') and then further applied to the input of the second logic model portion (330).
20. The machine-readable medium of claim 18, wherein an input of the second logic model portion (330) is logically linked to an output of the first logic model portion (320).
21. An apparatus for realizing a logic model design (900), comprising:
logic development 
circuitry (930) operable to determine a plurality of logic model portions from a logic model design and to divide said logic model design into two or more logic model portions (310, 320, 330), said development circuitry being further operable to realize at least one logic model portion to form a first realized logic portion (320');
a wrapping device (950) that applies a first wrapper (530) to the first realized logic portion; and
one or more second devices (960, 970) that verify the functionality of the first wrapped realized logic portion.
22. The apparatus of claim 21, wherein the one or more second devices includes a verification device (960) that performs a verification operation on the first wrapped realized logic portion to produce a first realized output.
23. The apparatus of claim 21, wherein the one or more second devices includes:
a verification device (960) that performs a verification operation on the first wrapped realized logic portion (520) to produce a first realized output, and wherein the verification device further performs a verification operation on the first logic model portion to produce a first model output; and
a comparing device (970) that compares the first realized output to the first model output.
24. The apparatus of claim 23, wherein the verification device (960) further performs a verification operation on a second realized logic portion (330') to produce a second realized output, the second realized logic portion being realized based on a second logic model portion (330).
25. The apparatus of claim 24, wherein the wrapping device (950) further applies the first wrapper (530) to the second logic model portion (330), wherein the verification device further performs a verification operation on the second wrapped logic model portion (830) to produce a second model output, and wherein the comparing device (970) further compares the second realized output to the second model output.
26. The apparatus of claim 23, wherein the apparatus (900) verifies the functionality 
of the first wrapper (530) using an associativity-based technique.
27. The apparatus of claim 26, wherein the wrapping device (950) further applies the first wrapper (530) to a second logic model portion (330), and wherein the verification device (960) further performs a verification operation on the second wrapped logic model portion (830) to produce a second model output.
28. The apparatus of claim 27, wherein the verification device (960) further performs a verification operation on the second realized logic portion (330') to produce a second realized output, and the comparing device (970) further compares the second realized output to the second model output.
29. The apparatus of claim 27, wherein the first wrapper (530) is applied to the output of the first realized logic portion (320') and then further applied to the input of the second logic model portion (330).
30. The apparatus of claim 28, wherein an input of the second logic model portion (330) is logically linked to an output of the first logic model portion (320).
31. A method for realizing a logic model design, comprising:
determining a plurality of logic model portions from a logic model design and dividing the logic model design into two or more logic portions (310, 320, 330);
performing realization steps on first (320) and second (330) logic model portions to produce respective first (320') and second (330') realized logic portions;
applying a first wrapper (530) to the first realized logic portion and performing a verification operation to verify that the first realized logic portion has correct functionality; and
applying the first wrapper to the second logic model portion and performing a verification operation to verify the first wrapper.
32. A computer program product including program code for realizing a logic model design and arranged to cause performance of the steps of any of claims 1 to 10 or 31.
33. A method for realizing a logic model design divided into two or more logic model portions (310, 320, 330), comprising:
realizing a first 
logic model portion (320) to produce a first realized logic portion (320'); and
formally verifying the functionality of the first wrapped realized logic portion (520) using a first wrapper (530) applied to the first realized logic portion (320').
34. The method of claim 33, further comprising verifying the first wrapper (530).
35. The method of claim 34, wherein verifying the first wrapper (530) includes the steps of:
applying the first wrapper to a second logic model portion (330); and
performing a formal verification on the second wrapped logic model portion (330, 330').
36. The method of claim 34, wherein verifying the first wrapper (530) uses an associativity-based technique. |
FIELD OF THE INVENTION
This invention relates to methods and systems for realizing logic model designs and particularly to development and verification of integrated circuit designs.
BACKGROUND OF THE INVENTION
Integrated circuit designers have an array of modern tools, such as schematic entry programs and various descriptive languages, e.g., Verilog and VHDL (VHSIC Hardware Description Language), to facilitate the creation of various logic designs. Once a particular logic design is initially created, a designer may wish to compile the logic design to remove various errors. For example, compiling a VHDL file may reveal any of (1) minor syntax errors, (2) potential problems, i.e., warnings, that may or may not be of consequence and (3) major design errors of functional consequence. Typically, a designer can modify the entry file of a logic design until a compiler indicates that all design errors are apparently removed and any residual warnings are either removed or otherwise deemed harmless by the designer. The resulting compiled design can then be considered a model of the logic design.
Once a logic design is compiled, the integrated circuit designer may then wish to "realize" the resulting logic model. That is, a designer may wish to perform a number of operations on the logic model to convert the logic model from abstract mathematical and functional relationships to a more low-level form consisting of a description of various logic circuits and interconnecting pins, e.g., a gate-level Verilog netlist.
Once the logic model is realized, the integrated circuit designer may wish to verify the functionality of the realized logic design. Conventional verification approaches include applying a "wrapper" to the realized logic design and then performing a simulation on the wrapped realized logic design. 
A wrapper is a software construct that enables a designer to interact with a logic design on a pin (nodal) level, i.e., feed simulated electrical signals to the realized logic design and/or estimate/measure the resulting signals produced by the realized logic design.
Unfortunately, verification techniques based on simulations of wrapped logic designs can require large amounts of computer processing power and memory. To complicate this issue, it should be appreciated that modern-day integrated circuits have dramatically increased in size and capacity. That is, modern electronic technology has made it possible to put large numbers of increasingly complex electronic circuits on practicable-sized silicon dies. As a result, verifying such large logic designs can require impracticable amounts of computer resources.
SUMMARY OF THE INVENTION
Embodiments of the invention provide techniques directed to realizing and verifying a logic model design by first dividing the logic model design into two or more logic portions. A realization of a first logic model portion is then performed to produce a first realized logic portion. In this fashion, the logic design can be realized piecemeal. Subsequently, a first wrapper can be applied to the first realized logic portion and the functionality of the first wrapped realized logic portion can be verified against its own model, using formal verification methods. Second and further realized portions can be treated in a similar manner until the entire circuit design is verified.
By realizing portions of a design's logic model piecemeal, as opposed to realizing the entire logic model at once, realization can be performed using relatively modest computer resources and in a more timely manner. 
Furthermore, by applying wrappers specific to each realized logic portion, verification of the individual realized portions can also be achieved using relatively modest computer resources.
As it is known that the wrappers used to verify a realized logic portion are themselves subject to error, the described embodiment of the present invention provides an approach to verifying wrappers by first applying the same wrapper used with the first realized logic portion to a second portion of the logic model design and then performing formal verification of the second wrapped logic model portion against a second realized logic portion. Accordingly, integrated circuits can be both realized and verified using relatively modest computer resources as compared to realizing and verifying a whole logic design at once. Other features and advantages will become apparent in part from the following descriptions and accompanying figures and in part by performing the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described by way of example only with regard to the following figures, wherein like numerals reference like elements, and wherein:
Figure 1 depicts a progressive development of a logic design for an integrated circuit;
Figure 2 is a block diagram of an exemplary logic model design;
Figure 3 depicts the logic model design of Fig. 2 conceptually divided into three logic portions;
Figure 4 depicts the realization of a second logic portion of the logic model of Fig. 3;
Figure 5 depicts the second realized logic portion of Fig. 4 with a wrapper applied to it;
Figure 6 schematically depicts the verification of the wrapped second realized logic portion of Fig. 5;
Figure 7 depicts the realization of the third logic portion of the logic model of Fig. 3;
Figure 8 depicts the verification of the third realized logic portion of Fig. 
5;
Figure 9 is a block diagram of an apparatus capable of realizing and verifying a logic design according to the present invention; and
Figure 10 is a flowchart outlining an exemplary operation for realizing and verifying a logic design embodying the present invention.
DETAILED DESCRIPTION
Figure 1 depicts the progressive development of a logic design for an integrated circuit. As shown in Fig. 1, an integrated circuit's logic design can start with a source file 110. The exemplary source file 110 is a VHDL-based text file. However, it should be appreciated that the source file 110 alternatively can be based on any number of text-based languages, such as VHDL or Verilog, on any number of schematic entry based tools, or on any other known or later developed paradigm useful for entering and/or designing logic circuits without departing from the present invention as defined in the claims.
Once the source file 110 is initially created, a logic designer may wish to compile and optionally simulate the source file 110 to create the design's logic model 120. As discussed above, compiling a source file may reveal any number of errors and warnings. That is, each time the designer compiles the source file 110, the designer can receive feedback from a compiler that can help eliminate design errors as well as eliminate or understand any residual warnings. Accordingly, developing the logic model 120 can involve any number of iterations of compiling and analyzing feedback until the designer is reasonably confident of the source file's correctness. The resulting logic model 120 can then be realized to produce the design's realized logic 130.
As discussed above, the process of realizing a logic model involves converting the logic model from abstract mathematical and functional relationships, e.g., VHDL equations, to a more low-level form consisting of a description of various logic circuits and interconnecting pins. 
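As a toy illustration of this model-to-gates conversion (this code does not appear in the patent and the function names are hypothetical), a single abstract relation can be "realized" as a NAND-only gate network and checked exhaustively against the model equation:

```python
# Toy illustration (not from the patent): the abstract model relation
# out = a XOR b realized as a four-gate NAND netlist, then checked
# exhaustively against the model equation.

def model_xor(a, b):               # abstract functional relationship
    return a ^ b

def nand(x, y):                    # the only gate type in this realization
    return 1 - (x & y)

def realized_xor(a, b):            # gate-level netlist of four NAND gates
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Exhaustive check that the realization preserves the model's functionality.
assert all(model_xor(a, b) == realized_xor(a, b)
           for a in (0, 1) for b in (0, 1))
```

A production realization would of course be produced by synthesis tools and compared by formal equivalence checking rather than by exhaustive enumeration, but the model/realization distinction is the same.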
Realizing a particular logic model can be accomplished using various computer-based tools that can automatically determine/generate the logic resources necessary to provide the functionality of the logic model. Alternatively, realization can be performed manually, either in its entirety or in part, by a designer. However, the particular processes used to realize a particular logic model can vary as required or otherwise desired by a designer without departing from the present invention as defined in the claims.
The exemplary realized logic 130 may consist of a netlist of representative logic gates, buffers and other components, plus a number of representative interconnecting nodes (pins) as well as representative nodes (pins) that interface the various internal logic components to the outside world, for example. However, it should be appreciated that the composition of the realized logic 130 can include any combination of known or later developed components configured according to any known or later developed technology capable of receiving, processing, transmitting or otherwise manipulating logic signals without departing from the present invention as defined in the claims.
Figure 2 is a diagram of an exemplary block of a logic model 200. The logic model 200 includes a model pre-fetch device 210, a model read-only memory (ROM) 220, a model multiplier 230 and a model accumulator/cache 240. In operation, the model pre-fetch device 210 can receive a stream of numbers via link 202 and provide the numbers to the model multiplier 230 via link 212. The model ROM 220 can receive a stream of addresses via link 204 and produce a stream of numbers stored internally to the model ROM 220, indexed according to the received addresses. 
The stream of indexed numbers can then be provided to the model multiplier 230 via link 222. The model multiplier 230 can receive the streams of numbers, multiply the numbers to produce a stream of products and provide the stream of products to the model accumulator/cache 240 via link 232. The model accumulator/cache 240 can receive the stream of products, store the products in an internal buffer (not shown) and produce a running accumulation of the last N products in the product stream using an internal accumulator (also not shown). The model accumulator/cache 240 can then provide the running product accumulation to a first external device via link 242, and further provide the buffered products to a second external device via link 244.
The particular composition of the logic model 200 is not important and the exemplary logic configuration is provided only as a functional reference. Accordingly, the particular configuration/composition of the logic model 200 can vary as appropriate to the specific requirements of a logic design without departing from the present invention as defined in the claims.
Figure 3 depicts the logic model design of Fig. 2 conceptually divided into three logic portions 310, 320 and 330. As shown by Fig. 3, the first portion 310 of the logic model includes the model pre-fetch device 210 and model ROM 220, the second portion 320 of the logic model includes the model multiplier 230 and the third portion 330 of the logic model includes the model accumulator/cache 240. 
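For illustration only, the functional behavior described above for the exemplary logic model 200 can be sketched as executable Python; this sketch is not part of the patent, and the function name, ROM contents and window depth n are hypothetical:

```python
# Behavioral sketch (not from the patent) of logic model 200: numbers
# arriving on link 202 are multiplied by ROM words selected by addresses on
# link 204; products are buffered (link 244) and a running accumulation of
# the last n products is reported (link 242).

def model_200(numbers, addresses, rom, n=4):
    buffered = []          # accumulator/cache 240 internal buffer
    accumulation = []      # running accumulation output on link 242
    for num, addr in zip(numbers, addresses):
        product = num * rom[addr]          # model multiplier 230
        buffered.append(product)
        accumulation.append(sum(buffered[-n:]))
    return accumulation, buffered
```

For example, with rom = {0: 10, 1: 100}, the input streams [2, 3] (numbers) and [0, 1] (addresses) yield buffered products [20, 300] and running accumulations [20, 320].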
The exemplary division of the various model components 210-240 among the various portions 310-330 is made for illustrative purposes only, and it should be appreciated that the exemplary logic model 200, as well as any other logic model design, can be conceptually divided along any practicable lines without departing from the present invention as defined in the claims.
Figure 4 depicts the second logic model portion 320 along with a respective second realized logic portion 320' consisting of a realized multiplier 230', which can be fed streams of numbers via links 212' and 222' and provide streams of product information via links 232' and 234', where the combined streams of product information can contain the same information as the model output 232. As discussed above, a logic model can consist of a number of abstract equations and functional relationships, while a respective realized logic can consist of any number of known or later developed logic components and interconnects capable of providing the functionality required by the equations and functional relationships of the logic model. However, it should be appreciated that a realized logic portion may vary significantly in various embodiments as long as the realized logic portion provides all basic functionality of its respective logic model portion. 
For example, while the exemplary second logic model portion 320 has a single output 232, the second realized logic portion 320' may alternatively provide two outputs 232' and 234' that together provide the same product information as the model output 232, but in a different numerical format. Alternatively, the logic model may have two links 232 and 234 that provide all of the required information on one pin while grounding (providing a logic 0) the other pin; functionality may then be preserved if the second realized logic portion 320' provides the necessary product information on one of its output links 232' or 234' while providing a ground (logic zero) on the other link 234' or 232'.
Furthermore, it should be appreciated that output links 232' and 234' can provide product information in any number of unique forms. For example, in various embodiments links 232' and 234' may both be N-bit busses carrying different numbers that must be added together to represent a single stream of product information. In other embodiments, links 232' and 234' may carry portions of information that must be added/combined or otherwise manipulated according to any useful approach that can be matched to the model format via a set of mathematical manipulations. Still further, links 232' and 234' may provide product information in any combination of forms. For example, in various other embodiments link 232' may provide product information in one's-complement form while link 234' may provide product information in two's-complement form.
Thus, as demonstrated above, it should be appreciated that any number of variances between a logic model and its realized logic may occur as design choices, subject to the restriction that the basic functionality of the realized logic must be preserved.
Figure 5 depicts a wrapped realized logic block 520. The wrapped realized logic block 520 includes the second realized logic portion 230' of Fig. 4 with an appropriately designed wrapper 530. As shown in Fig. 
5, the realized multiplier 230' can receive streams of numbers via links 212' and 222' from a simulation or verification source (not shown), and provide a stream of products to the wrapper 530 via links 232' and 234'. The wrapper 530, in turn, can receive the product stream; measure, record, manipulate or otherwise process the received product stream; and pass the received product stream to an external device (also not shown) via links 232" and 234".
As discussed above, a wrapper can be a software construct that enables a designer to interact with a logic design on a pin (nodal) level, e.g., tie off a set of pins, add together two buses to make the results identical, etc. As various realized logic portions may vary in their particular form as long as basic functionality is preserved, it should be appreciated that an appropriately designed wrapper should account for these variances. For example, a realized multiplier that produces one's-complement data or redundant data would need a different wrapper than a realized multiplier that produces two's-complement data. Similarly, a realized multiplier that provided product information in alternating cycles between links 232' and 234' would require a third wrapper unique to that particular realization.
Figure 6 depicts a verification process on the realized logic design and its respective wrapper against its model. Generally, verification is performed according to a known "formal verification" process that can include any of several well-known approaches where a logic design is reduced to its essential equations and fed the various information necessary to prove that the logic design operates as expected. Formal verification tools generally verify a realized design against a model design according to those known processes by, among other means, providing the inputs for both a model and realized design and checking that the same results are produced by both the model and the realized design. 
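As an illustrative sketch only (none of this code is from the patent, and the function names and the split-bus output format are hypothetical), the wrapper idea and the Fig. 6-style model-versus-realized comparison can be modeled in Python, with exhaustive simulation standing in for the formal-verification engines described above:

```python
# Hypothetical sketch (not from the patent): a realized multiplier whose
# products are split across two output links 232'/234', a wrapper 530 that
# restores the single-stream model format, and a Fig. 6-style check of the
# wrapped realized portion against its model portion.

def model_multiplier(nums_a, nums_b):              # model portion 320
    return [a * b for a, b in zip(nums_a, nums_b)]  # model output 232

def realized_multiplier(nums_a, nums_b):           # realized portion 320'
    # Design choice: each product leaves as two addends (links 232', 234').
    products = [a * b for a, b in zip(nums_a, nums_b)]
    return [p // 2 for p in products], [p - p // 2 for p in products]

def wrapper_530(link_232, link_234):
    # Pin-level fix-up: adding the two buses recreates the model format.
    return [x + y for x, y in zip(link_232, link_234)]

def verify_against_model(stimuli):
    """Drive model and wrapped realized logic with identical stimuli and
    compare outputs, mirroring the comparison step of Fig. 6."""
    for nums_a, nums_b in stimuli:
        wrapped = wrapper_530(*realized_multiplier(nums_a, nums_b))
        if wrapped != model_multiplier(nums_a, nums_b):
            return False   # realized portion or its wrapper is faulty
    return True
```

A discrepancy here cannot by itself distinguish a faulty realization from a faulty wrapper, which is precisely why the wrapper must be verified independently, as described below.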
While the exemplary verification process uses an array of known formal verification techniques, it should be appreciated that in other embodiments, verification can be performed using simulation techniques (at the expense of time and computer resources) or any other known or later developed technique useful for verifying a logic design.
As discussed above, a wrapper must not only be tailored to a particular logic realization, but every wrapper is subject to being erroneously implemented. That is, a wrapper may inadvertently affect the performance of a realized logic portion. Accordingly, it may be necessary to validate the wrapper independently. The exemplary approach of the present embodiment can verify a wrapper used on the output of a particular logic portion by applying the same wrapper to the input of a subsequent logic portion and similarly verifying the subsequently wrapped logic portion. Through the rule of associativity, the wrapper can be proved to have no influence on the functionality of the first wrapped realized logic portion.
This wrapper verification technique starts with realizing another portion of the logic model. As depicted in Fig. 7, the third logic model portion 330, which follows the second logic model portion 320, can be realized to produce a respective third realized logic portion 330'. The third realized logic portion 330' can receive a stream of product information via links 232' and 234' and provide various other signals to external devices via links 242' and 244'.
Figure 8 depicts the next steps for validating the wrapper 530. As shown in Fig. 8, the wrapper 530 can be applied to the input of the third logic model portion 330. A verification can then be performed between the wrapped third logic model portion and the respective third realized logic portion 330'. 
If the model and realized outputs are identical or otherwise sufficiently similar, the wrapper 530 may be deemed valid.
Figure 9 is a block diagram of a design/verification apparatus 900 capable of realizing and verifying a logic design according to the present invention. As shown in Fig. 9, the design/verification apparatus 900 includes a controller device 910, a memory device 920, a logic development device 930, a logic compiling device 940, a wrapping device 950, a verification device 960, a comparing device 970 and an input/output device 990 coupled together via a control/data bus 902.
While the exemplary apparatus 900 uses a bussed architecture, it should be appreciated that the apparatus 900 can be implemented using any number of architectures, such as an architecture based on fixed electronic circuits, programmable logic and the like, without departing from the invention as defined in the claims. Similarly, two or more of the devices 910-970 may be combined in one device or distributed among different devices in a manner other than that shown. For example, any or all of the devices 930-970 may be implemented as software modules residing in the memory 920 and executed by the controller 910.
In operation, the controller 910 can receive a set of first commands and data directed to logic entry and development via the input/output device 990 and link 992 and store the set of first commands and data in the memory 920. 
The memory 920, in turn, can provide the set of first commands and data to the logic development device 930 and the logic compiling device 940 in such a fashion that a logic source file and a respective logic model design can be derived according to a particular set of predetermined rules and syntax, such as that provided by VHDL or Verilog.
Once a logic model is initially developed, the logic development device 930 can conceptually divide the logic model into two or more logic model portions and then realize each model portion to provide a number of realized logic portions. The exemplary development/verification apparatus 900 uses a number of automatic software tools to realize a logic model portion. However, as discussed above, a logic model portion can be realized automatically or manually by a designer, in whole or in part, without departing from the present invention as defined in the claims.
The logic development device 930 can then provide a first realized logic portion to the wrapping device 950. The wrapping device 950, in turn, can receive the first realized logic portion and apply a specially crafted wrapper to the first realized logic portion. The wrapping device 950 can then provide the wrapped first realized logic portion to the verification device 960.
The verification device 960 can receive the wrapped first realized logic portion along with the corresponding logic model portion and perform a verification process, e.g., a formal verification, by operating on both logic portions, i.e., feeding the logic portions various input information and receiving/storing resultant output data. The resultant output data can then be provided to the comparing device 970, where the output data can be compared for correctness.
If the output data for both the model and the realized design are identical or otherwise sufficiently similar, the comparing device 970 can alert a logic designer of the realized portion's apparent correctness via the input/output device 990 and link 992. 
Otherwise, the comparing device 970 can alert the designer of any discrepancy, indicating that at least one of the first realized logic portion or its wrapper is problematic.

Assuming that there are no problems indicated, the development/verification apparatus 900 can then undertake to verify the wrapper. Accordingly, the logic development device 930 can provide any logic model portion that follows the logic model portion just tested to the wrapping device 950. The wrapping device 950, in turn, can apply the wrapper to this second logic model portion and provide the wrapped second logic model portion to the verification device 960.

The verification device 960, in turn, can receive the wrapped second logic model portion along with the corresponding second realized logic portion and perform verification operations on both logic portions. The resultant outputs can then be provided to the comparing device 970, where the outputs can be compared for correctness.

If the outputs are identical or otherwise sufficiently similar, the wrapper is verified, i.e., proved to be correct through the associativity rule, and the comparing device 970 can alert a logic designer that the wrapper is apparently correct in its design. Otherwise, the comparing device 970 can alert the designer of any discrepancy.

Figure 10 is a flowchart outlining an exemplary operation for realizing and verifying a logic design embodying the present invention. The process starts in step 1002, where a logic design is initially developed. Next, in step 1004, a logic model is derived from the logic design of step 1002. Then, in step 1006, the logic model is divided into two or more portions. As discussed above, the particular divisions that separate the various model portions can vary along any practicable lines without departing from the invention as defined in the claims. Control continues to step 1008.

In step 1008, a first logic model portion is selected.
Next, in step 1010, the selected logic model portion is realized according to any known or later developed technique. Then, in step 1012, a wrapper is applied to the selected realized logic portion. As discussed above, a wrapper can be specific to the particular design choices of a designer and, accordingly, different embodiments of a realized logic portion will require different wrappers. Control continues to step 1014.

In step 1014, a verification operation is performed on the wrapped realized logic portion to produce a realized output. As discussed above, a verification operation can be part of an overall formal verification process and can include providing a logic portion with various input information/stimulus and recording/measuring the resultant output information. Next, in step 1016, a similar verification operation is performed on the respective logic model portion to produce a model output. Then, in step 1018, the realized output and model output generated in steps 1014 and 1016, respectively, are compared to verify the realized logic portion for correctness. Control continues to step 1020.

In step 1020, a determination is made as to whether the first realized logic portion is verified, i.e., whether the realized output and model output generated and compared in steps 1014-1018 match or are deemed sufficiently similar. If the first realized logic portion is verified, control jumps to step 1038; otherwise, control continues to step 1052.

In step 1052, at least one of the logic model and wrapper is modified to troubleshoot the logic design, and control jumps back to step 1004 (or optionally step 1010 to avoid some steps that may not be necessary in some circumstances).

Otherwise, in step 1038, a second logic model portion is selected. As discussed above, the second logic model portion should follow the first logic model portion, i.e., the first portion should feed at least one signal directly to the second logic model portion.
Next, in step 1040, the second selected model portion is realized. Control continues to step 1042.

In step 1042, the wrapper applied in step 1012 is applied to the second logic model portion. Next, in step 1044, a verification operation is performed on the second realized logic portion to produce a second realized output. Then, in step 1046, a similar verification operation is performed on the respective wrapped second logic model portion to produce a second model output. Control continues to step 1048.

In step 1048, the second realized output and second model output are compared. Next, in step 1050, a determination is made as to whether the wrapper is verified, i.e., whether the second realized output and second model output match or are deemed sufficiently similar. If the wrapper is verified, control jumps to step 1054, where the process stops; otherwise, control jumps to step 1052, where at least one of the logic model and wrapper is modified to troubleshoot the logic design. Control then jumps back to step 1004, where a new logic model is derived based on the updates of step 1052. Control can continue to loop along steps 1004-1052 until at least both the first realized logic portion and the respective wrapper are verified.

In various embodiments where the above-described systems and/or methods are implemented using a programmable device, such as a computer-based system or programmable logic, it should be appreciated that the above-described systems and methods can be described by any of various known or later developed programming languages, such as "C", "C++", "FORTRAN", "Pascal", "VHDL" and the like.

Accordingly, various storage media, such as magnetic computer disks, optical disks, electronic memories and the like, can be prepared that can contain information that can direct a device to implement the above-described systems and/or methods.
Once an appropriately capable device has access to the information contained on the storage media, the storage media can provide the information to the device, thus enabling the device to perform the above-described systems and/or methods.

For example, if a computer disk containing the appropriate information, such as a source file, an object file, an executable file or the like, were provided to a computer, the computer could receive the information, appropriately configure itself and perform the functions of the various elements of Figs. 1-9 and/or the flowchart of Fig. 10 to implement the various realization and/or verification functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods and coordinate the functions of the individual systems and/or methods to realize and/or verify a logic portion and/or wrapper.

In still other embodiments, rather than providing a fixed storage medium, such as a magnetic disk, information describing the above-described systems and methods can be provided using a communication system, such as a network or dedicated communication conduit. Accordingly, it should be appreciated that various programs, executable files or other information embodying the above-described systems and methods can be downloaded to a programmable device using any known or later developed communication technique.

As shown in Figs. 1-10, the systems and methods of this invention are preferably implemented using a general purpose computer having various complementary components and peripherals.
However, the systems and methods can also be implemented using any combination of one or more general purpose computers, special purpose computers, programmed microprocessors or microcontrollers and peripheral integrated circuit elements, hardwired electronic or logic circuits such as application specific integrated circuits (ASICs) or discrete element circuits, or programmable logic devices such as PLAs, FPGAs, PALs or the like. In general, any device on which resides a finite state machine capable of implementing the various elements of Figs. 1-9 and/or the flowchart of Fig. 10 can be used to implement the realization and verification functions.

While this invention has been described in conjunction with the specific embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, the preferred embodiments of the invention as set forth herein are intended to be illustrative, not limiting. There are changes that may be made without departing from the invention as defined in the claims.
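As a closing illustration of the flowchart of Fig. 10, the two-phase loop of steps 1004-1052 can be sketched in ordinary software. This is a minimal sketch only: the callables `realize`, `wrap`, and `verify` are placeholders standing in for the synthesis tool, the wrapper-application step, and the verification run, and are assumptions rather than part of the disclosed apparatus.

```python
def verify_design(model_portions, realize, wrap, verify):
    """Two-phase check: first the realized portion, then the wrapper itself."""
    first_model, second_model = model_portions[0], model_portions[1]

    # Phase 1 (steps 1010-1020): wrap the realized first portion and compare
    # its verification output against the unwrapped model portion's output.
    first_realized = realize(first_model)
    if verify(wrap(first_realized)) != verify(first_model):
        return False  # step 1052: modify the model or the wrapper

    # Phase 2 (steps 1038-1050): apply the same wrapper to a *following* model
    # portion and compare against that portion's realization, proving the
    # wrapper correct through the associativity rule.
    second_realized = realize(second_model)
    if verify(wrap(second_model)) != verify(second_realized):
        return False  # wrapper suspect; step 1052 again

    return True  # step 1054: both the realized portion and the wrapper verified
```

In practice the two `verify(...)` calls on each side would be the realized-output and model-output comparisons of steps 1014-1018 and 1044-1048; the sketch reduces them to value equality for brevity.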
Embodiments of the invention describe semiconductor devices with high aspect ratio fins and methods for forming such devices. According to an embodiment, the semiconductor device comprises one or more nested fins and one or more isolated fins. According to an embodiment, a patterned hard mask comprising one or more isolated features and one or more nested features is formed with a hard mask etching process. A first substrate etching process forms isolated and nested fins in the substrate by transferring the pattern of the nested and isolated features of the hard mask into the substrate to a first depth. A second etching process is used to etch through the substrate to a second depth. According to embodiments of the invention, the first etching process utilizes an etching chemistry comprising HBr, O<sub>2</sub> and CF<sub>4</sub>, and the second etching process utilizes an etching chemistry comprising Cl<sub>2</sub>, Ar, and CH<sub>4</sub>. |
1. A semiconductor structure, comprising: a monocrystalline silicon substrate; a nested grouping of silicon fins extending from the monocrystalline silicon substrate through an isolation layer, the nested grouping of silicon fins comprising: a first silicon fin having a top and laterally opposite sidewalls and having a shape, a width, a height, and a height to width aspect ratio, wherein the width is less than 15 nanometers, the height is greater than 100 nanometers, and wherein the height to width aspect ratio is greater than 10:1; a second silicon fin having a top and laterally opposite sidewalls and having the shape, the width, the height and the height to width aspect ratio; a third silicon fin having a top and laterally opposite sidewalls and having the shape, the width, the height and the height to width aspect ratio; and a fourth silicon fin having a top and laterally opposite sidewalls and having the shape, the width, the height and the height to width aspect ratio, wherein the fourth silicon fin is laterally directly adjacent the third silicon fin at a first spacing, wherein the third silicon fin is laterally directly adjacent the second silicon fin at the first spacing, and wherein the second silicon fin is laterally directly adjacent the first silicon fin at the first spacing; and an isolated silicon fin extending from the monocrystalline silicon substrate through the isolation layer, the isolated silicon fin having the shape, the width, the height and the height to width aspect ratio, and the isolated silicon fin laterally directly adjacent the first silicon fin at a second spacing greater than 1.5 times the first spacing.

2. A semiconductor device comprising: a silicon substrate; an isolation layer disposed on the silicon substrate; one or more nested silicon fins having a first width extending from the silicon substrate through the isolation layer; and one or more isolated silicon fins having a second width extending from the silicon substrate through the isolation layer, wherein the second width is equal to the first width.

3. The semiconductor device of claim 2, wherein the aspect ratio of the isolated and nested silicon fins is greater than 10:1.

4. The semiconductor device of claim 2, wherein the nested silicon fins have a pitch of 42 nm or less.

5. The semiconductor device of claim 2, wherein the first width and second width are less than 15 nm.

6. A method for forming high aspect ratio fins comprising: forming a patterned hard mask with a hard mask etching process, wherein the patterned hard mask comprises one or more isolated features and one or more nested features; etching through a substrate disposed below the patterned hard mask to a first depth with a first substrate etching process, wherein the first substrate etching process transfers the isolated features and the nested features of the patterned hard mask into the substrate to form one or more isolated fins and one or more nested fins, wherein a width of the one or more isolated fins is greater than a width of the one or more nested fins; and etching through the substrate to a second depth with a second substrate etching process that is different than the first substrate etching process.

7. The method of claim 6, wherein a first substrate etch chemistry utilized in the first substrate etching process provides a greater lateral passivation rate for the isolated fins than for the nested fins, and wherein a second substrate etch chemistry utilized in the second substrate etching process provides a greater lateral etch rate for the isolated fins than for the nested fins.

8. The method of claim 7, wherein the first etch chemistry comprises HBr, O2 and CF4.

9. The method of claim 7, wherein the second etch chemistry comprises Cl2, Ar, and CH4.

10. The method of claim 6, wherein the hard mask etching process further utilizes a chemistry comprising a greater concentration of hydrogen than a concentration of oxygen.

11. The method of claim 10, wherein the chemistry utilized for the hard mask etching process comprises a hydrogen to oxygen ratio between approximately 2.5:1 and 3.5:1.

12. The method of claim 10, wherein the hard mask etching process utilizes a chemistry comprising CH3F.

13. The method of claim 6, wherein the hard mask etching process further comprises varying a flow rate of the gases used in the hard mask etching process across the surface of the hard mask layer, wherein the flow rate of the gases used in the hard mask etching process is lower proximate to an edge of the hard mask layer relative to the flow rate of the gases used in the hard mask etching process proximate the center of the hard mask layer.

14. The method of claim 6, wherein the hard mask etching process further comprises maintaining a total pressure inside a processing chamber between 24 mTorr and 28 mTorr.

15. The method of claim 6, wherein the first depth is between 70 nm and 100 nm and the second depth is between 130 nm and 170 nm.
FIELD OF THE INVENTION

Embodiments of the present invention relate generally to the manufacture of semiconductor devices. In particular, embodiments of the present invention relate to methods for forming high aspect ratio fin-based structures.

BACKGROUND AND RELATED ARTS

As microprocessors become faster and smaller, integrated circuitry (IC) becomes more complex and components become more densely packed. The use of non-planar fin based transistor devices has enabled increased performance with a smaller device footprint. Fins that are substantially rectangular in shape have improved short channel effects compared to fins with trapezoidal or triangular shapes. This leads to higher performance for a given voltage overdrive. Rectangular fins also enable consistent device performance across the fin height with no degradation in current.

However, as the aspect ratio of transistor devices continues to increase, the challenge of maintaining uniform widths and rectangular cross-sections of the fins across the substrate becomes more difficult. Specifically, when the critical dimension (CD) and pitch of the devices decrease, micro loading effects become a significant problem. Micro loading effects occur when the CD and pitch of the fins are small enough to create different active ion accessibility at the surface of the substrate during an etching process. This results in a structurally dependent etch bias due to localized enhanced etching or plasma deposition. Additionally, the micro loading effect becomes a more significant problem when the pitch between fin based structures is non-uniform. As an example, when nested fins and isolated fins are formed with a single etching process, the widths of the nested fins will not be equal to the widths of the isolated fins, because the micro loading effect will be different for each type of fin. Accordingly, it becomes increasingly difficult to design circuitry that includes fin based transistor devices that require non-uniform spacing.
As a result of the different pitches, nested fins will have different metrics, such as leakage current and threshold voltage, than isolated fins, even though both fins are designed to perform equivalently.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates a flow diagram of a method of forming high aspect ratio fin based semiconductor devices according to an embodiment of the invention.

Figures 2A-2D illustrate cross-sectional views of a high aspect ratio fin based semiconductor device after different processes according to an embodiment of the invention.

Figure 3A illustrates a cross-sectional view of a high aspect ratio fin based semiconductor device according to an embodiment of the invention.

Figure 3B illustrates a cross-sectional view of a high aspect ratio fin based semiconductor device comprising transistor devices according to an embodiment of the invention.

Figure 4 illustrates a schematic view of a computing device that may utilize a high aspect ratio fin based semiconductor device according to embodiments of the invention.

DETAILED DESCRIPTION

Embodiments of the invention prevent micro loading effects from causing a significant difference in the widths of isolated fins and nested fins. Embodiments of the invention utilize multiple substrate etching processes to produce uniform fin widths with rectangular cross sections in both nested and isolated fin structures formed on the same substrate. Uniform fin width allows for the use of multi-fin devices that have uniform metrics, such as threshold voltage and leakage current, in the nested and isolated fin structures. Furthermore, uniform width in isolated and nested fins allows for the use of isolated fins in circuitry, such as an IC device.

Embodiments of the invention include a hard mask patterning process that transfers the fin shapes formed in a dummy hard mask into a hard mask layer.
In order to maintain uniform fin widths between isolated and nested fins while transferring the shape of the fins into the hard mask, the hard mask etching process utilizes an etching chemistry with a high ratio of hydrogen to oxygen. According to an embodiment, the increased hydrogen concentration is obtained by utilizing an etching chemistry comprising CH3F. After the hard mask layer is patterned, embodiments of the invention utilize a breakthrough etch in order to remove portions of an etch stop layer above the substrate in which the fins will be formed.

Embodiments of the invention may also include multiple substrate etching processes in order to provide uniform fin width for the high aspect ratio fins. A first substrate etching process etches the substrate to a first depth. Embodiments of the invention include fin based devices with a first depth between 80 nm and 90 nm. Embodiments of the first etching process utilize a chemistry that passivates the sidewalls to preserve the fin width. By way of example, the first etching process may utilize a chemistry comprising HBr, O2 and CF4. In an embodiment, the first substrate etching process may have a lateral passivation rate that is greater for isolated fins than the lateral passivation rate for nested fins. As such, embodiments of the invention include a first substrate etching process that may result in the nested fins having a smaller width than the width of the isolated fins. Accordingly, embodiments of the invention may utilize a second etching process to equalize the widths of the isolated fins and the nested fins. The second etching process may equalize the widths of the fins by utilizing an etching chemistry that has a lateral etch rate that is greater for isolated fins than the lateral etch rate for nested fins. Embodiments of the invention utilize a chemistry comprising Cl2, Ar, and CH4 for the second substrate etching process.
During the second etching process, the substrate is etched to a second depth. Embodiments of the invention may include a second depth that is between 120 nm and 160 nm.

According to embodiments of the invention, the aspect ratio of the fins is greater than 10:1. Furthermore, the high aspect ratio fins of certain embodiments of the present invention include fins that have a pitch of 42 nm and below and a CD of 15 nm and below. Additionally, embodiments include fin based devices that have one or more nested fins and one or more isolated fins.

Figure 1 is a flow diagram that illustrates a method 140 of forming high aspect ratio fins with uniform widths according to an embodiment of the invention. Cross-sectional views of the fin based device 100 shown in Figures 2A-2D are used in conjunction with Figure 1 to illustrate a method of forming uniform high aspect ratio fins according to an embodiment of the invention.

Referring now to Figure 1, the method of forming high aspect ratio fins 140 may begin at block 150 according to an embodiment. At block 150, a masking stack 110 is formed over a semiconductor substrate. Figure 2A is a cross-sectional view of substrate 101 after a masking stack 110 has been disposed over its top surface. According to embodiments, the masking stack 110 may comprise a dummy hard mask 104, a hard mask layer 103, and an etch stop layer 102, as shown in Figure 2A.

According to an embodiment, dummy hard mask 104 may include one or more isolated features 105 and one or more nested features 106. Isolated features 105 are disposed above portions of the substrate 101 where isolated fins 111I will be formed during subsequent processing, and nested features 106 are disposed above portions of the substrate 101 where nested fins 111N will be formed during subsequent processing.
According to an embodiment, the dummy hard mask 104 may be composed of a typical masking material, such as an oxide.

According to embodiments of the invention, the width WD of the isolated and nested features 105, 106 is chosen to be larger than the desired fin widths of the nested and isolated fins. Forming isolated and nested features 105, 106 with a width WD greater than the desired width of the fins 111 allows subsequent etching processes to have a non-zero lateral etch rate that reduces the width of the fins. According to an embodiment of the invention, the width WD of the features 105, 106 is less than 20 nm. Embodiments of the invention may also include a dummy hard mask 104 with features 105, 106 that have a width WD less than 15 nm.

According to embodiments, a multiple patterning process may be used to form the dummy hard mask 104. A multiple patterning process may be desirable when the pitches P and PI between features are sufficiently small that the resolution of lithography techniques is insufficient to pattern the dummy hard mask. Embodiments of the invention include a double patterning process in which spacers are formed on the sidewalls of pre-patterned features, as is known in the art. According to an embodiment, the spacers may be an oxide material and the pre-patterned features may be a polysilicon material. According to an embodiment, the pre-patterned features may be formed with a lithography process known in the art, such as photolithography. The spacers may be formed by disposing a layer of material, such as an oxide, over the pre-patterned features and the exposed surfaces of the hard mask layer 103. An anisotropic spacer etching process may then be used to remove the oxide material disposed on the horizontal surfaces of the exposed hard mask layer 103 and the pre-patterned features, leaving only spacers disposed on the sidewalls of the pre-patterned features.
The pre-patterned features may be selectively removed, thereby leaving only the spacers behind. The pitch between each of the spacers may be adjusted by changing the width of the pre-patterned material.

According to an embodiment, the remaining spacers may be used as the isolated features 105 and the nested features 106 that form the dummy hard mask 104. According to an additional embodiment, the double patterning process may be repeated one or more times, with the final remaining set of spacers being utilized as the isolated and nested features 105, 106 of the dummy hard mask 104.

According to an embodiment, the dummy hard mask 104 is formed from a material that is resistant to an etching process that will selectively etch through the hard mask layer 103 that is disposed below it, as shown in Figure 2A. According to an embodiment, the dummy hard mask 104 may be an oxide material, such as silicon dioxide. In an embodiment, the hard mask layer 103 is a material that is resistant to an etchant that will selectively etch the substrate 101.

According to an embodiment, the hard mask layer 103 is a nitride. Certain embodiments include a hard mask layer 103 that is a thermally grown nitride, such as Si3N4. Embodiments of the invention have a hard mask layer 103 with a thickness between 40 nm and 60 nm. Additional embodiments of the invention include forming the hard mask layer 103 with processes such as chemical vapor deposition (CVD), physical vapor deposition (PVD), or atomic layer deposition (ALD).

As shown in Figure 2A, embodiments of the invention may include a hard mask layer 103 that is disposed above an etch stop layer 102. The etch stop layer may be a suitable oxide layer, such as a silicon dioxide layer. Embodiments of the invention may include a thermally grown oxide layer that is less than 10 nm thick.
Additional embodiments have an etch stop layer 102 that is a thermally grown silicon dioxide layer approximately 7 nm thick. Embodiments of the invention may also include forming the etch stop layer 102 with processes such as CVD, PVD, or ALD.

According to an embodiment, the etch stop layer 102 is disposed on a top surface of the semiconductor substrate 101, as shown in Figure 2A. According to an embodiment of the invention, semiconductor substrate 101 may be composed of a material suitable for semiconductor device fabrication, such as a monocrystalline silicon substrate or an SOI substrate.

Referring back to Figure 1, the method of forming high aspect ratio fins 140 proceeds to block 160. At block 160, a hard mask etching process is implemented to etch through the hard mask layer 103. According to an embodiment of the invention, the hard mask etching process utilizes the dummy hard mask 104 as a mask in order to transfer the isolated and nested features 105, 106 into the hard mask layer 103 to form isolated hard mask features 107 and nested hard mask features 108. Accordingly, the isolated hard mask features 107 and the nested hard mask features 108 are aligned with the isolated and nested dummy hard mask features 105 and 106, respectively. Figure 2B is an illustration of the hard mask layer 103 after it has been patterned with a hard mask etching process in order to form the isolated hard mask features 107 and the nested hard mask features 108 according to an embodiment of the invention.

Due to the variability in the micro loading effects resulting from the non-uniform pitch, the hard mask etching process must be controlled to ensure that the lateral etching rates of the isolated features 107 and the nested features 108 are uniform. The lateral etching rate of the hard mask etching process is dependent on the passivation of the sidewalls and the rate at which the active species from the plasma can etch away the hard mask material.
The variable pitch across the substrate 101 results in some fins being more accessible to the active species, thereby causing those fins to etch faster. Additionally, the polymer deposition rate along the sidewalls of the fins is also dependent on pitch. Accordingly, without control of the polymer deposition, the widths of the isolated features and nested features may be non-uniform as a result of different lateral etch rates.

In a fluorine based plasma, increases in the concentration of hydrogen in the plasma result in an increase in the rate of polymerization. Increased polymerization improves the passivation of the sidewalls of the hard mask fins that are formed during the hard mask etching process. The additional hydrogen present in the plasma scavenges fluorine from the plasma and results in a more carbon-rich plasma. The excess carbon in the plasma is able to form nonvolatile molecules that passivate the surfaces and prevent etching. The passivation layer forms primarily on the sidewalls because the portions of the passivation layer that are disposed on horizontal surfaces are removed by ion bombardment. Accordingly, the increase in polymerization will increase the sidewall passivation and improve the anisotropic nature of the etching chemistry. The improvement in the anisotropic nature of the etching process improves the uniformity between the width of the isolated hard mask features WHM-I and the width of the nested hard mask features WHM-N.

However, increases in the concentration of hydrogen in the plasma also result in a decrease in the etch selectivity of the hard mask layer 103 over the dummy hard mask 104, according to embodiments with a nitride hard mask layer 103 and an oxide dummy hard mask 104. Since the presence of excess hydrogen scavenges fluorine, the fluorine concentration drops. At lower concentrations of fluorine, the etch rates of the nitride hard mask layer 103 and the oxide dummy hard mask 104 become less selective to each other.
Accordingly, oxygen can be added into the plasma to counteract this effect. When there is an increase in the oxygen content of the plasma, the oxygen scavenges carbon atoms to produce volatile CO and CO2, which can be pumped out of the chamber. As such, the fluorine concentration of the plasma is increased, and the additional reactive ions increase the etch rate of the nitride hard mask layer 103 more than they increase the etch rate of the oxide dummy hard mask 104. Therefore, in order to transfer the pattern of the dummy hard mask 104 into the hard mask layer 103 without the micro loading effects producing different widths for the isolated and nested features, a proper ratio of hydrogen to oxygen must be maintained within the plasma.

Under typical etching conditions, such as an etching chemistry that utilizes CHF3 as the fluorine source, the micro loading effects generally cause the width of the nested hard mask fins 108 to be smaller than the width of the isolated hard mask fins 107. Accordingly, the amount of passivation on the sidewalls of the nested hard mask fins is less than the amount of passivation on the sidewalls of the isolated hard mask fins. This problem may be overcome by providing an etching chemistry that increases the sidewall passivation. Therefore, embodiments of the invention utilize an etching chemistry comprising a higher concentration of hydrogen than the concentration of oxygen. Embodiments may utilize gases such as CH3F or CH2F2 in order to increase the hydrogen concentration of the plasma relative to etching chemistries that utilize CHF3 as the fluorine source. As explained above, the increase in hydrogen causes fluorine to be scavenged from the plasma and allows for an increase in the carbon concentration.
The increased carbon concentration increases the amount of passivation on the sidewalls. However, it should be noted that if the hydrogen concentration is increased too much, then the opposite effect on the widths of the features 107, 108 will be seen. In these instances, the nested features 108 will have a lower lateral etch rate than the lateral etch rate of the isolated features 107, because the passivation rate of the nested features will increase. This will result in thicker nested features 108 and thinner isolated features 107. Therefore, in order to balance the etching rates and produce uniform widths WHM-I and WHM-N, it is desirable to balance the increase in the hydrogen content by also incorporating oxygen into the plasma.

According to embodiments of the invention, uniform widths WHM-I and WHM-N for the isolated and nested features 107, 108 may be obtained when the ratio of hydrogen to oxygen (H:O) in the plasma is maintained between approximately 2.5:1 and 3.5:1. In order to achieve the hydrogen to oxygen ratios described by embodiments of the invention, a gas mixture including O2, Ar, and CH3F may be used, where the flow rate of the O2 is between approximately 70 sccm and 100 sccm, the flow rate of the CH3F is between approximately 150 sccm and 200 sccm, and the flow rate of the Ar is between approximately 50 sccm and 150 sccm. Embodiments of the invention utilize a total pressure between 24 mTorr and 28 mTorr in the processing chamber during the hard mask etching process. Additional embodiments of the invention may utilize a total pressure of approximately 26 mTorr in the processing chamber during the hard mask etching process.

Embodiments also include utilizing different process gas flow rates across the surface of the substrate during processing. Embodiments include a process gas flow rate that is higher proximate to the center of the substrate relative to the flow rate proximate to the edge of the substrate.
According to an embodiment of the invention, the ratio of the center gas flow rate to the edge gas flow rate is approximately 60:40. By way of example, and not by way of limitation, if the O2 flow rate is 100 sccm total, then the center O2 flow rate may be 60 sccm and the edge O2 flow rate may be 40 sccm.

Additional embodiments of the invention also control the widths WHM-I and WHM-N of the hard mask features 107, 108 by controlling the temperature of the chuck that supports the substrate during the hard mask etching process. Embodiments of the invention include maintaining the temperature of the chuck between 35°C and 40°C during the hard mask etching process. Additional embodiments include maintaining the temperature of the chuck at approximately 37°C during the hard mask etching process.

Referring back to Figure 1, the method of forming high aspect ratio fins 140 proceeds to block 170, where a break through etching process is performed according to embodiments of the invention. The break through etching process selectively removes portions of the etch stop layer 102 between the hard mask features 107, 108 in order to expose the top surface of the semiconductor substrate 101. According to an embodiment of the invention, the break through etching process may include a chemistry comprising CF4, Cl2, and an Ar-CH4 mixture. By way of example, and not by way of limitation, the CF4 may have a flow rate of approximately 15 sccm, the Cl2 may have a flow rate of approximately 65 sccm, and the Ar-CH4 mixture may be approximately 4% CH4 and have a flow rate of approximately 70 sccm. According to an embodiment, the total pressure during the break through etching process may be approximately 4.5 mTorr.

After the break through etching process has been performed, the method of forming the high aspect ratio fins 140 proceeds to block 180, where a first substrate etching process is performed to etch into the substrate 101 to a first depth D1 according to an embodiment of the invention.
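Before turning to the substrate etches, the hard mask etch gas-flow arithmetic described above can be sketched numerically. The sketch below is illustrative only and is not part of the disclosed process: it assumes the atomic H:O ratio of the feed gas can be estimated by counting three hydrogen atoms per CH3F molecule and two oxygen atoms per O2 molecule (treating Ar as inert), and it applies the 60:40 center-to-edge split to a total O2 flow.

```python
def h_to_o_ratio(ch3f_sccm, o2_sccm):
    """Estimate the atomic H:O ratio of the feed gas.

    Assumes 3 H atoms per CH3F molecule and 2 O atoms per O2
    molecule; Ar is inert and ignored. Illustrative only.
    """
    return (3 * ch3f_sccm) / (2 * o2_sccm)


def center_edge_split(total_sccm, center_fraction=0.60):
    """Split a total gas flow into center and edge flows (60:40)."""
    center = total_sccm * center_fraction
    return center, total_sccm - center


# Mid-range flows from the recipe: 175 sccm CH3F, 85 sccm O2.
ratio = h_to_o_ratio(175, 85)
assert 2.5 <= ratio <= 3.5  # falls inside the claimed H:O window

# 100 sccm total O2 splits into 60 sccm (center) and 40 sccm (edge).
center, edge = center_edge_split(100)
```

Note that this simple atom-counting model only lands inside the 2.5:1 to 3.5:1 window for mid-range flow combinations; it ignores dissociation and scavenging in the plasma, which the disclosure identifies as the real drivers.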
As shown in Figure 2C, the first depth D1 is measured from the top surface of the substrate 101 to the bottom of the trench between each of the fins 111. Embodiments of the invention include a first depth D1 that is between 70 nm and 100 nm. Embodiments of the invention also include a first depth D1 that is between 80 nm and 90 nm. According to an embodiment of the invention, the etching process is highly anisotropic and the widths WI and WN of the isolated and nested fins are substantially preserved. However, micro loading effects present due to the smaller pitch of the nested fins 111N may produce differences in the fin widths WN and WI between the nested fins 111N and the isolated fins 111I. Therefore, embodiments of the invention utilize an etching chemistry comprising HBr, O2 and CF4 to minimize this effect. According to an embodiment of the invention, the HBr may have a flow rate of approximately 200 sccm, the O2 may have a flow rate of approximately 3.3 sccm, and the CF4 may have a flow rate of approximately 15 sccm. According to an embodiment of the invention, the total pressure during the first substrate etching process may be approximately 3.1 mTorr. The O2 functions as a passivating agent that improves the polymerization of the sidewalls. Even though the sidewalls are passivated by the O2, the sidewalls of the nested fins etch at a faster rate than the sidewalls of the isolated fins, because the lateral passivation rate is greater for the isolated fins 111I than the lateral passivation rate for the nested fins 111N. By way of example, and not by way of limitation, the isolated fins may be approximately 3 nm thicker after the first substrate etching process.

Referring back to Figure 1, after the first depth D1 has been reached, the method for forming high aspect ratio fins 140 then proceeds to block 190, where a second substrate etching process is implemented according to an embodiment of the invention.
According to an embodiment, the second substrate etching process etches through the substrate 101 to a second depth D2 from the top surface of the substrate, as shown in Figure 2D. Embodiments of the invention include a second depth that is between 130 nm and 170 nm. Embodiments of the invention also include a second depth that is between 140 nm and 160 nm. In addition to providing the desired depth, the second substrate etching process also equalizes the widths WN, WI of the nested fins 111N and the isolated fins 111I. According to embodiments, the second substrate etching process equalizes the widths WN and WI by utilizing an etching chemistry that has a slower lateral etch rate for the nested fins 111N than the lateral etch rate for the isolated fins 111I. Embodiments of the invention utilize an etching chemistry comprising Cl2, Ar, and CH4. Embodiments of the invention utilize a process gas flow rate that provides a greater concentration of Cl2 compared to the concentration of the Ar and CH4 in order to ensure that the sidewalls of the nested fins 111N are etched at a slower rate than the sidewalls of the isolated fins 111I. The isolated fins 111I are more accessible to the chlorine species, and as such, they have a greater lateral etch rate. Embodiments of the invention utilize a flow rate of approximately 100 sccm for the Cl2 and approximately 28 sccm for the combination of Ar and CH4 in order to maintain the proper ratio of Cl2 to Ar/CH4.
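As a rough numerical cross-check of the two substrate etch recipes described above, the flow fraction of each gas can be computed from the approximate flow rates. The helper below is an illustrative sketch, not part of the disclosed process; the grouping of Ar and CH4 into one entry mirrors how the disclosure states their combined flow.

```python
def flow_fractions(flows_sccm):
    """Return each gas's fraction of the total flow (illustrative)."""
    total = sum(flows_sccm.values())
    return {gas: flow / total for gas, flow in flows_sccm.items()}


# First substrate etch: HBr/O2/CF4 at the approximate disclosed flows.
first = flow_fractions({"HBr": 200, "O2": 3.3, "CF4": 15})

# Second substrate etch: Cl2 at 100 sccm versus 28 sccm of Ar+CH4,
# i.e. a Cl2-dominant mixture (roughly 78% of the total flow).
second = flow_fractions({"Cl2": 100, "Ar+CH4": 28})
assert second["Cl2"] > second["Ar+CH4"]
```

The check confirms the qualitative statement above: the second recipe is strongly Cl2-rich, which is what slows the lateral etch of the nested fins relative to the isolated fins.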
The total pressure of the processing chamber may be maintained between approximately 1 mTorr and 2 mTorr.

As noted above, the first substrate etching process may passivate the sidewalls of the isolated fins 111I faster than the sidewalls of the nested fins 111N, and the second substrate etching process may etch the sidewalls of the isolated fins 111I faster than the sidewalls of the nested fins 111N. Accordingly, if the first depth D1 is chosen to be too shallow, then the fins may have an undercut, because the second substrate etching process will etch the sidewalls for a longer period before the second depth D2 is reached. Alternatively, if the first depth D1 is chosen to be too deep, then the fins may have a footing. The presence of a footing may result from there not being sufficient time to allow the fins 111 to have their sidewalls etched to the proper thickness before the second depth D2 is reached. Therefore, according to various embodiments, the first depth D1 is chosen to be between 70 nm and 100 nm in order to ensure that the fins 111 have widths WI and WN that are substantially equal to each other.

An additional embodiment of the invention further controls the uniformity of the widths WI and WN of the high aspect ratio fins by controlling the RF power source of the plasma etching chamber during the first and second substrate etching processes. According to an embodiment, the RF power source is pulsed during the first and second substrate etching processes. Pulsing the RF power source allows for improved control of the desired anisotropic behavior of the etching processes. During the formation of the high aspect ratio fins 111, the reactive etchant species may be quickly depleted at the bottom of the trenches between the fins 111. Pulsing the RF power source allows more reactive etchant species to reach the bottom of the trench and prevents micro-trenching. The etchant species are drawn down into the trench when the RF power source is on.
When the RF power source is off, the by-products from the etching process are able to escape from the trench. Accordingly, the reactant species at the bottom surface of the trench do not become depleted. According to an embodiment of the invention, the RF power is pulsed with a duty cycle that includes the RF power being on between 7-13% of the time and off for the remainder of the time, at a frequency between approximately 100 Hz and 500 Hz. According to an embodiment of the invention, the duty cycle and frequency used for the first substrate etching process may be different than the duty cycle and frequency used for the second substrate etching process.

According to another embodiment of the invention, the temperature of the chuck supporting the substrate may also be controlled during the first and second substrate etching processes in order to improve the uniformity of the width of the fins across the surface of the substrate. The fins that are proximate to the edge of the substrate typically experience different etch rates than the fins proximate to the center of the substrate. Accordingly, the temperature across the substrate may be varied to account for these differences. According to an embodiment of the invention, the temperature of the chuck supporting the substrate is maintained at a higher temperature proximate to the center of the substrate relative to the temperature of the chuck proximate to the edge of the substrate. According to an embodiment, the temperature of the chuck proximate to the center of the substrate may be maintained at a temperature that is approximately 20°C greater than the temperature of the chuck proximate to the edge of the substrate.
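Returning to the pulsed RF scheme described above, the duty cycle and frequency figures translate directly into on/off times within each pulse period. The sketch below is illustrative only; the 200 Hz frequency and 10% duty cycle are single points chosen from inside the disclosed ranges (100-500 Hz, 7-13% on-time), not preferred values from the disclosure.

```python
def pulse_times_ms(freq_hz, duty_on):
    """Return (on_time_ms, off_time_ms) for one RF pulse period.

    duty_on is the fraction of the period the RF power is on
    (7-13% in the disclosed range). Illustrative helper only.
    """
    period_ms = 1000.0 / freq_hz
    on_ms = period_ms * duty_on
    return on_ms, period_ms - on_ms


# At 200 Hz with a 10% duty cycle, each 5 ms period has the RF
# power on for 0.5 ms and off for 4.5 ms, leaving most of the
# period for etch by-products to escape the trench.
on_ms, off_ms = pulse_times_ms(200, 0.10)
```

At the range extremes the same arithmetic gives on-times from roughly 0.14 ms (500 Hz, 7%) to 1.3 ms (100 Hz, 13%) per period.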
According to an embodiment of the invention, the chuck may be maintained at approximately 30°C proximate to the center of the substrate, and the chuck may be maintained at approximately 10°C proximate to the edge of the substrate.

In an additional embodiment of the invention, the uniformity of the fins formed across a substrate is further improved by controlling the plasma density during the first and second substrate etching processes. As used herein, plasma density refers to the density of the ions and radicals present in the plasma. By way of example, a high density plasma would have a greater concentration of ions and radicals per unit area than a low density plasma. In order to account for differences in the etch rates across the surface of the substrate, the plasma density may be varied above different portions of the substrate. The plasma density may be varied by altering the magnetic field of the plasma processing chamber. According to an embodiment of the invention, the plasma density above the center of the substrate may be higher than the plasma density above the edge of the substrate. According to an embodiment of the invention, the plasma density may be approximately 5% to 8% higher above the center of the substrate.

Referring now to Figure 3A, a cross-sectional view of a high aspect ratio fin based semiconductor device 100 formed in accordance with embodiments of the invention is shown. Fin based device 100 includes a plurality of fins 111 formed on a semiconductor substrate 101. According to embodiments of the invention, semiconductor substrate 101 may be composed of a material suitable for semiconductor device fabrication. In an embodiment, the semiconductor substrate 101 is a monocrystalline silicon substrate. In an embodiment, the structure is formed using a bulk semiconductor substrate. Substrate 101 may also be, but is not limited to, germanium, silicon-germanium, or a III-V compound semiconductor material.
In another embodiment, the structure is formed using a silicon-on-insulator (SOI) substrate.

Fins 111 are high aspect ratio fins. According to an embodiment, the high aspect ratio fins may have a height to width aspect ratio of 5:1 or greater. According to additional embodiments of the invention, the aspect ratio may be 10:1 or greater. Embodiments of the invention may include fins with heights H that extend 100 nm or more above the substrate 101. Further embodiments of the invention may include fins with heights H that are 150 nm or greater. Additional embodiments of the invention include fin widths W that are less than 25 nm. Embodiments of the invention further include fin widths that are less than 15 nm.

As shown in Figure 3A, embodiments of the invention include one or more isolated fins 111I and one or more nested fins 111N. According to embodiments of the invention, a nested fin 111N is a fin that has neighboring fins 111 formed close enough to have an effect on the etching rate (in the lateral and/or vertical direction) of the nested fin 111N. By way of example, and not by way of limitation, neighboring fins may alter the etch rate of a fin by producing different active ion accessibility at the surface of the substrate during an etching process, or by changing the polymer deposition rate along the sidewalls of the fin. According to an embodiment of the invention, a group of nested fins may have a uniform pitch. Alternatively, a group of nested fins may have a non-uniform pitch, so long as the fins are spaced close enough together to affect the etching rate of neighboring fins. According to embodiments of the invention, an isolated fin 111I is a fin that does not have neighboring fins formed close enough to have an effect on the etching rate of the isolated fin 111I.
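The fin geometry metrics above lend themselves to a simple numeric check. The sketch below is illustrative only: the aspect ratio thresholds come from the text, but the nested/isolated classification uses an assumed nearest-neighbor distance cutoff, since the disclosure defines nesting functionally (by whether neighbors affect the etch rate) rather than by a fixed spacing.

```python
def aspect_ratio(height_nm, width_nm):
    """Height-to-width aspect ratio of a fin."""
    return height_nm / width_nm


def is_high_aspect_ratio(height_nm, width_nm, minimum=5.0):
    """True if the fin meets the 5:1 (or stricter) aspect ratio."""
    return aspect_ratio(height_nm, width_nm) >= minimum


# A 150 nm tall, 15 nm wide fin exceeds even the 10:1 threshold.
assert is_high_aspect_ratio(150, 15, minimum=10.0)


def classify_fins(positions_nm, neighbor_threshold_nm=60.0):
    """Label each fin 'nested' or 'isolated' by nearest-neighbor spacing.

    neighbor_threshold_nm is an assumed illustrative cutoff, not a
    value from the disclosure.
    """
    labels = []
    for i, p in enumerate(positions_nm):
        gaps = [abs(p - q) for j, q in enumerate(positions_nm) if j != i]
        labels.append("nested" if min(gaps) <= neighbor_threshold_nm else "isolated")
    return labels


# Three fins at a 40 nm pitch, plus one fin 120 nm from the group.
labels = classify_fins([0, 40, 80, 200])
# -> ['nested', 'nested', 'nested', 'isolated']
```

Under this heuristic a fin placed 120 nm from its nearest neighbor classifies as isolated while the 40 nm pitch group classifies as nested, consistent with the example pitches given in the disclosure.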
As shown in the embodiment depicted in Figure 3A, the nested fins are formed with a pitch PN, and the isolated fin is formed with a pitch PI. According to an embodiment of the invention, PI is at least one and a half times as large as PN. By way of example, and not by way of limitation, PN may be approximately 40 nm and PI may be approximately 120 nm. According to embodiments of the invention, the outermost fins of a set of nested fins, such as fin 113 in Figure 3A, may be considered semi-nested. As such, the sidewall proximate to the nested fins 111N has similar etching characteristics to the nested fins, and the sidewall proximate to the isolated fin 111I has similar etching characteristics to the isolated fins.

According to embodiments of the invention, the isolated fins 111I and the nested fins 111N are substantially similar to each other, with the exception of their spacing from adjacent fins 111. As such, the heights H of the isolated and nested fins may be substantially similar according to an embodiment of the invention. Furthermore, the widths WI of the isolated fins are substantially similar to the widths WN of the nested fins. The uniform shape and width of the isolated and the nested fins 111I, 111N allow for the use of multi-fin devices that have uniform metrics, such as threshold voltage and leakage current. As such, uniform width in the nested and isolated fins 111N, 111I allows for the use of isolated fins 111I in circuitry, such as an IC device.

Referring now to Figure 3B, an embodiment of the invention including one or more transistor devices formed on the isolated and nested fins 111I and 111N is shown. According to an embodiment of the invention, the transistor devices may include fin-FET devices, such as a trigate device, formed on the fins 111. As shown in Figure 3B, a shallow trench isolation (STI) layer 130 is disposed above the substrate 101 and between the fins 111.
According to an embodiment of the invention, the STI layer 130 may be a silicon dioxide, or the like, as is known in the art. A gate dielectric 131 may be disposed over the portions of the fins 111 that extend above the STI layer 130. According to an embodiment, a gate metal 132 may be disposed over each fin 111. As shown in Figure 3B, an embodiment of the invention may include a single block of gate metal 132 disposed over the nested fins 111N. The gate metal 132 over the isolated fin 111I is isolated from other gates according to an embodiment of the invention. Therefore, the transistor device formed on the isolated fin 111I can be controlled independently of the nested fins according to an embodiment of the invention. Though not shown in the cross-sectional view of Figure 3B, those skilled in the art will recognize that source/drain (S/D) regions may be formed in the fins 111 on opposing sides of the gate metal (i.e., into the plane of the page and out of the plane of the page). According to an embodiment, the fins 111 may be suitably doped with n-type and/or p-type dopants in order to form n-MOS and/or p-MOS devices.

Furthermore, those skilled in the art will recognize that the high aspect ratio fins described according to embodiments of the present invention are not limited to use with electrical devices and may also be utilized in nanostructures such as those used in nanoelectromechanical systems (NEMS).

Figure 4 illustrates a computing device 400 in accordance with one implementation of the invention. The computing device 400 houses a board 402. The board 402 may include a number of components, including but not limited to a processor 404 and at least one communication chip 406. The processor 404 is physically and electrically coupled to the board 402. In some implementations the at least one communication chip 406 is also physically and electrically coupled to the board 402.
In further implementations, the communication chip 406 is part of the processor 404.

Depending on its applications, computing device 400 may include other components that may or may not be physically and electrically coupled to the board 402. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

The communication chip 406 enables wireless communications for the transfer of data to and from the computing device 400. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 406 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 400 may include a plurality of communication chips 406.
For instance, a first communication chip 406 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 406 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 404 of the computing device 400 includes an integrated circuit die packaged within the processor 404. In some implementations of the invention, the integrated circuit die of the processor includes one or more devices, such as MOS-FET transistors formed on high aspect ratio fins formed in accordance with implementations of the invention. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

The communication chip 406 also includes an integrated circuit die packaged within the communication chip 406. In accordance with another implementation of the invention, the integrated circuit die of the communication chip includes one or more devices, such as MOS-FET transistors formed on high aspect ratio fins formed in accordance with implementations of the invention.

In further implementations, another component housed within the computing device 400 may contain an integrated circuit die that includes one or more devices, such as MOS-FET transistors formed on high aspect ratio fins formed in accordance with implementations of the invention.

In various implementations, the computing device 400 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 400 may be any other electronic device
that processes data.

An embodiment of the invention includes a method for forming high aspect ratio fins comprising, forming a patterned hard mask with a hard mask etching process, wherein the patterned hard mask comprises one or more isolated features and one or more nested features, etching through a substrate disposed below the patterned hard mask to a first depth with a first substrate etching process, wherein the first substrate etching process transfers the isolated features and the nested features of the patterned hard mask into the substrate to form one or more isolated fins and one or more nested fins, and etching through the substrate to a second depth with a second substrate etching process that is different than the first substrate etching process. An additional embodiment of the invention includes a method wherein a first substrate etch chemistry utilized in the first substrate etching process provides a greater lateral passivation rate for the isolated fins than for the nested fins, and wherein a second substrate etch chemistry utilized in the second substrate etching process provides a greater lateral etch rate for the isolated fins than for the nested fins. An additional embodiment of the invention includes a method wherein the first etch chemistry comprises HBr, O2 and CF4. An additional embodiment of the invention includes a method wherein the second etch chemistry comprises Cl2, Ar, and CH4. An additional embodiment of the invention includes a method wherein the hard mask etching process further utilizes a chemistry comprising a greater concentration of hydrogen than a concentration of oxygen. An additional embodiment of the invention includes a method wherein the chemistry utilized for the hard mask etching process comprises a hydrogen to oxygen ratio between approximately 2.5:1 and 3.5:1. An additional embodiment of the invention includes a method wherein the hard mask etching process utilizes a chemistry comprising CH3F.
An additional embodiment of the invention includes a method further comprising varying a flow rate of the gases used in the hard mask etching process across the surface of the hard mask layer, wherein the flow rate of the gases used in the hard mask etching process is lower proximate to an edge of the hard mask layer relative to the flow rate of the gases used in the hard mask etching process proximate the center of the hard mask layer. An additional embodiment of the invention includes a method wherein the hard mask etching process further comprises maintaining a total pressure inside a processing chamber between 24 mTorr and 28 mTorr. An additional embodiment of the invention includes a method wherein the first depth is between 70 nm and 100 nm. An additional embodiment of the invention includes a method wherein the second depth is between 130 nm and 170 nm. An additional embodiment of the invention includes a method wherein the hard mask etching process further comprises maintaining a chuck that supports the semiconductor substrate at a temperature between 35°C and 40°C during the hard mask etching process. An additional embodiment of the invention includes a method wherein the first and second substrate etching processes further comprise, maintaining a chuck that supports the semiconductor substrate at a variable temperature across the substrate, wherein a temperature of the chuck proximate to the center of the semiconductor substrate is higher than a temperature of the chuck proximate to the edge of the semiconductor substrate. An additional embodiment of the invention includes a method wherein the temperature of the chuck proximate to the center of the semiconductor substrate is maintained at 30°C and the temperature of the chuck proximate to the edge of the semiconductor substrate is maintained at 10°C. 
An additional embodiment of the invention includes a method wherein the first and second substrate etching processes further comprise, pulsing an RF power source. An additional embodiment of the invention includes a method wherein pulsing the RF power source comprises pulsing the RF power with a duty cycle that is on for 10% of the time and off for 90% of the time. An additional embodiment of the invention includes a method wherein the first and second substrate etching processes further comprise controlling a plasma density across the surface of the substrate such that a plasma density proximate to an edge of the substrate is lower than a plasma density proximate to the center of the substrate. An additional embodiment of the invention includes a method wherein forming the patterned hard mask comprises a multiple patterning process.

An embodiment of the invention includes a method for forming high aspect ratio fins comprising forming a dummy hard mask over a hard mask layer, wherein the dummy hard mask defines a plurality of features having one or more isolated features and one or more nested features, wherein the hard mask layer is disposed above an etch stop layer, and wherein the etch stop layer is disposed above a semiconductor substrate, performing a hard mask etching process to etch through the hard mask layer, wherein the nested and isolated features in the dummy hard mask are transferred into the hard mask layer, performing a break through etching process to etch through the etch stop layer, etching through the substrate to a first depth with a first substrate etching process, and etching through the substrate to a second depth with a second substrate etching process that is different from the first substrate etching process.
An additional embodiment of the invention includes a method wherein the first substrate etching process utilizes a chemistry comprising HBr, O2 and CF4 and wherein the second substrate etching process utilizes a chemistry comprising Cl2, Ar, and CH4. An additional embodiment of the invention includes a method wherein a first substrate etch chemistry utilized in the first substrate etching process provides a greater lateral passivation rate for the isolated fins than for the nested fins, and wherein a second substrate etch chemistry utilized in the second substrate etching process provides a greater lateral etch rate for the isolated fins than for the nested fins.

An embodiment of the invention includes a semiconductor device comprising, one or more nested high aspect ratio features having a first width, and one or more isolated high aspect ratio features having a second width, wherein the second width is equal to the first width. An additional embodiment of the invention includes a semiconductor device wherein the aspect ratio of the isolated and nested fins is greater than 10:1. An additional embodiment of the invention includes a semiconductor device wherein the nested fins have a pitch of 42 nm or less. An additional embodiment of the invention includes a semiconductor device wherein the first width and second width are less than 15 nm.

Reference throughout this disclosure to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. The appearance of the phrases "in one embodiment" or "in an embodiment" in various places throughout this disclosure is not necessarily all referring to the same embodiment.
Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.

It will be readily understood to those skilled in the art that various other changes in the details, material, and arrangements of the parts and method stages which have been described and illustrated in order to explain the nature of this invention may be made without departing from the principles and scope of the invention as expressed in the subjoined claims.

Features of embodiments of different aspects of the invention:

1. A method for forming high aspect ratio fins comprising: forming a patterned hard mask with a hard mask etching process, wherein the patterned hard mask comprises one or more isolated features and one or more nested features; etching through a substrate disposed below the patterned hard mask to a first depth with a first substrate etching process, wherein the first substrate etching process transfers the isolated features and the nested features of the patterned hard mask into the substrate to form one or more isolated fins and one or more nested fins; and etching through the substrate to a second depth with a second substrate etching process that is different than the first substrate etching process. 2.
The method of claim 1, wherein a first substrate etch chemistry utilized in the first substrate etching process provides a greater lateral passivation rate for the isolated fins than for the nested fins, and wherein a second substrate etch chemistry utilized in the second substrate etching process provides a greater lateral etch rate for the isolated fins than for the nested fins. 3. The method of claim 2, wherein the first etch chemistry comprises HBr, O2 and CF4. 4. The method of claim 2, wherein the second etch chemistry comprises Cl2, Ar, and CH4. 5. The method of claim 1, wherein the hard mask etching process further utilizes a chemistry comprising a greater concentration of hydrogen than a concentration of oxygen. 6. The method of claim 5, wherein the chemistry utilized for the hard mask etching process comprises a hydrogen to oxygen ratio between approximately 2.5:1 and 3.5:1. 7. The method of claim 5, wherein the hard mask etching process utilizes a chemistry comprising CH3F. 8. The method of claim 1, wherein the hard mask etching process further comprises varying a flow rate of the gases used in the hard mask etching process across the surface of the hard mask layer, wherein the flow rate of the gases used in the hard mask etching process is lower proximate to an edge of the hard mask layer relative to the flow rate of the gases used in the hard mask etching process proximate the center of the hard mask layer. 9. The method of claim 1, wherein the hard mask etching process further comprises maintaining a total pressure inside a processing chamber between 24 mTorr and 28 mTorr. 10. The method of claim 1, wherein the first depth is between 70 nm and 100 nm. 11. The method of claim 1, wherein the second depth is between 130 nm and 170 nm. 12. The method of claim 1, wherein the hard mask etching process further comprises maintaining a chuck that supports the semiconductor substrate at a temperature between 35°C and 40°C during the hard mask etching process. 13.
The method of claim 1, wherein the first and second substrate etching processes further comprise, maintaining a chuck that supports the semiconductor substrate at a variable temperature across the substrate, wherein a temperature of the chuck proximate to the center of the semiconductor substrate is higher than a temperature of the chuck proximate to the edge of the semiconductor substrate. 14. The method of claim 13, wherein the temperature of the chuck proximate to the center of the semiconductor substrate is maintained at 30°C and the temperature of the chuck proximate to the edge of the semiconductor substrate is maintained at 10°C. 15. The method of claim 1, wherein the first and second substrate etching processes further comprise, pulsing an RF power source. 16. The method of claim 15, wherein pulsing the RF power source comprises pulsing the RF power with a duty cycle that is on for 10% of the time and off for 90% of the time. 17. The method of claim 1, wherein the first and second substrate etching processes further comprise controlling a plasma density across the surface of the substrate such that a plasma density proximate to an edge of the substrate is lower than a plasma density proximate to the center of the substrate. 18. The method of claim 1, wherein forming the patterned hard mask comprises a multiple patterning process. 19.
A method for forming high aspect ratio fins comprising:forming a dummy hard mask over a hard mask layer, wherein the dummy hard mask defines a plurality of features having one or more isolated features and one or more nested features, wherein the hard mask layer is disposed above an etch stop layer, and wherein the etch stop layer is disposed above a semiconductor substrate;performing a hard mask etching process to etch through the hard mask layer, wherein the nested and isolated features in the dummy hard mask are transferred into the hard mask layer; performing a break through etching process to etch through the etch stop layer;etching through the substrate to a first depth with a first substrate etching process; and etching through the substrate to a second depth with a second substrate etching process that is different from the first substrate etching process.20. The method of claim 19, wherein the first substrate etching process utilizes a chemistry comprising HBr, O2 and CF4 and wherein the second substrate etching process utilizes a chemistry comprising Cl2, Ar, and CH4.21. The method of claim 19, wherein a first substrate etch chemistry utilized in the first substrate etching process provides a greater lateral passivation rate for the isolated fins than for the nested fins, and a second substrate etch chemistry utilized in the second substrate etching process provides a greater lateral etch rate for the isolated fins than for the nested fins.22. A semiconductor device comprising:one or more nested high aspect ratio features having a first width; andone or more isolated high aspect ratio features having a second width, wherein the second width is equal to the first width.23. The semiconductor device of claim 22, wherein the aspect ratio of the isolated and nested fins is greater than 10:1.24. The semiconductor device of claim 22, wherein the nested fins have a pitch of 42 nm or less.25. 
The semiconductor device of claim 22, wherein the first width and second width are less than 15 nm. |
A device compiler and linker within a parallel processing unit (PPU) is configured to optimize program code of a co-processor enabled application by rematerializing a subset of live-in variables for a particular block in a control flow graph generated for that program code. The device compiler and linker identifies the block of the control flow graph that has the greatest number of live-in variables, then selects a subset of the live-in variables associated with the identified block for which rematerializing confers the greatest estimated profitability. The profitability of rematerializing a given subset of live-in variables is determined based on the number of live-in variables reduced, the cost of rematerialization, and the potential risk of rematerialization. |
1. A computer-implemented method for optimizing program code that can be compiled for execution on a parallel processing unit (PPU), the method comprising: generating a control flow graph for the program code; identifying a first block in the control flow graph that has the largest number of live-in variables compared to other blocks in the control flow graph; selecting a first subset of the live-in variables associated with the first block by performing a profitability analysis on different subsets of the live-in variables associated with the first block; and optimizing the program code by rematerializing the first subset of live-in variables into a second block that follows the first block in the control flow graph, wherein the optimized program code is to be executed on the PPU. 2. The computer-implemented method of claim 1, wherein selecting the first subset of live-in variables comprises: estimating a profitability value for each of the different subsets of live-in variables by performing the profitability analysis on each of the different subsets; and selecting the first subset of live-in variables based on the first subset having the largest profitability value compared to the profitability values associated with the other different subsets of live-in variables. 3. The computer-implemented method of claim 2, wherein the profitability analysis for a given subset of live-in variables is generated based on the number of live-in variables removed from the second block of the control flow graph by rematerializing the given subset of live-in variables into the second block. 4. The computer-implemented method of claim 3, wherein the profitability analysis for the given subset of live-in variables is further generated based on the number of instructions pulled into the second block of the control flow graph by rematerializing the given subset of live-in variables into the second block. 5. The computer-implemented method of claim 4, wherein the profitability analysis for the given subset of live-in variables is further generated based on the number of associated use positions within the second block of the control flow graph. 6. The computer-implemented method of claim 5, wherein the profitability analysis for the given subset of live-in variables is further generated based on at least one of the following costs: the cost of spilling the given subset of live-in variables from register memory to system memory, and the cost of accessing the given subset of live-in variables within system memory. 7. The computer-implemented method of claim 1, further comprising performing a data flow analysis on the program code to generate the control flow graph. 8. The computer-implemented method of claim 1, further comprising iteratively optimizing the program code and estimating the number of register spills caused by executing the optimized program code on the PPU until the number of register spills caused by executing the program code on the PPU falls below a threshold value. 9. The computer-implemented method of claim 1, further comprising: determining that rematerializing the first subset of live-in variables causes a set of registers in register memory to become available; and allocating the set of registers to one or more threads configured to execute on the PPU. 10. A computing device configured to optimize program code that can be compiled for execution on a parallel processing unit (PPU), comprising: a processing unit configured to: generate a control flow graph for the program code; identify a first block in the control flow graph that has the largest number of live-in variables compared to other blocks in the control flow graph; select a first subset of the live-in variables associated with the first block by performing a profitability analysis on different subsets of the live-in variables associated with the first block; and optimize the program code by rematerializing the first subset of live-in variables into a second block that follows the first block in the control flow graph, wherein the optimized program code is to be executed on the PPU. |
Techniques for Rematerialization Based on Liveness Analysis to Reduce Register Pressure and Increase Parallelism

Cross-Reference to Related Applications

This application claims the benefit of United States provisional patent application serial number 61/556,782, filed on November 7, 2011, and United States patent application serial number 13/669,401, filed on November 5, 2012. Each of these applications is incorporated by reference.

Technical Field

The present invention relates generally to compilers for parallel processing units (PPUs) and, more specifically, to techniques for rematerialization based on liveness analysis to reduce register pressure and increase parallelism.

Background

Graphics processing units (GPUs) have evolved over time to support a wide range of operations beyond graphics-oriented operations. In fact, a modern GPU is capable of executing arbitrary program instructions. Such a GPU typically includes a compiler that compiles program instructions for execution on one or more processing cores included within the GPU. Each such core may execute one or more different execution threads in parallel with other processing cores that are also executing execution threads.

When a processing core within the GPU executes a set of program instructions, the processing core may store the program variables associated with those instructions in register memory. As is known in the art, when the register memory is completely consumed by program variables, additional program variables may "spill" into system memory. One problem with this conventional approach to spilling is that system memory has a much higher latency than register memory. Consequently, once a spill event occurs, the speed with which the program instructions execute may decrease significantly, because the spilled program variables must be accessed from system memory instead of register memory. 
A second problem is that the number of threads a given processing core within the GPU can execute simultaneously depends on the amount of available register memory. Filling the register memory with program variables may therefore reduce the number of simultaneously executing threads and, in turn, decrease the overall processing throughput of the GPU.

Accordingly, what is needed in the art is a more effective technique for managing register memory within a GPU.

Summary of the Invention

One embodiment of the present invention sets forth a computer-implemented method for optimizing program code that can be compiled for execution on a parallel processing unit (PPU). The method includes generating a control flow graph for the program code; identifying a first block in the control flow graph that has the largest number of live-in variables compared to the other blocks in the control flow graph; selecting a first subset of the live-in variables associated with the first block by performing a profitability analysis on different subsets of those live-in variables; and optimizing the program code by rematerializing the first subset of live-in variables into a second block that follows the first block in the control flow graph, where the optimized program code is then executed on the PPU.

One advantage of the disclosed technique is that rematerializing certain subsets of live-in variables reduces register pressure, thereby reducing the likelihood of spill events. 
Reduced register pressure also allows a larger number of execution threads to execute simultaneously within the PPU, thereby increasing the overall processing throughput of the PPU.

Brief Description of the Drawings

So that the above-recited features of the present invention can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.

FIG. 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the present invention;

FIG. 2 is a block diagram of a parallel processing subsystem used in the computer system of FIG. 1, according to one embodiment of the present invention;

FIG. 3 illustrates a build process used to compile a coprocessor-enabled application, according to one embodiment of the present invention;

FIG. 4 is a flowchart of method steps for performing rematerialization based on liveness analysis with a set of live-in variables, according to one embodiment of the present invention;

FIG. 5 is a flowchart of method steps for performing a profitability analysis on a set of live-in variables, according to one embodiment of the present invention; and

FIG. 6 illustrates an exemplary control flow graph used to describe the operation of the device compiler and linker, according to one embodiment of the present invention.

Detailed Description

In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without one or more of these specific details.

System Overview

FIG. 
1 is a block diagram illustrating a computer system 100 configured to implement one or more aspects of the present invention. Computer system 100 includes a central processing unit (CPU) 102 and a system memory 104 that communicate via an interconnection path that may include a memory bridge 105. System memory 104 includes an image of the driver 103, the coprocessor-enabled application 134, and the operating system 130. Operating system 130 provides detailed instructions for managing and coordinating the operation of computer system 100. Driver 103 provides detailed instructions for managing and coordinating the operation of the parallel processing subsystem 112 and the one or more parallel processing units (PPUs) residing therein, as described in greater detail below in conjunction with FIG. 2. Driver 103 also provides compilation facilities for generating machine code specifically optimized for such PPUs, as described in greater detail below in conjunction with FIGS. 3-6. The coprocessor-enabled application 134 contains instructions capable of being executed on the CPU 102 and the PPUs. Those instructions are implemented in an abstract format, such as virtual assembly, and are mapped to machine code for the PPUs within parallel processing subsystem 112. The machine code for those PPUs may be stored in system memory 104 or in memory coupled to the PPUs.

In one embodiment, the coprocessor-enabled application 134 represents CUDA™ code that incorporates programming instructions intended to execute on parallel processing subsystem 112. In the context of the present description, the term "application" or "program" refers to any computer code, instructions, and/or functions that may be executed using a processor. For example, in various embodiments, the coprocessor-enabled application 134 could include C code, C++ code, and so forth. 
In one embodiment, the coprocessor-enabled application 134 may include language extensions of a computer language (e.g., C, C++, etc.).

The memory bridge 105, which may be, for example, a Northbridge chip, is connected via a bus or other communication path 106 (e.g., a HyperTransport link) to an input/output (I/O) bridge 107. The I/O bridge 107, which may be, for example, a Southbridge chip, receives user input from one or more user input devices 108 (e.g., keyboard, mouse) and forwards the input to the CPU 102 via the communication path 106 and the memory bridge 105. The parallel processing subsystem 112 is coupled to the memory bridge 105 via a bus or second communication path 113 (e.g., a peripheral component interconnect express (PCIe), accelerated graphics port (AGP), or HyperTransport link); in one embodiment, the parallel processing subsystem 112 is a graphics subsystem that delivers pixels to a display device 110, which may be any conventional cathode ray tube, liquid crystal display, light emitting diode display, or the like. A system disk 114 is also connected to the I/O bridge 107 and may be configured to store content, applications, and data for use by the CPU 102 and the parallel processing subsystem 112. The system disk 114 provides non-volatile storage for applications and data and may include fixed or removable hard disk drives, flash memory devices, and compact disc (CD) read-only memory (ROM), digital video disc (DVD) ROM, Blu-ray, high-definition (HD) DVD, or other magnetic, optical, or solid-state storage devices.

A switch 116 provides connections between the I/O bridge 107 and other components, such as a network adapter 118 and various add-in cards 120 and 121. Other components (not explicitly shown), including universal serial bus (USB) or other port connections, CD drives, DVD drives, film recording devices, and the like, may also be connected to the I/O bridge 107. The various communication paths shown in FIG. 
1, including the specifically named communication paths 106 and 113, may be implemented using any suitable protocols, such as PCIe, AGP, HyperTransport, or any other bus or point-to-point communication protocol, and, as is known in the art, connections between different devices may use different protocols.

In one embodiment, the parallel processing subsystem 112 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes a graphics processing unit (GPU). In another embodiment, the parallel processing subsystem 112 incorporates circuitry optimized for general-purpose processing, while preserving the underlying computational architecture, described in greater detail herein. In yet another embodiment, the parallel processing subsystem 112 may be integrated with one or more other system elements in a single subsystem, such as joining the memory bridge 105, the CPU 102, and the I/O bridge 107 to form a system on chip (SoC).

It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs 102, and the number of parallel processing subsystems 112, may be modified as desired. For example, in some embodiments, the system memory 104 is connected to the CPU 102 directly rather than through a bridge, and other devices communicate with the system memory 104 via the memory bridge 105 and the CPU 102. In other alternative topologies, the parallel processing subsystem 112 is connected to the I/O bridge 107 or directly to the CPU 102, rather than to the memory bridge 105. In still other embodiments, the I/O bridge 107 and the memory bridge 105 might be integrated into a single chip instead of existing as one or more discrete devices. Large embodiments may include two or more CPUs 102 and two or more parallel processing subsystems 112. 
The particular components shown herein are optional; for instance, any number of add-in cards or peripheral devices might be supported. In some embodiments, the switch 116 is eliminated, and the network adapter 118 and the add-in cards 120, 121 connect directly to the I/O bridge 107.

FIG. 2 illustrates the parallel processing subsystem 112, according to one embodiment of the present invention. As shown, the parallel processing subsystem 112 includes one or more parallel processing units (PPUs) 202, each of which is coupled to a local parallel processing (PP) memory 204. In general, a parallel processing subsystem includes a number U of PPUs, where U ≥ 1. (Herein, multiple instances of like objects are denoted with reference numbers identifying the object and parenthetical numbers identifying the instance where needed.) The PPUs 202 and the parallel processing memories 204 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or memory devices, or in any other technically feasible fashion.

Referring to FIGS. 1 and 2, in some embodiments, some or all of the PPUs 202 in the parallel processing subsystem 112 are graphics processors with rendering pipelines that can be configured to perform various operations related to generating pixel data from graphics data supplied by the CPU 102 and/or the system memory 104 via the memory bridge 105 and the second communication path 113, interacting with the local parallel processing memory 204 (which can be used as graphics memory including, e.g., a conventional frame buffer) to store and update pixel data, delivering pixel data to the display device 110, and the like. In some embodiments, the parallel processing subsystem 112 may include one or more PPUs 202 that operate as graphics processors and one or more other PPUs 202 that are used for general-purpose computations. 
These PPUs may be identical or different, and each PPU may or may not have its own dedicated parallel processing memory device(s). One or more PPUs 202 in the parallel processing subsystem 112 may output data to the display device 110, or each PPU 202 in the parallel processing subsystem 112 may output data to one or more display devices 110.

In operation, the CPU 102 is the master processor of the computer system 100, controlling and coordinating the operations of the other system components. Specifically, the CPU 102 issues commands that control the operation of the PPUs 202. In some embodiments, the CPU 102 writes a stream of commands for each PPU 202 to a data structure (not explicitly shown in either FIG. 1 or FIG. 2) that may be located in the system memory 104, the parallel processing memory 204, or another storage location accessible to both the CPU 102 and the PPU 202. A pointer to each data structure is written to a push buffer to initiate processing of the stream of commands in the data structure. The PPU 202 reads command streams from one or more push buffers and then executes the commands asynchronously relative to the operation of the CPU 102. Execution priorities may be specified for each push buffer by an application program via the device driver 103 to control scheduling of the different push buffers.

Each PPU 202 includes an I/O (input/output) unit 205 that communicates with the rest of the computer system 100 via the communication path 113, which connects to the memory bridge 105 (or, in one alternative embodiment, directly to the CPU 102). The connection of the PPU 202 to the rest of the computer system 100 may also vary. In some embodiments, the parallel processing subsystem 112 is implemented as an add-in card that can be inserted into an expansion slot of the computer system 100. In other embodiments, a PPU 202 can be integrated on a single chip with a bus bridge, such as the memory bridge 105 or the I/O bridge 107. 
In still other embodiments, some or all elements of the PPU 202 may be integrated on a single chip with the CPU 102.

In one embodiment, as noted above, the communication path 113 is a PCIe link in which dedicated lanes are allocated to each PPU 202, as is known in the art. Other communication paths may also be used. The I/O unit 205 generates packets (or other signals) for transmission on the communication path 113 and also receives all incoming packets (or other signals) from the communication path 113, directing the incoming packets to appropriate components of the PPU 202. For example, commands related to processing tasks may be directed to a host interface 206, while commands related to memory operations (e.g., reading from or writing to the parallel processing memory 204) may be directed to a memory crossbar unit 210. The host interface 206 reads each push buffer and outputs the command stream stored in the push buffer to a front end 212.

Each PPU 202 advantageously implements a highly parallel processing architecture. As shown in detail, the PPU 202(0) includes a processing cluster array 230 that includes a number C of general processing clusters (GPCs) 208, where C ≥ 1. Each GPC 208 is capable of executing a large number (e.g., hundreds or thousands) of threads concurrently, where each thread is an instance of a program. In various applications, different GPCs 208 may be allocated for processing different types of programs or for performing different types of computations. The allocation of GPCs 208 may vary depending on the workload arising for each type of program or computation.

The GPCs 208 receive processing tasks to be executed from a work distribution unit within a task/work unit 207. The work distribution unit receives pointers to processing tasks that are encoded as task metadata (TMD) and stored in memory. The pointers to the TMDs are included in a command stream that is stored as a push buffer and received by the front end unit 212 from the host interface 206. 
Processing tasks that can be encoded as TMDs include indices of data to be processed, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). The task/work unit 207 receives tasks from the front end 212 and ensures that the GPCs 208 are configured to a valid state before the processing specified by each of the TMDs is initiated. A priority may be specified for each TMD that is used to schedule execution of the processing task. Processing tasks may also be received from the processing cluster array 230. Optionally, the TMD may include a parameter that controls whether the TMD is added to the head or the tail of a list of processing tasks (or a list of pointers to the processing tasks), thereby providing another level of control in addition to priority.

The memory interface 214 includes a number D of partition units 215 that are each directly coupled to a portion of the parallel processing memory 204, where D ≥ 1. As shown, the number of partition units 215 generally equals the number of dynamic random access memories (DRAMs) 220. In other embodiments, the number of partition units 215 may not equal the number of memory devices. Persons of ordinary skill in the art will appreciate that the DRAMs 220 may be replaced with other suitable storage devices and can be of generally conventional design; a detailed description is therefore omitted. Render targets, such as frame buffers or texture maps, may be stored across the DRAMs 220, allowing the partition units 215 to write portions of each render target in parallel to efficiently use the available bandwidth of the parallel processing memory 204.

Any one of the GPCs 208 may process data to be written to any of the DRAMs 220 within the parallel processing memory 204. The crossbar unit 210 is configured to route the output of each GPC 208 to the input of any partition unit 215 or to another GPC 208 for further processing. 
The GPCs 208 communicate with the memory interface 214 through the crossbar unit 210 to read from or write to various external memory devices. In one embodiment, the crossbar unit 210 has a connection to the memory interface 214 to communicate with the I/O unit 205, as well as a connection to the local parallel processing memory 204, thereby enabling the processing cores within the different GPCs 208 to communicate with the system memory 104 or other memory that is not local to the PPU 202. In the embodiment shown in FIG. 2, the crossbar unit 210 is directly connected to the I/O unit 205. The crossbar unit 210 may use virtual channels to separate traffic streams between the GPCs 208 and the partition units 215.

Again, the GPCs 208 can be programmed to execute processing tasks relating to a wide variety of applications, including but not limited to linear and nonlinear data transforms, filtering of video and/or audio data, modeling operations (e.g., applying laws of physics to determine position, velocity, and other attributes of objects), image rendering operations (e.g., tessellation shader, vertex shader, geometry shader, and/or pixel shader programs), and so on. The PPUs 202 may transfer data from the system memory 104 and/or the local parallel processing memories 204 into internal (on-chip) memory, process the data, and write the result data back to the system memory 104 and/or the local parallel processing memories 204, where such data can be accessed by other system components, including the CPU 102 or another parallel processing subsystem 112.

A PPU 202 may be provided with any amount of local parallel processing memory 204, including no local memory, and may use local memory and system memory in any combination. For instance, in a unified memory architecture (UMA) embodiment, the PPU 202 may be a graphics processor. 
In such an embodiment, little or no dedicated graphics (parallel processing) memory would be provided, and the PPU 202 would use system memory exclusively or almost exclusively. In UMA embodiments, the PPU 202 may be integrated into a bridge chip or processor chip, or provided as a discrete chip with a high-speed link (e.g., PCI Express) connecting the PPU 202 to system memory via a bridge chip or other communication means. Alternatively, each PPU 202 may be implemented with a non-uniform memory architecture, and each such PPU 202 may have access to multiple different memory spaces as directed by the coprocessor-enabled application 134.

As indicated above, any number of PPUs 202 can be included in a parallel processing subsystem 112. For instance, multiple PPUs 202 can be provided on a single add-in card, multiple add-in cards can be connected to the communication path 113, or one or more PPUs 202 can be integrated into a bridge chip. The PPUs 202 in a multi-PPU system may be identical to or different from one another. For instance, different PPUs 202 might have different numbers of processing cores, different amounts of local parallel processing memory, and so forth. Where multiple PPUs 202 are present, those PPUs may be operated in parallel to process data at a higher throughput than is possible with a single PPU 202. Systems incorporating one or more PPUs 202 may be implemented in a variety of configurations and form factors, including desktop, laptop, or handheld personal computers, servers, workstations, game consoles, embedded systems, and the like.

As described above, each PPU 202 is configured to execute the coprocessor-enabled application 134 shown in FIG. 1. The coprocessor-enabled application 134 is compiled by a device compiler and linker application derived from the device driver 103, as described in greater detail below in conjunction with FIG. 3.

FIG. 
1 according to one embodiment of the present invention. The program code 310 includes a host source code 312 and a device source code 314. The host source code 312 contains programming instructions intended to be executed on a host such as an x86 based personal computer (PC) or server. Programming instructions in source code 312 may include calls to functions defined in device source code 314. Any technically feasible mechanism can be used to specify which functions are designated as device source code 314.The host source code 312 is pre-processed, compiled, and linked by the host compiler and connector 322. The host compiler and connector 322 generates the host machine code 342, which is stored in the coprocessor-enabled application 134.The device source code 314 is pre-processed, compiled, and connected by the device compiler and connector 324. This compilation operation constitutes the first stage compilation of the device source code 314. The device compiler and linker 324 generates a device virtual assembly 346, which is stored in the device code library 350, and resides with or within the coprocessor-enabled application 134. The virtual instruction translator 334 may generate the device machine code 344 based on the device virtual assembly 346. This compilation operation constitutes the second stage compilation of the device source code 314. The virtual instruction translator 334 may generate more than one version of the device machine code 344 based on the availability defined by the known architecture. 
For example, the virtual instruction translator 334 can generate the first version of the device machine code 344, which calls 2 native 64-bit arithmetic instructions (available in the first target architecture) and can generate the second version of the device machine code 344, which Simulate 64-bit arithmetic functions on the target including native 64-bit arithmetic instructions.The architecture information 348 indicates the actual architecture version used to generate the machine code 344 of the device. The actual architecture version defines the features implemented in the native instructions within the actual execution target such as PPU202. The architecture information 348 also indicates the virtual architecture version used to generate the device virtual assembly 346. Virtual architecture version definitions are assumed to be native or easily modeled features and define features that are difficult to model. For example, atomic addition operations are difficult to simulate at the instruction level, although atomic addition operations can be avoided together at the algorithm level in some cases, and therefore affect which functions may be compiled in the first compilation stage.In addition to the device machine code 344 and the device virtual assembly 346, the device code library also includes architectural information 348 that indicates which architectural features were assumed when generating the device machine code 344 and the device virtual assembly 346. Those skilled in the art will realize that the functions included in the device machine code 344 and the virtual assembly 346 reflect the functions associated with the actual architecture of the PPU 202. 
The architecture information 348 provides compatibility information for the device machine code 344 and compiler hints for a second stage compile operation, which may be performed by the device driver 103 at some time after the development of the coprocessor-enabled application 134 has already been completed. The device compiler and linker 324 is also configured to perform various optimization routines with the program code 310. One such optimization routine involves selectively rematerializing a set of live-in variables, as described in greater detail below in conjunction with FIG. 4.

Rematerialization Based on Liveness Analysis

FIG. 4 is a flowchart of method steps for performing liveness-analysis-based rematerialization with a set of live-in variables, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of FIGS. 1-2, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention. In one embodiment, the device compiler and linker 324 shown in FIG. 3 may be configured to perform the method steps. As shown, the method 400 begins at step 402, where the device compiler and linker 324 generates a control flow graph for the program code 310. The control flow graph generated by the device compiler and linker 324 may be a conventional control flow graph generated using data flow analysis techniques and, as such, may include a collection of blocks of code. At step 404, the device compiler and linker 324 identifies the block within the control flow graph that includes the maximum number of live-in variables. In one embodiment, the device compiler and linker 324 determines the number of live-in variables for each block within the control flow graph and then identifies the block having the greatest number of live-in variables.
The maximum number of live-in variables is represented by a value referred to as "max live-in." The max live-in value may indicate the degree of register pressure caused by executing the coprocessor-enabled application 134. At step 406, the device compiler and linker 324 collects the live-in variables associated with the block identified at step 404. At step 408, the device compiler and linker 324 selects a subset of those live-in variables for rematerialization based on performing a profitability analysis with different subsets of the live-in variables. The device compiler and linker 324 may perform the profitability analysis to determine the "profit" of rematerializing a given subset of the live-in variables. The "profit" of a given subset of live-in variables may be a value that reflects the reduction in the number of live-in variables achieved by rematerializing the given subset. That value may additionally reflect the number of instructions pulled in by the rematerialization and/or the maximum number of registers allowed per thread, as discussed in greater detail below in conjunction with FIG. 5. At step 410, the device compiler and linker 324 rematerializes the live-in variables within the given subset. The device compiler and linker 324 may implement any technically feasible rematerialization technique. In one embodiment, the device compiler and linker 324 rematerializes the given subset of live-in variables by first removing computations involving those live-in variables from a block of the control flow graph. The device compiler and linker 324 may then modify a subsequent block of the control flow graph to recompute the live-in variables associated with the subset within that subsequent block. In this fashion, the device compiler and linker 324 may modify the program code 310 as needed.
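The pass outlined in steps 402-410 can be sketched as follows. This is an illustrative simplification, not the patented implementation; the data model (blocks as lists of `(target, operands)` pairs) and all function names are invented for the sketch, and the example values mirror the t/x/y example discussed later.

```python
# Sketch of steps 402-410: compute live-in sets for each block of a control
# flow graph, find the block with the most live-in variables, and
# rematerialize a chosen subset by moving its defining computations into
# the block that consumed them.

def live_in(block):
    """Variables read in a block before being (re)defined within it."""
    defined, live = set(), set()
    for target, operands in block["exprs"]:        # e.g. ("x", ["t"])
        live |= {v for v in operands if v not in defined}
        defined.add(target)
    return live

def max_live_in_block(cfg):
    """Step 404: identify the block with the greatest number of live-ins."""
    return max(cfg, key=lambda name: len(live_in(cfg[name])))

def rematerialize(cfg, pred, succ, subset):
    """Step 410: remove defs of `subset` from `pred` and recompute them at
    the top of `succ`, shrinking succ's live-in set."""
    moved = [e for e in cfg[pred]["exprs"] if e[0] in subset]
    cfg[pred]["exprs"] = [e for e in cfg[pred]["exprs"] if e[0] not in subset]
    cfg[succ]["exprs"] = moved + cfg[succ]["exprs"]

cfg = {
    "B610": {"exprs": [("x", ["t"]), ("y", ["t"])]},
    "B620": {"exprs": [("z", ["x", "y"]), ("w", ["z"])]},
}
assert max_live_in_block(cfg) == "B620"            # two live-ins: x and y
rematerialize(cfg, "B610", "B620", {"x", "y"})
assert live_in(cfg["B620"]) == {"t"}               # net reduction of one live-in
```

After the transform, block B620 recomputes "x" and "y" from "t" locally, so only "t" must be kept live across the block boundary.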
At step 412, the device compiler and linker 324 updates the max live-in value by determining the number of live-in variables for each block and identifying the block having the greatest number of live-in variables. The method 400 then ends. The device compiler and linker 324 may perform steps 404, 406, 408, 410, and 412 repeatedly until a particular goal has been met. In one embodiment, the device compiler and linker 324 performs those steps a fixed number of times, such as, e.g., five times. In another embodiment, the device compiler and linker 324 performs steps 404, 406, 408, 410, and 412 repeatedly until the max live-in value decreases beneath a given threshold, indicating that register pressure has been sufficiently reduced through rematerialization. FIG. 5 is a flowchart of method steps for performing a profitability analysis with a subset of live-in variables, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of FIGS. 1-2, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention. In one embodiment, the device compiler and linker 324 shown in FIG. 3 may be configured to perform the method steps with a subset of the live-in variables associated with the block identified at step 404 of the method 400. As shown, the method 500 begins at step 502, where the device compiler and linker 324 generates a first profit factor for the subset of live-in variables based on the reduction in the number of live-in variables achievable via rematerialization.
For example, the device compiler and linker 324 may determine that rematerialization would decrease the number of live-in variables by 2 and increase that number by 1, yielding a net reduction of 1 live-in variable. At step 504, the device compiler and linker 324 generates a second profit factor based on the number of instructions pulled in by the rematerialization and the cost of the use sites required by the rematerialization. Because different live-in variables may be associated with instructions of differing complexity and/or use sites of differing cost, the device compiler and linker 324 generates the second profit factor in order to quantify this difference between different subsets of live-in variables. At step 506, the device compiler and linker 324 generates a third profit factor based on the maximum number of registers allowed for each thread configured to execute the coprocessor-enabled application 134. In this fashion, the device compiler and linker 324 may estimate the cost of "spill" events that would occur if the maximum number of registers were exceeded. That cost may reflect, for example, increased memory latency caused by spill events and/or decreased program execution speed, among other things. At step 508, the device compiler and linker 324 estimates the profitability of rematerializing the subset of live-in variables based on the first, second, and third profit factors generated at steps 502, 504, and 506, respectively. In general, the "profitability" of rematerializing a given subset of live-in variables is a value that reflects the potential benefit of rematerializing that subset. The device compiler and linker 324 is configured to perform the method 500 with multiple different subsets of the set of live-in variables associated with the block identified at step 404 of the method 400.
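A rough sketch of the three-factor estimate of steps 502-508 follows. The text prescribes the three factors but not a formula, so the weights, the spill-cost model, and all names here are invented for illustration.

```python
# Sketch of steps 502-508: score a candidate subset of live-in variables.
def profitability(subset, defs, uses_cost, max_regs, regs_in_use,
                  spill_cost=10.0):
    # Step 502: net reduction in live-in variables. Rematerializing the
    # subset removes its members but makes their operands newly live-in.
    new_live_ins = set().union(*(defs[v]["operands"] for v in subset))
    factor1 = len(subset) - len(new_live_ins)

    # Step 504: overhead of the instructions pulled in and their use sites.
    pulled_in = sum(defs[v]["num_instructions"] for v in subset)
    factor2 = -(pulled_in + sum(uses_cost[v] for v in subset))

    # Step 506: risk of spill events if the per-thread register limit is
    # exceeded, reflecting added memory latency / slower execution.
    over = max(0, regs_in_use + pulled_in - max_regs)
    factor3 = -over * spill_cost

    # Step 508: combine the factors into one profitability estimate
    # (the relative weighting is an arbitrary choice for this sketch).
    return factor1 * 4.0 + factor2 * 1.0 + factor3

defs = {"x": {"operands": {"t"}, "num_instructions": 1},
        "y": {"operands": {"t"}, "num_instructions": 1}}
uses_cost = {"x": 1, "y": 1}
# Rematerializing {"x"} alone nets 0 fewer live-ins; {"x", "y"} nets 1.
assert profitability({"x", "y"}, defs, uses_cost, 63, 20) > \
       profitability({"x"}, defs, uses_cost, 63, 20)
```

The comparison at the end anticipates the FIG. 6 example: a singleton subset yields no net live-in reduction, so the paired subset scores higher.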
In this fashion, the device compiler and linker 324 may estimate the profitability of rematerializing each possible subset of those live-in variables and then select the subset having the greatest profitability for rematerialization. The methods 400 and 500 described above in conjunction with FIGS. 4 and 5, respectively, are discussed in greater detail below by way of example in conjunction with FIG. 6. FIG. 6 is an exemplary control flow graph that illustrates the operation of the device compiler and linker, according to one embodiment of the present invention. At step 402 of the method 400, the device compiler and linker 324 may generate the control flow graph 600 based on the program code 310, as described above in conjunction with FIG. 4. As shown, the control flow graph 600 includes blocks 610 and 620. Block 610 includes 2 expressions and receives 1 live-in variable "t" from a previous block (not shown). Block 620 includes 3 expressions and receives 2 live-in variables "x" and "y" from block 610. The expressions within those blocks are derived from the program code 310. In the following example, the device compiler and linker 324 performs the methods 400 and 500 described above in conjunction with FIGS. 4 and 5, respectively, in order to selectively rematerialize variables within the control flow graph 600. In this fashion, the device compiler and linker 324 may reduce register pressure when a given PPU 202 executes code represented by the control flow graph 600. Once the device compiler and linker 324 has generated the control flow graph 600, the device compiler and linker 324 identifies the block within the control flow graph 600 having the greatest number of live-in variables. Since block 610 receives 1 live-in variable and block 620 receives 2 live-in variables, the device compiler and linker 324 identifies block 620 as having the greatest number of live-in variables, similar to step 404 of the method 400.
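The control flow graph just described can be written out concretely. The expression bodies below are hypothetical — the text only specifies the expression counts and the live-in variables — but they make the step 404 analog checkable.

```python
# A concrete rendering of control flow graph 600: block 610 holds 2
# expressions and one live-in "t"; block 620 holds 3 expressions and two
# live-ins "x" and "y". Expression bodies are invented for illustration.
cfg_600 = {
    "block_610": {
        "live_in": {"t"},
        "exprs": ["x = t + 1", "y = t * 2"],               # 2 expressions
        "succ": "block_620",
    },
    "block_620": {
        "live_in": {"x", "y"},
        "exprs": ["a = x + y", "b = a * x", "c = b - y"],  # 3 expressions
        "succ": None,
    },
}
# Step 404 analog: block 620 carries the maximum number of live-ins.
widest = max(cfg_600, key=lambda b: len(cfg_600[b]["live_in"]))
assert widest == "block_620"
```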
The device compiler and linker 324 then selects a subset of the live-in variables associated with block 620 based on a profitability analysis performed with each possible subset. In this example, the device compiler and linker 324 may perform the profitability analysis with a subset that includes live-in variable "x", live-in variable "y", or live-in variables "x" and "y". The profitability analysis outlined in conjunction with FIG. 5 would reveal that rematerializing either "x" or "y" alone would not reduce the number of live-in variables for block 620, because doing so would cause "t" to become a new live-in variable, yielding a net reduction of 0 live-in variables. However, rematerializing "x" and "y" together would decrease the number of live-in variables by 2 while increasing the number of live-in variables by only 1, yielding a net reduction of 1 live-in variable. This net reduction may be reflected in the first profit factor that the device compiler and linker 324 generates at step 502 of the method 500 for the subset that includes "x" and "y". The device compiler and linker 324 is also configured to determine the number of instructions pulled in by rematerializing the live-in variables within a given subset and the cost of the use sites required to rematerialize those live-in variables, similar to step 504 of the method 500. In this example, the device compiler and linker 324 would analyze the definitions of live-in variables "x" and "y" and the types of memory accesses required by those definitions in order to determine the overhead involved in rematerializing those variables. In some situations, the overhead involved in rematerializing the live-in variables within a given subset may be prohibitive due to, for example, the complexity of the instructions required to rematerialize certain of those live-in variables, or the cost of the use sites associated with their rematerialization.
In general, the second profit factor that the device compiler and linker 324 generates at step 504 of the method 500 reflects this overhead. For each subset of live-in variables discussed in this example, specifically, the subsets that include "x", "y", or "x" and "y", the device compiler and linker 324 generates the first and second profit factors discussed above in conjunction with steps 502 and 504 of the method 500, respectively. For each such subset, the device compiler and linker 324 also generates the third profit factor discussed in conjunction with step 506 of the method 500. The device compiler and linker 324 generates the third profit factor for a given subset based on the maximum number of registers allowed for each thread configured to execute the coprocessor-enabled application 134 and the cost of "spill" events that could occur if that number of registers were exceeded. In such a situation, the live-in variables within the given subset could spill into system memory. The device compiler and linker 324 estimates the third profit factor for the given subset based on the "cost" of that spilling, such as increased memory latency and/or decreased program execution speed. Accordingly, the third profit factor generated for a given subset of live-in variables represents the degree of "risk" associated with rematerializing the live-in variables within that subset. The device compiler and linker 324 estimates the overall profitability of rematerializing the live-in variables within each of the different subsets discussed in this example based on the three profit factors generated for each such subset, similar to step 508 of the method 500. The device compiler and linker 324 then rematerializes the live-in variables within the subset having the greatest profitability.
In this example, the subset that includes both "x" and "y" has the greatest profitability, and so the device compiler and linker 324 rematerializes those variables within block 620 by modifying the program code 310. In sum, a device compiler and linker within a parallel processing unit (PPU) is configured to optimize the program code of a coprocessor-enabled application by rematerializing a subset of live-in variables for a particular block within a control flow graph generated for that program code. The device compiler and linker identifies the block of the control flow graph having the greatest number of live-in variables and then selects, from the live-in variables associated with the identified block, the subset estimated to provide the greatest profitability when rematerialized. The profitability of rematerializing a given subset of live-in variables is determined based on the reduction in the number of live-in variables, the cost of the rematerialization, and the potential risks of the rematerialization. Advantageously, rematerializing certain subsets of live-in variables reduces register pressure, thereby decreasing the likelihood of spill events. Reducing register pressure also allows the PPU to execute a greater number of threads simultaneously, thereby increasing the overall processing throughput of the PPU. One embodiment of the invention may be implemented as a program product for use with a computer system. The programs of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media.
Exemplary computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer, such as CD-ROM disks readable by a CD-ROM drive, flash memory, read-only memory (ROM) chips, or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive, or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The invention has been described above with reference to specific embodiments. Persons skilled in the art, however, will understand that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
A processor including a plurality of logical processors, and an instruction set, the instruction set including one or more instructions which, when executed by a first logical processor, cause the first logical processor to make a processor execution resource previously reserved for the first processor available to a second processor in the plurality of processors in response to the first logical processor being scheduled to enter an idle state.
Claims

What is claimed is: 1. A method comprising: in a processor based system where a plurality of processors share processor execution resources, in response to a first processor in the plurality of processors being scheduled to enter an idle state, making a processor execution resource previously reserved for the first processor available to a second processor in the plurality of processors. 2. The method of claim 1 further comprising reserving the processor execution resource for the first processor in response to the first processor being scheduled to execute a task. 3. The method of claim 2 wherein each of the plurality of processors is a logical processor of the processor based system. 4. The method of claim 3 wherein the first processor being scheduled to enter an idle state further comprises the first processor executing a processor instruction requesting the first processor to enter an idle state. 5. The method of claim 4 wherein making the processor execution resource previously reserved for the first processor available to a second processor further comprises releasing the processor execution resource into a common pool of processor execution resources accessible from the second processor. 6. The method of claim 5 wherein the first processor being scheduled to execute a task further comprises the first processor receiving a wake up signal. 7. The method of claim 6 wherein the processor execution resource previously reserved for the first processor further comprises the processor execution resource previously statically allocated to the first processor; and wherein releasing the processor execution resource into a common pool of processor execution resources further comprises de-allocating the processor execution resource. 8.
The method of claim 6 wherein the processor execution resource previously reserved for the first processor further comprises the processor execution resource previously locked by the first processor; and wherein releasing the processor execution resource into a common pool of processor execution resources further comprises the first processor unlocking the processor execution resource. 9. The method of claim 6 wherein the common pool of processor execution resources comprises a translation lookaside buffer and the processor execution resource is a translation cache entry from the translation lookaside buffer. 10. A processor comprising: a plurality of logical processors; and an instruction set, the instruction set comprising one or more instructions which when executed by a first logical processor, cause the first logical processor to make a processor execution resource previously reserved for the first processor available to a second processor in the plurality of processors in response to the first logical processor being scheduled to enter an idle state. 11. The processor of claim 10 wherein the first logical processor being scheduled to enter an idle state further comprises the first processor executing a processor instruction requesting the first logical processor to enter an idle state. 12. The processor of claim 11 wherein causing the first logical processor to make the processor execution resource previously reserved for the first logical processor available to a second logical processor further comprises releasing the processor execution resource into a common pool of processor execution resources accessible from the second logical processor. 13.
The processor of claim 12 wherein the processor execution resource previously reserved for the first logical processor further comprises the processor execution resource previously statically allocated to the first logical processor; and wherein releasing the processor execution resource into a common pool of processor execution resources further comprises de-allocating the processor execution resource. 14. The processor of claim 12 wherein the processor execution resource previously reserved for the first logical processor further comprises the processor execution resource previously statically allocated to the first logical processor; and wherein releasing the processor execution resource into a common pool of processor execution resources further comprises the first processor unlocking the processor execution resource. 15. A system comprising: a processor, the processor comprising a plurality of logical processors; and an instruction set, the instruction set comprising one or more instructions which when executed by a first logical processor, cause the first logical processor to make a processor execution resource previously reserved for the first processor available to a second processor in the plurality of processors in response to the first logical processor being scheduled to enter an idle state; firmware to schedule the first logical processor to enter an idle state; and a bus to interconnect the firmware and the processor. 16. The system of claim 15 wherein the first logical processor being scheduled to enter an idle state further comprises the first processor executing a processor instruction requesting the first logical processor to enter an idle state. 17.
The system of claim 16 wherein causing the first logical processor to make the processor execution resource previously reserved for the first logical processor available to a second logical processor further comprises releasing the processor execution resource into a common pool of processor execution resources accessible from the second logical processor. 18. The system of claim 17 wherein the processor execution resource previously reserved for the first logical processor further comprises the processor execution resource previously statically allocated to the first logical processor; and wherein releasing the processor execution resource into a common pool of processor execution resources further comprises de-allocating the processor execution resource. 19. The system of claim 17 wherein the processor execution resource previously reserved for the first logical processor further comprises the processor execution resource previously statically allocated to the first logical processor; and wherein releasing the processor execution resource into a common pool of processor execution resources further comprises the first processor unlocking the processor execution resource. 20. A machine accessible medium having stored thereon data which when accessed by a machine causes the machine to perform a method, the method comprising: in a processor based system where a plurality of processors share processor execution resources, in response to a first processor in the plurality of processors being scheduled to enter an idle state, making a processor execution resource previously reserved for the first processor available to a second processor in the plurality of processors. 21. The machine accessible medium of claim 20 further comprising reserving the processor execution resource for the first processor in response to the first processor being scheduled to execute a task. 22.
The machine accessible medium of claim 21 wherein each of the plurality of processors is a logical processor of the processor based system. 23. The machine accessible medium of claim 22 wherein the first processor being scheduled to enter an idle state further comprises the first processor executing a processor instruction requesting the first processor to enter an idle state. 24. The machine accessible medium of claim 23 wherein making the processor execution resource previously reserved for the first processor available to a second processor further comprises releasing the processor execution resource into a common pool of processor execution resources accessible from the second processor. 25. The machine accessible medium of claim 24 wherein the first processor being scheduled to execute a task further comprises the first processor receiving a wake up signal. 26. The machine accessible medium of claim 25 wherein the processor execution resource previously reserved for the first processor further comprises the processor execution resource previously statically allocated to the first processor; and wherein releasing the processor execution resource into a common pool of processor execution resources further comprises de-allocating the processor execution resource. 27. The machine accessible medium of claim 25 wherein the processor execution resource previously reserved for the first processor further comprises the processor execution resource previously locked by the first processor; and wherein releasing the processor execution resource into a common pool of processor execution resources further comprises the first processor unlocking the processor execution resource. 28. The machine accessible medium of claim 25 wherein the common pool of processor execution resources comprises a translation lookaside buffer and the processor execution resource is a translation cache entry from the translation lookaside buffer.
Sharing Idled Processor Execution Resources

Background [01] In the high level view of a processor depicted in Fig. 1a, a processor may be conceptualized as being comprised of two components, the first implementing the architectural state of the processor, such as for example its registers and program counter, and the second composed of processor execution resources, such as, for example, a translation lookaside buffer (TLB). [02] In one type of multiprocessing processor based system, as depicted in Fig. 1b, multiple physical processors are interconnected by a bus system, and each physical processor maintains a separate architectural state in hardware as well as a separate set of processor execution resources in hardware. In a thread scheduling scenario where each processor of such a system is scheduled to execute a different thread, an instance may arise when one of the processors in the system is idled because it is waiting on a slower device in the system, such as a disk drive, or because it is currently not scheduled to execute a thread. In this instance, the processor and all of its execution resources are also idled and unavailable to other processors of the system. [03] In another type of processor based system such as that depicted in Fig. 1c, a hardware processor that maintains separate architectural states in the processor's hardware for a plurality of logical processors may, however, have a single processor core pipeline that is shared by the logical processors and a single set of processor execution resources, including the TLB, that is shared by the logical processors. Such a processor architecture is exemplified by the Intel® Xeon processor with Hyper-Threading Technology, among others, and is well known in the art.
[04] In such a logical multiprocessing system, a thread scheduler may schedule a different thread to execute on each of the logical processors because each logical processor maintains its architectural state separately from all other logical processors. When a logical processor is idled by an operating system thread scheduler or is waiting for data from a slow storage device, it may either execute an idle task, typically a tight loop, and periodically check for an interrupt; or it may suspend its activity and wait for a wake up signal of some type to resume execution of a thread. [05] In contrast to a multiprocessing system where processor execution resources are physically separated, in this type of logical multiprocessing system, when one of the multiple logical processors in such a system is idled, dynamically allocated processor execution resources that are not being used by the idled logical processor may be available to other logical processors that are currently executing threads for the user or the system. [06] Processor execution resources in a logical multiprocessing system may, however, be reserved for a logical processor. This may occur in different ways. For one example, a logical processor may lock a dynamically allocated processor execution resource such as a translation register (TR) from the TLB thus making it unavailable to other logical processors. In another instance, the logical processor may be statically allocated processor execution resources such as TCs and thus these statically allocated resources may be unavailable to other logical processors. These reserved resources typically continue to be unavailable to other logical processors even after the logical processor for which they are reserved is idled.
Thus, TRs that are locked by a logical processor generally continue to be locked by the logical processor while it is idling; and statically allocated TCs allocated to the logical processor continue to be statically allocated to the logical processor while it is idling.

Brief Description of the Drawings

Figure 1 depicts high level views of different types of processor architectures. Figure 2 is a flowchart of processing in one embodiment. Figure 3 depicts a processor based system in one embodiment.

Detailed Description [07] In one embodiment processing occurs as depicted in the high level flowchart in Fig. 2. In the figure, two logical processors, Processor 1, 200, and Processor 2, 205, are executing threads scheduled by an operating system that includes a thread scheduler 210. At 215, Processor 1 is switched out from an executing thread due to, for instance, termination of the thread or a page fault, and returns to the thread scheduler. If no more tasks are scheduled for this logical processor, 220, the processor executes an idling sequence, 225-230. First, the logical processor gives up any reserved processor execution resources held by the logical processor, 225, releasing them to the common pool 260. Thus for example, Processor 1 may return a Translation Cache entry or Translation Cache Register to the general pool of registers in the Translation Lookaside Buffer. [08] In different embodiments, the processing in step 225 may differ. In some embodiments, the exclusively held resource released may be a dynamically allocated resource and have previously been locked by Processor 1. In such an embodiment, in step 225, the logical processor unlocks the resource and thereby makes it available to other logical processors. In another embodiment, the exclusively held resource may have been previously statically allocated to Processor 1.
In such embodiments, in step 225, the statically allocated resource is deallocated and is returned to the pool of dynamically allocated resources 260. [09] After Processor 1 enters an idled state, such as a state of suspension 230 in this embodiment, it may be requested for execution of a new or resumed thread by a wake up signal such as an interrupt 235. In other embodiments the processor may enter an idle task loop instead of the suspension depicted at 230 and periodically check for interrupts. [10] Following the wake up signal, the logical processor then re-acquires the exclusively reserved resources by either locking or statically allocating them to itself as necessary, 240. The logical processor then switches to an incoming thread and continues execution of that thread, 245. [11] The resources freed by Processor 1 before suspension or idling at 225 become available to another logical processor such as Processor 2, 205, executing a thread such as the depicted user thread 250. These resources may then be dynamically allocated to the logical processor as necessary from the pool of shared processor execution resources during the execution of the thread, 255. [12] Fig. 3 depicts a processor based system in one embodiment where the logical processors are implemented as part of a processor 300. Programs that execute on the logical processors are stored in memory 340 connectively coupled to the processor by bus system 320. The memory may include a non-volatile memory section storing firmware that includes a thread scheduler performing processing substantially as described above. [13] Many other embodiments are possible. For instance, while the above description limits itself to logical processors, similar processing is applicable to physically separate multiprocessors that share any common execution resources.
In such embodiments, a hybrid version of logical and physical multiprocessing is implemented where separate architectural states and some execution resources are separated in hardware, but other execution resources are shared in hardware and may be released using processing similar to that depicted in Fig. 2. In some embodiments, the thread scheduler referenced above may form a component of firmware resident in non-volatile memory as depicted in Fig. 3, while in others it may be a portion of operating system software stored on disk media accessible to the processor. In some embodiments, the actions taken to release and reserve processor execution resources may be directly implemented in hardware and ancillary to the processor's instruction execution system, while in other embodiments they may be actions taken by the processor as part of the execution of one or more instructions. In some embodiments the shared execution resources may include special purpose registers unrelated to the TLB. Embodiments are not limited to two processors; three or more processors may share execution resources and perform processing analogous to the processing described above. [14] Embodiments in accordance with the claimed subject matter may be provided as a computer program product that may include a machine-readable medium having stored thereon data which, when accessed by a machine, may cause the machine to perform a process according to the claimed subject matter. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, DVD-ROM disks, DVD-RAM disks, DVD-RW disks, DVD+RW disks, CD-R disks, CD-RW disks, CD-ROM disks, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing electronic instructions.
Moreover, embodiments may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection). [15] Many of the methods are described in their most basic form, but steps can be added to or deleted from any of the methods, and information can be added or subtracted from any of the described messages, without departing from the basic scope of the claimed subject matter. It will be apparent to those skilled in the art that many further modifications and adaptations can be made. The particular embodiments are not provided to limit the invention but to illustrate it. The scope of the claimed subject matter is not to be determined by the specific examples provided above but only by the claims below.
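The release-and-reacquire sequence of Fig. 2 described above can be modeled in a few lines of code. The sketch below is purely illustrative: the class names, the pool structure, and the resource names (e.g., `"TR0"`) are invented for the example and are not part of the described embodiments or any real hardware interface.

```python
# Hypothetical model of the Fig. 2 idling sequence: a logical processor
# releases its exclusively held execution resources (e.g., Translation
# Cache Registers) to a shared pool before idling, and re-acquires
# resources on a wake-up signal. Names are illustrative only.

class SharedResourcePool:
    """Models the common pool of shared execution resources (260)."""
    def __init__(self, resources):
        self.free = set(resources)

    def acquire(self, n):
        # Hand out up to n free resources (lock / static allocation).
        return {self.free.pop() for _ in range(min(n, len(self.free)))}

    def release(self, resources):
        self.free |= resources


class LogicalProcessor:
    def __init__(self, name, pool):
        self.name = name
        self.pool = pool
        self.held = set()   # exclusively held (locked or statically allocated)
        self.idle = False

    def reserve(self, n=2):
        # Step 240: lock or statically allocate resources to this processor.
        self.held |= self.pool.acquire(n)

    def enter_idle(self):
        # Step 225: give up reserved resources so other logical processors
        # can dynamically allocate them (step 255); then suspend (step 230).
        self.pool.release(self.held)
        self.held = set()
        self.idle = True


pool = SharedResourcePool({f"TR{i}" for i in range(4)})
p1 = LogicalProcessor("Processor 1", pool)
p2 = LogicalProcessor("Processor 2", pool)

p1.reserve(2)       # Processor 1 holds 2 TRs; 2 remain in the pool
p1.enter_idle()     # all 4 TRs now available to other processors
p2.reserve(4)       # Processor 2 can now dynamically claim all of them
```

The point of the model is the hand-off: resources exclusively held by an idling processor return to the common pool rather than staying locked, which is the behavior the flowchart's steps 225 and 255 describe.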
Embodiments of a cold plate and a manifold plate are disclosed. The cold plate may be coupled with an integrated circuit die, and the cold plate may also include a flow path to receive a liquid coolant. Coolant moving through the flow path can remove heat generated by the die. The cold plate may include one or more piercing elements that are coupled with the flow path. The manifold plate may hold a volume of a liquid coolant, and one or more breakable seals on the manifold plate contain the liquid coolant within the manifold plate (and perhaps other components of a fluid cooling system). The piercing element (or elements) on the cold plate may be inserted into the breakable seal (or seals) on the manifold plate to open the breakable seals and establish fluid communication between the cold and manifold plates. The use of a manifold plate including the breakable seals may enable the shipment and storage of a fluid cooling system precharged with a working fluid. Other embodiments are described and claimed. |
CLAIMS
What is claimed is:
1. A thermal component comprising: a body; a fluid path disposed in the body, the fluid path including a port; and a piercing element coupled with the port, the piercing element to open a breakable seal on a second component.
2. The thermal component of claim 1, wherein the port comprises an inlet port, the thermal component further comprising: an outlet port disposed on the fluid path; and a second piercing element coupled with the outlet port, the second piercing element to open a second breakable seal on the second component.
3. The thermal component of claim 2, wherein the fluid path further comprises a number of channels.
4. The thermal component of claim 3, wherein the fluid path further comprises: an inlet plenum coupled with one end of each of the channels and in fluid communication with the inlet port; and an outlet plenum coupled with an opposing end of each of the channels and in fluid communication with the outlet port.
5. The thermal component of claim 1, wherein the piercing element comprises a cylindrical tube having one end coupled with the port and an opposing end extending from the body, the opposing end having an angled profile.
6. The thermal component of claim 1, further comprising a retention element disposed on the body, the retention element to engage a socket or a retention mechanism.
7. The thermal component of claim 1, wherein the second component includes a volume of a liquid coolant.
8. The thermal component of claim 1, wherein the piercing element is disposed on one surface of the body and an integrated circuit die is thermally coupled with an opposing surface of the body.
9. The thermal component of claim 1, wherein the piercing element is disposed on one surface of the body and a heat spreader is thermally coupled with an opposing surface of the body.
10.
A component comprising: a body; a fluid channel disposed in the body and including a port; and a breakable seal disposed at the port, the breakable seal to be opened upon engagement with a piercing element of a second component.
11. The component of claim 10, wherein the port comprises an outlet port, the component further comprising: an inlet port disposed on the fluid channel, the fluid channel extending between the inlet port and outlet port; a second fluid channel disposed in the body, the second fluid channel extending between a second inlet port and a second outlet port; and a second breakable seal disposed on the second inlet port, the second breakable seal to be opened upon engagement with a second piercing element of the second component.
12. The component of claim 10, wherein the breakable seal comprises a membrane disposed over the port.
13. The component of claim 12, further comprising a recess formed in the body and disposed at the port, wherein the membrane is disposed in the recess.
14. The component of claim 13, further comprising a pre-seal element disposed in the recess, the pre-seal element to engage the piercing element prior to engagement of the piercing element with the membrane.
15. The component of claim 12, wherein the membrane comprises a material selected from a group consisting of fluorinated ethylene propylene (FEP), polychlorotrifluoroethylene (PCTFE), aluminum, aluminum alloys, copper, and copper alloys.
16. The component of claim 10, further comprising a volume of a fluid coolant disposed in the fluid channel.
17. The component of claim 10, wherein a fluid cooling system is in fluid communication with the fluid channel, the fluid cooling system including a pump and a heat exchanger.
18.
A method comprising: providing a thermal component having a piercing element, the thermal component coupled with an integrated circuit die; providing a fluid system component having a breakable seal, the fluid system component containing a volume of a fluid coolant; and inserting the piercing element into the breakable seal to open the breakable seal and establish fluid communication between the thermal and fluid system components.
19. The method of claim 18, further comprising securing the thermal component to the fluid system component.
20. The method of claim 19, wherein an engagement between the piercing element and the breakable seal secures the thermal component to the fluid system component.
21. The method of claim 18, wherein the thermal component includes a second piercing element and the fluid system component includes a second breakable seal, the method further comprising inserting the second piercing element into the second breakable seal to open the second breakable seal.
22. The method of claim 18, further comprising coupling the integrated circuit die to a substrate.
23. The method of claim 18, wherein the thermal component includes a fluid path disposed proximate the integrated circuit die and wherein flow of the fluid coolant through the fluid path removes heat generated by the die.
24. The method of claim 18, wherein the fluid system component is coupled with a fluid cooling system including a pump and a heat exchanger, and wherein the fluid cooling system includes an additional volume of the fluid coolant.
25. The method of claim 18, further comprising coupling at least one other component of a fluid cooling system with the fluid system component.
26.
An assembly comprising: a cold plate, the cold plate including a body having a fluid path, a first piercing element coupled with the fluid path, and a second piercing element coupled with the fluid path; and a manifold plate secured to the cold plate, the manifold plate including a body, a first breakable seal disposed on the body, and a second breakable seal disposed on the body; wherein the first and second piercing elements have opened the first and second breakable seals, respectively, to establish fluid communication between the manifold and cold plates.
27. The assembly of claim 26, wherein the manifold plate is secured to the cold plate by an engagement between the first piercing element and the first breakable seal and by an engagement between the second piercing element and the second breakable seal.
28. The assembly of claim 26, further comprising an integrated circuit die coupled with the cold plate.
29. The assembly of claim 28, further comprising a substrate coupled with the integrated circuit die.
30. The assembly of claim 26, further comprising: a pump in fluid communication with a first fluid channel on the manifold plate, the first breakable seal disposed at an outlet of the first fluid channel; and a heat exchanger in fluid communication with a second fluid channel on the manifold plate, the second breakable seal disposed at an inlet of the second fluid channel.
31. The assembly of claim 26, wherein the cold plate further comprises: a number of channels disposed in the fluid path; an inlet plenum coupled with one end of each of the channels and in fluid communication with an inlet port of the fluid path, the first piercing element disposed at the inlet port; and an outlet plenum coupled with an opposing end of each of the channels and in fluid communication with an outlet port of the fluid path, the second piercing element disposed at the outlet port.
COLD PLATE AND MATING MANIFOLD PLATE FOR IC DEVICE COOLING SYSTEM ENABLING THE SHIPMENT OF COOLING SYSTEM PRE-CHARGED WITH LIQUID COOLANT
FIELD OF THE INVENTION
The disclosed embodiments relate generally to cooling systems for integrated circuit (IC) devices, and more particularly to a cold plate and mating manifold plate that enable the shipment of a cooling system pre-charged with a liquid coolant.
BACKGROUND OF THE INVENTION
The power dissipation of microprocessors and other processing devices generally increases with each design generation, as the operating frequencies of these devices are increased. At the same time, feature sizes are decreasing and, therefore, the number of active circuit elements (e.g., transistors) per unit area is rising, which may lead to increased power densities. This increase in power density coupled with higher operating frequencies can result in greater heat generation during operation of an IC die, and this heat should be dissipated for proper functioning and reliability of the die. Further, due to the aforementioned factors as well as other design and operating conditions, one or more "hot spots" - e.g., a location on a die where the temperature is significantly greater than in surrounding regions on the die - may be present on an IC die during operation, and a failure to adequately extract heat from such hot spots may lead to damage and/or a degradation in performance of the die. Thus, the thermal performance of die cooling systems in present and future generations of IC devices will become increasingly critical. One technology that may meet the aforementioned needs is liquid cooling. Liquid cooling solutions may be used to cool a variety of IC devices, including processing devices such as microprocessors, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and any other type of IC device.
Further, these liquid cooling systems may find application in numerous types of computing systems, including, for example, servers, desktop computers, laptop computers, as well as handheld and other portable computing devices. One challenge facing IC device manufacturers and computer system manufacturers alike is the handling of liquid coolants. Potential issues include the storage and shipment of IC devices and/or cooling systems with a liquid coolant, as well as the assembly of a computer including a liquid cooling system. The import of these issues may be most pronounced with regard to small equipment manufacturers who may not have the resources to purchase and/or operate their own liquid coolant filling systems.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] FIG. 1A is a schematic diagram showing a plan view of an embodiment of a cold plate for a liquid cooling system.
[0005] FIG. 1B is a schematic diagram showing a front elevation view of the cold plate of FIG. 1A.
[0006] FIG. 1C is a schematic diagram showing a side elevation view of the cold plate of FIG. 1A.
FIG. 2 is an elevation view illustrating the cold plate of FIGS. 1A-1C in combination with an IC die, as well as an embodiment of a cover for the cold plate.
FIG. 3A is a schematic diagram showing a plan view of an embodiment of a manifold plate for a liquid cooling system.
FIG. 3B is a schematic diagram showing a front elevation view of the manifold plate of FIG. 3A (shown in partial cross-section).
FIG. 3C is a schematic diagram showing a side elevation view of the manifold plate of FIG. 3A.
FIG. 3D is a schematic diagram illustrating an embodiment of a breakable seal for the manifold plate of FIGS. 3A-3C.
FIG. 3E is a schematic diagram illustrating another embodiment of a breakable seal for the manifold plate of FIGS. 3A-3C (shown in partial cross-section).
FIG. 4 is a schematic diagram illustrating an embodiment of a cover for the manifold plate of FIGS. 3A-3C (with the cover shown in cross-section).
FIG.
5 is a schematic diagram illustrating an embodiment of an assembly including the cold plate of FIGS. 1A-1C and the manifold plate of FIGS. 3A-3C.
FIG. 6 is a block diagram illustrating an embodiment of a method for assembling a liquid cooling system for an IC device.
[0016] FIG. 7 is a schematic diagram illustrating an embodiment of a computing system, which may include any of the disclosed embodiments of a liquid cooling system.
DETAILED DESCRIPTION OF THE INVENTION
[0017] Referring to FIGS. 1A through 1C, illustrated is an embodiment of a cold plate 100, which may form part of a cooling system for an integrated circuit (IC) device. In one embodiment, for example, an IC die is thermally coupled with the cold plate 100, and a liquid coolant flows through one or more channels on the cold plate to remove heat generated by the die. A plan view of the cold plate 100 is shown in FIG. 1A. A front elevation view of the cold plate 100 is shown in FIG. 1B, whereas a side elevation view of the cold plate is shown in FIG. 1C. [0018] Cold plate 100 comprises a body 110 having an upper surface 112 and an opposing lower surface 114. Disposed in the body 110 is a flow path 150, and this flow path may comprise any combination of channels, plenums, ports, and other flow control devices or elements (e.g., pipes, conduits, valves, gates, etc.) in which a liquid coolant may pass. An IC die may be thermally coupled to the lower surface 114 of body 110 (see FIG. 2, which will be discussed below), and a liquid coolant circulating through the flow path 150 can remove heat generated by the die (and transferred to the cold plate 100 by, for example, conduction). [0019] According to one embodiment, as shown in the figures, the flow path 150 comprises an inlet plenum 151 and an outlet plenum 153. The inlet plenum 151 is in fluid communication with an inlet port 152 and, similarly, the outlet plenum 153 is coupled with an outlet port 154.
Extending between the inlet and outlet plenums 151, 153 are one or more channels 155, with adjacent channels 155 being separated by a wall 156 (it should be noted that in FIG. 1A the flow path 150 is shown in solid - rather than hidden - lines for clarity and ease of illustration). A liquid coolant may be introduced into the flow path 150 at the inlet port 152, and liquid coolant entering the inlet port flows into the inlet plenum 151. Liquid coolant in inlet plenum 151 can enter the channels 155 and flow toward the outlet plenum 153. As the liquid coolant traverses the channels 155, heat present in the body 110 (e.g., heat conducted into the body from an IC die) is transferred to the liquid coolant. The heated liquid coolant present in the outlet plenum 153 will then exit the flow path 150 at the outlet port 154. In one embodiment, this heated coolant is circulated through a heat exchanger (or other thermal device) to cool the liquid coolant, and this cooled liquid can be reintroduced into the flow path 150 at inlet port 152. The flow path 150 may have any suitable structure, and it should be understood that the embodiment shown in FIGS. 1A-1C represents but one example of a flow path that may be disposed on the cold plate 100. Generally, the flow path 150 may have any suitable form which allows for the removal of heat from the cold plate by a liquid coolant. In addition, it should be noted that the pattern of channels 155 shown in FIGS. 1A-1C is just one example of the layout of channels that may find application with the disclosed embodiments, and the reader will appreciate that any suitable number and pattern of channels may be employed in the flow path 150.
Also, the channels 155 may have any suitable dimensions, and in one embodiment each of the channels has a width (w) of between 50 μm and 200 μm and a height (h) of between 100 μm and 2,000 μm (channels of such dimensions are sometimes referred to as "microchannels"). Coupled with the inlet port 152 of flow path 150 is a first piercing element 160 and, similarly, coupled with the outlet port 154 is a second piercing element 170. As will be described in more detail below, the cold plate 100 may be coupled with a manifold plate (or other thermal component or collection of components) containing a volume of a liquid coolant, and this manifold plate may contain one or more breakable seals to contain the coolant within the manifold plate prior to assembly (e.g., during shipping, handling, storage, etc.). Each of the piercing elements 160, 170 comprises any device capable of piercing or otherwise opening a breakable seal on the manifold plate. Although two piercing elements 160, 170 are shown in the illustrated embodiment, it should be understood that the cold plate 100 may include any suitable number of piercing elements (e.g., one, or more than two, etc.). According to one embodiment, as shown in the figures, the piercing element 160 comprises a tube or other conduit extending from the inlet port 152, and an opposing end 165 of this tube is cut or formed at an angle. Similarly, the piercing element 170 may comprise a tube or other conduit extending from the outlet port 154, wherein an opposing end 175 of this tube has been cut or formed at an angle. The angled profile at each of the opposing ends 165, 175 of the piercing elements 160, 170, respectively, assists in puncturing a membrane or other breakable seal disposed on the manifold plate, as will be described below in more detail.
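For a rough sense of scale, the microchannel dimensions quoted above (width 50-200 μm, height 100-2,000 μm) can be turned into a hydraulic diameter, and a simple energy balance gives the coolant temperature rise across the flow path. Both are textbook relations, not taken from this disclosure, and the heat-load and flow-rate values below are assumed purely for illustration.

```python
# Textbook relations applied to the microchannel dimensions quoted in
# the text; all numeric operating values are assumed, not from the patent.

def hydraulic_diameter_um(width_um, height_um):
    """D_h = 2*w*h / (w + h) for a rectangular duct, in micrometres."""
    return 2.0 * width_um * height_um / (width_um + height_um)

def coolant_temp_rise_c(q_watts, mdot_kg_s, cp_j_kg_k):
    """Steady-state coolant temperature rise: dT = Q / (m_dot * c_p)."""
    return q_watts / (mdot_kg_s * cp_j_kg_k)

# Extremes of the quoted channel range (w: 50-200 um, h: 100-2,000 um):
d_small = hydraulic_diameter_um(50, 100)     # ~66.7 um
d_large = hydraulic_diameter_um(200, 2000)   # ~363.6 um

# Assumed operating point: 100 W die, water-like coolant at 0.005 kg/s.
dt = coolant_temp_rise_c(100.0, 0.005, 4180.0)
```

The small hydraulic diameters explain why such channels are called microchannels: they give a large wetted surface area per unit volume, which is what lets the coolant pick up the die's heat with only a modest temperature rise.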
It should, of course, be understood that the piercing elements 160, 170 are presented by way of example and not limitation and, further, that the piercing elements may have any suitable shape and configuration and, further, that the piercing elements may comprise other suitable devices. In a further embodiment, the cold plate 100 further includes a retaining element 180. The retaining element 180 may comprise any feature or structure adapted to engage a socket or other retention mechanism (e.g., a socket and/or retention mechanism disposed on a motherboard). In one embodiment, as shown in the figures, the retaining element 180 comprises a lip extending around a periphery of the body 110. The lip includes surfaces 187, 188 that may, in some embodiments, engage a socket and/or retention mechanism. The cold plate 100 may be manufactured using any suitable method or combination of methods. Fabrication processes that may be employed to make the cold plate, either alone or in combination, include etching, skiving, machining (e.g., milling, laser machining, etc.), molding, and/or stamping, as well as others. In one embodiment, the body 110 comprises an upper portion 190a and a lower portion 190b. The flow path 150 may be formed in the upper portion 190a (e.g., as by etching, skiving, molding, stamping, etc.), and then the lower portion 190b - which may comprise a generally flat plate - is attached to the upper portion (e.g., as by brazing, soldering, epoxy, etc.) to enclose the fluid plenums 151, 153 and channels 155. In another embodiment, the flow path 150 may be formed in the lower portion 190b, and in yet a further embodiment portions of the flow path may be formed in the upper portion 190a and other portions of the flow path formed in the lower portion 190b.
The lower portion 190b of the body 110 may have dimensions larger than that of the upper portion 190a, such that an outer periphery of the lower portion extends beyond the periphery of the upper portion and functions as the retaining element 180. Further, the piercing elements 160, 170 may be formed integral with the body 110 or, alternatively, the piercing elements may comprise separate parts that are formed and subsequently attached to the body 110 (e.g., as by brazing, soldering, epoxy, etc.). [0025] The cold plate 100 may comprise any suitable material or combination of materials. According to one embodiment, the cold plate includes a thermally conductive material, such as a metal (e.g., copper, aluminum, steel, and alloys of these and/or other metals), a polymer, or a composite material, as well as combinations of these and/or other materials. The piercing elements 160, 170 may also comprise any suitable material or combination of materials, including metals (e.g., copper, aluminum, brass, steel, etc.), polymers, and composite materials. In one embodiment, the piercing elements are formed from the same material as the body 110 of the cold plate (e.g., copper). [0026] At this juncture, it should be noted that the term "cold plate" is used without limitation. Other thermal components, whether or not they exhibit the same functionality, may find application to the disclosed embodiments. Further, as the reader will appreciate, a thermal component providing functionality similar to that of the cold plate 100 may be referred to using alternative terminology. By way of example, thermal components that may find use with the disclosed embodiments include heat spreaders (a component sometimes referred to as an integrated heat spreader, or IHS) and heat sinks, and such thermal components may have functionality that is similar to that of cold plate 100 or that is different (at least in part) from that of the cold plate 100. Referring now to FIG.
2, illustrated is an embodiment of an assembly 200. The assembly 200 includes a cold plate, and in one embodiment the cold plate comprises the cold plate 100 of FIGS. 1A-1C. Coupled with the cold plate 100 is an IC die 50, and a thermal interface material (TIM) layer 70 may be disposed between the die 50 and cold plate 100 to both thermally and mechanically couple the cold plate and die. The IC die 50 may comprise any type of integrated circuit device, such as a microprocessor, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a graphics processor, or other processing device. A number of interconnects 55 (e.g., electrically conductive bumps or columns, bond wires, etc.) may extend from the processing device 50. The TIM layer 70 may comprise any suitable thermally conductive material. By way of example, TIM layer 70 may comprise a solder material or a thermally conductive polymer. In another embodiment, the cold plate 100 may be coupled with a heat spreader (perhaps with a TIM layer disposed between these two components), wherein the heat spreader is, in turn, coupled with an IC die. [0028] In yet another embodiment, a cover 290 may be disposed over the cold plate 100 to protect the piercing elements 160, 170. The cover 290 may be disposed on the cold plate 100 (either by itself or as part of the assembly 200) to protect the cold plate during shipping, handling, and/or storage. The cover 290 may also prevent contaminants (e.g., particulates) from entering the flow path 150 of the cold plate 100. According to one embodiment, the cover comprises a body 292 having cavities 296, 297 sized and oriented to receive the piercing elements 160, 170, respectively, in order to protect these structures. The cover 290 may also include an upper surface 295 adapted to be picked up by fabrication and handling equipment (e.g., a pick-and-place head).
The cover 290 may be constructed from any suitable material (e.g., plastics, composites, or metals) using any suitable fabrication technique (e.g., molding, machining, etc.). Illustrated in FIGS. 3A through 3C is an embodiment of a manifold plate 300, which may form part of a cooling system for an IC device. In one embodiment, for example, an IC die is thermally coupled with a cold plate (e.g., the cold plate 100 described above, or other thermal component), and the manifold plate is attached to the cold plate, such that fluid communication is established between the cold and manifold plates. A volume of liquid coolant may be disposed in the manifold plate 300 (or the manifold plate in combination with other components of a liquid cooling system), and when fluid communication is established with the cold plate the coolant may flow into a flow path of the cold plate. A plan view of the manifold plate 300 is shown in FIG. 3A, and a front elevation view of the manifold plate 300 is shown in FIG. 3B, with a side elevation view of the manifold plate being shown in FIG. 3C. Referring to FIGS. 3A-3C, the manifold plate 300 comprises a body 310 having an upper surface 312 and an opposing lower surface 314. Disposed in the body 310 is a first fluid channel 320 and a second fluid channel 340. First fluid channel 320 includes an inlet 322 and an outlet 324, whereas second fluid channel 340 includes an inlet 342 and an outlet 344. Each of the inlet 322 of first fluid channel 320 and the outlet 344 of second fluid channel 340 may extend beyond the body 310 (e.g., each may include a tube extending from the body to allow for the coupling of other fluid lines to the manifold plate 300). Disposed at the outlet 324 of first fluid channel 320 is a first breakable seal 330. Similarly, disposed at the inlet 342 of second fluid channel 340 is a second breakable seal 350.
Although two breakable seals 330, 350 are shown in the illustrated embodiments, it should be understood that the manifold plate 300 may have any suitable number of breakable seals (e.g., one, or more than two, etc.). A volume of a liquid coolant (not shown in figures) may be disposed in each of the first and second fluid channels 320, 340 and retained therein by the breakable seals 330, 350, respectively. In one embodiment, the inlet 322 of first fluid channel 320 and the outlet 344 of second fluid channel 340 are coupled with other components of a liquid cooling system (e.g., a closed loop system), and this liquid cooling system may include an additional volume of the coolant. Alternatively, covers or seals may also be placed over the inlet 322 of first fluid channel 320 and the outlet 344 of second fluid channel 340 to aid in retaining the coolant within the first and second fluid channels 320, 340. Thus, the manifold plate 300 may contain a liquid coolant for an IC device cooling system, and the manifold plate - either alone or in combination with other components of the cooling system - may be stored, handled, and/or shipped with this liquid coolant. Further, the manifold plate 300 may be coupled with a cold plate (e.g., the cold plate 100 described above) or a die and cold plate assembly (e.g., the assembly 200 described above), and the liquid coolant stored in the manifold plate (and perhaps other system components) may serve as the working fluid in a liquid cooling system for an IC die, as will be described in more detail below. For the embodiments described below, it is assumed that the manifold plate 300 is coupled with the cold plate 100 of FIGS.
1A-1C; however, it should be understood that the disclosed manifold plate may be used with other types of cold plates or thermal components. When the manifold plate 300 is coupled with the cold plate 100 (or cold plate and die assembly 200), the piercing elements 160, 170 will puncture or otherwise open the breakable seals 330, 350, respectively, on the manifold plate 300, thereby establishing fluid communication between the cold and manifold plates. Therefore, in general, the breakable seals 330, 350 may each comprise any device or structure capable of maintaining a fluid seal (and retaining fluid within the manifold plate) and, further, capable of being opened by the piercing elements 160, 170 of cold plate 100. In one embodiment, each of the breakable seals comprises a membrane capable of being punctured by one of the piercing elements 160, 170. The membrane may comprise any suitable material, including a polymer or a metal. Suitable polymers include fluorinated ethylene propylene (FEP) and polychlorotrifluoroethylene (PCTFE), as well as polymers suitable for blister pack technology, whereas suitable metals may include aluminum, copper, and alloys of these and/or other metals. According to one embodiment, this membrane is ruptured by one of the piercing elements, but the piercing element does not completely sever the ruptured portion of the membrane away from the remainder of the membrane body, which prevents a piece of the membrane from breaking away and entering the fluid path of either the cold or manifold plates (or other components of a fluid cooling system). By way of example, as the manifold plate 300 engages the cold plate 100, the opening in the breakable seals 330, 350 may follow the circular contour of the piercing elements 160, 170, the ends of which may be cut at an angle, as described above.
As insertion of the piercing elements 160, 170 into the breakable seals 330, 350 continues, the piercing elements may peel the breakable seals out of the way, but do not separate the cut portions away from the remainder of the seals (e.g., a circular chad remains attached to each breakable seal and this chad may be bent upwards and out of the flow region). The angled profile at the end of each piercing element 160, 170 (which may be similar to a hypodermic needle) may aid in the cutting and peeling action of the breakable seals 330, 350. [0034] In one embodiment, as shown in FIGS. 3A-3C, the first breakable seal 330 is disposed in a recess 370 formed at the outlet 324 of the first fluid channel 320, and the second breakable seal 350 is disposed in a recess 380 formed at the inlet 342 of the second fluid channel 340. Thus, in the embodiment of FIGS. 3A-3C, the breakable seals 330, 350 are generally flush with the lower surface 314 of body 310. The breakable seals 330, 350 may be secured in the recesses 370, 380, respectively, by any suitable process (e.g., by adhesive bonding, by an interference fit, etc.). According to another embodiment, as shown in FIG. 3D, the breakable seals 330, 350 (e.g., membranes) are disposed on the lower surface 314 of the body 310. In the embodiment of FIG. 3D, the breakable seals 330, 350 may be secured to the surface 314 of the manifold plate using any suitable process (e.g., by adhesive bonding, etc.). In a further embodiment, as illustrated in FIG. 3E, the breakable seals 330, 350 provide a "make-before-break" functionality. In the embodiment of FIG. 3E, each of the breakable seals 330, 350 is disposed in a recess 370, 380, respectively. In addition, a pre-seal element 375 is disposed in the recess 370 adjacent the membrane 330 (or other breakable seal) and, similarly, a pre-seal element 385 is disposed in the recess 380 adjacent the membrane 350.
During insertion of the piercing elements 160, 170 of cold plate 100 into the recesses 370, 380, respectively, a seal is formed between the piercing elements and the pre-seal elements 375, 385 prior to opening of the membranes 330, 350. This seal (which may be temporary) can prevent the leakage of liquid coolant during the breaking (e.g., peeling back) of the membranes 330, 350 before full engagement between the manifold and cold plates is achieved. The pre-seal elements 375, 385 may each comprise any device or structure capable of forming a seal with the piercing elements 160, 170 on the cold plate, and these pre-seal elements may be fabricated from any suitable material. In one embodiment, each of the pre-seal elements 375, 385 comprises an O-ring or grommet fabricated from a polymer material (e.g., a soft rubber) or other suitable material or combination of materials. Also, in one embodiment, the depth of the recesses 370, 380 in FIG. 3E is sufficient to provide for the full engagement between the pre-seal elements 375, 385 and a periphery of the piercing elements 160, 170, respectively, prior to engagement between the piercing elements and the breakable seals 330, 350. Each of the breakable seals 330, 350 and the pre-seal elements 375, 385 may be secured within the recesses 370, 380, respectively, using any suitable process (e.g., by adhesive bonding, by an interference fit, etc.).

The manifold plate 300 may be fabricated using any suitable process or combination of processes. For example, molding, stamping, and/or machining (e.g., milling, laser machining, etc.) may be employed to fabricate the manifold plate. Also, the manifold plate 300 may comprise any suitable material or combination of materials. Materials believed suitable for fabrication of the manifold plate include, for example, metals (e.g., copper, aluminum, steel, brass, etc.), polymers, and composite materials, or combinations thereof.
In one embodiment, the manifold plate 300 is formed, at least in part, from the same material used to construct the body 110 of cold plate 100 (e.g., copper). However, in other embodiments, the manifold plate 300 and cold plate 100 comprise different materials.

Turning to FIG. 4, in a further embodiment, a cover 490 may be disposed over the manifold plate 300 to protect the breakable seals 330, 350. The cover 490 may be disposed on the manifold plate 300 (either by itself or as part of a larger assembly) to protect the manifold plate during shipping, handling, and/or storage. The cover 490 may also prevent contaminants from contacting and/or lodging on the breakable seals. In one embodiment, the cover 490 comprises a body 492 having cavities 496, 497 sized and oriented to receive the breakable seals 330, 350, respectively, in order to protect these structures. Where the breakable seals 330, 350 are disposed in recesses (e.g., the recesses 370, 380 of either FIG. 3B or 3E), the cover 490 may be configured to rest against the breakable seals in order to protect these devices from inadvertent rupture or other damage. The cover 490 may also include a lower surface 495 adapted to be picked up by fabrication and handling equipment (e.g., a pick-and-place head). The cover 490 may be constructed from any suitable material (e.g., plastics, composites, or metals) using any suitable fabrication technique (e.g., molding, machining, etc.).

At this point, it should be noted that the term "manifold plate" is used without limitation. Other fluid system components, whether or not they exhibit the same functionality as the above-described manifold plate 300, may find application to the disclosed embodiments. Further, as the reader will appreciate, a fluid system component providing functionality similar to that of the manifold plate 300 may be referred to using alternative terminology.
By way of example, components that may find use with the disclosed embodiments include headers or header plates, fluid couplings, etc., and such components may have functionality that is similar to that of manifold plate 300 or that is different (at least in part) from that of the manifold plate 300.

Turning now to FIG. 5, illustrated is an embodiment of an assembly 500. The assembly 500 may include the cold plate 100 and manifold plate 300 described above, as well as other components, to form a fluid cooling system of an integrated circuit (IC) die 50. The assembly 500 may form part of any type of computer system, such as, for example, a server, a desktop computer, or a laptop computer, as well as a handheld or other portable computing device.

The assembly 500 includes the cold plate 100, as noted above, and an IC die 50 is coupled with the cold plate. A TIM layer 70 may be disposed between the cold plate and die to both thermally and mechanically couple these two parts, and this TIM layer may comprise any suitable thermally conductive material (e.g., a metal, such as a solder, or a thermally conductive polymer, etc.). A number of interconnects 55 (e.g., electrically conductive bumps or columns, wire bonds, etc.) extend from the die 50, and these interconnects may be electrically coupled with corresponding lands or other leads (not shown) on a substrate 505. To aid in mechanically securing the die 50 to substrate 505, a layer of underfill material or a die attach material (not shown) may be disposed between the die and substrate. The substrate 505 may have any suitable construction, and in one embodiment the substrate 505 comprises a multilayer substrate having several layers of metallization for routing electrical signals. Further, in one embodiment, a number of interconnects (not shown) may be disposed on a lower surface of the substrate 505 opposite the die side, and these interconnects (e.g., electrically conductive bumps or columns, pins, etc.)
may be used to couple the assembly 500 to a next-level component (e.g., a motherboard, etc.). In one embodiment, however, the substrate 505 comprises a motherboard.

The assembly 500 further includes the manifold plate 300, as previously noted, and the manifold plate is secured to the cold plate 100, such that fluid communication is established between these two components. To establish fluid communication, the piercing elements 160, 170 of cold plate 100 are inserted into the breakable seals 330, 350, respectively, of manifold plate 300, and the piercing elements open the breakable seals, as described above. In FIG. 5, the manifold plate 300 is shown fully engaged with the cold plate 100, wherein the lower surface 314 of the manifold plate is resting against (or at least in close proximity to) the upper surface 112 of the cold plate 100, and the piercing elements 160, 170 have opened the breakable seals 330, 350 and are fully inserted into the fluid channels 320, 340, respectively, on the manifold plate.

[0043] The assembly 500 may further include a heat exchanger 510 and a pump 520. A first fluid line 531 may couple an outlet of the heat exchanger 510 to an inlet of the pump 520, and a second fluid line 532 may couple an outlet of the pump 520 to the inlet 322 on the manifold plate 300. A third fluid line 533 may couple the outlet 344 on the manifold plate 300 to an inlet of the heat exchanger 510. Heat exchanger 510 may comprise any suitable type of heat exchanger, and may include a passive device (e.g., a multi-fin heat sink) and/or an active cooling device (e.g., a fan). The pump 520 may comprise any suitable type of pump for circulating a fluid, such as a centrifugal pump, a gear pump, a diaphragm pump, a turbine, etc. Fluid lines 531, 532, 533 may comprise any suitable type of conduit for containing the flow of a fluid (e.g., pipes, flexible tubing, etc.). According to one embodiment, the heat exchanger 510 may be coupled with the manifold plate 300.
In another embodiment, the heat exchanger 510 and pump 520 (and perhaps any one or more of fluid lines 531, 532, 533) may be disposed on or coupled with the substrate 505 or a next-level component, such as a motherboard. [0044] The cold plate 100, manifold plate 300, heat exchanger 510, pump 520, and fluid lines 531, 532, 533 provide a fluid cooling system for the IC die 50. This fluid cooling system may be single phase or two phase, and may utilize any suitable type of working fluid, such as water, propylene glycol, ethylene glycol, potassium formate, or a hydrocarbon based fluid, as well as a mixture of these and/or other substances (e.g., a mixture of water and propylene glycol). Further, the flow path 150 on cold plate 100, the first and second fluid channels 320, 340 of manifold plate 300, the heat exchanger 510, the pump 520, as well as the first, second, and third fluid lines 531, 532, 533 collectively define a fluid circuit (e.g., a closed-loop circuit) in which a liquid coolant may be circulated to cool the integrated circuit die 50. This fluid circuit may include a volume of the liquid coolant prior to engagement between the cold plate 100 and manifold plate 300, and the liquid coolant may be retained in the cooling system by the breakable seals 330, 350. Thus, the cooling system (without the cold plate 100) could be shipped (and/or stored, handled, etc.) precharged with an appropriate volume of the liquid coolant for operation of the cooling system. Note that the volume of the flow path 150 in cold plate 100 may be relatively small in comparison to the volume of the remaining portion of the fluid circuit and, therefore, the cold plate (either by itself or in combination with the IC die 50) could be shipped dry and subsequently assembled with the manifold plate 300 without concern for the small amount of gas that it may introduce into the fluid circuit. 
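The sizing of such a liquid loop follows from a basic energy balance on the coolant, Q = ṁ·cp·ΔT. The following sketch is illustrative only and is not part of the disclosure; the water-like property values (density, specific heat) are assumptions, and other working fluids (e.g., propylene glycol mixtures) would use different values:

```python
def flow_rate_lpm(power_w: float, delta_t_c: float,
                  density_kg_m3: float = 998.0,
                  cp_j_per_kg_k: float = 4182.0) -> float:
    """Volumetric coolant flow (L/min) needed to remove `power_w` watts
    with a coolant temperature rise of `delta_t_c` degrees C, for
    water-like properties (assumed, not from the disclosure)."""
    mass_flow_kg_s = power_w / (cp_j_per_kg_k * delta_t_c)  # from Q = m*cp*dT
    return mass_flow_kg_s / density_kg_m3 * 1000.0 * 60.0   # m^3/s -> L/min

# A 100 W die with a 5 degC coolant rise needs roughly 0.29 L/min of water.
print(round(flow_rate_lpm(100.0, 5.0), 2))
```

A smaller allowed temperature rise, or a fluid with lower specific heat, proportionally increases the flow the pump 520 must deliver.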
It should also be understood that the above-described fluid cooling system may include other components in addition to those shown in FIG. 5 (e.g., filters, sensors, valves, etc.) and, further, that such a cooling system may not include all of the components illustrated in FIG. 5.

Referring to FIG. 6, illustrated is an embodiment of a method 600 for assembling a liquid cooling system for an IC die. Referring to block 610 in this figure, at least one piercing element on a cold plate (or other thermal component) is inserted into a corresponding breakable seal on a manifold plate (or other component of a fluid system) in order to establish fluid communication between the cold and manifold plates, as described above. An IC die may be thermally coupled with the cold plate. As set forth in block 620, the manifold plate is secured to the cold plate. In one embodiment, the engagement between the piercing element (or elements) and the breakable seal (or seals) secures the cold and manifold plates together. However, in other embodiments, alternative techniques may be employed to secure these two components to one another (e.g., adhesive bonding, mechanical fasteners, etc.). In a further embodiment, as set forth in block 630, the cold and manifold plate assembly, which may include an IC die (and perhaps other components of a fluid cooling system), is attached to a substrate. For example, a number of interconnects extending from the die may be electrically coupled to corresponding lands or leads on the substrate and, in addition, an underfill or die attach material may be disposed between the die and substrate. In a further embodiment, a die may be secured to a substrate prior to attachment of the die to the cold plate (in which case, block 630 may be unnecessary). According to one embodiment, other components of a fluid cooling system (for the IC die) are coupled with the manifold plate prior to assembly with the cold plate.
However, referring to block 640, in another embodiment, one or more additional components of a cooling system (e.g., a pump, heat exchanger, fluid lines, etc.) are then coupled with the manifold plate.

Referring to FIG. 7, illustrated is an embodiment of a computer system 700. Computer system 700 includes a bus 705 to which various components are coupled. Bus 705 is intended to represent a collection of one or more buses - e.g., a system bus, a Peripheral Component Interface (PCI) bus, a Small Computer System Interface (SCSI) bus, etc. - that interconnect the components of system 700. Representation of these buses as a single bus 705 is provided for ease of understanding, and it should be understood that the system 700 is not so limited. Those of ordinary skill in the art will appreciate that the computer system 700 may have any suitable bus architecture and may include any number and combination of buses.

[0047] Coupled with bus 705 is a processing device (or devices) 710. The processing device 710 may comprise any suitable processing device or system, including a microprocessor (e.g., either a single core or a multi-core processor), a network processor, an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA), or similar device. It should be understood that, although FIG. 7 shows a single processing device 710, the computer system 700 may include two or more processing devices.

Computer system 700 also includes system memory 720 coupled with bus 705, the system memory comprising, for example, any suitable type and number of memories, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), or double data rate DRAM (DDRDRAM). During operation of computer system 700, an operating system and other applications may be resident in the system memory 720.

[0049] The computer system 700 may further include a read-only memory (ROM) 730 coupled with the bus 705.
The ROM 730 may store instructions for processing device 710. The system 700 may also include a storage device (or devices) 740 coupled with the bus 705. The storage device 740 comprises any suitable non-volatile memory, such as, for example, a hard disk drive. The operating system and other programs may be stored in the storage device 740. Further, a device 750 for accessing removable storage media (e.g., a floppy disk drive or a CD ROM drive) may be coupled with bus 705.

The computer system 700 may also include one or more I/O (Input/Output) devices 760 coupled with the bus 705. Common input devices include keyboards, pointing devices such as a mouse, as well as other data entry devices, whereas common output devices include video displays, printing devices, and audio output devices. It will be appreciated that these are but a few examples of the types of I/O devices that may be coupled with the computer system 700.

[0051] The computer system 700 may further comprise a network interface 770 coupled with bus 705. The network interface 770 comprises any suitable hardware, software, or combination of hardware and software that is capable of coupling the system 700 with a network (e.g., a network interface card). The network interface 770 may establish a link with the network (or networks) over any suitable medium - e.g., wireless, copper wire, fiber optic, or a combination thereof - supporting the exchange of information via any suitable protocol - e.g., TCP/IP (Transmission Control Protocol/Internet Protocol), HTTP (Hyper-Text Transmission Protocol), as well as others.

It should be understood that the computer system 700 illustrated in FIG. 7 is intended to represent an exemplary embodiment of such a system and, further, that this system may include many additional components, which have been omitted for clarity and ease of understanding.
By way of example, the system 700 may include a DMA (direct memory access) controller, a chip set associated with the processing device 710, additional memory (e.g., a cache memory), as well as additional signal lines and buses. Also, it should be understood that the computer system 700 may not include all of the components shown in FIG. 7. The computer system 700 may comprise any type of computing device, such as a desktop computer, a laptop computer, a server, a handheld computing device (e.g., a personal digital assistant, or PDA), a wireless communication device, an entertainment system, etc. [0053] In one embodiment, the computer system 700 includes a component constructed according to any of the embodiments described above. For example, the computer system 700 may include the cold plate 100 and manifold plate 300. In one embodiment, the computer system 700 includes the assembly 500 described above (in which case, the IC die 50 may comprise the processing device 710). [0054] The foregoing detailed description and accompanying drawings are only illustrative and not restrictive. They have been provided primarily for a clear and comprehensive understanding of the disclosed embodiments and no unnecessary limitations are to be understood therefrom. Numerous additions, deletions, and modifications to the embodiments described herein, as well as alternative arrangements, may be devised by those skilled in the art without departing from the spirit of the disclosed embodiments and the scope of the appended claims. |
Embodiments of methods and apparatuses for defending against speculative side-channel analysis on a computer system are disclosed. In an embodiment, a processor includes a decoder, a cache, address translation circuitry, a comparator, a cache controller, and a memory controller. The decoder is to decode an instruction. The instruction is to specify a first address associated with a data object, the first address having a first memory tag. The address translation circuitry is to translate the first address to a second address, the second address to identify a memory location of the data object. The comparator is to compare the first memory tag and a second memory tag associated with the second address. The cache controller is to detect a cache miss associated with the memory location. The memory controller is to, in response to the comparator detecting a match between the first memory tag and the second memory tag and the cache controller detecting the cache miss, load the data object from the memory location into the cache. Other embodiments include encryption of memory tags together with addresses.
CLAIMS

What is claimed is:

1. A processor comprising:
a decoder to decode an instruction, the instruction to specify a first address associated with a data object, the first address having a first memory tag;
a cache;
address translation circuitry to translate the first address to a second address, the second address to identify a memory location of the data object;
a comparator to compare the first memory tag and a second memory tag associated with the second address;
a cache controller to detect a cache miss associated with the memory location; and
a memory controller to, in response to the comparator detecting a match between the first memory tag and the second memory tag and the cache controller detecting the cache miss, load the data object from the memory location into the cache.

2. The processor of claim 1, wherein the first address is a virtual address and the second address is a physical address.

3. The processor of claim 1, wherein the memory controller is also to prevent loading a cache line corresponding to the memory location until the comparator has detected the match between the first memory tag and the second memory tag.

4. The processor of claim 1, wherein the memory controller is also to, in response to the comparator detecting a mismatch between the first memory tag and the second memory tag, load data not indicative of the data object into a cache line corresponding to the memory location.

5. The processor of claim 1, further comprising pointer security circuitry to provide the first memory tag.

6. The processor of claim 1, further comprising encryption circuitry to cryptographically secure the data object at least partially based on the first memory tag.

7. The processor of claim 1, wherein the first memory tag includes an identification tag to identify a type, a function, a memory location, or a use for the data object.

8.
The processor of claim 6, wherein the encryption circuitry uses at least a portion of the memory tag to at least partially define a tweak input to an encryption algorithm.

9. The processor of claim 6, wherein the first memory tag includes an encryption tag, wherein the encryption circuitry is to use the encryption tag to identify one of a plurality of encryption keys.

10. The processor of claim 1, wherein the first memory tag includes a small object tag to indicate whether a cache line associated with the memory location is to include a plurality of data objects.

11. The processor of claim 10, wherein the small object tag is to enable sub-cacheline granularity of memory tagging.

12. The processor of claim 1, further comprising integrity check circuitry to generate an integrity check value at least partially based on the first address and an encrypted value of the data object.

13. The processor of claim 12, further comprising pointer security circuitry to detect tampering with the first address at least partially based on the integrity check values.

14. A processor comprising:
a decoder to decode a first instruction, the first instruction to allocate a memory region to a software program;
an execution unit to execute the first instruction and the second instruction, the execution unit including:
range rule circuitry to determine a valid range for the memory region;
address adjustment circuitry to determine a first number of address bits to be used by the software program to manipulate an address within the valid range and a second number of address bits to include a memory tag to indicate access permission; and
encryption circuitry to encrypt at least a portion of the address and the memory tag to generate an encrypted address to be returned to the software program.

15.
The processor of claim 14, wherein the decoder is also to decode a second instruction, the second instruction to specify an encrypted first address associated with a data object, the processor further comprising decryption circuitry to decrypt the encrypted first address to generate a decrypted address and a decrypted memory tag.

16. A method comprising:
decoding an instruction, the instruction to specify a first address associated with a data object, the first address having a first memory tag;
translating the first address to a second address, the second address to identify a memory location of the data object;
comparing the first memory tag and a second memory tag associated with the second address;
detecting a cache miss associated with the memory location; and
loading, in response to detecting a match between the first memory tag and the second memory tag and detecting the cache miss, the data object from the memory location into a cache.

17. The method of claim 16, wherein the first address is a virtual address and the second address is a physical address.

18. The method of claim 16, further comprising preventing loading a cache line corresponding to the memory location until the match is detected.

19. The method of claim 16, further comprising loading, in response to detecting a mismatch between the first memory tag and the second memory tag, data not indicative of the data object into a cache line corresponding to the memory location.

20. The method of claim 16, further comprising decrypting an encrypted address to provide the first address and the first memory tag.

21.
A processor comprising:
decoding means for decoding an instruction, the instruction to specify a first address associated with a data object, the first address having a first memory tag;
a cache;
address translation means for translating the first address to a second address, the second address to identify a memory location of the data object;
comparing means for comparing the first memory tag and a second memory tag associated with the second address;
cache controller means for detecting a cache miss associated with the memory location; and
memory controller means for, in response to the comparing means detecting a match between the first memory tag and the second memory tag and the cache controller means detecting the cache miss, loading the data object from the memory location into the cache.

22. The processor of claim 21, wherein the first address is a virtual address and the second address is a physical address.

23. The processor of claim 21, wherein the memory controller means is also to prevent loading a cache line corresponding to the memory location until the comparing means has detected the match between the first memory tag and the second memory tag.

24. The processor of claim 21, wherein the memory controller means is also to, in response to the comparing means detecting a mismatch between the first memory tag and the second memory tag, load data not indicative of the data object into a cache line corresponding to the memory location.

25. The processor of claim 21, wherein the first memory tag includes an identification tag to identify a type, a function, a memory location, or a use for the data object.
DEFENSE AGAINST SPECULATIVE SIDE-CHANNEL ANALYSIS OF A COMPUTER SYSTEM

FIELD OF THE INVENTION

[0001] The field of invention relates generally to computers, and, more specifically, to computer system security.

BACKGROUND

[0002] Computer systems may be vulnerable to attempts by adversaries to obtain confidential, private, or secret information. For example, attacks, such as Spectre and Meltdown, exploit speculative and out-of-order execution capabilities of processors to illicitly read data through side-channel analysis.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

[0004] FIG. 1A is a block diagram of a computing environment that reduces the likelihood of successful side-channel analysis within a processor by providing address-based security features for memory within the processor, in accordance with at least one embodiment described herein;

[0005] FIG. 1B is a block diagram of a processor in accordance with at least one embodiment;

[0006] FIG. 2 is a diagram of an implementation of memory tags that may be used to secure memory address pointers against side-channel analysis, in accordance with at least one embodiment;

[0007] FIG. 3 is a flow diagram of a method for using memory tags in a defense against side-channel analysis, in accordance with at least one embodiment;

[0008] FIG. 4 is a block diagram illustrating the use of memory tags in a defense against side-channel analysis, in accordance with at least one embodiment;

[0009] FIG. 5 is a block diagram of a virtual memory address that illustrates that an identification tag (e.g., a color tag) may be stored in various locations within a virtual memory address, in accordance with at least one embodiment;

[0010] FIG. 6 is a block diagram of a processor in accordance with at least one embodiment;

[0011] FIG.
7 is a diagram of a computing environment, illustrating an application of the secure memory access logic of FIG. 6 according to an embodiment;

[0012] FIG. 8A is a flow diagram of at least one embodiment of a method for initiating a memory allocation operation during execution of a computer program;

[0013] FIG. 8B is a flow diagram of at least one embodiment of a method for continuing the memory allocation operation of FIG. 8A;

[0014] FIG. 9 is a flow diagram of at least one embodiment of a method for providing security for an indirect address;

[0015] FIG. 10 is a flow diagram of at least one embodiment of a method for verifying a previously secured indirect address;

[0016] FIG. 11 represents an embodiment in which memory tags are encrypted with the encrypted part of the address;

[0017] Figure 12A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;

[0018] Figure 12B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention;

[0019] Figure 13 is a block diagram of an illustrative out-of-order issue/execution processor core that may be included in a processor according to embodiments of the invention;

[0020] Figure 14 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention;

[0021] Figure 15 is a block diagram of an illustrative central processing unit (CPU) complex that may be included in a processor according to embodiments of the invention;

[0022] Figure 16 is a block diagram of an illustrative cache hierarchy that may be included in a processor according to embodiments of the invention;

[0023] Figures 17-21 are block diagrams of
exemplary computer architectures;

[0024] Figure 17 shows a block diagram of a system in accordance with one embodiment of the present invention;

[0025] Figure 18 is a block diagram of a first more specific exemplary system in accordance with an embodiment of the present invention;

[0026] Figure 19 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present invention;

[0027] Figure 20 is a block diagram of a system-on-chip (SoC) in accordance with an embodiment of the present invention;

[0028] Figure 21 is a block diagram of a system-on-chip (SoC) in accordance with an embodiment of the present invention;

[0029] Figure 22 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.

DETAILED DESCRIPTION

[0030] In the following description, numerous specific details are set forth. However, it is to be understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description.

[0031] References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular structure, feature, or characteristic, but every embodiment may not necessarily include the particular structure, feature, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment.
Further, when a feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

[0032] Many processors and processor cores support capabilities to increase performance, such as caching, multithreading, out-of-order execution, branch prediction, and speculative execution. Adversaries have found ways to exploit capabilities of these processors to illicitly read data.

[0033] For example, an adversary might intentionally attempt to read data (e.g., secret data) from a memory location that should not be readable by it (e.g., out-of-bounds). The read might be allowed to proceed speculatively until it is determined whether the access is out-of-bounds. The architectural correctness of the system might be ensured by not committing any results until the determination is made, but the speculative execution might cause the microarchitectural state of the processor to change before the determination is made, and the adversary might be able to perform side-channel analysis to infer the value of the secret data from differences in the microarchitectural state of the processor. Many variants of this type of speculative attack are possible. In one scenario, the adversary might speculatively use the secret data as part of a memory address, and, using a timing analysis to determine what memory locations are being loaded into a cache, infer the value.

[0034] As a more specific example, with a cacheline size of 64 bytes, a change to any of the six least-significant bits of a memory address does not cause the address to refer to a different cacheline, but a change to the seventh least-significant bit does cause the address to refer to a different cacheline.
Therefore, an adversary might repeatedly (e.g., to eliminate noise and/or achieve a statistically significant result) flush and/or fill a cache to a known or predictable state, use a speculative flow to cause a processor to speculatively access secret data, speculatively apply a bit of the secret data to the seventh least-significant bit of a known memory address stored in a register (e.g., using shift and/or other bit manipulation instructions), speculatively access their own memory space with the manipulated memory address, use a timing side-channel analysis to determine if a new cacheline was loaded, and infer whether the value of the secret bit was the same as or different from the value of the seventh least-significant bit of the known memory address.[0035] Embodiments of the invention include systems, methods, and apparatuses providing features or characteristics that may be desirable for use in a variety of computer systems for a variety of reasons, including to reduce vulnerability to attacks based on speculation and side-channel analysis; to reduce vulnerability to such analysis with less cost, in performance or otherwise, than an alternative approach; and/or to improve security in general. Embodiments provide for fine-grain memory access control mechanisms, or combinations thereof, that cannot be bypassed by speculation but do not prevent the use of speculation, thus preserving speculative performance improvements.
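The cacheline-granularity inference underlying this attack can be sketched in a few lines of illustrative Python (an expository model only, not part of any embodiment; the base address and bit positions are assumptions chosen to match the 64-byte-cacheline example above):

```python
# Illustrative sketch (not an attack): with 64-byte cachelines, the low six
# address bits select a byte within a line, so flipping bit 6 of an address
# moves the access to the adjacent cacheline.

CACHELINE_BYTES = 64

def cacheline_index(addr: int) -> int:
    """Which cacheline an address falls in (address // 64)."""
    return addr // CACHELINE_BYTES

def embed_secret_bit(base_addr: int, secret_bit: int) -> int:
    """Place a single secret bit at bit position 6 of a known address."""
    return (base_addr & ~(1 << 6)) | ((secret_bit & 1) << 6)

base = 0x1000  # 64-byte aligned, so bit 6 is initially 0
# Changing any of bits 0-5 stays within the same cacheline...
assert cacheline_index(base) == cacheline_index(base | 0b111111)
# ...but the secret bit at position 6 selects one of two adjacent lines.
assert cacheline_index(embed_secret_bit(base, 0)) != cacheline_index(embed_secret_bit(base, 1))
```

Because a timing probe can distinguish which of the two adjacent cachelines was loaded, the adversary learns the secret bit without ever architecturally reading it.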
For example, to protect against the type of analysis described in the preceding paragraph, embodiments provide for memory tagging technology and/or address encoding/encrypting, each as described below, to limit the effectiveness of adversarial efforts to infer information about secret data by attempting to use it speculatively, as a memory address or otherwise, and/or to access memory locations outside of boundaries defined for validity, security, or other purposes.[0036] Embodiments may include any and/or any combination of the following (each as may be further described below): identification/integrity tagging such that inline tag information is available to the processor or memory controller concurrent with the data access, so the processor will know immediately if the speculative memory access is incorrect and should be stopped or obscured; encryption of memory tags such that proper decryption of the data is based on knowledge of the proper tag value (key), so speculating through with an incorrect encryption tag (key identifier) will only provide useless ciphertext to the side-channel adversary and not reveal secret data for a given address; cryptographic pointers such that the assigned pointer (e.g., via malloc) available to the side-channel adversary cannot be used outside of its bounds without causing address corruption, fundamentally preventing the side-channel adversary from using such a pointer to target known memory locations for side-channel analysis (typically the processor will also catch a corrupted virtual address as it will terminate into unallocated virtual memory resulting in a page fault); a combination of techniques such that the cryptographic pointer includes the tag information (tag portion of address is also encrypted along with the actual location address information) such that a side-channel adversary can neither target memory nor independently select tag information; thus, modifying one or the other corrupts both, resulting in total
chaos from the perspective of side-channel analysis; and a counter to count speculative memory access control violations such that if an adversary persists with incorrect speculation, the processor may kill a process that exceeds a specified threshold, and an OS may follow up to remediate.Memory Tagging Technology[0037] The disclosed embodiments include memory tagging circuitry and techniques to address existing and potential computer system security vulnerabilities. The memory tagging circuitry may be configured to prevent memory pointers (references) from being used to speculatively go beyond a valid boundary, prevent memory pointer manipulation (e.g., by adding values) that causes the pointers to access a wrong (unauthorized) data object, and increase the granularity of memory tagging to include byte-level tagging in a cache. The memory tagging circuitry may also be configured to sandbox untrusted code by tagging portions (e.g., words) of memory to indicate when the tagged portions of memory include a protected pointer. The memory tagging circuitry provides security features while enabling processors to continue using and benefiting from performing speculative operations.[0038] FIG. 1A is a block diagram of a computing environment 100 in which the likelihood of successful side-channel analysis may be reduced by providing address-based security features for memory within a processor, consistent with embodiments of the present disclosure. The computing environment 100 includes a system 104 that reduces the likelihood of successful side-channel analysis, while concurrently enabling the processor to perform and benefit from performing speculative operations, according to an embodiment. FIG. 1A shows an adversary 102 coupled to system 104 through one or more networks 106 or one or more physical connections 108. The adversary 102 may perform one or more side-channel analyses 110 on the system 104 through the networks 106 and/or through the physical connections 108.
The system 104 may include one or more of a variety of computing devices, including, but not limited to, a personal computer, a server, a laptop, a tablet, a phablet, a smartphone, a motherboard with a chipset, or some other computing device. The system 104 is configured to protect a processor against side-channel analysis using a variety of address-based security features that enable the processor to safely operate while performing speculative operations. In other words, the processor and/or associated hardware is provided with concurrent access (meaning available at the same time as the speculative data access) to additional (access control) information that allows the processor to speculate safely.[0039] The adversary 102 may be a computer system, a person, or a combination of the computer system and a person, which may attempt one or more side-channel analyses (e.g., Spectre) on or against the system 104. The adversary 102 may use one or more networks 106 to execute the side-channel analysis 110. The adversary 102 may also use one or more physical connections 108, such as a memory interposer, memory probes, or the like, to read, modify, and/or write to one or more memory addresses within the system 104. Some of the side-channel analyses 110 may include attempting to use a pointer to speculatively access data beyond an allowed memory bounds, attempting to manipulate a pointer (e.g., add a value to a pointer to cause the pointer to point to an unintended object), and the like.[0040] The system 104 is configured to provide a variety of memory-based security features to protect against the side-channel analysis 110. The system 104 includes processor 112, which is coupled to memory 114 through one or more communications channels 116. The processor 112 may include processor cores 118, cache 120, encryption circuitry 122, integrity check circuitry 124, and memory controller 170.
The processor 112 also includes pointer security circuitry 126 that is configured to expand memory tag capabilities, reduce or prevent pointer override attacks, reduce or prevent pointer manipulation, and enable byte-granularity memory safety for the processor 112.[0041] The processor 112 may include any number and/or combination of currently available and/or future developed single- or multi-core processing units. In embodiments, the processor 112 may represent or include a general-purpose processor, such as a Core® i3, i5, i7, 2 Duo and Quad, Xeon®, Itanium®, Atom®, or Quark® microprocessor, available from Intel® (Intel Corporation, Santa Clara, CA). Alternatively, the processor 112 may represent or include one or more processors from another manufacturer or supplier, such as Advanced Micro Devices (AMD®, Inc.), ARM Holdings® Ltd, MIPS®, etc. The processor 112 may represent or include a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, co-processor, embedded processor, or the like. The processor 112 may be implemented as a single semiconductor die or package or as a combination of stacked or otherwise interconnected semiconductor dies and/or packages. The processor 112 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.[0042] The memory 114 represents one or more of a variety of types of memory that may be used in the system 104, according to an embodiment. The memory 114 may be volatile memory, non-volatile memory, or a combination of volatile memory and non-volatile memory. The volatile memory may include various types of random access memory (RAM).
The non-volatile memory may include NAND memory, 3D crosspoint (3DXP), phase-change memory (PCM), hard disk drives, etc.[0043] The processor 112 may use memory controller 170 to move data back and forth between the processor 112 and the memory 114, according to embodiments. For example, while operating one or more software programs or while executing various instructions, the processor cores 118 may generate new data 128. The processor cores 118 may use a virtual address (or linear address) 130 of the new data 128 to write the new data 128 to the cache 120 or to the memory 114. The new data 128 may be saved in the cache 120 as cached data 132, or may be added to existing cached data 132. The cached data 132 may have a physical address 134. The processor 112 may be configured to use the encryption circuitry 122 and an encryption algorithm 136 to encrypt the new data 128 and/or the cached data 132 prior to saving the new data 128 and/or the cached data 132 to the memory circuitry 114, as stored data 138; however, stored data 138 need not be encrypted and may include encrypted data, unencrypted data, or any combination of encrypted and unencrypted data. The processor 112 may also use the integrity check circuitry 124 to generate integrity check values 140 based on the new data 128, the virtual address 130, the cached data 132, and/or the physical address 134. The memory controller 170 may write the integrity check values to the memory 114 (to facilitate corruption detection for the stored data 138) based on a key identifier (key ID) specified by the processor, and/or the memory controller 170 may encrypt the data written to memory using a key identified by the processor (e.g., by specifying a key identifier in the physical address).[0044] The processor 112 may use the pointer security circuitry 126 to provide security for data within the system 104.
The pointer security circuitry 126 may be configured to detect when the virtual address 130 and/or the physical address 134 is being overwritten, detect when the virtual address 130 and/or the physical address 134 has been manipulated, provide byte-granularity memory safety, and/or provide for the use of memory tags, according to various embodiments disclosed herein. FIGs. 2, 4, and 5 illustrate various example memory tag configurations that may be identified, defined, and/or applied by the pointer security circuitry 126 to secure the system 104 from the side-channel analysis 110, according to various embodiments.[0045] When the processor cores 118 assign (e.g., by executing a software program) the virtual address 130 to the new data 128, the pointer security circuitry 126 may define, insert, or identify one or more memory tags 142A in the virtual address 130, to associate with the new data 128 to reduce the likelihood of a successful side-channel analysis. The one or more memory tags 142A may include an identification tag 144 and/or an encryption tag 146. In some embodiments, the tag may be chosen by the software that writes its data to memory. Software (e.g., a memory allocator function, such as malloc) may select a tag value and insert it into the virtual (linear) address. The hardware may interpret that tag value or encode or translate it and/or pass it through to a physical address.[0046] The virtual address 130 for the new data 128 may include the identification tag 144 to provide access control for the new data 128. The identification tag 144 may be referred to as a color, a cryptographic color, a memory color, a tag color, a key ID, etc. The pointer security circuitry 126 may be configured to define where within the virtual address 130 the identification tag 144 resides or is defined. For example, the pointer security circuitry 126 may define the identification tag 144 as the eight most significant bits in the virtual address 130.
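The placement of an identification tag in the upper (non-canonical) bits of a virtual address, as described above, can be modeled with the following illustrative Python sketch; the field position and width follow the seven-bit, bits-56-62 example below and are expository assumptions, not a definitive layout:

```python
# Carry an identification tag in bits 56-62 of a 64-bit virtual address.
TAG_SHIFT = 56
TAG_MASK = 0x7F  # seven bits

def insert_id_tag(vaddr: int, tag: int) -> int:
    """Place a tag into the upper bits of an (otherwise canonical) address."""
    assert 0 <= tag <= TAG_MASK
    return (vaddr & ~(TAG_MASK << TAG_SHIFT)) | (tag << TAG_SHIFT)

def extract_id_tag(tagged_vaddr: int) -> int:
    """Recover the tag carried in the upper bits."""
    return (tagged_vaddr >> TAG_SHIFT) & TAG_MASK

def strip_id_tag(tagged_vaddr: int) -> int:
    """Recover the plain address for translation."""
    return tagged_vaddr & ~(TAG_MASK << TAG_SHIFT)

p = insert_id_tag(0x00007FFF12345678, 0x2A)
assert extract_id_tag(p) == 0x2A
assert strip_id_tag(p) == 0x00007FFF12345678
```

A memory-access check in this model simply compares `extract_id_tag` of the requesting pointer against the tag stored with the data.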
The identification tag 144 may be defined as, for example, bits 56-62 (i.e., seven bits) of bits 0-63 of the virtual address 130, assuming, as an example, that the length of the virtual address 130 is 64 bits. Other embodiments may use larger or smaller tag sizes and/or larger or smaller virtual address sizes (e.g., 128 bits).[0047] The pointer security circuitry 126 may use the identification tag 144 in a variety of ways to provide security to the new data 128. For example, the pointer security circuitry 126 may use the identification tag 144 as a tweak or as part of a tweak (e.g., an input to a cipher, in addition to the plaintext data, that results in different ciphertext data for different tweak values) in the encryption algorithm 136. In an embodiment, the identification tag 144 is combined with a subset of the virtual address 130 to define a tweak that may be used by the encryption algorithm 136 when encrypting the new data 128. [0048] The virtual address 130 for the new data 128 may include the encryption tag 146 to provide security for the new data 128. The pointer security circuitry 126 may be configured to define where within the virtual address 130 the encryption tag 146 resides or is defined. For example, the pointer security circuitry 126 may define the encryption tag 146 as the three most significant bits in the virtual address 130. The encryption tag 146 may be defined as, for example, bits 59-62 (i.e., four bits) of bits 0-63 of the virtual address 130, assuming, as an example, that the length of the virtual address 130 is 64 bits. The encryption tag 146 may be a representation of a key ID 152 that is used to look up the encryption key 154 within a key table 156, by the encryption circuitry 122. The encryption tag 146 may also or alternatively be identified using other techniques, e.g., may be defined within one or more bits in the physical address 134.
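The use of the encryption tag as a key ID looked up in a key table may be sketched as follows; the table contents, key values, and field position (matching the bits-59-62 example above) are hypothetical values for illustration only:

```python
# Hypothetical key table: encryption tag (key ID) -> encryption key.
KEY_TABLE = {0: b"key-for-domain-0", 1: b"key-for-domain-1"}

ENC_TAG_SHIFT = 59  # illustrative: tag held in bits 59-62 of the address
ENC_TAG_MASK = 0xF

def encryption_key_for(address: int) -> bytes:
    """Extract the encryption tag from the address and look up its key."""
    key_id = (address >> ENC_TAG_SHIFT) & ENC_TAG_MASK
    return KEY_TABLE[key_id]

addr = (1 << ENC_TAG_SHIFT) | 0x1000  # address carrying key ID 1
assert encryption_key_for(addr) == b"key-for-domain-1"
```

In this model, a requestor using the wrong encryption tag selects the wrong key, so the data would decrypt to useless ciphertext rather than revealing the plaintext.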
It may be copied or translated from the virtual address and into the physical address such that the key ID may be communicated to the memory encryption circuitry. Embodiments may provide for use of the key ID to contribute to defenses against speculative side-channel analysis because if a wrong memory encryption tag is used, the data will not be revealed (it will be decrypted into random bits), so speculation based on this random data does not reveal any secrets to the adversary on side-channel analysis.[0049] The pointer security circuitry 126 may also include pointer security instructions 158 that at least partially provide tag definitions 160. The pointer security instructions 158 may include instructions or operations that may be used by the pointer security circuitry 126 or the processor 112 to add a pointer in accordance with the tag definitions 160. The tag definitions 160 may define one or more of the length, location, and use of one or more of the identification tag 144 and/or the encryption tag 146. In embodiments, the instructions may be used to set the corresponding tag value in memory and/or to read the tag values from memory and may be limited to use by privileged software (e.g., OS kernel or VMM).[0050] The pointer security circuitry 126 may use a pointer metadata table 162 to store, update, and retrieve the memory tags 142E and/or the tag definitions 160.[0051] When the processor 112 writes the new data 128 to the cached data 132 with the physical address 134, the pointer security circuitry 126 may define, insert, or identify one or more memory tags 142B in the physical address 134, to associate with the cached data 132 to reduce the likelihood of a successful side-channel analysis. The one or more memory tags 142B embedded within the physical address 134 may include one or more of the identification tag 144 and/or the encryption tag 146.
The physical address 134 may include fewer, more, or different portions of the memory tags 142B than are used or associated with the virtual address 130. [0052] For the purpose of speculation attack prevention, since it is the software function doing the memory write that “owns” the data being written, the write may assign the associated identification tag (e.g., a write-for-ownership, or my-data-my-tag); the tag assignment starts with a software flow as applied to the corresponding virtual (linear) address used to write the data. The hardware is responsible for performing the read memory access control based on the tag value. In the identification tag scenario, the hardware is comparing a tag value that was originally written with the data to a tag value in the address of a memory access request (a memory load). If the tags match, then the memory access may proceed (the requestor knew the correct identification tag value to use). Similarly, for the encryption tag, the requestor knows the correct key to use (the key that was used to encrypt on writing the data to memory).[0053] FIG. 1B is a block diagram of the processor 112 of FIG. 1A, showing processor cores 118, cache 120, pointer metadata table 162, and a more detailed depiction of memory controller 170, according to an embodiment. Tag lookup circuitry 172 may be to look up a memory tag associated with a physical address, according to embodiments such as method 300 of FIG. 3. Tag comparison circuitry 174 may compare a memory tag associated with a physical address 134 to a memory tag associated with a virtual address 130, according to embodiments such as method 300 of FIG. 3. Memory control circuitry 178 may represent any other circuitry to perform memory control operations according to any approach.[0054] FIG. 2 illustrates a memory address translation diagram 200 of an implementation of memory tags that may be used to secure memory address pointers against side-channel analysis, according to an embodiment.
The memory address translation diagram 200 illustrates an extended virtual address (or linear address) 202 including an identification tag 204, which occupies one or more bits (e.g., non-canonical bits), and a virtual address 206, which occupies a subset of the extended virtual address 202. The extended virtual address 202 may be 64 bits. The identification tag 204 may occupy one or more most significant bits, or other bits within the extended virtual address 202. The virtual address 206 is translated to a physical address 208 through a translation lookaside buffer (TLB) 210, as illustrated, or through the walking of page tables. The identification tag 204 is appended to the physical address 208. The physical address 208 and the identification tag 204 may be combined to form or define an encryption tweak 212, which may be applied to an encryption algorithm as described below. An encryption tag 214 may be appended to the identification tag 204 and the physical address 208 to identify one or more encryption keys through the key table 156 (shown in FIG. 1A). The identification tag 204, the physical address 208, and the encryption tag 214 may be combined to define a cache line physical address 216. Bit positions and/or tag sizes may vary from embodiment to embodiment. The bigger the tag size, the more possible tag values, and the harder it is for an adversary to guess.[0055] In embodiments, the memory tag architecture illustrated in the memory address translation diagram 200 may employ different sizes of identification tag 204 and/or encryption tag 214 to adjust the difficulty of guessing which memory tag (e.g., identification tag 204 and/or encryption tag 214) is associated with a particular memory address pointer and/or a particular object.[0056] Memory tagging works similarly to multi-key total memory encryption (MKTME), where physical address bits (or other cached metadata) hold tag bits (e.g., the Key Identifier KeyID or Key Domain).
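The composition shown in FIG. 2 (the identification tag appended to the translated physical address to form the encryption tweak, with the encryption tag appended above both to form the cache line physical address) can be modeled as bit-field packing; the field widths below are illustrative assumptions, not values from the embodiments:

```python
# Illustrative field widths: 40-bit physical address, 7-bit identification
# tag, 4-bit encryption tag.
PA_BITS, ID_BITS = 40, 7

def make_tweak(phys_addr: int, id_tag: int) -> int:
    """Physical address combined with the identification tag (FIG. 2's 212)."""
    return (id_tag << PA_BITS) | phys_addr

def cacheline_physical_address(phys_addr: int, id_tag: int, enc_tag: int) -> int:
    """Encryption tag appended above the tag/address pair (FIG. 2's 216)."""
    return (enc_tag << (PA_BITS + ID_BITS)) | make_tweak(phys_addr, id_tag)

cpa = cacheline_physical_address(0x12345, id_tag=0x2A, enc_tag=0x3)
assert cpa & ((1 << PA_BITS) - 1) == 0x12345       # physical address field
assert (cpa >> PA_BITS) & 0x7F == 0x2A             # identification tag field
assert cpa >> (PA_BITS + ID_BITS) == 0x3           # encryption tag field
```

Because the tweak includes the identification tag, data written under one tag decrypts correctly only when read back with the same tag value.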
Software (e.g., a memory allocator library such as malloc in glibc) may select the tag bits within a linear address space by setting non-canonical bits to the tag value. Hardware can bypass paging structures for these translations, allowing the linear address to directly set tag bits in the physical address. Embodiments may include checking tag metadata between external memory and the processor caches on a load operation, providing a side-channel defense mechanism as tags can be checked and validated before cache contents are affected. Tagging operations described herein may be performed by hardware referred to as a “memory controller” or “memory tag controller,” which more generally refers to memory controller subsystem hardware, potentially located in a variety of locations of a memory subsystem to handle cacheline data tagging according to embodiments.[0057] FIG. 3 is a flow diagram of a method 300 for performing a load operation according to an embodiment of the present invention. Method 300, in an embodiment, may be performed by a memory controller or other memory execution circuitry. As such, method 300 may be performed by hardware circuitry, firmware, software, and/or combinations thereof.[0058] As illustrated, method 300 begins with the receiving of a load request including tag information for a data-line in memory (block 310). This load request may be a request from any given component to read at least one data slot from a data-line. In embodiments, this data request may include a tag identifier included in a non-canonical portion of an address of the load request. In response to this load request, the memory controller sends the load request to memory (block 320). For this load operation, the memory controller may receive the tags included in metadata associated with the memory-line along with the data for the requested address from memory (block 330).
The memory controller (e.g., by tag comparison circuitry 174 in memory controller 170) may determine whether one or more tags of the tag information matches a tag of the address of the memory request (diamond 340). If so, one or more portions, e.g., data slots, may be stored in a cacheline, along with storing the tag identifier itself, e.g., in metadata of the cacheline.[0059] If a matching tag is not found on a load, control passes to block 360 where the memory controller may, in an embodiment, prevent loading of a cacheline, or in another embodiment, load the cacheline, but with random or garbage data (i.e., not indicative of the data stored at the corresponding physical address in memory), which may provide, by not revealing whether there was an identification tag match, a defense against an adversary attempting to guess identification tag values.[0060] Referring now to FIG. 4, shown is a high-level arrangement of a system 400 including a processor (CPU) 410 and an associated memory (DRAM) 460. As illustrated, assume a load or read request is generated. Software may request data to be read using a 64-bit linear address 420 which, as shown, includes various portions including a least significant portion 422 (e.g., 6 bits to identify a byte within a cacheline), another portion 424 to identify a cacheline, a linear address portion 425 to identify a page location, and a small object indicator 426, e.g., a small object bit, which when set identifies that the request is for less than a cacheline width. For example, this small object address bit may be set by page table entries corresponding to pages that are part of a small object region of a heap. As further illustrated, a non-canonical portion of the address may include a tag 428 as described herein. Note that linear address portion 425 may be used to perform a lookup within page table and TLB caching structures 430 to obtain a memory physical address 442.
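The tag-match decision of method 300, including the option of returning garbage data on a mismatch so that no match/mismatch signal leaks to the requestor, may be sketched as follows; the memory model and stored values are hypothetical:

```python
import os

# Hypothetical backing store: physical address -> (stored tag, data).
MEMORY = {0x4000: (5, b"secret-data-here")}

def load(phys_addr: int, request_tag: int) -> bytes:
    """Model of the load flow: match -> real data, mismatch -> garbage."""
    stored_tag, data = MEMORY[phys_addr]
    if request_tag == stored_tag:
        return data                    # tags match: access proceeds
    return os.urandom(len(data))       # mismatch: random bits, no signal

assert load(0x4000, 5) == b"secret-data-here"
assert load(0x4000, 3) != b"secret-data-here"
```

Returning random bytes of the same length, rather than faulting, is the variant described above in which the adversary cannot even learn whether its guessed tag matched.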
Assume that this physical address corresponds to memory-line 466 also shown in FIG. 4, which includes four 16B slots (slot 0 - slot 3), each having a corresponding tag 4680-4683 stored in ECC memory (or a table in memory) 468.[0061] When each of these stored tags is of a different tag identifier value, this means that each slot is associated with a different tag, and thus, as further illustrated in FIG. 4, when loaded and stored into a cache 445, each slot may be stored into a different cacheline (e.g., in a right-hand side of the cacheline as shown), with its corresponding tag identifier 448 in the PA address for the cacheline. Thus, as illustrated in FIG. 4, with tag identifiers 4680-4683 each including a different value (namely values 1-4), each corresponding data slot in memory-line 466 may be stored in a different cacheline 446, each stored in association with its corresponding tag identifier in an address or metadata portion of cache 445 associated with the cacheline. [0062] As further illustrated, memory controller operations to be performed on a load are shown. Of course, in other cases, this (memory tag controller) functionality could be performed between any of the caching layers, e.g., between the L2 cache and LLC, or between the L1 and L2 cache, and so on. As seen, a memory controller 450 may determine whether the tag of the address matches any of the identified tags in the tag information obtained from memory (diamond 452). If so, it may also be determined whether the small address object indicator is set (diamond 456). If it is, memory controller 450 may cause the data slot associated with the matching tag to be stored in a given cacheline aligned to the right-hand side as illustrated. Data shifting in a cacheline with out-of-bounds detection may occur when the next byte to be read or written goes beyond the end of the cacheline.
And note that data may be aligned/shifted either to the beginning or end of the cacheline depending on whether one wishes to catch an underflow read or an overflow read error. Depending on use cases, data slots may be shifted to one end or the other. For example, for a stack usage, shifts may be to the most significant side. If there is an overflow by pushing all the data to the end of the cacheline, a buffer overflow may be detected on a byte granularity because one more byte is walked beyond the end of the buffer, and another cacheline is read. When this subsequent adjacent cacheline read occurs, it is provided to the memory controller for the adjacent cache line, which determines that the tag does not match the last one, thus detecting the violation. Which direction the shifts occur for a particular cacheline may be configured as part of the tag configuration stored in (e.g., ECC) memory or, alternatively, may be indicated by another address bit akin to the small object indicator bit indicating the expected direction of the shift operations.[0063] If there is no match between the tag of the address and any of the tag identifiers received from the memory on a memory load, memory controller 450 may, in an embodiment, prevent loading of a cacheline, or in another embodiment, load the cacheline, but with random or garbage data (i.e., not indicative of the data stored at the corresponding physical address in memory), which may provide, by not revealing whether there was an identification tag match, a defense against an adversary attempting to guess identification tag values.[0064] FIG. 5 is a block diagram 500 of an extended virtual memory address 502, illustrating that an identification tag 504 (e.g., a color tag) may be stored in various locations within the virtual memory address.
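The per-slot tagging of FIG. 4 (four 16-byte slots per 64-byte memory-line, each owned by a different tag) can be modeled as follows; the slot layout, tag values, and the MemoryError stand-in for the hardware's violation handling are expository assumptions:

```python
SLOT_BYTES = 16  # four 16B slots per 64B memory-line

def load_slot(line: bytes, slot_tags, request_tag: int) -> bytes:
    """Return the slot whose stored tag matches the requestor's tag."""
    for i, tag in enumerate(slot_tags):
        if tag == request_tag:
            return line[i * SLOT_BYTES:(i + 1) * SLOT_BYTES]
    # Stand-in for the hardware's mismatch handling (fault or garbage data).
    raise MemoryError("tag mismatch: out-of-bounds or wrong object")

line = bytes(range(64))
tags = [1, 2, 3, 4]  # each slot owned by a different allocation
assert load_slot(line, tags, 2) == bytes(range(16, 32))
```

In this model, a read that walks past its own slot into a neighbor carries the wrong tag for that neighbor, which is how a byte-granular overflow is caught.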
The identification tag 504 may occupy one or more bits within the virtual memory address 502 such that the virtual memory address 502 includes one or more bits above the identification tag 504 and one or more bits between the identification tag and the portion of the virtual memory address that is translated into the physical address (e.g., through a translation lookaside buffer).Address Encoding/Encrypting[0065] Returning to FIG. 1A, the processor 112 may be configured to encrypt and decrypt virtual/linear addresses (e.g., virtual address 130), any portion thereof, any other addresses (direct or indirect), and/or any portions thereof, also or instead of encrypting and decrypting new data 128, cached data 132, and/or stored data 138, using encryption circuitry 122 for example.[0066] FIG. 6 is a block diagram of another representation of the processor 112 of FIG. 1A and FIG. 1B, showing processor cores 118, cache 120, and secure memory access circuitry 600, according to an embodiment in which virtual addresses may be encrypted. Secure memory access circuitry 600 may use or include encryption circuitry as represented by encryption circuitry 122 as shown in FIG. 1A, or may use or include other encryption circuitry, but is shown for convenience as including encrypting circuitry 658 and decrypting circuitry 664.[0067] In an embodiment, the secure memory access circuitry 600 utilizes metadata about an address (e.g., virtual address 130), which is encoded into memory tag bits (e.g., memory tags 142A) or other bits within or associated with the address (e.g., non-canonical bits of a 64-bit address, or a range of addresses set aside, e.g., by the operating system, such that the corresponding high order bits of the address range may be used to store the metadata), in order to secure and/or provide access control to memory locations pointed to by the address.
For example, the metadata encoding and decoding provided by the secure memory access circuitry 600 may prevent the address from being manipulated to cause a buffer overflow, and/or may prevent program code from accessing memory that it does not have permission to access.[0068] In an embodiment, secure memory access circuitry 600 includes address encoding circuitry 652, which may be invoked when memory is allocated (e.g., by an operating system, in the heap) and provided to executing programs in any of a number of different ways, including by using a function such as malloc, alloc, or new; or implicitly via the loader, or statically allocating memory by the compiler, etc. As a result, the encoded address, which points to the allocated memory, is encoded with the address metadata.[0069] The address metadata may include valid range metadata. The valid range metadata may allow executing programs to manipulate the value of the address within a valid range (e.g., by performing pointer arithmetic on the plaintext portion of the pointer), but may corrupt the address if the memory is accessed using the address beyond the valid range (e.g., by affecting the ciphertext portion of the pointer or other bits of the pointer that are used as part of the tweak input to the cipher). Alternatively or in addition, the valid range metadata may be used to identify a valid code range, e.g., a range of memory that program code is permitted to access (e.g., the encoded range information may be used to set explicit ranges on registers).
Other information that may be encoded in the address metadata includes access restrictions on the address (e.g., whether the address may be used to write, execute, or only read the referenced memory).[0070] In an embodiment, secure memory access circuitry 600 includes address decoding circuitry 662, which may be invoked to verify the encoded metadata on memory read and write operations that utilize processor instructions such as MOV, where a general-purpose register is used as a memory address to read a value from memory or to write a value to memory (e.g., load/store), as well as on other operations that involve the “use” of memory (such as control transfer instructions, e.g., CALL/JMP, etc.) and/or any instruction that can take a memory operand. The indirect (e.g., encoded virtual/linear) memory address used for the memory access is first decoded and/or decrypted by the processor to get the correct virtual/linear memory address.[0071] In an embodiment, on (or during) a memory allocation operation (e.g., a “malloc”), memory allocation circuitry (e.g., memory management code within a privileged system component in system memory) may allocate a range of memory for a buffer and return an address and the corresponding metadata (e.g., range and/or permission metadata). For example, the memory allocation circuitry may encode plaintext range information in the address (e.g., in the unused/non-canonical bits, prior to encryption), or supply the metadata as one or more separate parameters to the instruction, where the parameter(s) specify the range and/or code permission information. Thus, according to an embodiment, the memory allocation circuitry may invoke the address encoding circuitry 652. The address encoding circuitry 652 includes range rule circuitry 654 and address adjustment circuitry 656, which encode the address with the metadata (e.g., range and/or permission metadata) and an “adjustment,” as described below.
The address encoding circuitry 652 may store the metadata in an unused portion of the address (e.g., non-canonical bits of a 64-bit address).[0072] To determine valid range metadata, the range rule circuitry 654 may select the valid range metadata to indicate an upper limit for the size of the buffer referenced by the address. The address adjustment circuitry 656 may adjust the valid range metadata as needed so that the upper address bits (e.g., most significant bits) of the addresses in the address range do not change as long as the address refers to a memory location that is within the valid range indicated by the range metadata, which may enable the address to be manipulated (e.g., by software performing arithmetic operations, etc.) but only so long as the manipulations do not cause the address to go outside the valid range (e.g., overflow the buffer). In other words, the processor will take the adjustment value and add it to the address (pointer) value and, as long as this operation does not affect the ciphertext (or bits used as part of the tweak) of the address, the memory access is allowed. In embodiments, the adjustment value itself may be encrypted as part of the encoded address to prevent a speculative adversary from controlling the adjustment value of an address.[0073] The address encoding circuitry 652 may use the valid range metadata to select a portion of the address to be encrypted. The encrypting circuitry 658 may encrypt the selected portion of the address (and the adjustment and/or an identification tag, in some embodiments), using a secret key (e.g., key 154 in FIG. 1A) and a tweak, as described further below. On a memory access operation (e.g., a read, write, or execute operation), the address decoding circuitry 662 may decode the previously-encoded address.
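The metadata-in-unused-bits scheme described in paragraphs [0071]-[0073] can be sketched numerically. The Python sketch below assumes an illustrative layout (48 canonical address bits, an exponent field starting at bit 58); the actual field positions, widths, and circuit behavior are not specified here, and the function names are hypothetical. Encryption of the selected bits, per [0073], would be layered on top of this plain encoding.

```python
# Hypothetical sketch: stash range metadata (an "exponent" giving the
# 2's-power size of the buffer) in the unused upper bits of a 64-bit
# pointer. Bit positions and field widths are illustrative assumptions.

CANONICAL_BITS = 48          # assumed: lower 48 bits hold the real address
EXP_SHIFT = 58               # assumed: exponent lives in bits 58..63

def encode_range(addr: int, size: int) -> int:
    """Embed the 2's-power exponent of `size` in the non-canonical bits."""
    exponent = max(size - 1, 1).bit_length()   # smallest e with 2**e >= size
    assert addr < (1 << CANONICAL_BITS)
    return (exponent << EXP_SHIFT) | addr

def decode_range(encoded: int) -> tuple:
    """Recover (address, valid-size upper bound) from the encoded pointer."""
    exponent = encoded >> EXP_SHIFT
    addr = encoded & ((1 << CANONICAL_BITS) - 1)
    return addr, 1 << exponent

p = encode_range(0x7F00_1000, 4096)
addr, bound = decode_range(p)
assert addr == 0x7F00_1000 and bound == 4096
```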
To do this, the decrypting circuitry 664 may decrypt the encrypted portion of the address (and in some embodiments, the encrypted adjustment and/or encrypted identification tag) using the secret key and the tweak, as described further below.[0074] The address restoration circuitry 666 may return the address to its original (e.g., canonical) form, in order to restore the original value of the address (e.g., the true, original linear memory address). To do this, the address restoration circuitry 666 may remove the valid range metadata encoded in the unused bits of the address (e.g., return the unused bits to their original form). If the address decodes successfully, the memory access operation completes successfully. However, if the encoded address has been manipulated (e.g., by software) so that its value falls outside the valid range indicated by the range metadata (e.g., overflows the buffer), the address will be corrupted as a result of the decrypting process performed by the decrypting circuitry 664. A corrupted indirect address will raise a fault (e.g., a general protection fault). In this way, the secure memory access circuitry 600 enables the processor to provide access control and address security against buffer overflow attacks and similar exploits.[0075] Referring now to FIG. 7, in some embodiments, a computer system may establish a computing environment 710 during operation (e.g., native and/or virtual runtime or “execution” environments). The various modules depicted in the environment 710 may be embodied as hardware, firmware, software, or a combination thereof. In the environment 710, the user space application 734 (or the privileged system component 742, e.g., in loading a user space application 734) may, from time to time, during the operation of the computer system, issue a memory allocation 702.
The memory allocation 702 may be translated (e.g., compiled or interpreted), as needed, by the memory allocation circuitry 746 of the privileged system component 742 before being passed on to the processor (e.g., processor 112). In the processor, the address encoding circuitry 652 is invoked in response to the memory allocation 702 (e.g., in place of a conventional “malloc” instruction). Whereas a conventional malloc instruction simply allocates memory and returns an (unsecured) pointer, the address encoding circuitry 652 encodes the address 704, including metadata 705 (e.g., the range and/or permission information, either already plaintext encoded in the address, without encryption yet applied by the processor, or as a separate parameter to the instruction specifying the range), as described herein, and returns an encoded address 706.[0076] Similarly, the user space application 734 or the privileged system component 742 may issue a memory access 708 from time to time, which may be handled by the processor as a processor instruction that reads from memory and writes to a register or reads from a register and writes to memory (e.g. a MOV instruction). Using the MOV instruction as an example, the secure move circuitry 660 performs the memory access only after successfully invoking the address decoding circuitry 662. While the secure move circuitry 660 and the address decoding circuitry 662 are shown as separate modules in FIG. 6 and FIG. 7, it should be understood that the address decoding circuitry 662 can be incorporated into the secure move circuitry 660 or may be implemented separately. Further, it should be understood that the address decoding circuitry 662 may be incorporated into or referenced by other types of instructions, alternatively or in addition to the MOV instructions (e.g., call, JMP, etc.).
For example, control transfer instructions such as call and JMP may load the encoded address for the code to execute into the processor’s program counter register (e.g. instruction pointer or the RIP, where the RIP is the instruction pointer register using instruction relative addressing in 64-bit code). The instruction pointer register may then be queried by a program and as a result, the current program counter address will be the encoded form (offset to the current program counter location).[0077] If the address decoding circuitry 662 successfully decodes the encoded address 706, the original address 704 is returned to the privileged system component 742 and the memory access is completed (716), or program execution begins at the new program counter location (in the case of control flow changes). If the encoded address 706 does not successfully decode, a fault is raised (718). [0078] Referring now to FIGS. 8A and 8B, examples of methods 802 and 820 for performing a memory allocation process are shown. Portions of the methods 802 and 820 may be executed by hardware, firmware, and/or software of a computer system (e.g., by the privileged system component 742 executing the memory allocation circuitry 746). In FIG. 8A, the method 802 begins in response to a call for memory allocation from calling code (e.g., the privileged system component 742 or the user space application 734). In block 810, the computer system determines whether the calling code is authorized to allocate memory. To do this, the computer system may utilize a set of processor registers to log the locations of recently-taken code branches, e.g., the last branch record (LBR). For example, to determine the calling code (e.g., the code that called a function), the function can query the LBR to see the branch history. Alternatively, the function may query the call stack for the return address (but the return address on the stack may not be as secure as data stored in processor registers).
If the computer system determines that the calling code is not authorized to allocate memory, a fault is raised in block 812. If the computer system determines that the calling code is authorized to allocate memory, the computer system proceeds to block 814 and initiates secure memory allocation using the techniques disclosed herein. Accordingly, the computer system proceeds from block 814 to the beginning of the method 900, shown in FIG. 9 and described below.[0079] In FIG. 8B, the method 820 begins in response to the output of an encoded address at block 924 of the method 900. In block 822, the computer system returns the encoded version of the address (e.g., the encoded address 706) to the calling code that initiated the memory allocation in block 814 of FIG. 8A. In block 824, the calling code uses the encoded address to access the allocated memory (e.g., buffer). In doing so, the calling code may alter or modify the encoded address by, for example, performing arithmetic operations on the encoded address. Thus, a subsequent read or write operation of the calling code may trigger the execution of the method of FIG. 10, described below.[0080] Referring now to FIG. 9, an example of a method 900 for securing an address is shown. Portions of the method 900 may be executed by hardware, firmware, and/or software of the computer system (e.g., by the processor 112 invoking the address encoding circuitry 652). The method 900 begins in response to a memory allocation (e.g., by a memory manager module in block 814 of FIG. 8A). In block 910, the computer system obtains the address, address range, and other inputs needed to encode the address (e.g., a code block identifier or instruction pointer, as described below). In block 912, the computer system determines whether the calling code (e.g., the code initiating the memory allocation in block 810 of FIG. 8A) is authorized to access the indirect address received in block 910 (e.g., address 704).
To do this, the computer system may perform an access control check by verifying the instruction pointer or caller privilege level information for the calling code, which may be obtained from, for example, a heap manager of a memory manager module. If the computer system determines that the calling code is not authorized to access the address, a fault is raised (914). If the computer system determines that the calling code is authorized to access the address, the computer system proceeds to block 916. In block 916, the computer system determines the unused (e.g., non-canonical) address bits of the address to perform the address range encoding. To do this, the computer system may simply use the higher (e.g., most significant) unused/non-canonical bits of the address. It should be noted that the encoded addresses do not need to be architecturally non-canonical. Rather, the unused/non-canonical addresses may simply be a range of memory set aside by, for example, the privileged system component 742, to enable the address encoding as disclosed herein.[0081] In block 918, the computer system creates the metadata (e.g., valid range and/or permission data) and stores the metadata in the unused/non-canonical bits of the address selected in block 916. Illustratively, the metadata indicates an upper limit on the size of the buffer pointed to by the address. To create the metadata, the computer system converts the address values to a center location in which the most significant canonical address bits do not change for the valid memory range. In some embodiments, the range metadata includes an “exponent” to determine the 2’s power of the memory range size. In some cases, an “adjustment” is used to force values to the end of the 2’s power range as described below. In other embodiments, the adjustment may be used to force the buffer to the beginning of the 2’s power range when buffer “underflow” needs to be addressed (as opposed to buffer “overflow”).
Using the exponent metadata, any 2’s power memory range may be defined (e.g., 2, 4, 8, 16... 2^64).[0082] The following is a simple example of range metadata encoding. The addresses 0000b - 0011b fit the range 0-3 where the upper two bits do not change. However, if a pointer is modified to go to the index 4, one of the upper bits will change. Accordingly, the valid range metadata may be encoded as [2] (for the upper two bits to encode a range of 4) and the valid range metadata may be stored in the higher non-canonical bits, e.g., “[2]00xxb.” In this example, the exponent would be 2 bits in size (e.g., values [1-4]), to cover the 4-bit addresses used in the example. Table 1 below illustrates a number of additional, simplified examples.TABLE 1. Address encoding examples.[0083] In Table 1, the encoded address is represented using a format that is similar to a floating-point format. In the encoded addresses in the third column of Table 1, the number in brackets, e.g., [2], is the exponent or valid range metadata; the number in braces, e.g., {3}, is the adjustment value, and the address to the right of the adjustment value indicates the unused/non-canonical bits in which the valid range metadata and adjustment value are stored. In block 920, the computer system determines the adjustment (or “offset”) to be applied to the valid range and stores the adjustment value in the unused/non-canonical bits of the indirect address. In some embodiments, the adjustment is used to force the encoded range to the end of a 2’s power boundary (e.g., to set a specific upper bound on the buffer size). In this way, an encoded version of the original (not encoded) valid address range may be created. The encoded version may be designed such that the least number of upper bits will change over the valid range (e.g., so that encryption of the upper bits will detect/amplify modifications to the encoded address on decryption).
The encoding is reversible, such that the original intended valid address range is returned as long as it is modified within the range. In the example above, the range 0-3 decimal (0000b-0011b binary) may be encoded as [2]{0} 00xxb (where “xx” means those bits may take any value for the range: 00, 01, 10, 11). In another example, the range 1-4 decimal (0001b-0100b) may be encoded as [2]{-1} 00xxb (where the adjustment is subtracted in order to keep the upper bits constant). Alternatively, the same range 1-4 decimal (0001b-0100b) may be encoded as [2]{3} 01xxb (this time adding an adjustment of 3 in order to keep the upper bits constant). With either representation, the encoded version decodes back to the original address range 1-4. In still another example, if the buffer size is 4KB, a 10-bit adjustment value with a resolution of 4 bytes may be used. [0084] Other embodiments may use a signed adjustment value (e.g., 2’s complement) where the buffer may be either adjusted to the beginning or end of the 2’s power boundary depending on the sign (+/-) of the adjustment. Such embodiments may provide protection from either buffer overflow or underflow situations depending on the adjustment sign. In cases where 16 bits are available in unused/non-canonical addresses (e.g., in current 64-bit processors), 10 of the available bits may be used for the adjustment and the remaining 6 bits may be used for the valid range metadata (e.g., exponent value/2’s power). If the exponent value reaches a range beyond a 4KB page, the adjustment may expand by a 2’s multiplier to allow adjustments of large buffers within even larger power of 2 ranges (noting that in some embodiments, 4096 bytes are fully covered with a 10-bit adjustment value allowing the adjustment to “adjust” a buffer to end with the very last 4-byte word in a 4KB page before the upper (2’s power) bits will change). Such an adjustment (e.g., incremented by 1) will adjust the buffer location 4 bytes at a time.
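The adjustment arithmetic in the examples above can be checked directly. The Python sketch below uses the same 4-bit toy addresses from the text; it is a numeric illustration of the arithmetic only, not the encoding circuit.

```python
# Numeric check of the [exponent]{adjustment} examples: an adjustment
# shifts a range that straddles a 2's-power boundary into one aligned
# block, so the upper bits stay constant over the whole valid range.

def upper_bits(addr: int, exponent: int) -> int:
    """Bits of `addr` above the low `exponent` bits."""
    return addr >> exponent

# Range 0-3, exponent [2], adjustment {0}: upper bits never change.
assert len({upper_bits(a + 0, 2) for a in range(0, 4)}) == 1

# Range 1-4 straddles a 2's-power boundary: the upper bits DO change...
assert len({upper_bits(a, 2) for a in range(1, 5)}) == 2

# ...but with adjustment {3} (or equivalently {-1}) every adjusted
# address falls in one aligned block, so the upper bits are constant.
assert len({upper_bits(a + 3, 2) for a in range(1, 5)}) == 1
assert len({upper_bits(a - 1, 2) for a in range(1, 5)}) == 1

# Decoding subtracts the adjustment back out, restoring 1..4 exactly.
adjusted = [a + 3 for a in range(1, 5)]
assert [a - 3 for a in adjusted] == [1, 2, 3, 4]
```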
Any other choice of initial adjustment size and word size is possible in other embodiments. In another example, if the exponent has a value of 13, then the adjustment value may be multiplied by 2 so that the adjustment may still encompass the full 2’s power range (in this case, two 4KB pages, if adjusting by 8 bytes at a time), and so on (e.g. an exponent value of 14 means the adjustment value is multiplied by 4, and an exponent value of 15 means the adjustment value is multiplied by 8 and so on, allowing the adjustment to encompass the full 2’s power range).[0085] In block 922, the computer system encrypts a portion of the address, where the portion of the address to be encrypted is determined by the valid range metadata (e.g., exponent’s power) and the adjustment value. The valid range metadata determines the number of the most significant address bits of the encoded address that are to be encrypted (e.g., down to a minimum number so some address bits may always be encrypted). In some embodiments, the adjustment value and/or an identification tag is encrypted as well (e.g., to create a reasonable block size for a block cipher). In some embodiments, the most significant bits of the used bits/canonical address identified in the valid range metadata are encrypted with a secret key (e.g., the secret key 720), using the valid range metadata (which may or may not include the adjustment value) as a tweak. In the illustrated embodiments, the valid range metadata (e.g., exponent’s power) would not be encrypted because the processor uses the valid range metadata plaintext to determine the number of bits to decrypt. However, the valid range metadata (e.g., exponent/two’s power) may be used as a tweak in the case of a tweakable block cipher (and thereby affect the encrypted bits).
Other data values that may be used as tweaks include: data stored in the unused bits of the indirect address, the upper limit on the buffer size, an exponent of a two’s power boundary selected as the upper limit on the buffer size, an adjustment value applied to the two’s power boundary, a code block identifier, instruction pointer data, permission information encoded in the metadata, and/or version number (useful when reassigning/revoking pointers that were previously assigned to a program; the version may be maintained by the processor in a register). Embodiments may use small block ciphers (e.g. encrypting 32 bits of data), such as the Simon, Speck, and PRINCE ciphers, as their block sizes correspond to (fit within) the size of a 64-bit virtual/linear memory address/pointer. In addition or alternatively to encryption, ciphers may be used to generate a message authentication code (MAC) which may be truncated and stored in the unused non-canonical linear/virtual address bits; this cryptographic MAC may be used to detect tampering or modification of the virtual address when manipulated outside of its bounds.[0086] As used herein, a “tweak” may refer to, among other things, a second input to a block cipher, in addition to the usual plaintext or ciphertext input and the key (e.g., the secret key 720). Encrypting the upper two canonical bits enables the computer system to detect when the address has been illegally changed, because the encryption algorithm will cause the illegally-changed upper bits to produce a random sequence of bits that are non-deterministic to an adversary, which likely results in a fault when the illegally-changed indirect address is used.[0087] The portion of the address to be encrypted (e.g., the upper used/canonical bits) is encrypted using a cipher mode encryption algorithm, such as a tweakable block cipher, using the valid range metadata and adjustment (e.g., [2] {-1}, in the above example) as a tweak.
Some examples of tweakable block ciphers include: XOR-encrypt-XOR (XEX), Liskov, Rivest, and Wagner (LRW), and XEX-based tweaked-codebook mode with ciphertext stealing (XTS). Other bit diffusion methods in which any single bit change in the cipher text results in changes across the entire decrypted plaintext can be used. If desired, alternative embodiments may trade off security for performance by using non-cryptographic methods that still achieve reasonable bit diffusion analogous to a block cipher.[0088] The cipher selected for the encryption may be implemented in hardware, using an algorithm that has a bit-selectable block size (e.g. SPECK), or an algorithm that allows a fixed block size with a tweak using the remaining unencrypted bits (e.g., the extra bits outside the fixed block size). In some embodiments, the cipher has sufficient bit diffusion so that any bit change made to the encrypted address bits will equally affect (cascade through) all bit positions when decrypted. This provides the basis for a corrupted address given any change or bounds violation. Using this method, if the adversary attempts to tamper with the metadata (e.g., the exponent or adjustment values or identification tag, or the encrypted most significant bits) the resulting decoded address will be corrupted. In the 64-bit address space, address corruption will result in a fault with high probability, thus allowing the address corruption (and pointer access or bounds violation) to be caught by the privileged system component 742 (e.g., an operating system/executive/VMM/alternative mode/debug trace/management processor/subsystem, etc.).[0089] In the example above, if the address/pointer value is incremented beyond 3, modifying the address/pointer in this way will corrupt the upper canonical bits and cause a non-deterministic memory access that cannot be controlled by an adversary. 
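The bit-diffusion property these paragraphs rely on (one flipped ciphertext bit corrupting the whole decrypted address) can be illustrated with a toy tweakable cipher. The Python sketch below uses a 4-round Feistel network with a SHA-256-based round function so it runs with only the standard library; it is a stand-in for the ciphers named above (Simon, Speck, PRINCE, XEX/LRW/XTS), not an implementation of any of them.

```python
# Toy tweakable 64-bit block cipher: a 4-round Feistel over 32-bit halves.
# The round function is built from SHA-256 purely for illustration; real
# designs would use a vetted cipher. The point demonstrated: flipping one
# ciphertext bit scrambles essentially the whole decrypted value.
import hashlib

def _round(half: int, key: bytes, tweak: bytes, i: int) -> int:
    digest = hashlib.sha256(
        key + tweak + bytes([i]) + half.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def encrypt64(block: int, key: bytes, tweak: bytes) -> int:
    left, right = block >> 32, block & 0xFFFFFFFF
    for i in range(4):
        left, right = right, left ^ _round(right, key, tweak, i)
    return (left << 32) | right

def decrypt64(block: int, key: bytes, tweak: bytes) -> int:
    left, right = block >> 32, block & 0xFFFFFFFF
    for i in reversed(range(4)):
        left, right = right ^ _round(left, key, tweak, i), left
    return (left << 32) | right

key, tweak = b"secret-key", b"[2]{-1}"   # tweak = range metadata, as in the text
ct = encrypt64(0x0000_7F00_1000_0042, key, tweak)
assert decrypt64(ct, key, tweak) == 0x0000_7F00_1000_0042
# One-bit tamper: decryption no longer matches, and the damage is spread
# across many bit positions, not just the flipped one.
garbled = decrypt64(ct ^ 1, key, tweak)
assert garbled != 0x0000_7F00_1000_0042
assert bin(garbled ^ 0x0000_7F00_1000_0042).count("1") > 8
```

Using the range metadata as the tweak, as the text describes, means the same address bits encrypt differently for different ranges, so moving the ciphertext between ranges also corrupts the decode.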
For instance, going beyond a buffer size by one byte will result in a random memory access that will page fault with high probability. This is due to the bit diffusion properties of the cipher to ensure that even one-bit changes will diffuse through all of the most significant bits. As a result of the adjustment, which forces values to the end of the 2’s power range, buffer overflows cause corruption of the encrypted address bits.[0090] The cipher tweak may be extended to include a code block identifier to provide access controls over which code blocks (e.g., blocks of the calling code) are permitted to use an indirect address/pointer to access memory. Additionally, instruction pointer (which may be referred to as the “program counter”) information or ranges may be encoded as part of the pointer encryption tweak. The instruction pointer information may be used to limit the scope of what code can access what data. For example, all code may be arranged within fixed blocks of memory within the 64-bit address space. Code with similar access permissions may be grouped together in the same block or range. The tweak may include the identifier for the block of memory from which an instruction is executing. In this way, code and data may be associated, and access controlled, such that an adversary coming from a different code block will not be able to access data of the protected block using the encrypted pointers, because the encrypted pointers will not decode properly if the wrong code block identifier is used as a tweak. Further, when a block of code calls, e.g., malloc, to allocate memory to itself, malloc may return the encrypted address using the calling code’s memory block to ensure private access to the allocated memory (so long as the allocated memory isn’t freed and then reallocated to another code block). Alternatively, other methods of identifying the calling code may be used in the tweak, such as protection keys.
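A minimal sketch of the code-block-identifier tweak described in [0090]: the same sealed pointer decodes correctly only when presented with the code block identifier it was issued under. The SHA-256 keystream XOR here is a placeholder for a real tweakable cipher, and all names are hypothetical.

```python
# Sketch: fold a code-block identifier into the tweak so that pointers
# issued to one code block decode to garbage for any other code block.
# The "cipher" is a SHA-256 keystream XOR, purely for illustration.
import hashlib

def keystream(key: bytes, tweak: bytes) -> int:
    """48-bit keystream derived from key and tweak (illustrative only)."""
    return int.from_bytes(hashlib.sha256(key + tweak).digest()[:6], "big")

def seal(addr: int, key: bytes, code_block_id: int) -> int:
    tweak = b"range[2]" + code_block_id.to_bytes(4, "big")
    return addr ^ keystream(key, tweak)

def unseal(ct: int, key: bytes, code_block_id: int) -> int:
    tweak = b"range[2]" + code_block_id.to_bytes(4, "big")
    return ct ^ keystream(key, tweak)

key = b"k"
ptr = seal(0x7F00_1000, key, code_block_id=7)
assert unseal(ptr, key, code_block_id=7) == 0x7F00_1000   # right block decodes
assert unseal(ptr, key, code_block_id=8) != 0x7F00_1000   # wrong block: garbage
```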
Still further, the metadata for read/write/execute access that is used by the processor 112 to control access to memory may be used as part of the tweak for the encrypted address bits. Additionally, the instruction pointer may itself be represented as an encoded pointer (e.g., range-based). In this case, the metadata and encrypted address bits may be used as part of the “tweak” identifying the code block accessing a data pointer or requesting a memory allocation/assignment.[0091] Referring now to FIG. 10, an example of a method 1000 for decoding an address is shown. Portions of the method 1000 may be executed by hardware, firmware, and/or software of a computer system (e.g., by the processor 112 invoking the secure move circuitry 660 and/or the address decoding circuitry 662). The method 1000 begins in response to a memory access operation such as a read, write, or execute operation, e.g., a MOV instruction. Table 2 below provides some illustrative examples of MOV instructions that can use the address encoding technology disclosed herein. Of course, different processor architectures may refer to the “MOV” functionality by different names for the instructions or different options/parameters. As such, the disclosed embodiments apply to all types of “MOV” functionality across different architectures, irrespective of the terminology used to refer to such functionality. Further, the MOV instruction is one example, and any instruction that can access memory to read/write data may apply the address encoding and decoding methods disclosed herein.TABLE 2. Example MOV instructions.[0092] In block 1010, the computer system obtains the encoded address (e.g., the encoded address 706, which may be obtained from a register). In block 1012, the computer system determines whether the encoded address obtained in block 1010 has unused or non-canonical bits.
If the computer system determines that the encoded address does not have unused/non-canonical bits (e.g., the address does not fall within the non-canonical, or otherwise reserved, range of addresses, whether the address range is 32-bit, 64-bit, 128-bit or whatever range an alternate architecture may require), a fault is raised (1014). If the computer system determines that the encoded address has unused/non-canonical bits (e.g., the address falls within the non-canonical or otherwise reserved address range), the computer system proceeds to block 1016. In block 1016, the computer system decrypts the encrypted portion of the encoded address, using the decryption algorithm counterpart of the encryption algorithm used in block 922 of FIG. 9, and using the same secret key and tweak as used by the encryption algorithm in block 922 of FIG. 9. In block 1018, the computer system “undoes” the adjustment to the range metadata in the decrypted address (e.g., by subtracting the decrypted adjustment value in the unused/non-canonical bits from the full decrypted value of the address). In block 1020, the computer system returns the decrypted address to its original (e.g., canonical) form by, for example, removing the unused/non-canonical bits. In block 1022, the computer system uses the decoded address output by block 1020 as a “true” (e.g., virtual or linear) memory address (e.g., as a pointer). In block 1024, the computer system determines whether the decoded address used as a memory address/pointer at block 1022 is a corrupted address. If the decoded address is corrupted, a fault is raised (1014). If the decoded address is not corrupted, the computer system completes the memory access operation successfully, using the decoded address as a memory address/pointer, in block 1026.
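The decode flow of blocks 1012-1024 can be sketched end to end. The field layout (metadata in bits 48-63) and the keyed-XOR stand-in for the block cipher are assumptions made for brevity; a real design would use a diffusing tweakable cipher as described earlier, and the fault paths model blocks 1014 and 1024.

```python
# End-to-end toy of method 1000: check for metadata bits, "decrypt",
# undo the adjustment, and restore the canonical address or fault.
EXP_SHIFT, ADJ_SHIFT = 58, 48        # assumed metadata slots in bits 48..63
CANON_MASK = (1 << 48) - 1

def encode(addr: int, exponent: int, adjustment: int, key: int) -> int:
    body = ((addr + adjustment) & CANON_MASK) ^ (key & CANON_MASK)
    return (exponent << EXP_SHIFT) | (adjustment << ADJ_SHIFT) | body

def decode(encoded: int, key: int) -> int:
    if encoded >> 48 == 0:                       # block 1012: no metadata bits
        raise ValueError("fault: not an encoded address")
    adjustment = (encoded >> ADJ_SHIFT) & 0x3FF  # 10-bit adjustment, as in text
    addr = ((encoded & CANON_MASK) ^ (key & CANON_MASK)) - adjustment
    if not 0 <= addr <= CANON_MASK:              # block 1024: corruption check
        raise ValueError("fault: corrupted address")
    return addr                                  # block 1020: canonical form

key = 0x0123_4567_89AB
ptr = encode(0x7F00_1000, exponent=12, adjustment=5, key=key)
assert decode(ptr, key) == 0x7F00_1000           # round trip succeeds
try:
    decode(0x7F00_1000, key)                     # plain address: no metadata
    raise AssertionError("expected a fault")
except ValueError:
    pass
```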
In this way, the method 1000 allows the computer system to verify the range-encoded indirect address and enforce the embedded range check before converting the range-encoded address into a real memory address. Additionally, invalid adjustment values (e.g., adjustment values that go beyond the 2’s power range) may be used to determine with some probability when a corruption occurs, as may invalid address values or metadata reserved to detect corruption. Even if corruption is not detected, the resulting address would not be deterministic (and therefore usable) to an adversary.

Tag and Address Encoding/Encrypting

[0093] FIG. 11 represents an embodiment in which memory tags are encrypted with the encrypted part of the address. Embodiments in which memory tags are included in the encrypted part of the address as assigned by the memory allocator (e.g. malloc) may prevent adversaries from manipulating the ciphertext portion of the address without also affecting the tag bits. Therefore, an adversary cannot take a fixed address and start guessing the tag bits independently. The encryption will cause the tag value and the encrypted address bits to change, resulting in a random tag associated with a random address.[0094] Embodiments in which the memory tag is included in the encrypted portion of the cryptographic pointer may prevent an adversary from guessing tags. If any part of the encrypted pointer is modified, all the decrypted bits will be random. Both the decrypted address and tag will be different; therefore, an adversary cannot correlate a tag with an encrypted address because both change each time the adversary attempts to modify the ciphertext and speculate on the result.

Example Embodiments

[0095] In an embodiment, a processor includes a decoder, a cache, address translation circuitry, a comparator, a cache controller, and a memory controller. The decoder is to decode an instruction.
The instruction is to specify a first address associated with a data object, the first address having a first memory tag. The address translation circuitry is to translate the first address to a second address, the second address to identify a memory location of the data object. The comparator is to compare the first memory tag and a second memory tag associated with the second address. The cache controller is to detect a cache miss associated with the memory location. The memory controller is to, in response to the comparator detecting a match between the first memory tag and the second memory tag and the cache controller detecting the cache miss, load the data object from the memory location into the cache.[0096] In various embodiments, any or any combination of the following may also apply. The first address may be a virtual address and the second address may be a physical address. The memory controller may also be to prevent loading a cache line corresponding to the memory location until the comparator has detected the match. The memory controller may also be to, in response to the comparator detecting a mismatch between the first memory tag and the second memory tag, load data not indicative of the data object into a cache line corresponding to the memory location. The processor may also include poison queue circuitry to, in response to the comparator detecting a mismatch between the first memory tag and the second memory tag, set an indicator to indicate that a cache line corresponding to the memory location is invalid. The poison queue circuitry may also be to provide an indication to software that the cache line is invalid only after a result of the instruction has been committed. The processor may also include pointer security circuitry to define the first memory tag. The processor may also include encryption circuitry to cryptographically secure the data object at least partially based on the first memory tag. 
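A software model of the tag-gated fill described in this embodiment: the memory controller loads the data object into the cache only when the pointer's tag matches the tag stored for the memory location and the access missed the cache. The structures and names below are illustrative assumptions, not the hardware's.

```python
# Hypothetical model of the comparator/cache-controller/memory-controller
# interaction: fill the cache line only on (tag match AND cache miss).
cache = {}                                    # physical address -> cached data
memory = {0x1000: (0xA, b"payload")}          # physical address -> (stored tag, data)

def tagged_load(phys_addr, pointer_tag):
    """Return the data only on a hit, or on a miss whose tags match."""
    if phys_addr in cache:                    # cache hit: line already validated
        return cache[phys_addr]
    stored_tag, data = memory[phys_addr]
    if pointer_tag != stored_tag:             # mismatch: do not fill the cache
        return None                           # (hardware might poison the line instead)
    cache[phys_addr] = data                   # match + miss: load object into cache
    return data

assert tagged_load(0x1000, 0xB) is None       # wrong tag is blocked
assert 0x1000 not in cache                    # and nothing was cached
assert tagged_load(0x1000, 0xA) == b"payload" # matching tag fills the line
```

The mismatch branch returning None stands in for the poison-queue behavior described below, where the line is marked invalid rather than filled with the real object.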
The first memory tag may include an identification tag to identify a type, a function, a memory location, or a use for the data object. The encryption circuitry may be to use at least a portion of the memory tag to at least partially define a tweak input to an encryption algorithm. The first memory tag may include an encryption tag, and the encryption circuitry may also be to use the encryption tag to identify one of a plurality of encryption keys. The first memory tag may include a small object tag to indicate whether a cache line associated with the memory location is to include a plurality of data objects. The small object tag may be to enable sub-cacheline granularity of memory tagging. The first memory tag may include a bound distance tag to indicate an allowed distance between the first memory address and the data object. The processor may also include integrity check circuitry to generate an integrity check value at least partially based on the first address and an encrypted value of the data object. The processor may also include pointer security circuitry to detect tampering with the first address at least partially based on the integrity check value.[0097] In an embodiment, a processor includes a decoder to decode an instruction to allocate a memory region to a software program and an execution unit to execute the instruction.
The execution unit includes range rule circuitry to determine a valid range for the memory region; address adjustment circuitry to determine a first number of address bits to be used by the software program to manipulate an address within the valid range and a second number of address bits to include a memory tag to indicate access permission; and encryption circuitry to encrypt at least a portion of the address and the memory tag to generate an encrypted address to be returned to the software program.[0098] In an embodiment, a processor includes a decoder to decode an instruction, the instruction to specify an encrypted first address associated with a data object; decryption circuitry to decrypt the encrypted first address to generate a decrypted first address and a decrypted first memory tag; a cache; address translation circuitry to translate the decrypted first address to a second address, the second address to identify a memory location of the data object; a comparator to compare the first memory tag and a second memory tag associated with the second address; a cache controller to detect a cache miss associated with the memory location; and a memory controller to, in response to the comparator detecting a match between the first memory tag and the second memory tag and the cache controller detecting the cache miss, load the data object from the memory location into the cache.[0099] In an embodiment, a processor includes a decoder to decode an instruction, the instruction to specify a first address associated with a data object, the first address having a first memory tag; a cache; address translation circuitry to translate the first address to a second address, the second address to identify a memory location of the data object; a comparator to compare the first memory tag and a second memory tag associated with the second address; a cache controller to detect a cache miss associated with the memory location; and means for, in response to the comparator 
detecting a match between the first memory tag and the second memory tag and the cache controller detecting the cache miss, loading the data object from the memory location into the cache.[00100] In various embodiments, any or any combination of the following may also apply. The means may also be for preventing loading a cache line corresponding to the memory location until the comparator has detected the match. The means may also be for, in response to the comparator detecting a mismatch between the first memory tag and the second memory tag, loading data not indicative of the data object into a cache line corresponding to the memory location. The processor may also include poison queue means for, in response to the comparator detecting a mismatch between the first memory tag and the second memory tag, setting an indicator to indicate that a cache line corresponding to the memory location is invalid. The poison queue means may also be for providing an indication to software that the cache line is invalid only after a result of the instruction has been committed.[00101] In an embodiment, a processor includes a decoder to decode an instruction to allocate a memory region to a software program; an execution unit to execute the instruction, the execution unit including range rule means for determining a valid range for the memory region; address adjustment means for determining a first number of address bits to be used by the software program to manipulate an address within the valid range and a second number of address bits to include a memory tag to indicate access permission; and encryption means for encrypting at least a portion of the address and the memory tag to generate an encrypted address to be returned to the software program.[00102] In an embodiment, a processor includes a decoder to decode an instruction, the instruction to specify an encrypted first address associated with a data object; decryption circuitry for decrypting the encrypted first address to 
generate a decrypted first address and a decrypted first memory tag; a cache; address translation circuitry to translate the decrypted first address to a second address, the second address to identify a memory location of the data object; a comparator to compare the first memory tag and a second memory tag associated with the second address; a cache controller to detect a cache miss associated with the memory location; and means for, in response to the comparator detecting a match between the first memory tag and the second memory tag and the cache controller detecting the cache miss, loading the data object from the memory location into the cache.[00103] In an embodiment, a method includes decoding an instruction, the instruction to specify a first address associated with a data object, the first address having a first memory tag; translating the first address to a second address, the second address to identify a memory location of the data object; comparing the first memory tag and a second memory tag associated with the second address; detecting a cache miss associated with the memory location; and loading, in response to detecting a match between the first memory tag and the second memory tag and detecting the cache miss, the data object from the memory location into a cache.[00104] In various embodiments, any or any combination of the following may also apply. The first address may be a virtual address and the second address may be a physical address. The method may also include preventing loading a cache line corresponding to the memory location until the match is detected. The method may also include loading, in response to detecting a mismatch between the first memory tag and the second memory tag, data not indicative of the data object into a cache line corresponding to the memory location. 
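The tag-checked load flow of [00103] and [00104] can be modeled in software as follows. The class and names are hypothetical; the model shows only the decision the memory controller makes on a cache miss: a real fill when the pointer's tag matches the tag associated with the memory location, and a fill with data not indicative of the data object on a mismatch.

```python
# Software model of a tag-checked cache fill (names are illustrative).
POISON = object()   # stands in for "data not indicative of the data object"

class TagCheckedCache:
    def __init__(self, memory, memory_tags):
        self.memory, self.memory_tags = memory, memory_tags
        self.lines = {}                       # address -> cached line

    def load(self, address, pointer_tag):
        if address in self.lines:             # cache hit: no check modeled here
            return self.lines[address]
        # cache miss: compare the pointer's tag with the location's tag
        if pointer_tag == self.memory_tags[address]:
            self.lines[address] = self.memory[address]   # match: real fill
        else:
            self.lines[address] = POISON                 # mismatch: poisoned fill
        return self.lines[address]

mem = {0x1000: "secret"}
cache = TagCheckedCache(mem, memory_tags={0x1000: 7})
assert cache.load(0x1000, pointer_tag=7) == "secret"
```

A mismatched tag never brings the real data into the cache, which is the property the comparator and cache controller cooperate to enforce.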
The method may also include setting, in response to detecting a mismatch between the first memory tag and the second memory tag, an indicator to indicate that a cache line corresponding to the memory location is invalid. The method may also include providing an indication to software that the cache line is invalid only after a result of the instruction has been committed.[00105] In an embodiment, a method includes decoding an instruction to allocate a memory region to a software program; executing the instruction, execution including determining a valid range for the memory region; determining a first number of address bits to be used by the software program to manipulate an address within the valid range; determining a second number of address bits to include a memory tag to indicate access permission; encrypting at least a portion of the address and the memory tag to generate an encrypted address; and returning the encrypted address to the software program.[00106] In an embodiment, a method includes decoding an instruction, the instruction to specify an encrypted first address associated with a data object; decrypting the encrypted first address to generate a decrypted first address and a decrypted first memory tag; translating the decrypted first address to a second address, the second address to identify a memory location of the data object; comparing the first memory tag and a second memory tag associated with the second address; detecting a cache miss associated with the memory location; and loading, in response to detecting a match between the first memory tag and the second memory tag and detecting the cache miss, the data object from the memory location into the cache.[00107] In embodiments, an apparatus may include means for performing any of the functions and/or methods described above.
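The allocation flow of [00105] splits the address into a low part the program may manipulate freely and an upper part that carries the tag and is encrypted before being returned. The sketch below is a minimal illustration under stated assumptions: the bit split is arbitrary, and the XOR keystream merely stands in for the encryption circuitry.

```python
# Minimal sketch of the [00105] flow: split address bits, attach a tag,
# and "encrypt" the upper slice. The XOR with KEY is a stand-in for a
# real cipher and is purely illustrative.
import secrets

KEY = secrets.randbits(32)
LOW_BITS = 16                        # bits the program may manipulate freely

def allocate(base, size, tag):
    assert size <= (1 << LOW_BITS)   # valid range must fit the mutable bits
    upper = (base >> LOW_BITS) | (tag << 32)
    enc_upper = upper ^ KEY          # stand-in for encryption circuitry
    return (enc_upper << LOW_BITS) | (base & ((1 << LOW_BITS) - 1))

def decode(ptr):
    upper = (ptr >> LOW_BITS) ^ KEY
    tag, base_hi = upper >> 32, upper & ((1 << 32) - 1)
    return (base_hi << LOW_BITS) | (ptr & ((1 << LOW_BITS) - 1)), tag

p = allocate(0x7F12_0000, 4096, tag=9)
addr, tag = decode(p)
assert (addr, tag) == (0x7F12_0000, 9)
```

Because only the upper slice is encrypted, pointer arithmetic within the low bits (i.e., within the valid range) leaves the encrypted portion intact, while tampering with the upper bits decodes to a garbled address and tag.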
In embodiments, a machine-readable tangible medium may store instructions, which, when executed by a machine, cause the machine to perform any of the methods described above.

Exemplary Core Processor and System Architectures

[00108] Embodiments of the invention have been described and depicted with reference to a processor 112, which may represent any of many different processors in which the invention is embodied in different ways and/or for different purposes. These processors and cores, for example as described below, may include hardware, such as caches and branch predictors, that improve performance but may make the processor and/or core more vulnerable to analysis that may be defended against according to embodiments of the invention.[00109] For instance, implementations of cores (e.g., cores 118) in a processor in which the invention may be embodied may include: a general purpose in-order core intended for general-purpose computing; a high performance general purpose out-of-order core intended for general-purpose computing; a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of processors in which the invention may be embodied may include: a central processing unit (CPU) including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing.
Such different processors lead to different computer system architectures, which may include: the coprocessor on a separate chip from the CPU; the coprocessor on a separate die in the same package as a CPU; the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and a system on a chip (SoC) that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processors), the above described coprocessor, and additional functionality.[00110] Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures. Each processor may include one or more cores, where each core and/or combination of cores may be architected and designed to execute one or more threads, processes, or other sequences of instructions at various times. Core architectures and design techniques may provide for and/or support the concurrent execution of multiple threads, according to approaches such as simultaneous (or symmetric) multi-threading (SMT) or any other approach.[00111] Further, as mentioned above and explained in more detail below, embodiments of the present disclosure may apply to any type of processor or processing element, including general-purpose processors, server processors or processing elements for use in a server environment, coprocessors (e.g., security coprocessors), high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device. The processor or processors may be implemented on one or more chips.
The processor or processors may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS. The processors and processing devices listed above and described herein are exemplary; as explained herein, the present disclosure is applicable to any processor or processing device.[00112] Further, as mentioned above and explained in more detail below, embodiments of the present disclosure may apply to processors or processing elements using a wide variety of instruction sets and instruction set architectures, including, for example, the x86 instruction set (optionally including extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA; IBM’s “Power” instruction set; or any other instruction set, including both RISC and CISC instruction sets. The instruction sets and instruction set architectures listed above and described herein are exemplary; as explained herein, the present disclosure is applicable to any instruction set or instruction set architecture.

Exemplary Core Architecture

[00113] Figure 12A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 12B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 12A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.[00114] In Figure 12A, a processor pipeline 1200 includes a fetch stage 1202, a length decode stage 1204, a decode stage 1206, an allocation stage 1208, a renaming stage 1210, a scheduling (also known as a dispatch or issue) stage 1212, a register read/memory read stage 1214, an execute stage 1216, a write back/memory write stage 1218, an exception handling stage 1222, and a commit stage 1224.[00115] Figure 12B shows processor core 1290 including a front-end unit 1230 coupled to an execution engine unit 1250, and both are coupled to a memory unit 1270. The core 1290 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1290 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.
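The stage ordering of pipeline 1200 in [00114] can be written down as a simple ordered list, which makes ordering relationships (for example, that renaming precedes execution, and that commit is last) easy to check. The stage names below follow the paragraph above; the helper function is illustrative only.

```python
# The stage order of pipeline 1200 ([00114]) as an ordered list.
PIPELINE_1200 = [
    "fetch", "length decode", "decode", "allocation", "renaming",
    "scheduling", "register read/memory read", "execute",
    "write back/memory write", "exception handling", "commit",
]

def precedes(a, b):
    """True if stage a occurs earlier in the pipeline than stage b."""
    return PIPELINE_1200.index(a) < PIPELINE_1200.index(b)

assert precedes("renaming", "execute")
assert PIPELINE_1200[-1] == "commit"
```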
For example, as explained above, core 1290 may be any member of a set containing: general-purpose processors, server processors or processing elements for use in a server environment, coprocessors (e.g., security coprocessors), high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device.[00116] The front-end unit 1230 includes a branch prediction unit 1232 coupled to a micro-op cache 1233 and an instruction cache unit 1234, which is coupled to an instruction translation lookaside buffer (TLB) 1236, which is coupled to an instruction fetch unit 1238, which is coupled to a decode unit 1240. The decode unit 1240 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The micro-operations, micro-code entry points, microinstructions, etc. may be stored in at least the micro-op cache 1233. The decode unit 1240 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1290 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1240 or otherwise within the front-end unit 1230). The micro-op cache 1233 and the decode unit 1240 are coupled to a rename/allocator unit 1252 in the execution engine unit 1250.
In various embodiments, a micro-op cache such as 1233 may also or instead be referred to as an op-cache, u-op cache, uop-cache, or μop-cache; and micro-operations may be referred to as micro-ops, u-ops, uops, and μops.[00117] The execution engine unit 1250 includes the rename/allocator unit 1252 coupled to a retirement unit 1254 and a set of one or more scheduler unit(s) 1256. The scheduler unit(s) 1256 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1256 is coupled to the physical register file(s) unit(s) 1258. Each of the physical register file(s) units 1258 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1258 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1258 is overlapped by the retirement unit 1254 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1254 and the physical register file(s) unit(s) 1258 are coupled to the execution cluster(s) 1260. The execution cluster(s) 1260 includes a set of one or more execution units 1262 and a set of one or more memory access units 1264.
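One of the renaming schemes mentioned above (register maps and a pool of registers) can be sketched in a few lines of software: each write to an architectural register claims a fresh physical register from the pool, which is what breaks false write-after-write dependences. The class and method names below are hypothetical.

```python
# Sketch of "register maps and a pool of registers" renaming.
class RenameMap:
    def __init__(self, n_physical):
        self.free = list(range(n_physical))   # pool of free physical registers
        self.map = {}                         # architectural -> physical

    def rename_dest(self, arch_reg):
        phys = self.free.pop(0)               # claim a fresh physical register
        old = self.map.get(arch_reg)          # previous mapping, freed at retirement
        self.map[arch_reg] = phys
        return phys, old

    def rename_src(self, arch_reg):
        return self.map[arch_reg]             # sources read the current mapping

rm = RenameMap(n_physical=168)                # 168 matches the example in [00125]
p0, _ = rm.rename_dest("rax")
p1, _ = rm.rename_dest("rax")                 # a second write gets a new register
assert p0 != p1 and rm.rename_src("rax") == p1
```

In hardware the old mapping is returned to the pool only at retirement, which is one reason the retirement unit and the physical register file(s) are drawn as overlapping.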
The execution units 1262 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1256, physical register file(s) unit(s) 1258, and execution cluster(s) 1260 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1264). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.[00118] The set of memory access units 1264 is coupled to the memory unit 1270, which includes a data TLB unit 1272 coupled to a data cache unit 1274 coupled to a level 2 (L2) cache unit 1276. In one exemplary embodiment, the memory access units 1264 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1272 in the memory unit 1270. The instruction cache unit 1234 is further coupled to a level 2 (L2) cache unit 1276 in the memory unit 1270.
The L2 cache unit 1276 is coupled to one or more other levels of cache and eventually to a main memory.[00119] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1200 as follows: 1) the instruction fetch 1238 performs the fetch and length decoding stages 1202 and 1204; 2) the decode unit 1240 performs the decode stage 1206; 3) the rename/allocator unit 1252 performs the allocation stage 1208 and renaming stage 1210; 4) the scheduler unit(s) 1256 performs the schedule stage 1212; 5) the physical register file(s) unit(s) 1258 and the memory unit 1270 perform the register read/memory read stage 1214; the execution cluster 1260 performs the execute stage 1216; 6) the memory unit 1270 and the physical register file(s) unit(s) 1258 perform the write back/memory write stage 1218; 7) various units may be involved in the exception handling stage 1222; and 8) the retirement unit 1254 and the physical register file(s) unit(s) 1258 perform the commit stage 1224.[00120] The core 1290 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA; IBM’s “Power” instruction set, or any other instruction set, including both RISC and CISC instruction sets), including the instruction(s) described herein.
In one embodiment, the core 1290 includes logic to support a packed data instruction set extension (e.g., AVX, AVX2, AVX-512), thereby allowing the operations used by many multimedia applications to be performed using packed data.[00121] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, SMT (e.g., a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding, and SMT thereafter such as in the Intel® Hyperthreading technology).[00122] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1234/1274 and a shared L2 cache unit 1276, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache(s) may be external to the core and/or the processor.

Exemplary Core Architecture

[00123] Figure 13 is a block diagram of an illustrative out-of-order issue/execution processor core that may be included in a processor according to embodiments of the invention. In Figure 13, processor core 1300 includes front-end unit 1310, integer unit 1320, FP unit 1330, load-store unit 1340, and level 2 (L2) cache unit 1350.
Figure 13 is provided for illustrative purposes, and as such, shows various units arranged and named according to one of many approaches that are possible according to embodiments of the present invention. [00124] In Figure 13, front-end unit 1310 includes branch prediction unit 1311, micro-operation-cache (op-cache) unit 1312, instruction cache (i-cache) unit 1313, decode unit 1314, and micro-operation (micro-op) queue unit 1315. Branch prediction unit 1311 includes branch prediction circuitry, such as a branch-target buffer (BTB), to reduce average branch delay and is coupled to op-cache unit 1312 and i-cache unit 1313. Op-cache unit 1312 includes an op-cache in which to cache micro-ops associated with instructions. I-cache unit 1313 includes an i-cache, which in an embodiment may be a 64K, four-way i-cache, in which to cache instructions. I-cache unit 1313 is coupled to decode unit 1314 to provide cached instructions to be decoded. Decode unit 1314 includes decoding circuitry, such as an instruction decoder, to decode instructions. In an embodiment, front-end unit 1310 may fetch, and decode unit 1314 may decode, up to four instructions per clock cycle. Op-cache unit 1312 and decode unit 1314 are each coupled to micro-op queue unit 1315 to provide two paths for loading micro-ops into micro-op queue unit 1315. Micro-op queue unit 1315 includes a micro-op queue, which in an embodiment may dispatch six micro-ops per cycle to one or more execution units.[00125] Also, in Figure 13, integer unit 1320 includes integer rename unit 1321; integer scheduler units 1322A, 1322B, 1322C, 1322D, 1322E, and 1322F (collectively, integer scheduler units 1322); integer physical register file 1323; arithmetic-logic units (ALUs) 1324A, 1324B, 1324C, and 1324D (collectively, ALUs 1324); and address generation units (AGUs) 1325A and 1325B (collectively, AGUs 1325).
Integer rename unit 1321 is coupled to micro-op queue unit 1315 to receive one or more micro-ops to be executed, in whole or in part, by one or more of ALUs 1324 and/or AGUs 1325. Integer rename unit 1321 includes register renaming circuitry and is also coupled to integer scheduler units 1322, which in turn are coupled to integer physical register file 1323, to provide for integer-register renaming. Integer scheduler units 1322 include scheduling circuitry for scheduling micro-ops to be executed, in whole or in part, by one or more of ALUs 1324 and/or AGUs 1325. Integer physical register file 1323 includes a file of physical integer registers, which in an embodiment may include 168 physical integer registers. Each of ALUs 1324 and AGUs 1325 is coupled to physical register file 1323 to receive values to be used as inputs in the execution of micro-ops and/or to provide values as outputs of the execution of micro-ops.[00126] Also, in Figure 13, FP unit 1330 includes FP rename unit 1331, FP scheduler unit 1332, FP register file 1333, FP multipliers 1334A and 1334B (collectively, FP multipliers 1334), and FP adders 1335A and 1335B (collectively, FP adders 1335). FP rename unit 1331 is coupled to micro-op queue unit 1315 to receive one or more micro-ops to be executed, in whole or in part, by one or more of FP multipliers 1334 and/or FP adders 1335. FP rename unit 1331 includes register renaming circuitry and is also coupled to FP scheduler unit 1332, which in turn is coupled to FP register file 1333, to provide for FP-register renaming. FP scheduler unit 1332 includes scheduling circuitry for scheduling micro-ops to be executed, in whole or in part, by one or more of FP multipliers 1334 and/or FP adders 1335.
Each of FP multipliers 1334 and FP adders 1335 is coupled to FP register file 1333 to receive values to be used as inputs in the execution of micro-ops and/or to provide values as outputs of the execution of micro-ops.[00127] Also, in Figure 13, load-store unit 1340 includes load-store queue unit 1341 and data cache (d-cache) unit 1342. Load-store queue unit 1341 may include any number of load and/or store queues, in an embodiment providing for two loads and one store per clock cycle, coupled to AGUs 1325 to receive memory addresses for load and/or store operations. D-cache unit 1342 includes a d-cache, which in an embodiment may be a 32K, eight-way level 1 (L1) d-cache, in which to cache data, coupled to integer physical register file 1323, FP register file 1333, and load-store queue unit 1341 to receive and provide data generated by and to be used in the execution of micro-ops.[00128] Also, in Figure 13, L2 cache unit 1350 includes an L2 cache, which in an embodiment may be a 512K, eight-way cache, in which to cache instructions and data.

Exemplary Processor Architectures

[00129] Figure 14 is a block diagram of a processor 1400 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention.
The solid lined boxes in Figure 14 illustrate a processor 1400 with a single core 1402A, a system agent 1410, a set of one or more bus controller units 1416, while the optional addition of the dashed lined boxes illustrates an alternative processor 1400 with multiple cores 1402A-N, a set of one or more integrated memory controller unit(s) 1414 in the system agent unit 1410, and special purpose logic 1408.[00130] Thus, different implementations of the processor 1400 may include: 1) a CPU with the special purpose logic 1408 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1402A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1402A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; 3) a coprocessor with the cores 1402A-N being a large number of general purpose in-order cores; and 4) the cores 1402A-N representing any number of disaggregated cores with a separate input/output (I/O) block. Thus, the processor 1400 may be any of: general-purpose processors, server processors or processing elements for use in a server environment, coprocessors (e.g., security coprocessors), high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device.
The processor may be implemented on one or more chips. The processor 1400 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.[00131] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1406, and external memory (not shown) coupled to the set of integrated memory controller units 1414. The set of shared cache units 1406 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 1412 interconnects the integrated graphics logic 1408 (integrated graphics logic 1408 is an example of and is also referred to herein as special purpose logic), the set of shared cache units 1406, and the system agent unit 1410/integrated memory controller unit(s) 1414, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1406 and cores 1402A-N.[00132] In some embodiments, one or more of the cores 1402A-N are capable of multithreading. The system agent 1410 includes those components coordinating and operating cores 1402A-N. The system agent unit 1410 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1402A-N and the integrated graphics logic 1408.
The display unit is for driving one or more externally connected displays.[00133] The cores 1402A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1402A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.[00134] Figure 15 is a block diagram of an illustrative central processing unit (CPU) complex that may be included in a processor according to embodiments of the invention. In an embodiment, the L3 cache is an 8 MB 16-way cache split over a four-core module (referred to as a CPU complex or CCX), affording a 2 MB “slice” of L3 cache per core. However, the L3 cache slices in a CCX are implemented such that the L3 cache is a shared cache. Multiple CCXs may be included in a single processor (e.g., two CCXs form a 16 MB L3 cache). The 8 MB caches on each CCX are separate, so they act as a last level cache per four-core module with the appropriate hooks into the other L3 cache to determine if data is needed (the protocols involved in the L3 cache design allow each core to access the L3 cache of each other core). Thus, these L1, L2, and L3 caches are coherent caches, with the L3 cache slices within a CCX and between CCXs being connected by a cache coherent interconnect (also referred to as a cache coherent fabric).[00135] Figure 16 is a block diagram of an illustrative cache hierarchy that may be included in a processor according to embodiments of the invention. In Figure 16, cache hierarchy 1600 includes L1 i-cache 1610A and L1 d-cache 1610B (collectively, L1 cache 1610), L2 instruction and data cache 1620, and level 3 (L3) instruction and data cache 1630. In an embodiment, both L1 cache 1610 and L2 cache 1620 are private/local writeback caches, while L3 cache 1630 is a victim cache.
In an embodiment, L1 i-cache 1610A is a 64 KB 4-way cache, L1 d-cache 1610B is a 32 KB 8-way cache, L2 cache 1620 is a 512 KB 8-way cache, and level 3 (L3) cache 1630 is an 8 MB 16-way cache.

Exemplary Computer Architectures

[00136] Figures 17-21 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), general-purpose processors, server processors or processing elements for use in a server environment, coprocessors (e.g., security coprocessors), high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators, cryptographic accelerators, fixed function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device, graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are suitable.[00137] Referring now to Figure 17, shown is a block diagram of a system 1700 in accordance with one embodiment of the present invention. The system 1700 may include one or more processors 1710, 1715, which are coupled to a controller hub 1720.
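The cache geometries given for Figure 16 determine the set/way organization by simple arithmetic: sets = total size / (ways × line size). The sketch below assumes a 64-byte line size, which is an assumption for illustration and not stated in the embodiments above.

```python
# Set-count arithmetic for the Figure 16 cache geometries.
# A 64-byte line size is an illustrative assumption.
def cache_sets(total_bytes, ways, line_bytes=64):
    """Number of sets in a set-associative cache of the given geometry."""
    return total_bytes // (ways * line_bytes)

assert cache_sets(32 * 1024, ways=8) == 64            # L1 d-cache 1610B
assert cache_sets(64 * 1024, ways=4) == 256           # L1 i-cache 1610A
assert cache_sets(512 * 1024, ways=8) == 1024         # L2 cache 1620
assert cache_sets(8 * 1024 * 1024, ways=16) == 8192   # L3 cache 1630
```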
In one embodiment, the controller hub 1720 includes a graphics memory controller hub (GMCH) 1790 and an Input/Output Hub (IOH) 1750 (which may be on separate chips); the GMCH 1790 includes memory and graphics controllers to which are coupled memory 1740 and a coprocessor 1745; the IOH 1750 couples I/O devices 1760 to the GMCH 1790. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1740 and the coprocessor 1745 are coupled directly to the processor 1710, and the controller hub 1720 is in a single chip with the IOH 1750.[00138] The optional nature of additional processors 1715 is denoted in Figure 17 with broken lines. Each processor 1710, 1715 may include one or more of the processing cores described herein and may be some version of the processor 1400.[00139] The memory 1740 may be, for example, dynamic random-access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1720 communicates with the processor(s) 1710, 1715 via a multi-drop bus, such as a front-side bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1795.[00140] In one embodiment, the coprocessor 1745 is a special-purpose processor (including, e.g., general-purpose processors, server processors or processing elements for use in a server environment, coprocessors such as security coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device).
In one embodiment, controller hub 1720 may include an integrated graphics accelerator.[00141] There can be a variety of differences between the physical resources 1710, 1715 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.[00142] In one embodiment, the processor 1710 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1710 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1745. Accordingly, the processor 1710 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1745. Coprocessor(s) 1745 accept and execute the received coprocessor instructions.[00143] Referring now to Figure 18, shown is a block diagram of a first more specific exemplary system 1800 in accordance with an embodiment of the present invention. As shown in Figure 18, multiprocessor system 1800 is a point-to-point interconnect system, and includes a first processor 1870 and a second processor 1880 coupled via a point-to-point interconnect 1850. Each of processors 1870 and 1880 may be some version of the processor 1400. In one embodiment of the invention, processors 1870 and 1880 are respectively processors 1710 and 1715, while coprocessor 1838 is coprocessor 1745. In another embodiment, processors 1870 and 1880 are respectively processor 1710 and coprocessor 1745.[00144] Processors 1870 and 1880 are shown including integrated memory controller (IMC) units 1872 and 1882, respectively. Processor 1870 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1876 and 1878; similarly, second processor 1880 includes P-P interfaces 1886 and 1888.
Processors 1870, 1880 may exchange information via a point-to-point (P-P) interface 1850 using P-P interface circuits 1878, 1888. As shown in Figure 18, IMCs 1872 and 1882 couple the processors to respective memories, namely a memory 1832 and a memory 1834, which may be portions of main memory locally attached to the respective processors.[00145] Processors 1870, 1880 may each exchange information with a chipset 1890 via individual P-P interfaces 1852, 1854 using point-to-point interface circuits 1876, 1894, 1886, 1898. Chipset 1890 may optionally exchange information with the coprocessor 1838 via a high-performance interface 1892. In one embodiment, the coprocessor 1838 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.[00146] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.[00147] Chipset 1890 may be coupled to a first bus 1816 via an interface 1896. In one embodiment, first bus 1816 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.[00148] As shown in Figure 18, various I/O devices 1814 may be coupled to first bus 1816, along with a bus bridge 1818 which couples first bus 1816 to a second bus 1820.
In one embodiment, one or more additional processor(s) 1815, such as general-purpose processors, server processors or processing elements for use in a server environment, coprocessors (e.g., security coprocessors), high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device, are coupled to first bus 1816. In one embodiment, second bus 1820 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 1820 including, for example, a keyboard and/or mouse 1822, communication devices 1827 and a storage unit 1828 such as a disk drive or other mass storage device which may include instructions/code and data 1830, in one embodiment. Further, an audio I/O 1824 may be coupled to the second bus 1820. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 18, a system may implement a multi-drop bus or other such architecture.[00149] Referring now to Figure 19, shown is a block diagram of a second more specific exemplary system 1900 in accordance with an embodiment of the present invention. Like elements in Figures 18 and 19 bear like reference numerals, and certain aspects of Figure 18 have been omitted from Figure 19 in order to avoid obscuring other aspects of Figure 19.[00150] Figure 19 illustrates that the processors 1870, 1880 may include integrated memory and I/O control logic ("CL") 1872 and 1882, respectively. Thus, the CL 1872, 1882 include integrated memory controller units and include I/O control logic. Figure 19 illustrates that not only are the memories 1832, 1834 coupled to the CL 1872, 1882, but also that I/O devices 1914 are also coupled to the control logic 1872, 1882.
Legacy I/O devices 1915 are coupled to the chipset 1890.[00151] Referring now to Figure 20, shown is a block diagram of a SoC 2000 in accordance with an embodiment of the present invention. Similar elements in Figure 14 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 20, an interconnect unit(s) 2002 is coupled to: an application processor 2010 which includes a set of one or more cores 1402A-N, which include cache units 1404A-N, and shared cache unit(s) 1406; a system agent unit 1410; a bus controller unit(s) 1416; an integrated memory controller unit(s) 1414; a set of one or more coprocessors 2020 which may include integrated graphics logic, an image processor, an audio processor, and a video processor, general-purpose processors, server processors or processing elements for use in a server environment, security coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device; a static random access memory (SRAM) unit 2030; a direct memory access (DMA) unit 2032; and a display unit 2040 for coupling to one or more external displays.
In one embodiment, the coprocessor(s) 2020 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.[00152] Referring now to Figure 21, shown is a block diagram of a SoC 2000 in accordance with an embodiment of the present invention.Concluding Remarks[00153] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, including, e.g., general-purpose processors, server processors or processing elements for use in a server environment, coprocessors (e.g., security coprocessors), high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units, cryptographic accelerators, fixed function accelerators, machine learning accelerators, networking accelerators, or computer vision accelerators), field programmable gate arrays, or any other processor or processing device, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.[00154] Program code, such as code 1830 illustrated in Figure 18, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion.
For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.[00155] The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.[00156] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.[00157] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing
electronic instructions.[00158] Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.[00159] Instructions to be executed by a processor core according to embodiments of the invention may be embodied in a "generic vector friendly instruction format" which is detailed below. In other embodiments, such a format is not utilized and another instruction format is used; however, the description below of the write-mask registers, various data transformations (swizzle, broadcast, etc.), addressing, etc. is generally applicable to the description of the embodiments of the instruction(s) above. Additionally, exemplary systems, architectures, and pipelines are detailed below. Instructions may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.[00160] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.[00161] Figure 22 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.
In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 22 shows that a program in a high-level language 2202 may be compiled using an x86 compiler 2204 to generate x86 binary code 2206 that may be natively executed by a processor with at least one x86 instruction set core 2216. The processor with at least one x86 instruction set core 2216 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 2204 represents a compiler that is operable to generate x86 binary code 2206 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 2216. Similarly, Figure 22 shows that the program in the high-level language 2202 may be compiled using an alternative instruction set compiler 2208 to generate alternative instruction set binary code 2210 that may be natively executed by a processor without at least one x86 instruction set core 2214 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA).
The instruction converter 2212 is used to convert the x86 binary code 2206 into code that may be natively executed by the processor without an x86 instruction set core 2214. This converted code is not likely to be the same as the alternative instruction set binary code 2210 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 2212 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 2206.[00162] Operations in flow diagrams may have been described with reference to exemplary embodiments of other figures. However, it should be understood that the operations of the flow diagrams may be performed by embodiments of the invention other than those discussed with reference to other figures, and the embodiments of the invention discussed with reference to other figures may perform operations different than those discussed with reference to flow diagrams. Furthermore, while the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.).[00163] One or more parts of embodiments of the invention may be implemented using different combinations of software, firmware, and/or hardware.
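The static binary translation performed by an instruction converter can be caricatured as a table-driven mapping from source-ISA instructions to sequences of target-ISA instructions. The opcode names and mappings below are invented for illustration only; no real x86 or alternative-ISA encodings are modeled:

```python
# Toy sketch of a software instruction converter in the spirit of Figure 22.
# Each source instruction maps to one or more target instructions, which is
# why converted code is typically longer than natively compiled code.
TRANSLATION_TABLE = {
    "PUSH": ["SUB_SP", "STORE"],  # hypothetical 1:2 expansion
    "POP": ["LOAD", "ADD_SP"],
    "ADD": ["ADD"],
    "MOV": ["MOVE"],
}

def convert(source_program):
    """Statically translate a list of source instructions to the target ISA."""
    target_program = []
    for insn in source_program:
        if insn not in TRANSLATION_TABLE:
            # A real converter would fall back to emulation here.
            raise NotImplementedError(f"no translation for {insn}")
        target_program.extend(TRANSLATION_TABLE[insn])
    return target_program

converted = convert(["PUSH", "MOV", "ADD", "POP"])
# converted == ["SUB_SP", "STORE", "MOVE", "ADD", "LOAD", "ADD_SP"]
```

As the text notes, the converted code need not match what the alternative compiler would emit; it only has to accomplish the same general operation using target-ISA instructions.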
Embodiments may be implemented using an electronic device that stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals). Thus, an electronic device (e.g., a computer) may include hardware and software, such as a set of one or more processors coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data. For instance, an electronic device may include non-volatile memory containing the code since the non-volatile memory may persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device. Typical electronic devices also include a set of one or more physical network interface(s) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices.[00164] While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, and can be practiced with modification and alteration within the spirit and scope of the appended claims.
The description is thus to be regarded as illustrative instead of limiting. |
An extended General Purpose Input/Output (eGPIO) scheme is disclosed. In some implementations, an input/output (I/O) boundary scan cell comprises an output path to route output signals from a first voltage domain and signals from a second voltage domain to an I/O pad operating in a pad voltage domain, the output path having a first level shifter to up shift the output signals from the first voltage domain or the second voltage domain to the pad voltage domain; an input path to receive input signals from the I/O pad, the input path having a second level shifter to down shift the input signals from the pad voltage domain to the second voltage domain; and test logic to test signals in the first voltage domain and the second voltage domain. |
1. An input/output (I/O) boundary scan unit, comprising: an output path for routing the output signal from the first voltage domain and the signal from the second voltage domain to an I/O pad operating in the pad voltage domain, the output path having a first level shifter to up-convert the output signal from the first voltage domain or the second voltage domain to the pad voltage domain; an input path for receiving an input signal from the I/O pad, the input path having a second level shifter to down-convert the input signal from the pad voltage domain to the second voltage domain; and test logic to test the signals in the first voltage domain and the second voltage domain. 2. The I/O boundary scan unit according to claim 1, further comprising: an input enable path to process the signal in at least one of the first voltage domain and the second voltage domain, and to output the input enable signal in the pad voltage domain to the I/O pad. 3. The I/O boundary scan unit according to claim 1, further comprising: an output enable path to process a signal in at least one of the first voltage domain and the second voltage domain, and to output the output enable signal in the pad voltage domain to the I/O pad. 4. The I/O boundary scan unit according to claim 1, further comprising: a driving strength and pull-up control circuit to process the signal in at least one of the first voltage domain and the second voltage domain, and to output the driving strength and pull-up control signal in the pad voltage domain to the I/O pad. 5. The I/O boundary scan unit of claim 1, wherein the first voltage domain is collapsible in a low power mode. 6. The I/O boundary scan unit according to claim 1, wherein the second voltage domain remains on in a low power mode. 7. The I/O boundary scan unit according to claim 1, wherein the voltage level of the pad voltage domain is higher than the first maximum voltage level of the first voltage domain. 8.
The I/O boundary scan unit of claim 7, wherein the voltage level of the pad voltage domain is higher than the second maximum voltage level of the second voltage domain. 9. The I/O boundary scan unit according to claim 1, wherein the output path comprises: a boundary scan (BSCAN) register capable of operating in the first voltage domain; and an inverter capable of operating in the second voltage domain. 10. A method of using an input/output (I/O) boundary scan unit, comprising: routing, via the output path in the I/O boundary scan unit, the output signal from the first voltage domain and the signal from the second voltage domain to the I/O pad operating in the pad voltage domain, the output path having a first level shifter to up-convert the output signal from the first voltage domain or the second voltage domain to the pad voltage domain; receiving an input signal from the I/O pad through an input path, the input path having a second level shifter to down-convert the input signal from the pad voltage domain to the second voltage domain; and using the test logic in the I/O boundary scan unit to test the signals in the first voltage domain and the second voltage domain. 11. The method according to claim 10, further comprising: enabling the input path to process signals in at least one of the first voltage domain and the second voltage domain, and outputting the input enable signal in the pad voltage domain to the I/O pad. 12. The method according to claim 10, further comprising: enabling the output path to process signals in at least one of the first voltage domain and the second voltage domain, and outputting the output enable signal in the pad voltage domain to the I/O pad. 13. The method according to claim 10, further comprising: processing signals using a drive strength and pull-up control circuit in at least one of the first voltage domain and the second voltage domain; and using the drive strength and pull-up control circuit to output the drive strength and pull-up control signal in
the pad voltage domain to the I/O pad. 14. The method of claim 10, wherein the first voltage domain is collapsible in a low power mode. 15. The method of claim 10, wherein the second voltage domain remains on in a low power mode. 16. The method according to claim 10, wherein the voltage level of the pad voltage domain is higher than the first maximum voltage level of the first voltage domain. 17. The method of claim 16, wherein the voltage level of the pad voltage domain is higher than the second maximum voltage level of the second voltage domain. 18. The method of claim 10, wherein the output path comprises: a boundary scan (BSCAN) register capable of operating in the first voltage domain; and an inverter capable of operating in the second voltage domain. 19. An input/output (I/O) boundary scan unit, comprising: a device for routing the output signal from the first voltage domain and the signal from the second voltage domain to the I/O pad operating in the pad voltage domain through the output path in the I/O boundary scan unit, the output path having a first level shifter to up-convert the output signal from the first voltage domain or the second voltage domain to the pad voltage domain; a device for receiving an input signal from the I/O pad through an input path, the input path having a second level shifter to down-convert the input signal from the pad voltage domain to the second voltage domain; and a device for using the test logic in the I/O boundary scan unit to test signals in the first voltage domain and the second voltage domain. 20. The I/O boundary scan unit according to claim 19, further comprising: a device for enabling the input path to process signals in at least one of the first voltage domain and the second voltage domain, and for outputting the input enable signal in the pad voltage domain to the I/O pad. 21. The I/O boundary scan unit according to claim 19, further comprising: a device for enabling the output path to process signals in at least one of the first voltage domain and the
second voltage domain, and for outputting the output enable signal in the pad voltage domain to the I/O pad. 22. The I/O boundary scan unit according to claim 19, further comprising: a device for processing signals using a drive strength and pull-up control circuit in at least one of the first voltage domain and the second voltage domain; and a device for using the drive strength and pull-up control circuit to output the drive strength and pull-up control signal in the pad voltage domain to the I/O pad. 23. The I/O boundary scan unit of claim 19, wherein the first voltage domain is collapsible in a low power mode. 24. The I/O boundary scan unit of claim 19, wherein the second voltage domain remains on in a low power mode. 25. The I/O boundary scan unit according to claim 19, wherein the voltage level of the pad voltage domain is higher than the first maximum voltage level of the first voltage domain. 26. The I/O boundary scan unit of claim 25, wherein the voltage level of the pad voltage domain is higher than the second maximum voltage level of the second voltage domain. 27. The I/O boundary scan unit of claim 19, wherein the output path comprises: a boundary scan (BSCAN) register capable of operating in the first voltage domain; and an inverter capable of operating in the second voltage domain.
Extended GPIO (eGPIO) Cross References to Related Applications This patent application claims priority to provisional application number 62/642,702, entitled "Extended GPIO (eGPIO)," filed on March 14, 2018, and to non-provisional application number 16/101,586, entitled "Extended GPIO (eGPIO)," filed on August 13, 2018. Technical Field Aspects of the present disclosure generally relate to input/output (I/O) of semiconductor chips, and more specifically to extended general purpose I/O (eGPIO). Background Generally, the I/O pads (also referred to as pads) of a semiconductor chip are configured to operate in a voltage domain with a higher voltage range (commonly referred to as the pad voltage domain). The core circuit devices of the semiconductor chip are configured to operate in a voltage domain with a lower voltage range (commonly referred to as the core voltage domain). In addition, many semiconductor chips support multiple core voltage domains; some of these core voltage domains are collapsible during a low power mode, while other core voltage domains remain on. Therefore, the input/output (I/O) architecture of a semiconductor chip is usually designed to provide an interface that supports routing and processing signals in both the pad voltage domain and the core voltage domains. Summary The following presents a simplified overview of one or more embodiments to provide a basic understanding of such embodiments. This summary is not an exhaustive overview of all anticipated implementations, and is neither intended to identify key or important elements of all implementations, nor is it intended to limit the scope of any or all implementations.
The sole purpose of this summary is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. In some embodiments, an input/output (I/O) boundary scan unit includes: an output path for routing the output signal from a first voltage domain and the signal from a second voltage domain to an I/O pad operating in a pad voltage domain, the output path having a first level shifter to up-convert the output signal from the first voltage domain or the second voltage domain to the pad voltage domain; an input path for receiving the input signal from the I/O pad, the input path having a second level shifter to down-convert the input signal from the pad voltage domain to the second voltage domain; and test logic for testing the signals in the first voltage domain and the second voltage domain. In some embodiments, the I/O boundary scan unit further includes an input enable path to process signals in at least one of the first voltage domain and the second voltage domain, and to output the input enable signal in the pad voltage domain to the I/O pad. In some embodiments, the I/O boundary scan unit further includes an output enable path to process signals in at least one of the first voltage domain and the second voltage domain, and to output the output enable signal in the pad voltage domain to the I/O pad. In some embodiments, the I/O boundary scan unit further includes a drive strength and pull-up control circuit to process signals in at least one of the first voltage domain and the second voltage domain, and to output the drive strength and pull-up control signals in the pad voltage domain to the I/O pad. In some embodiments, the first voltage domain may be collapsed in the low power mode.
In addition, the second voltage domain can remain on in the low power mode. In some embodiments, the voltage level of the pad voltage domain is higher than the first maximum voltage level of the first voltage domain. Likewise, the voltage level of the pad voltage domain may be higher than the second maximum voltage level of the second voltage domain. In order to achieve the foregoing and related objects, one or more embodiments include the features fully described below and specifically pointed out in the claims. The following description and drawings set forth in detail certain exemplary aspects of one or more embodiments. However, these aspects only indicate a few of the various ways in which the principles of the various embodiments can be adopted, and the description of the embodiments is intended to include all these aspects and their equivalents. Description of the Drawings Figure 1 is a conventional I/O scheme. Figure 2 is another conventional I/O scheme. Figure 3 is an embodiment of an extended general purpose I/O (eGPIO) boundary scan unit. Figure 4 shows one embodiment of an input path 400 in an exemplary semiconductor chip. Figure 5 shows one embodiment of an input enable path 500 in an exemplary semiconductor chip. Figure 6 shows one embodiment of an output or output enable path 600 in an exemplary semiconductor chip. Figure 7 shows one embodiment of a drive strength and pull-up control circuit 700 in an exemplary semiconductor chip. Figure 8 shows an embodiment of the design of the test logic 800 in the boundary scan unit of an exemplary semiconductor chip. Figure 9 shows a flowchart illustrating a method of using an I/O boundary scan unit. Detailed Description The detailed description set forth below in conjunction with the accompanying drawings is intended as a description of various configurations, and is not intended to represent the only configurations in which the concepts described herein can be practiced.
The detailed description includes specific details to provide a thorough understanding of various concepts. However, it will be obvious to those skilled in the art that these concepts can be practiced without these specific details. In some cases, well-known structures and components are shown in block diagram form to avoid obscuring such concepts.

Semiconductor chips usually include many inputs/outputs (I/O). The I/O of the semiconductor chip is coupled to I/O pads (which may simply be referred to as "pads") that provide an interface for routing signals from the circuit devices inside the semiconductor chip (also called core circuit devices) to the I/O pads and/or routing signals from the I/O pads to the circuit devices inside the semiconductor chip. Many complex semiconductor chips support at least one low-power mode, in which some circuits or subsystems in the semiconductor chip can be power collapsed or shut down while other subsystems in the semiconductor chip remain on. Subsystems that remain on during the low power mode may also be referred to as "always-on" subsystems (for example, sensor subsystems, audio subsystems, and/or wireless local area network (WLAN) connectivity subsystems). The I/Os of an always-on subsystem are usually assigned dedicated pads because these I/Os remain on during the low-power mode. An example of such a conventional I/O architecture is shown in FIG. 1.

FIG. 1 shows a conventional I/O architecture in a semiconductor chip. The semiconductor chip 100 includes an always-on subsystem 110, a first level shifter 112, a first high voltage (HV) wiring 114, a dedicated I/O 118, a subsystem 120, a second level shifter 122, a second wiring 124, and a general purpose I/O (GPIO) 128. The always-on subsystem 110 is coupled to the dedicated I/O 118 via the first level shifter 112 and the first wiring 114. Likewise, the subsystem 120 is coupled to the GPIO 128 via the second level shifter 122 and the second wiring 124.
As the name implies, even when the semiconductor chip 100 is in a low power mode, the always-on subsystem 110 is always on. Unlike the always-on subsystem 110, the subsystem 120 can be powered down or shut down when the semiconductor chip 100 enters a low power mode. The dedicated I/O 118 is specifically allocated (or dedicated) to the always-on subsystem 110 because the second wiring 124 to the GPIO 128 remains power collapsed during the low power mode, so the GPIO 128 cannot be used to multiplex the always-on subsystem I/O.

Although such conventional I/O architectures are easy to design, several problems are associated with them. One problem is that the number of I/O pads increases, yet the number of I/O pads is limited by the physical size of the chip and package. A second problem is the lack of flexibility in reusing a dedicated I/O as a GPIO in applications that do not use one or more of the always-on subsystems. As a result, the pads assigned to an unused always-on subsystem cannot be reused.

FIG. 2 shows another conventional I/O architecture in another semiconductor chip. The semiconductor chip 200 includes: a first subsystem 210 that is always on, a first level shifter 212, a first wiring 214, a second subsystem 220 that is not always on, a second level shifter 222, a third level shifter 226, a multiplexer 230, and a GPIO 228. The always-on subsystem 210 is coupled to the first input of the multiplexer 230 via the first level shifter 212 and the wiring 214. The first level shifter 212 is a core-to-pad level shifter that outputs signals in the pad voltage domain, whose voltage is generally higher than that of the core voltage domains. The wiring 214 includes a high voltage (HV) wiring to route signals in the pad voltage domain. The multiplexer 230 also operates in the pad domain.
The output of the multiplexer 230 is coupled to the GPIO 228. The multiplexer 230 may be implemented using one or more high voltage cells. The subsystem 220 is coupled to the second input of the multiplexer 230 via the second level shifter 222 and the third level shifter 226. The second level shifter 222 may be a core-to-core level shifter, and the third level shifter 226 may be a core-to-pad level shifter.

In operation, the multiplexer 230 may select a signal from the always-on subsystem 210 or a signal from the subsystem 220 based on the I/O selection signal 232. The multiplexer 230 outputs the selected signal to the GPIO 228. In this way, the always-on subsystem 210 and the non-always-on subsystem 220 can share the GPIO 228. A problem with this conventional I/O architecture, however, is that the core-pad interface logic must be routed in the pad voltage domain, which is usually higher than the voltage of the other voltage domains in the core of the semiconductor chip 200. Such routing requires additional core-to-pad level shifters (for example, the third level shifter 226), high voltage combinational cells (for example, the multiplexer 230), and wire routing (for example, the high voltage wiring 214), which in advanced technology nodes have increasingly become a yield risk and a silicon area overhead.

Therefore, there is a need in the art for an I/O architecture that supports multi-voltage I/O multiplexing solutions without adding a large number of high-voltage circuit devices, which not only occupy valuable silicon area but also increase yield risk. In the following, some implementations of the new I/O architecture are described. The new I/O architecture provides multi-voltage I/O boundary scan cells to reduce high voltage wiring and expand multiplexing capabilities. Such an I/O architecture may also be referred to as extended GPIO (eGPIO). FIG.
3 shows a conceptual block diagram of an embodiment of the eGPIO boundary scan unit 300 in a semiconductor chip. The eGPIO boundary scan (BSCAN) unit 300 provides an interface between the pads of the semiconductor chip and the subsystems in the core of the semiconductor chip. The pad is in the pad voltage domain (PX). The subsystems in the core may operate in one or more core voltage domains (e.g., CX, MX, etc.) lower than the pad voltage domain. The eGPIO BSCAN unit 300 includes an input path 310, an output path 320, an output enable path 330, an input enable path 340, test logic 350, and a drive strength and pull-up control circuit 360. It should be understood that the eGPIO BSCAN unit 300 may include additional input paths, output paths, input enable paths, output enable paths, and test logic. However, these additional input/output paths are not shown in FIG. 3 to avoid obscuring the figure.

As shown in FIG. 3, the input path 310, the output path 320, the output enable path 330, the input enable path 340, and the test logic 350 overlap with the drive strength and pull-up control circuit 360. The overlap of these blocks means that at least some of the high voltage level shifters and infrastructure of the drive strength and pull-up control circuit 360 are shared with the input path 310, the output path 320, the output enable path 330, the input enable path 340, and the test logic 350. Likewise, the test logic 350 overlaps the input path 310, the output path 320, the output enable path 330, the input enable path 340, and the drive strength and pull-up control circuit 360. This overlap also means that at least some of the high-voltage level shifters and infrastructure of the test logic 350 are shared with the input path 310, the output path 320, the output enable path 330, the input enable path 340, and the drive strength and pull-up control circuit 360.

In some embodiments, the drive strength and pull-up control circuit 360 receives control signals from multiple core voltage domains. Based on the received control signals, the drive strength and pull-up control circuit 360 generates drive strength and pull-up control signals in the pad voltage domain. Additionally, the drive strength and pull-up control circuit 360 may generate control signals to be routed to other blocks (such as the input path 310, the output path 320, etc.) within the unit 300 to provide drive strength and pull-up control. Therefore, high-voltage wire routing can be greatly reduced or optimized. More details of one embodiment of the drive strength and pull-up control circuit 360 are discussed below with reference to FIG. 7.

In some embodiments, the input path 310 receives an input signal from the pad. When received, the input signal is in PX. Therefore, the input path 310 may include a pad-to-core level shifter to down-convert the input signal to one of the core voltage domains. In this way, high-voltage wire routing in the input path can be eliminated. More details of one embodiment of the input path 310 are discussed below with reference to FIG. 4.

In some embodiments, the output path 320 receives output signals from the core. The output signals can come from a subsystem that is always on and/or one that can be power collapsed in low power mode. The output signal is in one or two of the core voltage domains. For example, if the output signal comes from an always-on subsystem, the output signal is in the MX voltage domain. If the output signal comes from a subsystem that can be power collapsed in low power mode, the output signal is in the CX voltage domain. Therefore, the output path 320 may include a core-to-pad level shifter to up-convert the output signal to PX before sending the output signal to the pad. More details of one embodiment of the output path 320 are discussed below with reference to FIG.
6.

In some embodiments, the output enable path 330 receives control signals from the core. The control signals can be in CX and/or MX. Based on the control signals, the output enable path 330 generates an output enable signal, which is level-shifted up to PX before being sent to the pad. More details of one embodiment of the output enable path 330 are discussed below with reference to FIG. 6.

In some embodiments, the input enable path 340 receives control signals from the core. The control signals can be in CX and/or MX. Based on the control signals, the input enable path 340 generates an input enable signal, which is level-shifted up to PX before being sent to the pad. More details of one embodiment of the input enable path 340 are discussed below with reference to FIG. 5.

In some embodiments, the test logic 350 receives control signals from the core. The received control signals may include some of the control signals received by the other blocks, such as the input enable path 340, the output enable path 330, the output path 320, and the input path 310. During the test mode, the test logic 350 is configured to use the control signals of the aforementioned blocks to exercise various signal paths to screen for defective signal paths (for example, stuck-at-one or stuck-at-zero). Note that the test logic 350 can test signal paths in two or more core voltage domains (e.g., CX and MX). More details of one embodiment of the test logic 350 are discussed below with reference to FIG. 8.

Unlike the conventional I/O architectures shown in FIG. 1 and FIG. 2, all control signals and data signals in the eGPIO BSCAN unit 300 are brought in in the core voltage domains (for example, MX, CX) instead of the high voltage PX. In the eGPIO BSCAN unit 300, only the final output stage signal to the pad is level-shifted to PX, instead of level-shifting signals internally and performing computation in the PX domain.
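This final-stage-only level conversion can be sketched as a small behavioral model. The rail voltages and the single-mux selection below are hypothetical illustrations, not values or structure taken from the disclosure:

```python
# Hypothetical rail voltages for illustration only; the disclosure does not
# give numeric values for CX, MX, or PX.
RAILS = {"CX": 0.75, "MX": 0.75, "PX": 1.8}

def level_shift(bit, src, dst):
    """Model a level shifter: the logic value is unchanged, only the
    voltage domain (and hence the signal swing) changes."""
    assert src in RAILS and dst in RAILS
    return (bit, dst)

def egpio_output_stage(core_out, gpio_core_out, egpio_en):
    """All multiplexing happens in the core domain (MX/CX); only the
    final selected signal is shifted up to PX on its way to the pad."""
    selected = gpio_core_out if egpio_en else core_out  # core-domain MUX
    return level_shift(selected, "MX", "PX")            # single PX crossing
```

In the conventional architecture of FIG. 2, by contrast, both mux inputs would already be in PX, so the selection itself would require high-voltage cells.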
In addition, the architecture shown in FIG. 3 allows I/O dedicated to the always-on subsystem to be reused for collapsible-domain I/O signals controlled by the main application processor. Additional multiplexers are provided in the input path 310, the output path 320, the output enable path 330, the input enable path 340, and the drive strength and pull-up control circuit 360 to allow switching between test mode and functional mode. Therefore, the multiplexing capability of the eGPIO BSCAN unit 300 is greatly expanded to support I/O multiplexing and testing solutions for multi-power-domain I/O signals. To further illustrate the concept, some implementations of each of the blocks in the eGPIO BSCAN unit 300 are discussed in detail below.

FIG. 4 shows one embodiment of an input path 400 in an exemplary semiconductor chip. In some embodiments, at least three (3) voltage domains are involved in the input path 400. In the current example, the three voltage domains are the pad domain (PX), the core domain (CX), and an always-on power domain, such as the memory domain (MX). The voltage range of PX is generally higher than that of the other domains because PX serves the I/O pads that connect to wiring and/or other chips outside the semiconductor chip. In addition, PX cannot be power collapsed; in other words, whenever the semiconductor chip is powered on, PX remains on. In the current example, the voltage range of MX is similar to CX. However, CX can be power collapsed in low power mode, while MX remains on. Therefore, the circuit devices that remain powered on in MX during the low power mode may be referred to as an "island."

Referring to FIG. 4, the input path 400 includes a first level shifter 410, a second level shifter 420, an inverter 430, a multiplexer (MUX) 440, and a third level shifter 450. The level shifters 410 and 420 are configured to convert signals from CX to MX. The inverter 430 and the MUX 440 operate in MX.
The level shifter 450 is configured to convert signals from PX to MX.

In operation, the level shifter 450 receives the input signal padside_core_in in PX and down-converts it to MX. Before the internal signal core_in is input to the MUX 440, the level shifter 410 converts core_in from the CX level to MX. In response to the control signal from the level shifter 420, the MUX 440 can select either the down-converted signal from the level shifter 450 or the level-shifted internal signal from the level shifter 410. The level shifter 420 receives the boundary scan input bypass control signal bsin_bypass in CX and level-shifts bsin_bypass to MX to generate the control signal. The inverter 430 receives the low power control signal freezio and generates an inverted version of freezio that is input to the level shifters 410 and 420 to enable them. It should be understood that once the level shifter 450 has down-converted padside_core_in from PX to MX, the remaining processing in the input path 400 is performed in MX, thereby eliminating the use of PX-domain circuit devices in the rest of the input path 400.

FIG. 5 shows one embodiment of an input enable path 500 in an exemplary semiconductor chip. Similar to the input path 400 in FIG. 4, the input enable path 500 also involves the three voltage domains discussed above, namely, PX, MX, and CX. Referring to FIG. 5, the input enable path 500 includes a first level shifter 510, a second level shifter 520, an inverter 530, a multiplexer 540, and a third level shifter 550. The inverter 530 and the MUX 540 are configured to operate in MX. The level shifters 510 and 520 are configured to convert signals from CX to MX. The level shifter 550 is configured to up-convert signals from MX to PX.

During operation, the inverter 530 inverts freezio and applies the inverted freezio to enable the level shifters 510 and 520.
The level shifter 520 receives another drive strength and pull-up DFT (design for test) control signal, test_drive_pull_ctl, in CX, level-shifts test_drive_pull_ctl to MX, and then inputs the shifted test_drive_pull_ctl to the MUX 540 to control the input selection of the MUX 540. The MUX 540 receives two input signals. One input signal is core_ie in MX. The other input signal is the level-shifted test_core_ie from the level shifter 510, which converts test_core_ie from CX to MX. Based on test_drive_pull_ctl, the MUX 540 selects one of core_ie and the level-shifted test_core_ie. The MUX 540 outputs the selected signal to the level shifter 550. The level shifter 550 up-converts the output signal of the MUX 540 from MX to PX as padside_core_ie. It should be understood that most of the signal processing of the input enable path 500 is performed in MX, and only the final result is up-converted by the level shifter 550 to generate padside_core_ie, ready to be sent to the pad. Therefore, the use of PX-domain circuit devices is minimized over most of the input enable path 500.

FIG. 6 shows one embodiment of an output or output enable path 600 in an exemplary semiconductor chip. Similar to the input path 400 in FIG. 4, the output or output enable path 600 also involves the three voltage domains discussed above, namely, PX, MX, and CX. In addition, similar to the input enable path 500 in FIG. 5, the output or output enable path 600 includes a first level shifter 610, a second level shifter 620, an inverter 630, a multiplexer 640, and a third level shifter 650. The inverter 630 and the MUX 640 are configured to operate in MX. The level shifters 610 and 620 are configured to convert signals from CX to MX. The level shifter 650 is configured to up-convert signals from MX to PX. Additionally, a dashed box 605 in FIG.
6 shows circuit devices reused by the test logic in the output or output enable path.

In some embodiments, block 605 includes an OR gate 660, a first MUX 670, a second MUX 680, a boundary scan (BSCAN) register 685, and a level shifter 690. The level shifter 690 converts core_out (if the path 600 is configured as an output path) or core_oe (if the path 600 is configured as an output enable path) from the always-on island power domain MX to CX for DFT computation; the level-shifted signal is then input to the MUX 680. The MUX 680 receives a second data input, gpio_core_out (if the path 600 is configured as an output path) or gpio_core_oe (if the path 600 is configured as an output enable path), in CX from the collapsible power domain. Based on egpio_en, the MUX 680 selects one of its data inputs and forwards the selected data input to the BSCAN register 685 operating in CX.

In some embodiments, gpio_core_out (if the path 600 is configured as an output path) or gpio_core_oe (if the path 600 is configured as an output enable path) in CX is also input to the MUX 670. The MUX 670 receives a second input signal, test_core_out (if the path 600 is configured as an output path) or test_core_oe (if the path 600 is configured as an output enable path). Based on the DFT control signal test_mode, the MUX 670 selects between the DFT input signal and the functional input from the CX domain, and outputs the selected signal to the level shifter 610. The control signals egpio_en and test_mode are both input to the OR gate 660, which outputs its result to the level shifter 620. The level-shifted output of the OR gate 660 is input to the MUX 640 to select one of the data inputs of the MUX 640 (i.e., core_out or core_oe, and the output of the level shifter 610). The output of the MUX 640 is forwarded to the level shifter 650. The level shifter 650 up-converts the output of the MUX 640 to PX and then forwards the up-converted signal to the pad.
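The selection chain just described (MUX 670, OR gate 660, MUX 640) can be sketched behaviorally. The signal names follow the text, but the model itself is only an illustration, with the level shifters 610/620/650 omitted for clarity:

```python
def output_path_select(core_out, gpio_core_out, test_core_out,
                       egpio_en, test_mode):
    """Model the output path selection of FIG. 6: MUX 670 picks the DFT
    or functional input, OR gate 660 forms the select for MUX 640, and
    MUX 640 falls back to the always-on core_out when neither mode is
    enabled. (Level shifting between CX/MX/PX is not modeled here.)"""
    mux670 = test_core_out if test_mode else gpio_core_out  # MUX 670
    sel = egpio_en or test_mode                             # OR gate 660
    return mux670 if sel else core_out                      # MUX 640
```

Asserting test_mode therefore always propagates the DFT signal regardless of egpio_en, which is the priority behavior of this arrangement.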
It should be understood that the OR gate 660, the MUX 670, and the MUX 640 create a priority multiplexing scheme: in some embodiments, when test_mode is enabled, the scheme gives higher priority to the propagation of the DFT signal. Similar to the input path 400 in FIG. 4 and the input enable path 500 in FIG. 5, most of the processing in the output or output enable path 600 is executed in CX or MX, so that the use of PX-domain circuit devices in the output or output enable path 600 is minimized. It should also be understood that the path 600 includes circuit devices capable of operating in CX (for example, the BSCAN register 685) and circuit devices capable of operating in MX (for example, the inverter 630 and the MUX 640). Having different circuit devices capable of operating in different voltage domains (e.g., CX, MX) within the path 600 (and therefore the eGPIO boundary scan unit 300) provides greater flexibility in the design.

FIG. 7 shows one embodiment of the drive strength and pull-up control circuit 700 in an exemplary semiconductor chip. Similar to the input path 400 in FIG. 4, the drive strength and pull-up control circuit 700 also involves the three voltage domains discussed above, namely, PX, MX, and CX. In addition, similar to the input enable path 500 in FIG. 5, the drive strength and pull-up control circuit 700 includes a first level shifter 710, a second level shifter 720, an inverter 730, a multiplexer 740, and a third level shifter 750. The inverter 730 and the MUX 740 are configured to operate in MX. The level shifters 710 and 720 are configured to convert signals from CX to MX. The level shifter 750 is configured to up-convert signals from MX to PX.

In some embodiments, the drive strength and pull-up control circuit 700 further includes the circuit devices in the block 705 shown in FIG. 7. The circuit devices within block 705 may be configured to operate in CX. Specifically, block 705 includes an OR gate 760 and a MUX 770.
The MUX 770 receives two inputs, namely, egpio_drive_strength from the collapsible power domain and test_mode_drive_strength from the DFT controller. The MUX 770 selects one of the two inputs based on the control signal test_mode_drive_strength_ctl and forwards the selected input to the level shifter 710, which converts the selected signal from the CX level to MX. The signal test_mode_drive_strength_ctl and the eGPIO enable signal egpio_en are input to the OR gate 760. The OR gate 760 forwards its output to the level shifter 720, which converts the output from the CX level to MX.

Similar to the input enable path 500 in FIG. 5 and the output or output enable path 600 in FIG. 6, the inverter 730 and the MUX 740 of the drive strength and pull-up control circuit 700 operate in MX, and the level shifters 710 and 720 level-shift from the CX domain to the MX domain. Specifically, the inverter 730 inverts the control signal freezio and forwards the inverted freezio to the level shifters 710 and 720 to enable them. The output of the level shifter 710 and drive_strength (the drive strength and pull-up control signal in MX) are input to the MUX 740. The MUX 740 selects one of its inputs based on the test_drive_pull_ctl signal from the level shifter 720. Finally, the level shifter 750 up-converts the output signal of the MUX 740 from MX to PX and then forwards the up-converted signal to the pad. Again, it should be understood that most of the signal processing in the drive strength and pull-up control circuit 700 is performed in the lower-voltage domains (for example, MX and/or CX). Therefore, the need for complex processing circuits in the PX domain is minimized.

FIG. 8 shows an embodiment of the design of the test logic 800 in the boundary scan unit of an exemplary semiconductor chip. The design of the test logic 800 involves only the lower-voltage core domains, for example, CX and MX in the current example.
The test logic 800 includes an XOR gate 850 and four (4) level shifters 810 to 840. The outputs of the level shifters 810 to 840 are all coupled to the inputs of the XOR gate 850. In some embodiments, the XOR gate 850 operates in CX. Internal signals in MX may be input to the level shifters 810 to 840, which convert the internal signals from the MX level to CX before outputting the level-shifted signals to the XOR gate 850. Specifically, the core_ie_mx, core_ie, drive_strength_control, and pull_control signals are input to the level shifters 810, 830, 820, and 840, respectively. The XOR gate 850 then outputs the signal bsm_dft_obs, which can be used by other test circuit devices in the core of the semiconductor chip.

In one embodiment, during testing of the semiconductor chip, all of the core_ie_mx, core_ie, drive_strength_control, and pull_control signals are driven to logic 0. Under this test condition, the output signal bsm_dft_obs of the XOR gate 850 is expected to be zero. If any one of the aforementioned signals is stuck at logic 1, bsm_dft_obs will become logic 1. In other embodiments, the aforementioned signals may be driven to other values or combinations of values to provide additional screening of the semiconductor chip.

It should be understood that, because the internal signals of the boundary scan unit are maintained in the core voltage domains, circuit devices in the core voltage domains (for example, CX and MX in the current example) can be used to implement the test logic 800. As discussed in detail above with reference to FIGS. 4 to 7, the signals inside the various parts of the boundary scan unit are processed in the core voltage domains until a signal is ready to be sent to the pad, at which point the signal is level-shifted to PX (the higher voltage domain).

FIG. 9 shows a flowchart for explaining a method of using an I/O boundary scan unit.
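The stuck-at screening performed by the XOR gate 850 can be sketched as follows (an illustrative model of the described behavior, not the actual circuit):

```python
def bsm_dft_obs(core_ie_mx, core_ie, drive_strength_control, pull_control):
    """Model XOR gate 850: the observation output is the parity of the
    four level-shifted control signals."""
    return core_ie_mx ^ core_ie ^ drive_strength_control ^ pull_control

# Screening condition from the text: drive all four inputs to logic 0
# and expect a 0 observation; a single stuck-at-1 input flips it to 1.
```

Because the observation is a parity, this pattern catches any odd number of stuck-at-1 faults; the additional drive patterns mentioned above can be used to extend coverage.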
The I/O boundary scan unit may be an eGPIO boundary scan unit, some embodiments of which have been described in detail above. The method can be implemented using hardware, software, firmware, or any combination of the above. It should be understood that the steps of the method described below can be executed sequentially in various orders or concurrently.

The method starts at block 910, in which the output signal from the first voltage domain and the signal from the second voltage domain are routed, through the output path in the I/O boundary scan unit, to an I/O pad operating in the pad voltage domain. In some embodiments, the output path has a first level shifter to up-convert the output signal from the first voltage domain or the second voltage domain to the pad voltage domain.

The method then moves to block 920, in which an input signal is received from the I/O pad through the input path. In some embodiments, the input path has a second level shifter to down-convert the input signal from the pad voltage domain to the second voltage domain.

Finally, the method proceeds to block 930, where the test logic in the I/O boundary scan unit is used to test the signals in the first voltage domain and the second voltage domain.

The above description of the present disclosure is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to the present disclosure will be obvious to those skilled in the art, and the general principles defined herein can be applied to other modifications without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the examples described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
An integrated force sensing element (100, 700) includes a piezoelectric sensor (102, 602) formed in an integrated circuit (IC) chip and a strain gauge (104, 504) at least partially overlying the piezoelectric sensor, where the piezoelectric sensor is able to flex. A human-machine interface (1000) using the integrated force sensing element is also disclosed and may include a conditioning circuit (1006), temperature gauge (1010), FRAM (1008) and a processor core (1004). |
1. An integrated force sensing element comprising:
a piezoelectric sensor formed in an integrated circuit (IC) chip; and
a strain gauge at least partially overlying the piezoelectric sensor, wherein the piezoelectric sensor is capable of bending.
2. The integrated force sensing element of claim 1, wherein the IC chip is thinned to 75 microns or less.
3. The integrated force sensing element of claim 2, wherein the IC chip is thinned to 50 microns or less.
4. The integrated force sensing element of claim 2, wherein the IC chip is packaged in a manner that creates a cavity adjacent to the piezoelectric sensor.
5. The integrated force sensing element of claim 4, wherein the IC chip is attached to a lead frame such that the piezoelectric sensor is adjacent to a gap between a bottom of the lead frame and a printed circuit board.
6. The integrated force sensing element of claim 4, wherein the IC chip is attached to a lead frame that has been etched to create the cavity underlying the piezoelectric sensor.
7. The integrated force sensing element of claim 1, wherein the piezoelectric sensor extends over a MEMS cavity filled with a compliant material, the integrated force sensing element further comprising a compliant layer deposited over the piezoelectric sensor.
8. The integrated force sensing element of claim 1, wherein the piezoelectric sensor comprises a ferroelectric material selected from the group consisting of lead zirconate titanate (PZT), aluminum nitride (AlN), and zinc oxide (ZnO).
9. The integrated force sensing element of claim 1, wherein the strain gauge comprises silicon chromium (SiCr).
10. The integrated force sensing element of claim 1, wherein the strain gauge comprises four thin film resistors forming a Wheatstone bridge.
11. The integrated force sensing element of claim 1, further comprising a signal conditioning circuit integrated on the IC chip to condition a first signal from the piezoelectric sensor and a second signal from the strain gauge.
12. A human-machine interface (HMI) comprising:
an integrated circuit (IC) chip including an integrated force sensor including a piezoelectric sensor and a strain gauge, the strain gauge at least partially overlying the piezoelectric sensor, wherein the piezoelectric sensor is capable of bending;
a processor core; and
a conditioning circuit attached to receive output from the piezoelectric sensor and the strain gauge and to provide an adjusted signal to the processor core, the processor core being operatively connected to send control signals to a device.
13. The HMI of claim 12, wherein the IC chip is thinned to 75 microns or less.
14. The HMI of claim 13, wherein the piezoelectric sensor is adjacent to a cavity that allows the piezoelectric sensor to bend.
15. The HMI of claim 14, wherein the cavity is provided by one of an etched lead frame to which the IC chip is attached and a lead frame that provides a gap between the IC chip and a printed circuit board (PCB) to which the lead frame is attached.
16. The HMI of claim 14, wherein the IC chip further comprises a ferroelectric random access memory (FRAM).
17. The HMI of claim 16, wherein the piezoelectric sensor and the FRAM each comprise a lead zirconate titanate (PZT) layer formed during FRAM processing.
18. The HMI of claim 14, further comprising a temperature sensor.
19. The HMI of claim 14, further comprising a communication interface.
20. The HMI of claim 14, wherein at least one of the processor core, the conditioning circuit, the FRAM, a temperature sensor, and a communication interface is integrated on the IC chip having the piezoelectric sensor and the strain gauge.
Integrated force sensing element

Technical Field

The disclosed embodiments generally relate to the field of human machine interfaces (HMI). More specifically, and not by way of limitation, the present invention relates to an integrated force sensing element for use in an HMI.

Background

Resistive and capacitive touch screens have been developed for LCD displays as a form of user interface, i.e., a graphical interface. These touch screens are suitable for larger array applications but are expensive to manufacture. When the array is large, the touch screen is cost-effective on a per-pixel basis. However, for discrete "button" applications, touch screens tend to be more expensive than mechanical buttons. Prior art HMIs are subject to the following restrictions:
● Because it relies on the moisture content of the human body to detect an increase in dielectric constant, a capacitive touch HMI often fails when the user wears gloves. If adjusted for high sensitivity, detection of spurious water or metal objects passing by may be problematic.
● Resistive touch technology depends on conductive elements separated by a thin layer of polymer. The conductive elements may form a resistive leakage path after repeated use or after long-term exposure to pressure and/or humidity. Abrasion of the conductive elements after repeated contact may also cause sporadic open circuits during use.
● Similarly, mechanical buttons are subject to wear through repeated use. In addition, they typically require holes in the control panel that allow water or contaminants to enter. Because of the movement involved, this creates reliability concerns beyond wear.
● Beyond these stability concerns, none of the aforementioned HMI methods senses the force or speed of interaction (contact).
HMIs that can sense the force and speed of contact (i.e., impact) would greatly enhance the user experience.

Content of the invention

This patent application discloses a robust integrated force sensing element and HMI that senses contact via the force and speed of interaction and may act as a switch. The device has the following features:

● a piezoelectric sensor element,

● strain gauges, and

● an optional integrated circuit that detects and measures signals from the piezoelectric sensor and strain gauge.

When the piezoelectric sensor is stressed (e.g., when a force is applied to it), the piezoelectric sensor pumps charge and generates electrical energy. This sensor detects changes in stress over time and is used to detect stress-change events. The piezoelectric sensor is connected to a high-pass circuit that helps detect initial contact with the HMI via the rapid change in momentum (e.g., impact). In appropriate arrangements, the piezoelectric sensor can also detect changes in momentum due to device movement. Piezoelectric sensors are not suitable for DC force measurement but can distinguish the speed of the stimulus: a faster stimulus produces a stronger signal at the same force level, so a piezoelectric sensor can be used to distinguish the urgency of the user's actions. Strain gauges are configured to take DC force measurements and detect long-term stress changes. For example, the strain gauges may be arranged to form a Wheatstone bridge.

By combining both types of force sensors, an effective force-sensitive or pressure-sensitive switch can be implemented. The circuit is connected to a low-power comparator and can remain in low-power mode until the piezoelectric element wakes it up. The piezoelectric sensor is used to measure impact rather than static (DC) force because the circuit inevitably bleeds the charge generated by the piezoelectric element.
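The two-sensor decision logic described above can be sketched in a few lines. This is a conceptual illustration only, not part of the patent disclosure; the function names, thresholds, and sample values are all assumptions chosen to show how a piezoelectric (AC/impact) channel and a strain-gauge (DC) channel combine into one switch.

```python
# Conceptual sketch of the two-sensor switch logic: the piezoelectric channel
# wakes the circuit on a fast stress change (impact), and the strain-gauge
# channel then reports the absolute (DC) force level. All names and threshold
# values here are illustrative assumptions, not from the patent.

def detect_event(piezo_samples, wake_threshold):
    """Return the index of the first sample whose magnitude exceeds the
    wake threshold, or None if the piezo channel never fires."""
    for i, v in enumerate(piezo_samples):
        if abs(v) >= wake_threshold:
            return i
    return None

def classify_touch(piezo_samples, strain_reading, wake_threshold, firm_force):
    """Combine both sensors: the piezo decides *whether/how fast* contact
    occurred; the strain gauge decides *how hard* the user is pressing."""
    idx = detect_event(piezo_samples, wake_threshold)
    if idx is None:
        return "idle"                      # stay in low-power mode
    urgency = "fast" if abs(piezo_samples[idx]) > 2 * wake_threshold else "slow"
    level = "firm" if strain_reading >= firm_force else "light"
    return f"{urgency}-{level} touch"

print(classify_touch([0.0, 0.01, 0.35], strain_reading=1.2,
                     wake_threshold=0.1, firm_force=1.0))
```

Note that, as in the disclosed circuit, the strain-gauge reading is only consulted after the piezoelectric channel has triggered, which is what keeps steady-state power low.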
To minimize power consumption, the strain gauge depends on a trigger from the piezoelectric circuit. Once triggered, the strain gauge can be used to determine the absolute level of stress. Unlike previous solutions, the disclosed force-sensing elements can differentiate contact forces, differentiate the impact speed of contacts, allow multi-touch-mode HMIs (e.g., a first touch used to wake up and a second touch used to start), and the sensor data can be used for proportional haptic feedback.

In one aspect, an embodiment of an integrated force sensing element is disclosed. The integrated force sensing element includes a piezoelectric sensor formed in an integrated circuit (IC) chip and a strain gauge at least partially overlying the piezoelectric sensor, wherein the piezoelectric sensor is capable of bending.

In one aspect, an embodiment of a human-machine interface (HMI) is disclosed. The HMI includes an integrated circuit (IC) chip including an integrated force sensing element comprising a piezoelectric sensor and a strain gauge, the strain gauge at least partially overlying the piezoelectric sensor, wherein the piezoelectric sensor is capable of bending; a processor core; and a conditioning circuit attached to receive output from the piezoelectric sensor and the strain gauge and to provide an adjusted signal to the processor core, wherein the processor core is operatively connected to send control signals to the device.

Advantages of the disclosed embodiments include:

● lower power consumption in steady-state mode;

● new HMI dimensions:

o force

o impact

● stability:

o minimal movement and no electrode wear from repeated electrical contact;

o insensitivity to environmental moisture, water, and metal contaminants;

● easy integration with CMOS electronics.

BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments of the present invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like
references indicate similar elements. It should be noted that different references to "an" or "one" embodiment in the present invention are not necessarily to the same embodiment, and such references may mean at least one. Additionally, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is to be understood that implementing such feature, structure, or characteristic in conjunction with other embodiments, whether or not explicitly described, is well within the knowledge of one of ordinary skill in the art.

For the purpose of illustrating one or more exemplary embodiments of the present invention, the accompanying drawings are incorporated into and form a part of the specification. The various advantages and features of the invention will be understood from the following detailed description, taken in conjunction with the appended claims and with reference to the accompanying drawings, in which:

FIGS. 1A and 1B depict top and side views of an example force sensing element in accordance with an embodiment of the present invention;

FIG. 2 depicts a side view of an etched leadframe that allows for chip flexing in a switching region in accordance with an embodiment of the present invention;

FIG. 3 depicts a packaging system that allows for chip flexing in a switching region according to an embodiment of the present invention;

FIGS. 4A and 4B depict top and side views of an example force sensing element in accordance with an embodiment of the present invention;

FIG. 5 depicts an example circuit for sensing DC force according to an embodiment of the present invention;

FIG. 6 depicts an example of a ferroelectric capacitor (FeCAP) array for AC force measurement according to an embodiment of the present invention;

FIG. 7 depicts an example of an integrated force sensing element according to an embodiment of the present invention;

FIG. 8A
depicts an example of a force sensing element acting as a motion sensor according to an embodiment of the present invention;

FIG. 8B illustrates a timing diagram associated with the force sensing element of FIG. 8A;

FIG. 9A depicts an example of the force sensing element of FIG. 8A according to an embodiment of the present invention in which resistors are replaced by a capacitor DAC to reduce static power consumption;

FIG. 9B illustrates a timing diagram associated with the force sensing element of FIG. 9A;

FIG. 10 depicts a block diagram of an example human-machine interface in accordance with an embodiment of the present invention.

Detailed description

Specific embodiments of the present invention will now be described in detail with reference to the accompanying drawings. In the following detailed description of the embodiments of the present invention, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known features are not described in detail so as not to unnecessarily complicate the description.

Referring now to the drawings, and more specifically to FIGS. 1A and 1B, a plan view and a side view of integrated force-sensing element 100 are shown in accordance with an embodiment of the present invention. The switch 100 includes a piezoelectric sensor 102 and a strain gauge 104. In the embodiment seen in this figure, the piezoelectric sensor 102 includes an array of ferroelectric capacitors 102A, 102B, 102C, 102D, and 102E connected in series, while the strain gauge 104 includes a set of four thin-film resistors 104A, 104B, 104C, 104D arranged to form a Wheatstone bridge. Strain gauge 104 partially overlies piezoelectric sensor 102 such that both sensors receive substantially the same strain.
As seen in the side view, the piezoelectric sensor 102 is formed on the dielectric film 112. The dielectric film 112 may be formed on the silicon layer 110 as shown, or may be formed in a subsequent layer. In at least one embodiment, the formation of the piezoelectric sensor 102 is coordinated with the formation of a similar element, such as a ferroelectric random access memory (FRAM). The piezoelectric element 102 has a lower electrode layer 114 and an upper electrode layer 118 that are separated by a piezoelectric film layer 116. In at least one embodiment, piezoelectric film 116 is lead zirconate titanate (PZT). In an alternative embodiment, the piezoelectric film may be aluminum nitride (AlN), zinc oxide (ZnO) or any other piezoelectric film known or unknown. The upper electrode layer 118 and the lower electrode layer 114 may be a metal such as Ir, Ti, Pt, Pd, Au, Al, Ru, Rh, a metal oxide, or a multi-layer combination thereof. In at least one embodiment, the electrode consists of a titanium aluminum nitride (TiAlN) layer and an iridium layer. The dielectric layer 120 separates the piezoelectric sensor 102 from the strain sensor 104. Strain gage 104 is formed from a thin film resistor material such as nichrome, silicon chromium, and tantalum silicon nitride. In at least one embodiment, the strain gauge 104 is formed of silicon chromium (SiCr).In order to increase the responsiveness of the integrated force sensing element 100, a cavity may be created under the element of FIG. 1 during installation. One embodiment of this cavity is shown in FIG. 2 that illustrates a side view of the lead frame 200. The cavity 202 is created in the leadframe 200 by etching and the cavity 202 is located in a region underlying the force sensing element 100 after the IC chip is mounted. To further increase the responsiveness of the switch 100, the final silicon IC thickness is reduced to 100 microns or less. 
In at least one embodiment, the silicon IC thickness is thinned to less than 75 microns. In at least one embodiment, the final silicon IC thickness is thinned to less than 50 microns.

FIG. 3 illustrates a packaging system that allows the chip to flex in the region of the switch 100. This figure illustrates a quarter model of a quad flat no-lead (QFN) package 312 mounted to a printed circuit board (PCB) 310. The QFN package 312 includes an IC chip 314 that has been mounted to the QFN leadframe 316 and encapsulated in a molding compound 318. When the QFN leadframe 316 is soldered to the PCB 310, the solder pads leave a small gap between the center of the bottom of the QFN leadframe 316 and the PCB 310, creating the desired "cavity". Subsequently, when pressure is applied to the region 320, the IC chip 314 can bend sufficiently to produce a spontaneous charge on the piezoelectric sensor formed in the IC chip 314. Although only two methods of creating a cavity adjacent to the switch 100 have been shown, it should be understood that alternative embodiments may be readily devised by one of ordinary skill in the art to accomplish the same purpose.

FIGS. 4A and 4B illustrate a top view and a side view of an alternative version of the combined force sensing element 400. In this embodiment, the switch 400 does not rely on a cavity created during packaging or installation. Instead, a MEMS cavity is etched into the silicon wafer 410 and then filled with a compliant material 422. Once the cavity is created and filled, the structural dielectric layer 412 is deposited before the piezoelectric sensor 402 is formed. The piezoelectric sensor 402 includes a first electrode 414, a piezoelectric material 416, and a second electrode 418. In addition, the dielectric layer 420 separates the second electrode 418 from the thin-film resistor layer forming the strain gauge 404. Unlike the strain gauge of FIG. 1, each of the strain gauges 404A, 404B, 404C, 404D follows a serpentine path.
The compliant material 424 overlies the switch 400 for protection. Compliant material layers 422 and 424 (which may be the same material or different materials) allow the diaphragm to move relative to the silicon wafer while preventing over-travel and damage. Additional information regarding the piezoelectric sensor and MEMS cavity of FIG. 4 can be found in U.S. Patent Application No. 2015/0226618 of Wei-Yan Shih, filed concurrently herewith, which is hereby incorporated by reference.

The piezoelectric layer and the thin-film resistors in the disclosed embodiments receive the same strain as the top surface of the silicon. The following figures illustrate circuits used to capture and measure strain using the piezoelectric sensors and strain gauges. FIG. 5 illustrates a circuit 500 for DC force sensing using a strain gauge such as strain gauge 104. The circuit 500 includes four thin-film resistors connected to form a Wheatstone bridge 504, with an input voltage Vin and ground connected to opposite sides of the bridge. The output terminals Vout+ and Vout- of the Wheatstone bridge 504 are connected to the input terminals of a high-input-impedance instrumentation amplifier 510. Instrumentation amplifier 510 includes input buffer amplifiers 514, 516, each of which has its inverting input connected to a voltage divider connected between the two outputs of the in-amp 510. The voltage divider comprises fixed resistors R1 and R2, each connected between the inverting input of one of the input buffer amplifiers 514, 516 and the corresponding output, and a variable resistor R3 connected between resistors R1 and R2. Circuit 500 may be checked periodically to detect long-term stress changes in the disclosed switch. The output signal of the Wheatstone bridge 504 corresponds to the strain applied to the surface of the IC chip in which the Wheatstone bridge is built.
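The Wheatstone-bridge relationship underlying circuit 500 can be sketched numerically. This is an illustrative model only: the bridge is treated as two ideal resistor dividers, and the supply voltage, resistances, gauge factor, and strain values are assumptions, not values from the patent.

```python
# Minimal sketch of the Wheatstone-bridge output used for DC strain sensing.
# One divider is R1 over R2, the other R3 over R4; the instrumentation
# amplifier sees the difference of the two midpoints. Values are assumed.

def bridge_output(vin, r1, r2, r3, r4):
    """Differential output voltage of an ideal Wheatstone bridge."""
    v_plus = vin * r2 / (r1 + r2)
    v_minus = vin * r4 / (r3 + r4)
    return v_plus - v_minus

# A balanced bridge produces zero output.
print(bridge_output(3.3, 10e3, 10e3, 10e3, 10e3))   # 0.0

# Strain unbalances one arm: dR = gauge_factor * strain * R (illustrative).
gf, strain, r = 2.0, 1e-3, 10e3
dr = gf * strain * r
print(round(bridge_output(3.3, r, r + dr, r, r), 6))
```

The small differential output (millivolts here) is why the bridge feeds a high-input-impedance instrumentation amplifier in circuit 500.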
One of ordinary skill in the art will recognize that configurations of resistors other than Wheatstone bridges may also be used.

In contrast to the DC measurement of FIG. 5, FIG. 6 illustrates a circuit 600 for AC force sensing using a piezoelectric sensor such as sensor 102. In this embodiment, the piezoelectric sensor includes a FeCAP array 602, with the terminals of the FeCAP array 602 connected to the inputs of a dedicated charge amplifier 610. A capacitor CF and a resistor RF are connected in parallel between each input terminal of the charge amplifier 610 and the corresponding output terminal. The charge amplifier 610 balances the charge injected into each input by charging the corresponding feedback capacitor CF. Resistor RF bleeds charge away from the corresponding capacitor CF at a low rate to prevent the amplifier from drifting into saturation; resistor RF also provides a DC offset path to the input. The RF and CF values set the amplifier cut-off frequency. The FeCAP array 602 contains several ferroelectric capacitors arranged in a series and parallel combination for optimal interfacing with the CMOS electronics. When an AC force is applied, the series connections increase the output voltage, and the parallel connections increase the sensor capacitance, making the sensor robust to parasitic capacitances at the amplifier's interface. Although the disclosed embodiment shows twelve FeCAPs, it should be understood that this is merely an example and not a limitation. An external access point 612 to the array 602 is provided so that a polarization voltage can be applied during polarization. Polarization forces the micro-dipoles in the ferroelectric material into alignment by subjecting the ferroelectric material to a high electric field for a short period of time.

When both FeCAPs and TFRs are available for DC and AC force sensing, as disclosed herein, the architecture shown in FIG. 7 can be used to read both the DC and AC components of the force.
Circuit 700 includes an instrumentation amplifier 510 having high-impedance inputs. The TFR bridge 504 and the FeCAP array 602 are each connected to an input of a multiplexer 701, which in turn is connected to an input terminal of the in-amp 510. The two components of force can then be read in a time-multiplexed manner.

FIG. 8A illustrates a sub-microwatt wake-up system 800 that uses motion detection with the FeCAP array and provides a wake-up signal as an output of the system 800 in accordance with an embodiment of the present invention. When the system 800 experiences motion, a voltage appears on the FeCAP array 802, and the circuit 800 measures the resulting voltage against a threshold VTH. A signal is triggered when the voltage on the FeCAP array 802 exceeds the threshold. A resistive voltage divider comprising resistors R1, R2, R3 supplies the voltages VTH and VTH' to the respective connection points. The threshold voltage VTH is provided directly to the non-inverting input of the comparator 810. Pseudo-resistor 812 is connected between VTH' and switch S1, and switch S1 is connected to the inverting input of comparator 810. The FeCAP array 802 is attached to the connection point between the pseudo-resistor 812 and the switch S1. Although the FeCAP array 802 is shown as only two capacitors, it should be understood that this is for purposes of simplicity and illustration only and not limitation.

Pseudo-resistor 812 is a resistive element implemented as one or more transistors and acts as a bias circuit for the FeCAP array 802. Switch S0, when closed, connects the inverting and non-inverting inputs of comparator 810 and provides an auto-zero for the comparator, while switch S1 allows sampling of the FeCAP array 802 via comparator 810. In this figure, the comparator power consumption can be reduced by duty-cycling the signal CMP Enable (not specifically shown). However, the resistive voltage divider consumes static power. FIG. 8B illustrates a timing diagram of the CMP Enable signal.
The auto-zero signal (S0) and the sampling signal (S1) are shown below the circuit. As shown, the switch S0 is first closed during the period when the comparator 810 is enabled, to provide automatic zeroing of the comparator; once the switch S0 is opened, the switch S1 is closed to allow sampling of the FeCAP array 802.

FIG. 9A illustrates the embodiment of FIG. 8A in which the resistors R1, R2, R3 and the pseudo-resistor 812 are replaced with a switched-capacitor digital-to-analog converter (DAC) 906, which may be a binary-weighted DAC. Capacitors 908 are connected in parallel, with the rightmost capacitor in this example providing the most significant bit. The first terminal of each of the capacitors 908 may be switched to connect to VDD or ground, while the second terminal is connected to the non-inverting input of the comparator 910 and may be switched to ground using switch S0. The FeCAP array 902 is connected between the inverting input of the comparator 910 and ground. Switch S1, when closed, connects the switched-capacitor DAC 906 to the FeCAP array 902. The circuit 900 is operable to quantify the amount of charge that appears on the FeCAP array 902 and may provide switches with multiple settings.

The operation of the circuit 900 is discussed below with reference to FIG. 9B:

● First, the switches S0, S1, and S2 are all open;

● The switch S0 is closed and the switches 912 of the DAC 906 are set to 000...0 to provide a reset 920 of the DAC 906;

● Switch S0 is opened and the switches 912 are set to 000...1 to generate the common-mode voltage (VCM) 925, i.e., VDD/2, at the output of DAC 906;

● Switch S1 is then closed to provide VDD/2 as the offset value at the output of the FeCAP array 902;

● Switch S2 is closed to enable comparator 910 and provide an auto-zero 930, after which switch S1 is opened;

● The desired threshold may then be provided to DAC 906; and

● The current value of the FeCAP array 902 is compared 935 with the DAC 906 output.

In at least one embodiment, a successive approximation register (SAR) algorithm is used to provide the threshold of DAC 906 for comparison. In at least one embodiment, the FeCAP array 902 may be compared in one step to a threshold above the normal value of the FeCAP array 902, and in another step to a threshold below the normal value. For example, if the FeCAP array 902 has received a negative voltage, the threshold may be set low until an appropriate value is found. Simulation has shown that the proposed architecture may consume less than 1 μW.

FIG. 10 depicts a block diagram of a human-machine interface system 1000 according to an embodiment of the present invention. As seen in this figure, the HMI system 1000 includes one or more force-sensing elements 1002, one or more processor cores 1004, a conditioning circuit 1006, an FRAM 1008, a temperature sensor and associated circuitry 1010, a communication interface 1012, and an analog-to-digital converter (ADC) 1014, each of which is connected to a common bus circuit 1016, either directly or via other elements. Force sensing element 1002 combines an integrated piezoelectric sensor and strain gauge, as disclosed herein. In at least one embodiment, force sensing element 1002 is implemented as a push button. In at least one embodiment, force-sensing element 1002 is part of a larger surface and provides an area sensitive to different levels of pressure. In at least one embodiment, the force sensing element 1002 is a motion detector that detects the motion of an object in which the sensor is embedded.
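Returning to the SAR threshold search described above for circuit 900, the bit-by-bit refinement can be sketched as follows. This is a behavioral model only: the comparator is reduced to a ">=" test, and the bit width and reference voltage are illustrative assumptions.

```python
# Behavioral sketch of the successive-approximation (SAR) search used to set
# the DAC 906 threshold: each bit is tentatively set, the comparator decision
# determines whether it is kept, and the search converges in one pass.
# Bit width and reference voltage are assumed values.

def sar_quantize(v_fecap, vref, bits):
    """Return the DAC code found by binary-searching comparator decisions."""
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)                  # tentatively set this bit
        if v_fecap >= trial * vref / (1 << bits):  # comparator decision
            code = trial                           # keep the bit
    return code

# With an 8-bit DAC and a 1.0 V reference, 0.5 V lands at mid-scale (128).
print(sar_quantize(0.5, 1.0, 8))
```

An n-bit search needs only n comparator decisions, which fits the sub-microwatt budget the text attributes to this architecture.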
In at least one embodiment, the force sensing element 1002 provides a multi-level output signal, and multiple levels of decision are achieved by the circuit to which the output of the force sensing element 1002 is provided.

The conditioning circuit 1006 receives the signal from the force sensing element 1002 and provides the conditioned signal to an analog-to-digital converter (ADC) 1014, which in turn provides the converted signal to the processor core 1004. In at least one embodiment, the ADC 1014 is included in the processor core 1004. The conditioning provided by the conditioning circuit 1006 may include amplifying the signal, as known to those skilled in the art. In one embodiment, the processor core 1004 is a microcontroller. The FRAM 1008 can be used as a memory for systems that also contain FeCAPs or other ferroelectric components. In at least one embodiment, the FeCAPs (which form part of force sensing element 1002) and the FRAM 1008 are formed on a single chip using the same processing steps, forming both components at the same time.

Even when no force is applied, temperature changes can cause a voltage to appear across the electrodes of any piezoelectric transducer due to the pyroelectric properties of the piezoelectric ceramic. Temperature also affects other properties of piezoelectric ceramics, such as elasticity, permittivity, and piezoelectric coupling. To allow correction of the temperature-induced voltage, a temperature sensor and associated circuitry 1010 are provided in the system. The circuitry associated with the temperature sensor may include its own conditioning circuitry (not specifically shown) and an ADC (also not shown). If the ADC 1014 is included in the processor core 1004, a multiplexer (not specifically shown) may be used to receive data from the various sensors, including the force-sensing element 1002 and the temperature sensor 1010, in a time-multiplexed manner.
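The temperature correction enabled by sensor 1010 can be sketched as a simple linear compensation. The patent does not specify the correction; this model and its coefficient are hypothetical, assuming the temperature-induced offset grows roughly linearly with the temperature excursion.

```python
# Illustrative sketch of compensating the piezo channel using the temperature
# sensor 1010: the temperature-induced offset is modeled as linear in the
# temperature excursion and subtracted out. The coefficient and reference
# temperature are hypothetical values, not from the patent.

def compensate(v_piezo, temp_c, ref_temp_c=25.0, pyro_coeff_v_per_c=0.002):
    """Remove the modeled temperature-induced component from a piezo reading."""
    return v_piezo - pyro_coeff_v_per_c * (temp_c - ref_temp_c)

# A 10 degree rise adds ~20 mV of offset that the correction removes.
raw = 0.120
print(round(compensate(raw, temp_c=35.0), 3))   # 0.1
```

In the system of FIG. 10, the processor core would apply such a correction after reading both channels through the (possibly multiplexed) ADC 1014.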
Finally, the communication interface 1012 provides the means used by the processor core 1004 to provide signals to another circuit or machine in response to input received at the force-sensing element 1002. In at least one embodiment, the signal is transmitted wirelessly. In at least one embodiment, the HMI system 1000 is integrated into a larger system such that the force sensing element 1002 can provide switching inputs to the larger system. It should be understood that the components of the HMI system 1000 may be provided as separate components. In at least one embodiment, one or more of the force sensing element 1002 and the other elements of the HMI system 1000 are provided on a single IC chip.

Although various embodiments have been shown and described in detail, the claims are not limited to any particular embodiment or example. The above description should not be taken as implying that any particular component, element, step, action, or function is essential such that it must be included in the scope of the claims. Unless expressly stated otherwise, reference to an element in the singular is not intended to mean "one and only one" but rather "one or more." All structural and functional equivalents to the elements of the above-described embodiments that are known to a person of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Accordingly, those of ordinary skill in the art will recognize that the exemplary embodiments described herein may be practiced with various modifications and alterations without departing from the spirit and scope of the appended claims.
Techniques for the design and use of a digital signal processor, including (but not limited to) processing transmissions in a communications (e.g., CDMA) system. Stuffing instructions into the processing pipeline of a multi-threaded digital signal processor provides for operating a core processor process and a debugging process within a debugging mechanism. Writing a stuff instruction into a debugging process registry and a stuff command into a debugging process command register provides for identifying a predetermined thread of the multi-threaded digital signal processor in which to execute the stuff instruction. The instruction stuffing process issues a debugging process control resume command during a predetermined stage of executing on the predetermined thread and directs the core processor to perform the stuff instruction during the debugging process. The core processor may then execute the stuffed instruction in association with the core processor process and the debugging process.
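As a purely conceptual illustration (not part of the patent disclosure, and not an actual DSP API), the stuffing sequence described in the abstract can be modeled as a small state machine: the debugger writes the stuff instruction into the registry, names the target thread in the command register, and a resume command causes the core to execute the stuffed instruction on that thread. All class, method, and mnemonic names below are hypothetical.

```python
# Conceptual model of instruction stuffing in a multi-threaded DSP debugger:
# registry holds the stuffed instruction, command selects the target thread,
# and resume() executes the instruction on that thread. Names are hypothetical.

class DebugSession:
    def __init__(self, num_threads):
        self.registry = None                 # debugging process registry
        self.command = None                  # debugging process command register
        self.executed = {t: [] for t in range(num_threads)}

    def write_stuff_instruction(self, instruction):
        self.registry = instruction

    def write_stuff_command(self, thread_id):
        self.command = thread_id             # identifies the predetermined thread

    def resume(self):
        """Debug-control resume: core executes the stuffed instruction."""
        if self.registry is None or self.command is None:
            raise RuntimeError("registry and command must be written first")
        self.executed[self.command].append(self.registry)
        self.registry = self.command = None  # one-shot, cleared after execution

dbg = DebugSession(num_threads=6)
dbg.write_stuff_instruction("LOAD r0, [debug_buf]")   # hypothetical mnemonic
dbg.write_stuff_command(thread_id=3)
dbg.resume()
print(dbg.executed[3])
```

The model reflects the ordering the abstract requires: registry write, then command write identifying the thread, then the resume that triggers execution.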
WHAT IS CLAIMED IS: CLAIMS 1. A method for stuffing instructions in a processing pipeline of a multi-threaded digital signal processor for improved software instruction debugging operations, comprising: writing a stuff instruction into a debugging process registry associated with a debugging process; issuing from a core processor a debugging process control resume command during a predetermined stage of executing on a predetermined thread; providing the stuff instruction to the core processor; indicating to the core processor to execute the stuff instruction during the debugging process; and executing the stuff instruction in association with the core processor process and the debugging process. 2. The method of Claim 1, further comprising writing a stuff command in a debugging process command register associated with the debugging process registry in response to the stuff instruction, the stuff command comprising identification of a predetermined thread of the multi-threaded digital signal processor in which to execute the stuff instruction. 3. The method of Claim 1, further comprising executing the stuff instruction in a user mode of operation. 4. The method of Claim 1, further comprising executing the stuff instruction in a supervisor mode of operation. 5. The method of Claim 1, further comprising writing a stuff command in a debugging process command register associated with the debugging process registry in response to the stuff instruction, the stuff command comprising identification of a plurality of predetermined threads of the multi-threaded digital signal processor in which to execute the stuff instruction. 6. The method of Claim 1, further comprising writing the stuff instruction as a branch instruction and using a current program counter value for the predetermined thread. 7. The method of Claim 1, further comprising writing the stuff instruction as start/resume instruction for selectively resetting the predetermined thread. 8. 
The method of Claim 1, further comprising writing the stuff instruction as a load instruction into the debugging process registry associated with the debugging process. 9. The method of Claim 1, further comprising writing the stuff instruction as a register read instruction into the debugging process registry associated with the debugging process. 10. The method of Claim 1, further comprising writing the stuff instruction as a cache read/write instruction into the debugging process registry associated with the debugging process. 11. The method of Claim 1, further comprising writing the stuff instruction as a memory read/write instruction into the debugging process registry associated with the debugging process. 12. A digital signal processor debugging system comprising circuitry and instructions for stuffing instructions in a processing pipeline of a multi-threaded digital signal processor comprising: a debugging process registry associated with a debugging process for receiving a stuff instruction; a debugging process control resume command for issuing from a core processor during a predetermined stage of executing on a predetermined thread; means for providing the stuff instruction to the core processor; indicating means for indicating to the core processor to execute the stuff instruction during the debugging process; and means for executing the stuff instruction in association with the core processor process and the debugging process. 13. The digital signal processor debugging system of Claim 12, further comprising a debugging process command register associated with the debugging process registry for receiving a stuff command in response to the stuff instruction, the stuff command comprising identification of a predetermined thread of the multithreaded digital signal processor in which to execute the stuff instruction. 14. 
The digital signal processor debugging system of Claim 12, further comprising circuitry and instructions for performing the instruction stuffing method in a user mode of operation. 15. The digital signal processor debugging system of Claim 12, further comprising means for executing the stuffed instruction in a supervisor mode of operation. 16. The digital signal processor debugging system of Claim 12, further comprising means for writing the stuff instruction as a branch instruction and using a current program counter value for the predetermined thread. 17. The digital signal processor debugging system of Claim 12, further comprising means for writing the stuff instruction as start/resume instruction for selectively resetting the predetermined thread. 18. The digital signal processor debugging system of Claim 12, further comprising means for writing a stuff instruction as a load instruction into the debugging process registry associated with the debugging process. 19. The digital signal processor debugging system of Claim 12, further comprising means for writing a stuff instruction as a register read instruction into the debugging process registry associated with the debugging process. 20. The digital signal processor debugging system of Claim 12, further comprising means for writing a stuff instruction as a cache read/write instruction into the debugging process registry associated with the debugging process. 21. The digital signal processor debugging system of Claim 12, further comprising means for writing a stuff instruction as a memory read/write instruction into the debugging process registry associated with the debugging process. 22. 
A digital signal processor for operation in support of a personal electronics device, the digital signal processor comprising: means for instruction stuffing operations during non-intrusive digital signal processor debugging operations of the digital signal processor; means for writing a stuff instruction into a debugging process registry associated with the debugging process; means for issuing from a core processor a debugging process control resume command during a predetermined stage of executing on a predetermined thread; means for indicating to the core processor to perform the stuff instruction during the debugging process; means for providing the stuff instruction to the core processor; and means for executing the stuff instruction in association with the core processor process and the debugging process. 23. The digital signal processor of Claim 22, further comprising means for writing a stuff command in a debugging process command register associated with the debugging process registry in response to the stuff instruction, the stuff command comprising identification of a predetermined thread of the multi-threaded digital signal processor in which to execute the stuff instruction. 24. The digital signal processor of Claim 22, further comprising means for executing the stuff instruction in a user mode of operation. 25. The digital signal processor of Claim 22, further comprising means for executing the stuff instruction in a supervisor mode of operation. 26. The digital signal processor of Claim 22, further comprising means for writing a stuff command in the debugging process command register associated with the debugging process registry in response to the stuff instruction, the stuff command comprising identification of a plurality of predetermined threads of the multi-threaded digital signal processor in which to execute the stuff instruction. 27. 
The digital signal processor of Claim 22, further comprising means for writing the stuff instruction as a branch instruction and using a current program counter value for the predetermined thread. 28. The digital signal processor of Claim 22, further comprising means for writing the stuff instruction as a start/resume instruction for selectively resetting the predetermined thread. 29. The digital signal processor of Claim 22, further comprising means for writing a stuff instruction as a load instruction into the debugging process registry associated with the debugging process. 30. The digital signal processor of Claim 22, further comprising means for writing a stuff instruction as a register read instruction into the debugging process registry associated with the debugging process. 31. The digital signal processor of Claim 22, further comprising means for writing a stuff instruction as a cache read/write instruction into the debugging process registry associated with the debugging process. 32. The digital signal processor of Claim 22, further comprising means for writing a stuff instruction as a memory read/write instruction into the debugging process registry associated with the debugging process. 33. 
A computer usable medium having computer readable program code means embodied therein for processing instructions on a digital signal processor, including computer readable program code means for instruction stuffing operations during non-intrusive digital signal processor debugging operations of the digital signal processor, the computer usable medium comprising: computer readable program code means for writing a stuff instruction into a debugging process registry associated with a debugging process; computer readable program code means for issuing from a core processor a debugging process control resume command during a predetermined stage of executing on a predetermined thread; computer readable program code means for providing the stuff instruction to the core processor; computer readable program code means for indicating to the core processor to execute the stuff instruction during the debugging process; and computer readable program code means for executing the stuff instruction in association with the core processor process and the debugging process. 34. The computer usable medium of Claim 33, further comprising computer readable program code means for writing a stuff command in a debugging process command register associated with the debugging process registry in response to the stuff instruction, the stuff command comprising identification of a predetermined thread of the multi-threaded digital signal processor in which to execute the stuff instruction. 35. The computer usable medium of Claim 33, further comprising computer readable program code means for writing the stuff instruction as a start/resume instruction for selectively resetting the predetermined thread.
METHOD AND SYSTEM FOR INSTRUCTION STUFFING OPERATIONS DURING NON-INTRUSIVE DIGITAL SIGNAL PROCESSOR DEBUGGING

FIELD

The disclosed subject matter relates to data processing systems and processes such as may find use in data communications and similar applications. More particularly, this disclosure relates to a novel and improved method and system for instruction stuffing operations during non-intrusive digital signal processor debugging operations.

DESCRIPTION OF THE RELATED ART

Increasingly, telecommunications and other types of electronic equipment, together with supporting video, complex audio, videoconferencing and other rich software applications, involve signal processing. Signal processing requires fast mathematical calculations and data generation in complex, but repetitive, algorithms. Many applications require computations in real-time, i.e., the signal is a continuous function of time, which must be sampled and converted to digital signals for numerical processing. The processor must execute algorithms performing discrete computations on the samples as they arrive.

The architecture of a digital signal processor (DSP) is optimized to handle such algorithms. The characteristics of a good signal processing engine include fast, flexible arithmetic computation units, unconstrained data flow to and from the computation units, extended precision and dynamic range in the computation units, dual address generators, efficient program sequencing, and ease of programming.

One promising application of DSP technology includes communications systems such as a code division multiple access (CDMA) system that supports voice and data communications, as well as text messaging and other applications, between users over a satellite or terrestrial link. The use of CDMA techniques in a multiple access communication system is disclosed in U.S. Pat. No. 4,901,307, entitled "SPREAD SPECTRUM MULTIPLE ACCESS COMMUNICATION SYSTEM USING SATELLITE OR TERRESTRIAL REPEATERS," and U.S. Pat. No. 
5,103,459, entitled "SYSTEM AND METHOD FOR GENERATING WAVEFORMS IN A CDMA CELLULAR TELEPHONE SYSTEM," both assigned to the assignee of the claimed subject matter.

A CDMA system is typically designed to conform to one or more standards. One such first-generation standard is the "TIA/EIA/IS-95 Terminal-Base Station Compatibility Standard for Dual-Mode Wideband Spread Spectrum Cellular System," hereinafter referred to as the IS-95 standard. The IS-95 CDMA systems are able to transmit voice data and packet data. A newer-generation standard that may more efficiently transmit packet data is offered by a consortium named the "3rd Generation Partnership Project" (3GPP) and embodied in a set of documents including Document Nos. 3G TS 25.211, 3G TS 25.212, 3G TS 25.213, and 3G TS 25.214, which are readily available to the public. The 3GPP standard is hereinafter referred to as the W-CDMA Standard.

Complex DSP operational software employing the W-CDMA Standard, for example, requires robust development tools. Such development tools may include those for code generation, integration, testing, debugging, and evaluating application performance. In developing and operating software for complex DSP applications, such as advanced telecommunications applications, there is the need for sophisticated, yet non-intrusive, debugging software. That is, debugging software applications must not only be sufficiently robust to monitor, test, and support the correction of software defects and operational problems, but must also operate so as not to interfere with the core processor software during debugging operations. Otherwise, any problems in the core processing software may not be detected, or may not be detected properly, during the use of such debugging software.

Moreover, during or in association with non-intrusive debugging processes, there is frequently the need to operate a variety of diagnostic, analytical, and other processes for determining various aspects of core processor operations. 
Such diagnostic, analytical, and similar programs may vary according to the specific type and amount of information a user may desire or an associated debugging process may need. Accordingly, the ability to insert or stuff instructions into a debugging process dynamically could have significant advantages.

Presently, however, no known way exists to perform instruction stuffing operations for debugging core processes in association with a multi-threaded digital signal processor as here described. Yet further, no instruction stuffing process exists that may be thread-selective by performing the functions of operating stuffed instructions on one, two, or more threads of a multi-threaded digital signal processor. Moreover, no instruction stuffing process or mechanism is known that allows a debugging process to execute instructions on the core processor in conjunction with or in association with both the core processing functions and the non-intrusive debugging process.

Reasons for which instruction stuffing operations may be advantageous include reading and/or writing core registers and memory. Also, debugging process operations may be abstracted for user analysis, including the use of various analytical application programs. 
Moreover, instruction stuffing operations may allow a user to enter into the debugging process various instructions applicable to a specific type of debugging.

There is a need, therefore, for a debugging process and system for operation with a DSP, which debugging process and system provide the ability for instruction stuffing operations during non-intrusive digital signal processor debugging operations.

A need exists for an instruction stuffing process and mechanism that may be applicable to multi-threaded digital signal processor debugging operations.

A need exists for an instruction stuffing process and mechanism that may be thread-selective, by providing the ability to operate stuffed instructions on one, two, or more threads of a multi-threaded digital signal processor.

Still, a need exists for an instruction stuffing process or mechanism that allows a debugging process to execute instructions on the core processor in conjunction with or in association with both the core processing functions and the non-intrusive debugging process.

Also, a need exists for non-intrusive software debugging process instruction stuffing operations for processing instructions and data on a core processor during non-intrusive digital signal processor debugging operations.

SUMMARY

Techniques for providing a non-intrusive, thread-selective debugging method and system for a digital signal processor, including a multi-threaded digital signal processor, are disclosed, which techniques provide for instruction stuffing operations during non-intrusive debugging operations. 
The method and system here disclosed improve both the operation of a digital signal processor and the efficient use of digital signal processor instructions for increasingly powerful software applications, including applications operating in personal computers, personal digital assistants, wireless handsets, and similar electronic devices, as well as increasing the associated digital processor speed and service quality.According to one aspect of the disclosed subject matter, a method and system for stuffing instructions in a processing pipeline of a multi-threaded digital signal processor provide for improved software instruction debugging operations. The method and system provide for operating a core processor process within a core processor associated with the digital signal processor and a debugging process within a debugging mechanism of the digital signal processor. The debugging mechanism is associated with the core processor. The disclosed subject matter includes writing a stuff instruction into a debugging process registry associated with the debugging process and a stuff command in a debugging process command register associated with the debugging process registry in response to the stuff instruction. The stuff command provides for identification of a predetermined thread of the multi-threaded digital signal processor in which to execute the stuff instruction. The present disclosure issues a debugging process control resume command from the core processor during a predetermined stage of executing on the predetermined thread and directs the core processor to perform the stuffed instruction during the debugging process. The present disclosure provides the stuffed instruction to the core processor for executing the stuffed instruction in association with the core processor process and the debugging process. 
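The sequence above — write the stuff instruction into the registry, write a stuff command naming the target thread(s), then resume — can be sketched in C. The command code, field positions, and function names below are illustrative assumptions for the sketch only; the disclosure does not fix a register bit layout.

```c
#include <stdint.h>

/* Hypothetical ISDB command-register layout (assumptions, not the
 * disclosed hardware map): a 4-bit command code and a 6-bit thread
 * mask selecting which of threads T0:T5 execute the stuffed
 * instruction. */
#define ISDB_CMD_STUFF      0x4u   /* assumed command code            */
#define ISDB_CMD_TNUM_SHIFT 8      /* assumed thread-mask position    */

/* Build the command word that tells the micro-command generator to
 * execute the previously written stuff instruction on every thread
 * whose bit is set in tnum_mask (bit 0 = T0, ..., bit 5 = T5). */
uint32_t isdb_make_stuff_cmd(uint32_t tnum_mask)
{
    return ISDB_CMD_STUFF | ((tnum_mask & 0x3Fu) << ISDB_CMD_TNUM_SHIFT);
}
```

Because the mask carries one bit per hardware thread, the same command word can direct the stuffed instruction at one, two, or all six threads, matching the thread-selective operation described above.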
These and other advantages of the disclosed subject matter, as well as additional novel features, will be apparent from the description provided herein. The intent of this summary is not to be a comprehensive description of the claimed subject matter, but rather to provide a short overview of some of the subject matter's functionality. Other systems, methods, features and advantages here provided will become apparent to one with skill in the art upon examination of the following FIGURES and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description and be within the scope of the accompanying claims.

BRIEF DESCRIPTIONS OF THE DRAWINGS

The features, nature, and advantages of the disclosed subject matter may become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify correspondingly throughout and wherein:

FIGURE 1 is a simplified block diagram of a communications system that may implement one of the various embodiments here disclosed;

FIGURE 2 illustrates a DSP architecture for carrying forth the teachings of the present disclosure;

FIGURE 3 provides an architecture block diagram of one embodiment of a multi-threaded digital signal processor;

FIGURE 4 shows further an architectural diagram of the process flows for the control unit, the instruction unit, and other functional components of the present digital signal processor;

FIGURE 5 discloses certain aspects of a digital signal processor core applying the ISDB/JTAG interface features of the present disclosure;

FIGURE 6 shows an aspect of an ISDB JTAGSync circuit for performing certain aspects of the debugging procedures here disclosed;

FIGURE 7 presents a process flow diagram applicable to the operating modes of the digital signal processor, including the debugging mode of operation to which the present disclosure pertains;

FIGURE 8 depicts a 
breakpoint processing scheme applicable to the embodiment of the present disclosure;

FIGURE 9 illustrates the ISDB command register contents for one embodiment of the disclosed subject matter, including an instruction stuffing register supporting the disclosed process; and

FIGURE 10 presents a processing timing cycle chart depicting the disclosed process for instruction stuffing in association with a non-intrusive debugging process.

DETAILED DESCRIPTION OF THE SPECIFIC EMBODIMENTS

The disclosed subject matter for a non-intrusive, thread-selective debugging method and system for a multi-threaded digital signal processor has application for multi-threaded processing of any type for which the benefits here presented may be advantageous. One application appears in telecommunications and, in particular, in wireless handsets that employ one or more digital signal processing circuits. For explaining how a wireless handset may be used, FIGURE 1 provides a simplified block diagram of a communications system 10 that may implement the presented embodiments of the disclosed instruction stuffing method and system. At a transmitter unit 12, data is sent, typically in blocks, from a data source 14 to a transmit (TX) data processor 16 that formats, codes, and processes the data to generate one or more analog signals. The analog signals are then provided to a transmitter (TMTR) 18 that modulates, filters, amplifies, and upconverts the baseband signals to generate a modulated signal. The modulated signal is then transmitted via an antenna 20 to one or more receiver units.

At a receiver unit 22, the transmitted signal is received by an antenna 24 and provided to a receiver (RCVR) 26. Within receiver 26, the received signal is amplified, filtered, downconverted, demodulated, and digitized to generate in-phase (I) and quadrature (Q) samples. The samples are then decoded and processed by a receive (RX) data processor 28 to recover the transmitted data. 
The decoding and processing at receiver unit 22 are performed in a manner complementary to the coding and processing performed at transmitter unit 12. The recovered data is then provided to a data sink 30.

The signal processing described above supports transmissions of voice, video, packet data, messaging, and other types of communication in one direction. A bidirectional communications system supports two-way data transmission. However, the signal processing for the other direction is not shown in FIGURE 1 for simplicity. Communications system 10 may be a code division multiple access (CDMA) system, a time division multiple access (TDMA) communications system (e.g., a GSM system), a frequency division multiple access (FDMA) communications system, or another multiple access communications system that supports voice and data communication between users over a terrestrial link. In a specific embodiment, communications system 10 is a CDMA system that conforms to the W-CDMA Standard.

FIGURE 2 illustrates the DSP 40 architecture that may serve as the transmit data processor 16 and receive data processor 28 of FIGURE 1. We emphasize that DSP 40 represents only one embodiment among a great many possible digital signal processor embodiments that may effectively use the teachings and concepts here presented. In DSP 40, therefore, threads T0:T5 (reference numerals 42 through 52) contain sets of instructions from different threads. Circuit 54 represents the instruction access mechanism and is used for fetching instructions for threads T0:T5. Instructions for circuit 54 are queued into instruction queue 56. Instructions in instruction queue 56 are ready to be issued into processor pipeline 66 (see below). From instruction queue 56, a single thread, e.g., thread T0, may be selected by issue logic circuit 58. Register file 60 of a selected thread is read and the read data is sent to execution data paths 62 for SLOT0:SLOT3. 
SLOT0:SLOT3, in this example, provide for the packet grouping combination employed in the present embodiment.

Output from execution data paths 62 goes to register file write circuit 64, also configured to accommodate individual threads T0:T5, for returning the results from the operations of DSP 40. Thus, the data path from circuit 54 and before to register file write circuit 64 forms a processing pipeline 66. The present embodiment may employ a hybrid of a heterogeneous element processor (HEP) system using a single processor with up to six threads, T0:T5. Processor pipeline 66 has six stages, which matches the minimum number of processor cycles necessary to fetch a data item from circuit 54 to registers 60 and 64. DSP 40 concurrently executes instructions of different threads T0:T5 within processor pipeline 66. That is, DSP 40 provides six independent program counters, an internal tagging mechanism to distinguish instructions of threads T0:T5 within processor pipeline 66, and a mechanism that triggers a thread switch. Thread-switch overhead varies from zero to only a few cycles.

DSP 40, therefore, provides a general-purpose digital signal processor designed for high performance and low power across a wide variety of signal, image, and video processing applications. FIGURE 3 provides a brief overview of the DSP 40 architecture, including some aspects of the associated instruction set architecture for one manifestation of the disclosed subject matter. Implementations of the DSP 40 architecture support interleaved multithreading (IMT). In this execution model, the hardware supports concurrent execution of multiple hardware threads T0:T5 by interleaving instructions from different threads in the pipeline. This feature allows DSP 40 to sustain an aggressive clock frequency while still maintaining high core and memory utilization. 
IMT provides high throughput without the need for expensive compensation mechanisms such as out-of-order execution, extensive forwarding networks, and so on. Moreover, DSP 40 may include variations of IMT, such as those variations and novel approaches disclosed in the commonly assigned U.S. Patent Applications by M. Ahmed, et al., entitled "Variable Interleaved Multi-threaded Processor Method and System" and "Method and System for Variable Thread Allocation and Switching in a Multi-threaded Processor."

FIGURE 3, in particular, provides a core processing architecture 70 block diagram for DSP 40 as applied to a single thread that may employ the teachings of the disclosed subject matter. Block diagram 70 depicts shared instruction cache 72, which receives instructions via bus interface (I/F) 73 from AXI Bus 74, which instructions include mixed 16-bit and 32-bit instructions. These instructions reach sequencer 76, user control register 78, and supervisor control register 80 of threads T0:T5. The core-level system architecture of the disclosed subject matter also includes in-silicon debugging system (ISDB) 82, which interfaces core processor 70 via JTAG interface 84, both of which are described in more detail below.

Sequencer 76 provides hybrid two-way superscalar instructions and four-way VLIW instructions to S-Pipe unit 86, M-Pipe unit 88, LD[Load]-Pipe 90, and LD/ST[Store]-Pipe unit 92, all of which communicate with general registers 94. AXI Bus 74 also communicates via Bus I/F 73 with shared data cache 96, providing LD/ST instructions to threads T0:T5. Optional L2 cache/TCM 98 exchanges LD/ST instruction signals with shared data TCM 100, which LD/ST instructions further flow to the threads' general registers 94. From AHB peripheral bus 102, MSM-specific controller 104 communicates interrupts with threads T0:T5, including interrupt controller instructions, debugging instructions, and timing instructions. 
Global control registers 106 communicate control register instructions with threads T0:T5.

DSP 40, therefore, includes six virtual DSP cores, each containing global control registers 106 and private supervisor control registers 80. Global control registers 106 are shared between all threads. Each thread shares a common data cache and a common instruction cache. Load, store, and fetch operations are serviced by a common bus interface. High-performance AXI bus 74 and a lower-performance AHB bus 102 are used to connect the data and instruction traffic to off-core memory and peripherals. An integrated level-two memory (cache and/or TCM) input 98 is optional. Peripheral access may be through memory-mapped loads and stores. The physical address partition between AHB and AXI may be configured at the MSM level.

Clearly, the presented architecture for DSP 40 may evolve and change over time. For example, the number of instruction caches that DSP 40 may use could change from six to one, or other numbers of caches. Superscalar dispatch, L1 data at TCM 100, and other architectural aspects may change. However, the present subject matter may have continued relevance in a wide variety of configurations and for a large family of modifications of DSP 40.

ISDB 82, through JTAG interface 84, provides a hardware debugging process for DSP 40. ISDB 82 provides software debug features through JTAG interface 84 by sharing system or supervisor-only registers that are divided into supervisor control registers 80 on a per-thread basis, as well as global control registers 106 between all threads. The system control registers are used for per-thread interrupt and exception control and per-thread memory management activities. Global registers allow interacting with the ISDB 82 for debugging operations.

ISDB 82 enables software developers to debug their software while DSP 40 operates. 
ISDB 82 hardware, in combination with a software debugging process program operating in ISDB 82, may be used to debug the DSP 40 operating system software. ISDB 82 supports debugging hardware threads individually. Users may suspend thread execution, view and alter thread registers, view and alter instruction and data memory, single-step threads, stuff instructions to threads, and resume thread execution.

ISDB 82 may interface with a debugging process interface card to communicate with ISDB 82 debugging software residing on a PC, all through JTAG interface 84. Host debugging process software may interact with the ISDB 82 by reading and writing ISDB control registers. Communication, for example, may be through a 40-bit packet which identifies the ISDB register to which a read/write is to occur, as well as a 32-bit data payload. A packet format supporting this operation may address up to 64 control registers, each of which may be 32 bits wide.

FIGURE 4 presents a diagram of the micro-architecture 110 for DSP 40 including control unit (CU) 112, which performs many of the control functions for processor pipeline 66. CU 112 schedules and issues instructions to three execution units: shift-type unit (SU) 116, multiply-type unit (MU) 118, and load/store unit (DU) 120. CU 112 also performs superscalar dependency checks. Bus interface unit (BIU) 122 interfaces IU 114 and DU 120 to a system bus (not shown). SLOT0 and SLOT1 pipelines are in DU 120, SLOT2 is in MU 118, and SLOT3 is in SU 116. CU 112 provides source operands and control buses to pipelines SLOT0:SLOT3 and handles GRF and CRF file updates. CU 112 accepts external inputs such as interrupts and reset, and supports ISDB/ETM 122. 
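A 40-bit packet of the shape just described — one R/W bit, a 7-bit register address covering up to 64 (and more) control registers, and a 32-bit data payload — can be modeled with simple bit packing. The bit ordering chosen here is an assumption for illustration; the disclosure does not specify how the fields are serialized over the scan chain.

```c
#include <stdint.h>

/* Pack one hypothetical ISDB JTAG transaction into the low 40 bits of
 * a 64-bit word: bit 39 = R/W (1 = write), bits [38:32] = register
 * address, bits [31:0] = data payload. */
uint64_t isdb_pack(int write, unsigned addr, uint32_t data)
{
    return ((uint64_t)(write & 1) << 39)
         | ((uint64_t)(addr & 0x7Fu) << 32)
         | (uint64_t)data;
}
```

A write to register 5, for example, packs the R/W bit, the address, and the payload into a single value that the host could shift through the JTAG data register in one pass.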
CU 112 also handles exceptions due to protection violations occurring during address translations.

ISDB 82 interfaces with three domains: host debugging software through JTAG 84, the DSP 40 core through IU 114 and CU 112, and other cores present in the system through a Multi-Core Debug (MCD) signal interface. The primary interface between the host debugging software and the DSP 40 core is a set of JTAG-accessible registers referred to as ISDB 82 registers. The host debugging software performs various debugging process tasks by executing a sequence of ISDB 82 register reads and writes.

ISDB 82 communicates with the test environment (in this case a POD, or debugging process interface card, communicating with the debugging process software residing on a PC) through JTAG interface 84. The host debugging process software interacts with the ISDB by reading and writing ISDB control registers. Communication occurs through a 40-bit packet which identifies the ISDB register in which to read and/or write and a 32-bit data payload for the various ISDB commands, including the present instruction stuffing process.

FIGURE 5 shows important aspects of ISDB/JTAG interface 110 between the debugging mechanism and the core processor of the disclosed subject matter. In association with DSP 40 core architecture 70, ISDB 82 communicates with JTAG 84 via JTAG interface path 112, from ISDB JTAG circuit 114. ISDB JTAG circuit 114 processes data flows between JTAG 84 and ISDB 82. ISDB JTAG circuit 114 further interfaces ISDB JTAGSync circuit 116. ISDB JTAGSync circuit 116 communicates further with ISDB controller 118, IU 114 and CU 112. Particularly, ISDB JTAGSync circuit 136 interfaces IU 114, ISDB logic circuit 144, and CU ISDB controller 146 of CU 112. CU ISDB controller 146 communicates with CU ISDB logic circuit 148, as well as ISDB controller 138. Control outputs from ISDB controller 138 include ISDB data output 154, ISDB reset signal 150, and ISDB interrupt 152. 
Further interfaces to ISDB controller 138 include MCD interface 156 and ETM break trigger 158.

ISDB 82 provides hookups for multi-core debug at the MSM level through MCD interface 156. The MCD interface 156 consists of a pair of input signals which trigger break or resume of core processor 70 and a pair of output signals which indicate that core processor 70 is entering a debugging process or resuming program execution. The MCD break triggers may follow an edge-based protocol such that when a rising edge is detected on an external breakpoint trigger, the threads indicated in the external breakpoint thread number mask suspend execution and enter debug mode. Similarly, when a rising edge is detected on the MCD external resume trigger, the threads indicated in the external resume thread number mask, if in debug mode, resume normal program execution.

ISDB 82 control logic is spread across two blocks: ISDB controller 138 in ISDB 82 and CU ISDB controller 146 in CU 112. ISDB controller 138 handles the tasks of implementing the ISDB enable, ISDB version, and ISDB general purpose registers. MCD external break and resume triggers 156 and ETM break trigger 158 are synchronized to the core processor 70 clock before they are forwarded to CU 112 for further processing. ISDB controller 138 also generates the MCD break trigger and the MCD resume trigger based on the debug mode status of core processor 70. ISDB controller 138 adds a pipeline stage for signals sent out to DSP 40, such as an ISDB interrupt, break event, and other signals. 
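The edge-based MCD protocol described above can be sketched as a small per-cycle step function. The function and signal names are hypothetical; the sketch only captures the stated rule that a rising break edge suspends the masked threads and a rising resume edge releases them.

```c
#include <stdint.h>

/* One evaluation step of the edge-based MCD protocol sketch.
 * debug_state holds one bit per thread (T0:T5); a set bit means the
 * thread is in debug mode.  A rising edge (prev = 0, now = 1) on the
 * break trigger adds the threads in break_mask; a rising edge on the
 * resume trigger clears the threads in resume_mask. */
uint8_t mcd_step(uint8_t debug_state,
                 int brk_prev, int brk_now, uint8_t break_mask,
                 int res_prev, int res_now, uint8_t resume_mask)
{
    if (!brk_prev && brk_now)           /* rising edge: break  */
        debug_state |= break_mask;
    if (!res_prev && res_now)           /* rising edge: resume */
        debug_state &= (uint8_t)~resume_mask;
    return (uint8_t)(debug_state & 0x3Fu);
}
```

Because only edges matter, a trigger held high does not repeatedly break or resume threads, which matches the rising-edge wording of the protocol.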
The rest of the control logic, which includes breakpoint processing, the micro-command generator, and mailbox and status logic, is handled by CU ISDB controller 146.

CU 112 includes circuitry and instructions capable of handling tasks such as (a) processing breakpoints and generating break triggers to each thread; (b) generating micro-break and micro-resume commands; (c) maintaining ISDB 82 status and mailbox registers; and (d) implementing certain ISDB 82 registers. CU 112 includes a breakpoint processing logic (BPL) block, as appears in FIGURE 8, for processing all the breakpoints and generating a macro break request to a micro-command generator of CU ISDB controller 146. The micro-command generator processes the macro break request along with instruction stuff commands and instruction step and resume commands, and issues micro-break and resume commands to CU 112 for pipeline control.

CU ISDB controller 146 maintains the state of ISDB 82 based on the break and resume acknowledge signals received back. The mailbox functions of CU ISDB controller 146 maintain mailbox registers used for communication between the host debug software and the DSP 40 core processor. These mailbox functions also contain ISDB 82 status registers.

To demonstrate illustrative circuitry for performing the presently disclosed instruction stuffing operations in association with non-intrusive debugging operations, FIGURE 6 includes ISDB JTAGSync circuit 160. ISDB JTAGSync circuit 160 includes an ISDB test data register 162 which DSP 40 may use to read and write the ISDB control registers. ISDB JTAGSync circuit 160 provides the synchronization logic between the ISDB test data register 162 operating on DB tck and the ISDB control registers 164 operating in the DSP 40 clock domain. 
By reading and writing the ISDB control registers, DSP 40 performs various debugging process tasks as may be supported by the ISDB 82, including the presently disclosed instruction stuffing operations.

In the implementation of FIGURE 6, ISDB JTAGSync circuit 160 receives the JTAG isdb chain in signal 164 into ISDB test data register 162 to generate the JTAG isdb chain out signal 166. ISDB test data register 162 includes read/write (R/W) bits 167, Address bits [6:0] 168, and Data bits [31:0] 170. Values in R/W bits 167 go to AND gate 172, as do Sync circuit output 174 and the CU 112 trustedDebug input 176. The JTAG isdb chain update tkl signal 178 and ISDB CLK signal 180 control the operation of Sync circuit 174. Address information from Address bits 168 may be received by Address Decode circuit 176, which feeds ISDB registers 184. ISDB registers 184 transfer data with Data bits [31:0] in response to a write enable signal 186 from AND gate 172.

ISDB JTAGSync circuit 160 acts as the synchronization bridge between the TAP controller running on JTAG TCK in the DB JTAG block and ISDB registers 184 running on the DSP 40 core clock distributed in ISDB controller 138, CU 112 ISDBCtrl 146 and IU 114. The ISDB controller 138 and CU ISDB controller 146 contain the control logic of ISDB 82, which consists of a micro-command generator, breakpoint processing logic and various ISDB registers 184 (configuration, mailbox, command, etc.). These blocks execute different debugging process tasks initiated by host debugging software on the DSP 40 core. The ISDB interrupt signal is sent out to the DSP subsystem, where it is merged with other interrupt sources and sent back to the DSP core 70. Similarly, an ISDB 82 reset is merged with other reset sources (power-on reset, software reset, etc.) to trigger a reset to the core. ISDB 82 interfaces with external systems (e.g., an MSM system external to DSP 40) through an MCD signal interface. 
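The gating in FIGURE 6 — a control-register write committed only when the packet's R/W bit, the synchronizer output, and the trustedDebug input all agree at AND gate 172, with Address bits [6:0] decoded to select the register — might be modeled as follows. The register count, names, and function signature are assumptions for the sketch.

```c
#include <stdint.h>

enum { ISDB_NREGS = 64 };             /* up to 64 registers, 32 bits each */
static uint32_t isdb_regs[ISDB_NREGS];

/* Commit one synchronized packet: decode Address[6:0] and write
 * Data[31:0] only when the AND of the R/W bit, the clock-domain
 * synchronizer output, and trustedDebug raises the write enable
 * (mirroring AND gate 172 and write enable signal 186). */
void isdb_commit(int rw, int sync_done, int trusted,
                 unsigned addr, uint32_t data)
{
    int write_enable = rw && sync_done && trusted;  /* AND gate 172 */
    if (write_enable && addr < ISDB_NREGS)          /* address decode */
        isdb_regs[addr] = data;
}
```

Keeping the trustedDebug term in the enable means an untrusted host can shift packets through the chain without ever altering the core-clock-domain registers, which is the point of gating the write rather than the scan.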
Two pairs of break and resume triggers are provided to support simultaneous debugging of DSP 40 and other cores in the external system.

FIGURE 7 presents a processing mode diagram 190 for the various mode control aspects of DSP 40, including operations of ISDB 82 during debugging processes. In FIGURE 7, DSP 40 supports processing modes that are both global to all threads and local to individual threads. Each DSP 40 hardware thread individually supports two execution modes, USER mode 192 and SUPERVISOR mode 194, and three non-processing modes of WAIT mode 196, OFF mode 198, and DEBUG mode 200, all as may appear in FIGURE 7. The mode of a thread is independent of the other threads; for example, one thread may be in WAIT mode 196 while another is in USER mode 192, and so on.

The per-thread mode state diagram of FIGURE 7 is supported by various instructions or events. These include an "Except" or internal exception event, an "Int" or external interrupt event, an "RTE" or software return instruction from exception mode, an "SSR" or update-to-SSR-register instruction, a "Stop" or software stop instruction that may be entered from any mode, a "Start" or software start instruction that also may be entered from any mode, a "trap" or software trap instruction, a "Wait" or software wait instruction, a "Resume" or software resume instruction, a "DE" or debug event, and a "DR" or debug instruction. While the functions in different implementations of the claimed subject matter may vary slightly from those here presented, the meanings of "Start," "Wait," "Resume," "DE," and/or "DR" may be given their broadest interpretations consistent with the scope of the claimed subject matter.

Registers are available in DSP 40 in both USER mode 192 and SUPERVISOR mode 194. The user-mode registers are divided into a set of general registers and a set of control registers. General registers are used for all general purpose computation, including address generation and scalar and vector arithmetic.
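The per-thread portion of the FIGURE 7 state diagram might be modeled as a small state machine like the one below. Only transitions explicitly stated in the text are captured; the choice to wake an OFF thread into SUPERVISOR mode on Start, and the event names themselves, are assumptions for the sketch.

```python
# Illustrative per-thread mode machine; not the DSP's actual logic.
USER, SUPERVISOR, WAIT, OFF, DEBUG = "USER", "SUPERVISOR", "WAIT", "OFF", "DEBUG"

def next_mode(mode, saved, event):
    """Return (new_mode, saved_mode). `saved_mode` remembers the pre-DEBUG
    mode so a resume can restore the thread's prior settings."""
    if mode == DEBUG:
        # In DEBUG the thread answers only to the ISDB; Wait/Resume/Start/
        # Stop from other threads and NMIs are ignored.
        return (saved, None) if event == "isdb_resume" else (DEBUG, saved)
    if mode == OFF and event != "start":
        return OFF, None                 # powered down: accepts no commands
    if event == "debug_event":           # DE: breakpoint, ISDB break, ...
        return DEBUG, mode
    if event == "stop":
        return OFF, None
    if event == "wait":
        return WAIT, None
    if event == "start":
        return SUPERVISOR, None          # assumption: started threads begin here
    return mode, None
```

Stepping the machine shows the documented behavior: a debug event parks the thread in DEBUG mode regardless of prior mode, and only an ISDB resume releases it.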
Control registers support special-purpose functionality such as hardware loops, predicates, etc. General purpose registers are 32 bits wide and may be accessed as single registers or as aligned pairs of two registers. The general register file provides all operands for instructions, including addresses for load/store, data operands for numeric instructions, and vector operands for vector instructions.

DEBUG mode 200 provides a special state where the thread is waiting for commands from ISDB 82. Whenever an ISDB debug event occurs, such as by the execution of a software breakpoint instruction, a break command from ISDB 82, or the occurrence of a hardware breakpoint, indicated threads may enter DEBUG mode 200. While in DEBUG mode 200, the core is controlled by ISDB 82 via commands from JTAG interface 84. When ISDB 82 releases the thread due to execution of a resume command, the thread may resume operation according to its current mode settings. When a thread is in DEBUG mode 200, it is controlled by ISDB 82 and cannot be controlled by other threads. Such control may include the execution of various instructions as may be provided through the presently disclosed instruction stuffing operations. A Wait, Resume, Start, or Stop instruction from a running thread, targeting a thread in DEBUG mode 200, may be ignored. Similarly, a non-maskable interrupt (NMI) may be ignored by threads in DEBUG mode 200.

A HARDWARE RESET mode (not shown in FIGURE 7) and DEBUG mode 200 are global to all threads. Whenever the hardware reset pin is asserted, regardless of any thread's processing state, DSP 40 may enter HARDWARE RESET mode. In HARDWARE RESET mode, all registers are set to their reset values. No processing may occur until the hardware reset pin is de-asserted. When the reset pin is asserted, the processor may transition into reset mode and all registers may be reset to their HARDWARE RESET values. After the reset pin is de-asserted, thread T0 may be given a soft reset interrupt.
This may cause thread T0 to enter SUPERVISOR mode 194 and begin executing at the reset vector location. All other threads may remain off. At this point, the software is free to control mode transitions for each thread individually.

[0058] In FIGURE 8, it is seen that BPL circuit 210 of CU ISDB controller 146 includes break triggers from six different sources, including hardware breakpoints 0/1 (HWBKPT0 212 and HWBKPT1 214), software breakpoint (SWBKPT 216), JTAG interface 84 breakpoint (JTAGBKPT 218), ETM (embedded trace macro) breakpoint (ETMBKPT 220), and external breakpoint (EXTBKPT 222). Break triggers 212 through 222 and the debug mode status input go to break encoder 226 to cause DSP 40 to operate in DEBUG mode 200. Output from encoder 226 includes three (3) breakpoint information bits 228 and a breakpoint valid bit 230. Breakpoint information data 228 enters breakpoint information circuit 232 to cause a breakpoint information JTAG interface command 234. Breakpoint valid bit 230 also generates OR gate input 236 and a reset circuit 238 input. Reset circuit 238 receives either a UCG resume thread number or a reset input 242 to generate reset control output 244 into OR gate 246. Either valid bit input 236 or reset output 244 may cause OR gate 246 to generate BPL breakpoint output 248.

[0059] The break triggers in BPL circuit 210 are processed along with the corresponding thread number mask to generate a macro break trigger to each of the threads. The macro break trigger 248, bpl_breakTnum_ANY[0], is maintained until the corresponding thread is resumed. The number of pipeline stages that may be used in BPL circuit 210 is driven by the hardware breakpoints, which are precise breakpoints, i.e., the instruction that triggers the hardware breakpoint match must not be executed. The thread switches to debug mode after executing the program up to that instruction. The disclosed embodiment provides a macro break trigger one cycle after the break triggers arrive.
For that reason the breakValid signal is logically OR'ed with its latched version to generate the bpl_breakTnum_ANY[0] output 248.

[0060] Through the use of breakpoints, the six threads of DSP 40 may individually enter and exit DEBUG mode 200. A breakpoint trigger may come from five sources, which correspond to the five different types of breakpoints supported in ISDB 82. Upon hitting a breakpoint, a thread transitions from its current mode (e.g., WAIT/RUN) to DEBUG mode 200. In DEBUG mode 200, the thread waits for commands from ISDB 82. A thread in OFF mode 198 is powered down and may not accept any commands from ISDB 82. The latency of entering DEBUG mode 200 is implementation defined, such as in the present disclosure as relating to the event of a power collapse. For example, an implementation may choose to complete a given operation, for example finish an outstanding load request, before entering DEBUG mode 200. In one embodiment, a thread identifier register contains an 8-bit read/write field and is used for holding a software thread identifier. This field is used by the hardware debugging process to match breakpoints.

ISDB 82, therefore, has four operations: break, resume, stuff instruction, and single step. From the micro-architecture point of view, there are two basic operations: break and resume. The terms micro-break command and micro-resume command refer to the operations of break, stuff instruction, and single step. For example, the stuff instruction operation may be viewed as a micro-break command followed by a micro-resume command after the stuff instruction operations. Breakpoint operations may be triggered from five sources, as herein described. Each break source may break multiple threads as specified in its corresponding thread number mask value.

FIGURE 9 illustrates the ISDB command register contents for one embodiment of the disclosed subject matter.
These ISDB control registers may be used by the host system to configure ISDB 82 to perform different debugging process tasks and to communicate with the processor. These registers are accessible through the JTAG interface. The ISDB status register (ISDBST) indicates the current status of ISDB 82, including the stuff command status bits, for which a "0" value indicates that a stuff instruction was successful, whereas a "1" value indicates that the stuff instruction caused an exception. The host system may use ISDB configuration registers 0 and 1 (ISDBCFG0, ISDBCFG1) to enable or disable various features of ISDB 82. The breakpoint info register (BRKPTINFO) indicates, for the threads in debug mode, which trigger caused the breakpoint. Breakpoint PC registers 0 and 1 (BRKPTPC0, BRKPTPC1) control hardware breakpoints 0 and 1, respectively. The breakpoint configuration registers (BRKPTCFG0 and BRKPTCFG1) are used to configure breakpoints 0 and 1, respectively. The stuff instruction register (STFINST) holds a 32-bit stuff instruction. The ISDB mailbox registers (ISDBMBXIN and ISDBMBXOUT) are used to exchange data between the ISDB and core processor 70. The ISDB command register (ISDBCMD) is used by DSP 40 to issue various commands to ISDB 82. The ISDB enable register (ISDBEN) enables ISDB operations and allows checking the status of the "security" ISDB enable bit and the ISDB clock. The ISDB version register (ISDBVER) reads the version of the ISDB design present in the chip. The ISDB general purpose register (ISDBGPR) provides storage for general functions associated with ISDB 82.

The ISDB command register provides, in the disclosed embodiment, a 32-bit register whose value is output into DSP 40. The ISDB command register may be used to control external hardware, in an MSM-specific manner.
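For reference, the register set just described can be summarized as a simple table, with a helper for the ISDBST stuff-status convention ("0" success, "1" exception). The bit position of the status flag is an assumption for illustration; only the register names and roles come from the text.

```python
# Summary of the ISDB control registers listed above (names from the text).
ISDB_REGISTERS = {
    "ISDBST":     "status, incl. stuff command status (0 = success, 1 = exception)",
    "ISDBCFG0":   "enable/disable ISDB features",
    "ISDBCFG1":   "enable/disable ISDB features",
    "BRKPTINFO":  "which trigger caused the breakpoint, per thread in debug mode",
    "BRKPTPC0":   "hardware breakpoint 0 PC",
    "BRKPTPC1":   "hardware breakpoint 1 PC",
    "BRKPTCFG0":  "configure breakpoint 0",
    "BRKPTCFG1":  "configure breakpoint 1",
    "STFINST":    "32-bit stuff instruction",
    "ISDBMBXIN":  "mailbox: host -> core",
    "ISDBMBXOUT": "mailbox: core -> host",
    "ISDBCMD":    "commands issued to the ISDB",
    "ISDBEN":     "ISDB enable, security-enable bit and clock status",
    "ISDBVER":    "version of the ISDB design",
    "ISDBGPR":    "general purpose storage",
}

STUFF_STATUS_BIT = 0  # assumed bit position, for illustration only

def stuff_succeeded(isdbst):
    """Per the text: 0 means the stuffed instruction succeeded,
    1 means it raised an exception."""
    return (isdbst >> STUFF_STATUS_BIT) & 1 == 0
```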
The ISDB control registers are accessed by the debugging process host software via JTAG interface 84 and are distributed across three units: ISDB 82, IU 114 and CU 112. Instead of placing all the registers in ISDB 82, the registers are placed locally in the unit where the register values are primarily used.

[0064] The ISDB registers of FIGURE 9 are distributed among ISDB 82, IU 114 and CU 112 in the following way. ISDB 82 includes the ISDB enable register, the ISDB version register, and the ISDB general purpose register. CU 112, wherein are the ISDB control mailbox, breakpoint logic, and micro-command generator blocks, includes the ISDB configuration registers (ISDBCFG0 & ISDBCFG1), the command register (ISDBCMD), the breakpoint configuration registers (BRKPTCFG0 & BRKPTCFG1), the breakpoint information register (BRKPTINFO), the breakpoint status register (ISDBST), and the mailbox registers (ISDBMBXIN, ISDBMBXOUT). The IU 114 register block includes the breakpoint PC registers (BRKPTPC0, BRKPTPC1), the breakpoint configuration registers (BRKPTCFG0, BRKPTCFG1), and, as is relevant to the present disclosure, the stuff instruction register (STFINST).

Instruction stuffing, as here disclosed, provides a method and system for ISDB 82 to execute instructions on the core. Instructions are stuffed for various reasons. These may include reading and/or writing core registers and memory, as well as debugging process operations abstracted for the user and user-entered instructions. To stuff an instruction, the user first programs the STFINST register with the 32-bit instruction to be executed. The ISDB command register is then written, beginning with setting the command field to the STUFF code. Then, the process sets the thread number field to the thread to receive the instruction. Only one bit in the thread number field may be set. The selected thread must be in DEBUG mode 200 before the instruction may be stuffed.
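The programming steps above might look as follows in host-side pseudo-driver form. The STUFF code value and the field offsets inside the command register are invented for the sketch; only the STFINST-then-ISDBCMD ordering and the one-hot thread-number rule come from the text.

```python
# Hypothetical host-side helper for the stuff sequence described above.
STUFF_CODE = 0x5                  # assumed encoding of the STUFF command
USER_PRIV, SUPERVISOR_PRIV = 0, 1

def make_stuff_command(thread_mask, privilege):
    """Build an ISDB command register value for a stuff operation."""
    if bin(thread_mask).count("1") != 1:
        # the text requires exactly one thread bit; otherwise undefined
        raise ValueError("thread number field must be one-hot")
    return (privilege << 12) | (thread_mask << 4) | STUFF_CODE

def stuff_instruction(regs, instruction, thread_mask, privilege=SUPERVISOR_PRIV):
    """regs: dict standing in for the JTAG-visible ISDB registers."""
    regs["STFINST"] = instruction & 0xFFFFFFFF            # step 1: program STFINST
    regs["ISDBCMD"] = make_stuff_command(thread_mask, privilege)  # step 2: issue STUFF
    return regs
```

The one-hot check mirrors the rule that setting more than one thread bit leaves the results undefined, so a driver would reject it up front.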
If more than one bit in the thread number field is set, or the selected thread is not in debug mode, the results are undefined. Then, the instruction stuffing process includes setting the privilege level of the stuffed instruction (either for use in USER mode 192 or SUPERVISOR mode 194). After issuing the STUFF command, the instruction may be executed on the chosen thread with the chosen privilege level. During instruction stuffing, the program counter (PC) does not advance. Stuffed instructions which use the PC for branches, or instructions that cause an exception, may use the current PC value for the thread on which the stuffed instructions execute.

In the case that a stuffed instruction causes an exception, the ISDB status register, ISDBST, may indicate that an exception occurred. The thread may remain in debug mode. The architected registers for the specific thread may reflect the exception state. For example, if a LOAD instruction is stuffed that causes a TLB miss exception, then an exception register (ELR) may be set to the current PC, the PC may be changed to the exception vector, and a status register (SSR) may hold the correct cause code and status information. The debugging process software may query the ISDBST after stuffing an instruction that could cause an exception to see if an exception occurred. If it did, then the SSR register may be read, via stuffing a control register transfer instruction, to determine the exception cause.

Once an exception has been recognized, the debugging process has a number of choices as to how to handle the situation. For example, the debugging process may choose to program a software or hardware breakpoint at the exception return point and resume the thread in order to run the handler. Also, the debugging process could redirect a thread to an operating system "helper" function, as well as step through the handler using a single-step function. Furthermore, the debugging process may manually fix the problem (e.g., reload the TLB).
The exact strategy is left to the operating system and/or debugging process implementation.

[0068] Registers, cache, and memory may be accessed by stuffing the appropriate instruction sequences. The debugging process software may read/write thread registers by stuffing the appropriate control register transfer instruction to move data between a core register and the ISDB mailbox. This instruction may be stuffed using the supervisor privilege level to ensure no exception occurs. Cache contents (data and cache tag values) may be read and/or written by stuffing the appropriate cache maintenance and load instructions.

Memory may be read/written by stuffing the appropriate LOAD/STORE instruction. When the MMU is enabled, loads and stores always execute using a virtual address. The MMU provides information about how data may be stored in a cache memory, such as signaling cacheable, uncacheable, etc. If it is desired to access memory from a particular source, for example to read from a device in uncached memory, then the debugging process software ensures that the MMU is properly configured for this access. For certain debug scenarios, the debugging process software may engage the help of the operating system to configure a specific scenario.

[0070] Cache contents are affected as if the stuffed instruction came from normal program flow. For example, a cacheable load that misses in the data cache may cause a line replacement. In the case that one thread is in debug mode and others are running, the cache contents may change accordingly. In the case of a load that misses in the cache, or an uncached load, the stuff command may not be reported as complete in the ISDB status register until the load data returns and the operation completes normally.

To read instruction memory, a similar procedure as for reading data memory may take place.
To write instruction memory, for example to set software breakpoints, the debugging process software may first stuff a STORE instruction to write the instruction memory. Then, the process includes stuffing a data cache clean address instruction to force the data into external memory, stuffing a barrier instruction to ensure that the change is observable in external memory, and stuffing an instruction cache invalidate address instruction to remove the old entry from the instruction cache.

[0072] Instruction stuffing, as herein disclosed, may also be of use in association with resetting DSP 40. Note that executing an ISDB RESET command forces a hardware reset and causes the entire DSP 40, i.e., all threads, to reset. This may set all registers to initial values, power off threads T0:T5, and send a reset interrupt to thread T0. If, on the other hand, it is desired to reset just certain threads, this can be done using instruction stuffing. The steps include stuffing a "START" instruction with the appropriate mask settings. This may cause a reset interrupt to be pending to the indicated threads. Then, the sequence includes executing an ISDB RESUME instruction on the desired threads. Performing such a sequence, therefore, makes possible an advantageous process of thread-selective resetting, without resetting all of DSP 40.

FIGURE 10 presents a processing timing cycle chart depicting the disclosed process for instruction stuffing in the disclosed non-intrusive debugging process. The signal behavior during a stuff operation on a particular thread, as depicted by FIGURE 10, shows the sequence of events on a single thread of DSP 40. Similar behavior may be seen by each thread in its corresponding pipeline stages. The stuffed instruction is provided by writing to the STFINST register of the ISDB command registers. To execute the stuffed instruction, debug software writes to the ISDB command register with the stuff command.
The command also provides the specific thread on which the stuffed instruction is to execute. ISDB controller 138 issues a micro-resume command in the EX3 stage of thread pipeline processing for the thread on which the stuff instruction is to execute. At this point, the CU ISDB micro-resume type EX3 register is set to "0x2." This indicates that the issued micro-resume command is to perform a stuff operation. CU 112 asserts a CU debugging exception instruction at the WB stage of the following cycle. Upon receiving the CU debugging exception instruction, IU 114 clears off the old instruction buffer state and prepares to fetch from a new location, similar to a regular exception.

CU 112 sends a stuff instruction request to IU 114 in the following RF stage and asserts a CU next issue pointer instruction in the WB stage. Upon receiving the CU next issue pointer instruction, IU 114 provides the stuffed instruction to CU 112 in a similar way as a UC instruction. It may be multiplexed with BU return data inside IU 114 once, instead of multiplexing on a per-thread basis. This feature saves multiplexing cost as well as routing congestion over the instruction cache. The micro-resume command is associated with a side-band signal to indicate the privilege level of the stuffed instruction. This permits executing in either USER mode 192 or SUPERVISOR mode 194.

While the stuffed instruction is being executed, CU 112 sends another instruction request to IU 114 to restore the instruction buffer with the regular program instruction. When the stuffed instruction is committed, CU 112 needs to return the micro-resume status in the WB processing stage, whether the resume status is success or not, along with an acknowledgement. ISDB controller 138 then issues a micro-break command in the following RF stage to prevent CU 112 from executing the next instruction. If the resume status is not success, CU 112 may instruct IU 114 to handle the exception in the normal way.
Note, however, that the only reason for a non-success status is that the stuffed instruction caused an exception. The current program counter may be pushed to the ELR and then updated to the exception handler entry point. The thread may be stopped due to the micro-break command. After receiving the micro-break command acknowledge, the stuff instruction may be complete. Accordingly, the micro-break command status is always success in this case.

In summary, the disclosed subject matter provides a method and system for stuffing instructions into a processing pipeline of a multi-threaded digital signal processor for improved software instruction debugging operations. The method and system provide for writing a stuff instruction into the debugging process registry. The disclosure includes writing a stuff command in a debugging process command register for executing the stuffed instruction. A predetermined thread of the multi-threaded digital signal processor on which the stuffed instruction is to be executed is identified by the stuff command. The process and system issue a CU 112 debugging process control resume command during a predetermined stage, i.e., the EX3 stage, of executing the thread on the multi-threaded digital signal processor and set the CU 112 debugging process resume type to the predetermined stage of executing the thread for indicating that the issued resume command is to perform a stuff operation. The present disclosure also asserts a CU 112 exception command in the WB stage of the following cycle and clears off the old instruction buffer state upon assertion of the CU 112 exception command. Then, the method and system prepare to fetch from a new location, similar to a regular exception, while maintaining the ELR notwithstanding a debugging process exception.

Also, the present embodiment sends a stuff request from CU 112 to IU 114 in a subsequent processing stage and asserts a CU 112 next issue pointer in the following cycle.
The stuffed instruction is provided to CU 112 upon receiving the CU 112 next issue pointer, whereupon IU 114 provides the stuffed instruction to CU 112 in a similar way as a UC instruction. The stuffed instruction is then multiplexed with BU return data inside IU 114 only once, instead of on a per-thread basis. The micro-resume command is associated with a side-band signal to indicate the privilege level of the stuffed instruction (execute in user/supervisor mode). While the stuffed instruction is being executed, CU 112 sends another instruction request to IU 114 to restore the instruction buffer with the regular program instruction. Then, when the stuffed instruction is committed, CU 112 needs to return the micro-resume status in WB, whether the resume status is success or not, along with an acknowledgement. The CU ISDB controller then issues a micro-break command in the following RF stage to prevent CU 112 from executing the next instruction. If the resume status is not success (i.e., when the stuffed instruction causes an exception), CU 112 may control IU 114 to handle the exception in the normal way. Then, the current PC may be stored in the ELR register of DSP 40 and the PC may be updated to the exception handler entry point. The thread may then be stopped due to the micro-break command. After receiving the micro-break command acknowledge, the stuff instruction is complete.

[0078] The processing features and functions described herein for instruction stuffing operations in association with non-intrusive, thread-selective debugging in a multi-threaded digital signal processor may be implemented in various manners. For example, not only may DSP 40 perform the above-described operations, but the present embodiments may also be implemented in an application specific integrated circuit (ASIC), a microcontroller, a digital signal processor, or other electronic circuits designed to perform the functions described herein.
Moreover, the process and features here described may be stored in magnetic, optical, or other recording media for reading and execution by such various signal and instruction processing systems. The foregoing description of the preferred embodiments, therefore, is provided to enable any person skilled in the art to make or use the claimed subject matter. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without the use of the inventive faculty. Thus, the claimed subject matter is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Embodiments include computing devices, apparatus, and methods implemented by the apparatus for memory reduction for fixed point matrix multiply on a computing device. The computing device may implement a partial matrix multiplication using a first block of fixed point data of a first matrix and a second block of fixed point data of a second matrix using full precision, resulting in a first intermediate result. The computing device may down convert the first intermediate result by converting fixed point data of the first intermediate result to fixed point data using lower precision, resulting in a first down converted intermediate result.
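A minimal pure-Python sketch of the scheme the abstract describes: each pass multiplies one pair of blocks at full precision, down converts the intermediate (with the carry-style rounding of the dependent claims), and folds it into the output with saturating addition. The bit widths, the rounding via the leftmost discarded bit, and the helper names are illustrative choices, not taken from the disclosure.

```python
def saturate(value, bits):
    """Clamp a signed value to a `bits`-wide two's-complement range,
    limiting the output portion to the output precision."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, value))

def down_convert(value, drop_bits):
    """Keep the retained (high) portion of `value`; add the leftmost
    discarded bit as a carry into the rightmost retained bit."""
    if drop_bits == 0:
        return value
    kept = value >> drop_bits
    round_bit = (value >> (drop_bits - 1)) & 1   # leftmost discarded bit
    return kept + round_bit

def block_matmul(a, b, block, drop_bits=8, out_bits=16):
    """Multiply integer matrices block by block: full precision within each
    partial product, down convert each intermediate result, then combine
    intermediates with saturating addition."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0] * m for _ in range(n)]
    for k0 in range(0, k, block):                # one pair of blocks per pass
        for i in range(n):
            for j in range(m):
                full = sum(a[i][p] * b[p][j]     # full-precision partial product
                           for p in range(k0, min(k0 + block, k)))
                inter = down_convert(full, drop_bits)
                out[i][j] = saturate(out[i][j] + inter, out_bits)
    return out
```

With `drop_bits=0` the sketch reduces to an exact integer matrix multiply, which makes it easy to check; with nonzero `drop_bits`, only the down-converted intermediates are ever stored, which is the memory reduction the embodiments target.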
1. A method for memory reduction for fixed point matrix multiplication on a computing device, comprising: implementing a partial matrix multiplication using a first block of fixed point data of a first matrix and a second block of fixed point data of a second matrix using full precision, producing a first intermediate result; and down converting the first intermediate result by converting the fixed point data of the first intermediate result to fixed point data using a lower precision, producing a first down converted intermediate result.

2. The method of claim 1, further comprising: storing the first down converted intermediate result; implementing a partial matrix multiplication using a third block of fixed point data of the first matrix and a fourth block of fixed point data of the second matrix using full precision, producing a second intermediate result, wherein the first block and the third block represent at least one complete row of the first matrix, and the second block and the fourth block represent at least one complete column of the second matrix; down converting the second intermediate result by converting the fixed point data of the second intermediate result to fixed point data using a lower precision, producing a second down converted intermediate result; and adding the first down converted intermediate result and the second down converted intermediate result using saturating addition, wherein the saturating addition limits a size of a resulting output portion of a resulting matrix to an output precision.

3. The method of claim 2, further comprising: receiving the first block of fixed point data and the second block of fixed point data during a first time period; and receiving the third block of fixed point data and the fourth block of fixed point data during a second time period.

4. The method of claim 1, wherein down converting the first intermediate result comprises: determining a maximum representable size of the first down converted intermediate result; keeping a retained portion of the first intermediate result equal in size to or smaller than the maximum representable size of the first down converted intermediate result; and removing a discarded portion of the first intermediate result, the discarded portion including a portion of the first intermediate result that cannot be accommodated within the maximum representable size of the first down converted intermediate result.

5. The method of claim 4, wherein determining the maximum representable size of the first down converted intermediate result comprises determining an amount of memory available for storing the first down converted intermediate result.

6. The method of claim 4, further comprising: performing a binary addition of the first intermediate result and a leftmost bit of the discarded portion of the first intermediate result; determining whether the binary addition of the first intermediate result and the leftmost bit of the discarded portion results in a carry; and in response to determining that the binary addition results in a carry, adding the carry to a rightmost bit of the retained portion of the first intermediate result.

7. The method of claim 1, wherein down converting the first intermediate result comprises down converting the first intermediate result using the lower precision equal to an output precision.

8. A matrix multiplication component configured for fixed point matrix multiplication, the matrix multiplication component being configured to perform operations comprising: implementing a partial matrix multiplication using a first block of fixed point data of a first matrix and a second block of fixed point data of a second matrix using full precision, producing a first intermediate result; and down converting the first intermediate result by converting the fixed point data of the first intermediate result to fixed point data using a lower precision, producing a first down converted intermediate result.

9. The matrix multiplication component of claim 8, wherein the matrix multiplication component is configured to perform operations further comprising: storing the first down converted intermediate result; implementing a partial matrix multiplication using a third block of fixed point data of the first matrix and a fourth block of fixed point data of the second matrix using full precision, producing a second intermediate result, wherein the first block and the third block represent at least one complete row of the first matrix, and the second block and the fourth block represent at least one complete column of the second matrix; down converting the second intermediate result by converting the fixed point data of the second intermediate result to fixed point data using a lower precision, producing a second down converted intermediate result; and adding the first down converted intermediate result and the second down converted intermediate result using saturating addition, wherein the saturating addition limits a size of a resulting output portion of a resulting matrix to an output precision.

10. The matrix multiplication component of claim 9, wherein the matrix multiplication component is configured to perform operations further comprising: receiving the first block of fixed point data and the second block of fixed point data during a first time period; and receiving the third block of fixed point data and the fourth block of fixed point data during a second time period.

11. The matrix multiplication component of claim 8, wherein the matrix multiplication component is configured to perform operations such that down converting the first intermediate result comprises: determining a maximum representable size of the first down converted intermediate result; keeping a retained portion of the first intermediate result equal in size to or smaller than the maximum representable size of the first down converted intermediate result; and removing a discarded portion of the first intermediate result, the discarded portion including a portion of the first intermediate result that cannot be accommodated within the maximum representable size of the first down converted intermediate result.

12. The matrix multiplication component of claim 11, wherein the matrix multiplication component is configured to perform operations such that determining the maximum representable size of the first down converted intermediate result comprises determining an amount of memory available for storing the first down converted intermediate result.

13. The matrix multiplication component of claim 11, wherein the matrix multiplication component is configured to perform operations further comprising: performing a binary addition of the first intermediate result and a leftmost bit of the discarded portion of the first intermediate result; determining whether the binary addition of the first intermediate result and the leftmost bit of the discarded portion results in a carry; and in response to determining that the binary addition results in a carry, adding the carry to a rightmost bit of the retained portion of the first intermediate result.

14. The matrix multiplication component of claim 8, wherein the matrix multiplication component is configured to perform operations such that down converting the first intermediate result comprises down converting the first intermediate result using the lower precision equal to an output precision.

15. The matrix multiplication component of claim 8, wherein the matrix multiplication component comprises a processor configured with processor-executable instructions to perform operations comprising: implementing the partial matrix multiplication using full precision, using the first block of the fixed point data of the first matrix and the second block of the fixed point data of the second matrix, producing the first intermediate result; and down converting the first intermediate result by converting the fixed point data of the first intermediate result to fixed point data using the lower precision, producing the first down converted intermediate result.

16. The matrix multiplication component of claim 8, wherein the matrix multiplication component comprises: a full precision matrix multiplier configured to implement the partial matrix multiplication using the first block of the fixed point data of the first matrix and the second block of the fixed point data of the second matrix using full precision, producing the first intermediate result; and a down converter configured to down convert the first intermediate result by converting the fixed point data of the first intermediate result to fixed point data of the lower precision, producing the first down converted intermediate result.

17. A matrix multiplication component configured for fixed point matrix multiplication, comprising: means for implementing a partial matrix multiplication using a first block of fixed point data of a first matrix and a second block of fixed point data of a second matrix using full precision, producing a first intermediate result; and means for down converting the first intermediate result by converting the fixed point data of the first intermediate result to fixed point data using a lower precision, producing a first down converted intermediate result.

18. The matrix multiplication component of claim 17, further comprising: means for storing the first down converted intermediate result; means for implementing a partial matrix multiplication using a third block of fixed point data of the first matrix and a fourth block of fixed point data of the second matrix using full precision, producing a second intermediate result, wherein the first block and the third block represent at least one complete row of the first matrix, and the second block and the fourth block represent at least one complete column of the second matrix; means for down converting the second intermediate result by converting the fixed point data of the second intermediate result to fixed point data using a lower precision, producing a second down converted intermediate result; and means for adding the first down converted intermediate result and the second down converted intermediate result using saturating addition, wherein the saturating addition limits a size of a resulting output portion of a resulting matrix to an output precision.

19. The matrix multiplication component of claim 18, further comprising: means for receiving the first block of fixed point data and the second block of fixed point data during a first time period; and means for receiving the third block of fixed point data and the fourth block of fixed point data during a second time period.

20. The matrix multiplication component of claim 17, wherein the means for down converting the first intermediate result comprises: means for determining a maximum representable size of the first down converted intermediate result; means for keeping a retained portion of the first intermediate result equal in size to or smaller than the maximum representable size of the first down converted intermediate result; and means for removing a discarded portion of the first intermediate result, the discarded portion including a portion of the first intermediate result that cannot be accommodated within the maximum representable size of the first down converted intermediate result.

21. The matrix
multiplication component of claim 20, wherein the means for determining the maximum representable size of the intermediate result of the first downconversion comprises: determining for storing the first downconversion The unit of the amount of memory of the intermediate result.22.The matrix multiplication component of claim 20, further comprising:Means for performing a binary addition of the leftmost bit of the discarded portion of the first intermediate result;a unit for determining whether a result of binary addition of the leftmost bit of the discarded portion of the first intermediate result and a result of causing a carry;The binary addition for determining the leftmost bit of the discarded portion of the first intermediate result in response to determining 1 results in a carry, the carry being added to the rightmost bit of the reserved portion of the first intermediate result Unit.23.The matrix multiplication component of claim 17, wherein the means for downconverting the first intermediate result comprises: performing the first intermediate result using the lower precision equal to output precision Downconverted unit.24.A non-transitory processor readable storage medium having processor-executable instructions stored thereon, the processor-executable instructions being configured to cause a processor of a computing device to perform operations comprising:Partial matrix multiplication is achieved using full precision, using a first block of fixed point data of the first matrix and a second block of fixed point data of the second matrix to produce a first intermediate result;The first intermediate result is down-converted by converting the fixed-point data of the first intermediate result into fixed-point data with a lower precision to generate an intermediate result of the first down-conversion.25.The non-transitory processor readable storage medium of claim 24, wherein the stored processor-executable instructions are configured to cause the processor to 
perform operations further comprising:Storing an intermediate result of the first down conversion;Partial matrix multiplication is achieved with full precision, using a third block of fixed point data of the first matrix and a fourth block of fixed point data of the second matrix, producing a second intermediate result, wherein the first block and The third block represents at least one complete row of the first matrix, and the second block and the fourth block represent at least one complete column of the second matrix;Downconverting the second intermediate result by converting the fixed point data of the second intermediate result to fixed point data using lower precision to generate an intermediate result of the second down conversion;The summation intermediate result of the first down-conversion and the intermediate result of the second down-conversion are added using saturation addition, which limits the size of the resulting output portion of the resulting matrix to the output precision.26.The non-transitory processor readable storage medium of claim 25, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising:Receiving, in a first time period, a first block of the fixed point data and a second block of the fixed point data;During the second time period, the third block of the fixed point data and the fourth block of the fixed point data are received.27.The non-transitory processor readable storage medium of claim 24, wherein the stored processor-executable instructions are configured to cause the processor to perform an operation such that the first intermediate result is down-converted include:Determining a maximum representable size of the intermediate result of the first down conversion;Retaining a reserved portion of the first intermediate result of the maximum representable size that is equal in size or less than the intermediate result of the first down-conversion; andRemoving 
the discarded portion of the first intermediate result, the discarded portion of the first intermediate result including the maximum representable size of the intermediate result that cannot be accommodated in the first down-conversion in the first intermediate result part.28.The non-transitory processor readable storage medium of claim 27, wherein the stored processor-executable instructions are configured to cause the processor to perform an operation such that an intermediate result of the first down-conversion is determined The maximum representable size includes determining an amount of memory that can be used to store an intermediate result of the first down conversion.29.The non-transitory processor readable storage medium of claim 27, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising:Performing a binary addition of the leftmost bit of the discarded portion of the first intermediate result;Determining whether a result of the binary addition of the leftmost bit of the discarded portion of the first intermediate result and the first intermediate result results in a carry;In response to determining that the binary addition of the leftmost bit of the discarded portion of the first intermediate result results in a carry, the carry is added to the rightmost bit of the reserved portion of the first intermediate result.30.The non-transitory processor readable storage medium of claim 24, wherein the stored processor-executable instructions are configured to cause the processor to perform an operation such that the first intermediate result is down-converted The method includes down-converting the first intermediate result using the lower precision equal to the output precision. |
Memory reduction method for fixed-point matrix multiplication

BACKGROUND

Deep neural networks are used extensively on mobile devices to perform a variety of tasks, including scene detection, face recognition, image classification, and annotation. To accomplish these tasks, deep neural networks rely heavily on convolution, and convolution operations are usually implemented using matrix multiplication. Deep neural network models are trained using floating-point arithmetic, but on mobile devices such models, for example predictive models, are now also executed using fixed-point calculations. However, many fixed-point implementations of deep neural network models require additional storage, which reduces execution speed on mobile devices.

SUMMARY

Various embodiments include circuits and methods for reducing the storage used by fixed-point matrix multiplication on a computing device. Various embodiments may be implemented using circuitry and/or a processor executing processor-executable instructions that perform operations comprising: performing, with full precision, a partial matrix multiplication using a first block of fixed-point data of a first matrix and a second block of fixed-point data of a second matrix to produce a first intermediate result.
Subsequently, the first intermediate result may be down-converted by converting its fixed-point data into fixed-point data of a lower precision, producing a first down-converted intermediate result.

Some embodiments may include: storing the first down-converted intermediate result; and performing, with full precision, a partial matrix multiplication using a third block of fixed-point data of the first matrix and a fourth block of fixed-point data of the second matrix to produce a second intermediate result, wherein the first and third blocks represent at least one complete row of the first matrix, and the second and fourth blocks represent at least one complete column of the second matrix. The second intermediate result may be down-converted by converting its fixed-point data into fixed-point data of a lower precision, producing a second down-converted intermediate result.
The first down-converted intermediate result and the second down-converted intermediate result may then be added using saturating addition, which limits the size of the resulting output portion of the resultant matrix to the output precision.

Some embodiments may include receiving the first block of fixed-point data and the second block of fixed-point data during a first time period, and receiving the third block of fixed-point data and the fourth block of fixed-point data during a second time period.

In some embodiments, down-converting the first intermediate result may include: determining a maximum representable size of the first down-converted intermediate result; retaining a retained portion of the first intermediate result that is equal to or smaller in size than that maximum representable size; and removing a discarded portion of the first intermediate result, the discarded portion comprising the part of the first intermediate result that cannot be accommodated within the maximum representable size of the first down-converted intermediate result.

In some embodiments, determining the maximum representable size of the first down-converted intermediate result may include determining an amount of memory available to store the first down-converted intermediate result.

Some embodiments may include: performing a binary addition of a 1 and the leftmost bit of the discarded portion of the first intermediate result; determining whether that binary addition results in a carry; and, in response to determining that the binary addition results in a carry, adding the carry to the rightmost bit of the retained portion of the first intermediate result.

In some embodiments, down-converting the first intermediate result may include
down-converting the first intermediate result using the lower precision equal to the output precision.

Some embodiments include a processor configured with processor-executable instructions to perform the operations of one or more of the embodiment methods outlined above. Some embodiments include circuitry configured to perform the operations of one or more of the embodiment methods described above. Some embodiments include a computing device having means for performing the functions of one or more of the embodiment methods outlined above. Various embodiments may include a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor to perform the operations of one or more of the embodiment methods outlined above.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary embodiments and, together with the description, serve to explain their features.

FIG. 1 is a component block diagram illustrating a computing device suitable for implementing an embodiment.

FIG. 2 is a component block diagram illustrating an exemplary multi-core processor suitable for implementing an embodiment.

FIGS. 3A-3F are schematic diagrams illustrating exemplary matrix multiplications in accordance with an embodiment.

FIG. 4 is a process flow diagram illustrating a method for implementing memory reduction for fixed-point matrix multiplication, in accordance with an embodiment.

FIG. 5 is a process flow diagram illustrating a method for implementing down-conversion of intermediate results of a partial matrix multiplication, in accordance with an embodiment.

FIG. 6 is a component block diagram illustrating an exemplary matrix multiplication component in accordance with an embodiment.

FIG. 7 is a component block diagram illustrating an exemplary mobile computing device suitable for use with various embodiments.

FIG. 8 is a component block diagram illustrating an exemplary mobile computing device suitable for use with various embodiments.

FIG. 9 is a component block diagram illustrating an exemplary server suitable for use in
conjunction with various embodiments.

DETAILED DESCRIPTION

Various embodiments are described in detail with reference to the accompanying drawings. Wherever possible, the same reference numerals are used throughout the drawings to refer to the same or similar parts. References to particular examples and implementations are for illustrative purposes only and are not intended to limit the scope of the claims.

The terms "computing device" and "mobile computing device" are used interchangeably herein to refer to any or all of: cellular telephones, smartphones, personal or mobile multimedia players, personal data assistants (PDAs), laptop computers, tablet computers, convertible laptops/tablets (2-in-1 computers), smartbooks, ultrabooks, netbooks, palmtop computers, wireless electronic mail receivers, multimedia-Internet-enabled cellular telephones, mobile gaming consoles, wireless gaming controllers, and similar personal electronic devices that include a memory and a multi-core programmable processor. Further, the term "computing device" may also refer to stationary computing devices, including personal computers, desktop computers, all-in-one computers, workstations, supercomputers, mainframe computers, embedded computers, servers, home theater computers, and game consoles. While the various embodiments are particularly useful for mobile computing devices, such as smartphones, which have limited memory and battery resources, the embodiments are generally applicable to any electronic device that implements multiple memory devices under a limited power budget, in which reducing the power consumption of the processor can extend the battery operating time of the mobile computing device.

Embodiments include methods, and systems and devices implementing such methods, for reducing or eliminating the additional storage requirements of deep neural network models that use fixed-point calculations.
Embodiments include methods for blocking and rounding fixed-point calculations to emulate a higher-precision solution without incurring the storage cost of the higher-precision solution, thereby increasing execution speed with minimal impact on computational accuracy.

In a fixed-point neural network, floating-point values can be converted to fixed point by direct conversion or by scaling. For direct conversion, the number of digits required for the integer and fractional parts is calculated, and the number of bits for each is selected based on the desired performance. For scaling, all values are scaled to positive integers within a certain range, and an offset (deviation) is used to adjust the interval into which the range falls.

When scaling is used, the input typically has lower precision than the offset, and the accumulation used to implement the matrix multiplication must be done with greater precision than the input or output of the matrix multiplication. For example, the input and output may each be 8 bits, while the intermediate steps of the calculation may be 32 bits, which requires down-converting the output to 8 bits. Combining the higher precision that fixed-point calculation requires relative to floating-point calculation with the cache-blocking techniques processors use to implement matrix multiplication requires storing some intermediate results in memory, because the cache must be freed before the intermediate operations can be completed. The amount of additional storage required depends on the cache block sizes of the M and N dimensions of the matrices being multiplied. This is illustrated in FIG. 3A and discussed in more detail below. More memory is therefore needed, which reduces the performance and speed of the computing device.

Various embodiments and implementations can reduce or eliminate the amount of storage required to hold these intermediate results.
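The scaling conversion described above can be sketched in a few lines. This is a minimal illustration only; the symmetric min/max mapping and the unsigned target range are assumptions of this sketch, not values mandated by the text.

```python
def float_to_fixed_scaled(values, bits=8):
    """Convert floats to unsigned fixed point by scaling: map all values
    onto positive integers in [0, 2**bits - 1] and keep an offset (the
    "deviation") recording where the original interval falls."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0            # avoid divide-by-zero for constant inputs
    scale = (2 ** bits - 1) / span     # quantization scale factor
    offset = lo                        # offset adjusting the interval
    fixed = [round((v - offset) * scale) for v in values]
    return fixed, scale, offset

def fixed_to_float(fixed, scale, offset):
    """Approximate inverse of the scaling conversion."""
    return [q / scale + offset for q in fixed]
```

For instance, `float_to_fixed_scaled([-1.0, 0.0, 1.0])` maps the interval [-1, 1] onto [0, 255], and `fixed_to_float` recovers the endpoints exactly while the interior values carry a small quantization error.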
These intermediate results may be generated by multiplying a block of matrix A (defined by the cache block sizes for dimensions M and K) by a block of matrix B (defined by the cache block sizes for dimensions K and N). The cache block sizes can be smaller than the dimensions M, K, and N. For example, dimension K can be a dimension of time, while dimensions M and N can be dimensions of data size.

Matrix multiplication of blocks of these matrices can be implemented by using an accumulation function to add the products of the elements of matrices A and B, producing intermediate results. The multiplication and accumulation can be implemented with full precision for blocks of matrices A and B sized to the cache block sizes for dimensions K and M or N. Whether preparing to produce an output portion of the matrix multiplication from the intermediate results or storing the intermediate results in memory, the intermediate results can be down-converted to a lower-precision format. For example, an intermediate result can be down-converted from a 32-bit fixed-point value to a 16-bit fixed-point value.
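The blocked multiply-accumulate with full-precision accumulation followed by down-conversion can be sketched as follows. The 32-to-16-bit widths and the truncating right shift are illustrative assumptions; the text leaves the exact precisions and the rounding scheme open.

```python
def partial_matmul_downconvert(a_block, b_block, shift=16):
    """Multiply one cache block of matrix A by one cache block of matrix B
    with a full-precision accumulator, then down-convert each intermediate
    result to a lower precision by an arithmetic right shift (truncation).

    a_block, b_block: lists of rows of fixed-point integers.
    """
    rows, inner, cols = len(a_block), len(b_block), len(b_block[0])
    out = [[0] * cols for _ in range(rows)]
    for m in range(rows):
        for n in range(cols):
            acc = 0  # full-precision accumulator (the intermediate result)
            for k in range(inner):
                acc += a_block[m][k] * b_block[k][n]
            # down-convert: keep only the high-order bits that fit the
            # lower-precision intermediate format
            out[m][n] = acc >> shift
    return out
```

With `shift=0` this reduces to an exact blocked matrix multiply; larger shifts trade precision of the stored intermediate result for storage.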
The down-converted result can be rounded or truncated to the nearest value representable in the lower-precision format.

The down-converted intermediate result of the matrix multiplication of the first set of blocks of matrices A and B may be stored in memory, and later combined with the down-converted intermediate result of the matrix multiplication of the second set of blocks of matrices A and B to complete the matrix multiplication.

To generate an output based on the intermediate results stored in memory, saturating addition can be used to add together the intermediate results of the matrix multiplications of the first and second sets of blocks, limiting the output value to a specified range.

The additional storage required for fixed-point matrix multiplication (which requires intermediate values of higher precision than the input or output) can be calculated as:

additional storage = M block size * N block size * intermediate precision size

Thus, by down-converting the intermediate results, the smaller intermediate precision size reduces the amount of additional storage required compared to full precision. Using an intermediate precision equal to the output precision results in no additional storage requirement at all.

FIG. 1 illustrates a system suitable for use with various embodiments, including a computing device 10 configured to communicate with a remote computing device. Computing device 10 may include a system on chip (SoC) 12 having a processor 14, a memory 16, a communication interface 18, and a storage memory interface 20. The computing device 10 may also include a communication component 22, such as a wired or wireless modem, a storage memory 24, and an antenna 26 for establishing a wireless communication link.
Processor 14 may include any of a wide variety of hardware cores, for example, multiple processor cores.

The term "system on chip" (SoC) is used herein to refer to a set of interconnected electronic circuits, typically, but not exclusively, including a hardware core, a memory, and a communication interface. A hardware core may include a variety of different types of processors, such as general-purpose processors, central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), accelerated processing units (APUs), auxiliary processors, single-core processors, and multi-core processors. A hardware core may further embody other hardware and hardware combinations, such as field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), other programmable logic devices, discrete gate logic, transistor logic, performance monitoring hardware, watchdog hardware, and time references. An integrated circuit may be configured such that the components of the integrated circuit reside on a single piece of semiconductor material, such as silicon. The SoC 12 may include one or more processors 14. The computing device 10 may include more than one SoC 12, thereby increasing the number of processors 14 and processor cores. The computing device 10 may also include processors 14 that are not associated with an SoC 12. Each processor 14 may be a multi-core processor as described below with reference to FIG. 2. The processors 14 may each be configured for specific purposes that may be the same as or different from those of other processors 14 of the computing device 10. One or more of the processors 14 and processor cores of the same or different configurations may be grouped together. A group of processors 14 or processor cores may be referred to as a multi-processor cluster.

The memory 16 of the SoC 12 may be a volatile or non-volatile memory configured for storing data and processor-executable code for access by the processor 14.
Computing device 10 and/or SoC 12 may include one or more memories 16 configured for various purposes. In one embodiment, one or more memories 16 may include volatile memories such as random access memory (RAM), main memory, or cache memory. These memories 16 may be configured to temporarily hold a limited amount of data received from data sensors or subsystems; data and/or processor-executable code instructions requested from non-volatile memory and loaded into memory 16 in anticipation of future access based on a variety of factors; and/or intermediate processing data and/or processor-executable code instructions produced by the processor 14 and temporarily stored for quick future access without being stored in non-volatile memory.

The memory 16 may be configured to at least temporarily store data and processor-executable code loaded into it from another memory device, such as another memory 16 or storage memory 24, for access by one or more of the processors 14. The data or processor-executable code loaded into memory 16 may be loaded in response to execution of a function by the processor 14. Loading the data or processor-executable code into memory 16 in response to execution of a function may result from an unsuccessful, or "missed", memory access request to the memory 16, because the requested data or processor-executable code was not located in the memory 16. In response to a miss, a memory access request may be made to another memory 16 or to storage memory 24 to load the requested data or processor-executable code from the other memory 16 or storage memory 24 into the memory device 16.
Alternatively, loading the data or processor-executable code into memory 16 in response to execution of a function may result from a memory access request made to another memory 16 or storage memory 24, and the data or processor-executable code may be loaded into the memory 16 for later access.

In one embodiment, memory 16 may be configured to at least temporarily store raw data loaded into memory 16 from a raw data source device, such as a sensor or subsystem. Raw data may stream from the raw data source device to the memory 16 and be stored by the memory until the raw data can be received and processed by a machine learning accelerator, as discussed further herein with reference to FIGS. 3-19.

The storage memory interface 20 and the storage memory 24 may work in unison to allow the computing device 10 to store data and processor-executable code on a non-volatile storage medium. The storage memory 24 may be configured much like an embodiment of the memory 16, in which the storage memory 24 may store the data or processor-executable code for access by one or more of the processors 14. The storage memory 24, being non-volatile, may retain the information even after the power of the computing device 10 has been shut off. When the power is turned back on and the computing device 10 reboots, the information stored on the storage memory 24 may again be available to the computing device 10. The storage memory interface 20 may control access to the storage memory 24 and allow the processor 14 to read data from and write data to the storage memory 24.

Some or all of the components of computing device 10 may be arranged differently and/or combined while still serving the necessary functions. Moreover, computing device 10 may not be limited to one of each of the components, and multiple instances of each component may be included in various configurations of computing device 10.

FIG. 2 illustrates a multi-core processor 14 suitable for implementing an embodiment.
Multi-core processor 14 may have a plurality of homogeneous or heterogeneous processor cores 200, 201, 202, 203. The processor cores 200, 201, 202, 203 may be homogeneous in that the processor cores 200, 201, 202, 203 of a single processor 14 may be configured for the same purpose and have the same or similar performance characteristics. For example, the processor 14 may be a general-purpose processor, and the processor cores 200, 201, 202, 203 may be homogeneous general-purpose processor cores. Alternatively, the processor 14 may be a graphics processing unit or a digital signal processor, and the processor cores 200, 201, 202, 203 may be homogeneous graphics processor cores or digital signal processor cores, respectively. For ease of reference, the terms "processor" and "processor core" may be used interchangeably herein.

The processor cores 200, 201, 202, 203 may be heterogeneous in that the processor cores 200, 201, 202, 203 of a single processor 14 may be configured for different purposes and/or have different performance characteristics. The heterogeneity of such heterogeneous processor cores may include different instruction set architectures, pipelines, operating frequencies, and the like. An example of such heterogeneous processor cores may include an architecture known as the "big.LITTLE" architecture, in which slower, low-power processor cores may be coupled with more powerful and power-hungry processor cores. In similar embodiments, the SoC 12 may include a number of homogeneous or heterogeneous processors 14.

In the example illustrated in FIG. 2, the multi-core processor 14 includes four processor cores 200, 201, 202, 203 (i.e., processor core 0, processor core 1, processor core 2, and processor core 3). For ease of explanation, the examples herein may refer to the four processor cores 200, 201, 202, 203 illustrated in FIG. 2.
However, the four processor cores 200, 201, 202, 203 illustrated in FIG. 2 and described herein are provided merely as an example and are in no way meant to limit the various embodiments to a four-core processor system. The computing device 10, the SoC 12, or the multi-core processor 14 may, individually or in combination, include fewer or more than the four processor cores 200, 201, 202, 203 illustrated and described herein.

FIGS. 3A-3F illustrate non-limiting examples of matrix multiplication in accordance with an embodiment. The example matrix multiplication involves the multiplication, or dot product, of matrix A 300 and matrix B 302 to produce a resultant matrix 304.

The matrices 300, 302 may have respective dimensions M and N, each related to a respective cache block size designated for the respective matrix 300, 302. The matrices 300, 302 may have a shared dimension K, which may be, for example, a dimension of time. For example, the dimension K may relate to the amount of time, or the clock cycles, used when processing the input data for matrix A 300. Thus, for matrix A 300, the computing device may acquire, generate, or receive input data and represent the input data for any given time K as a column of matrix A 300 (a column being equal in size to dimension M).

In performing the matrix multiplication, the computing device may generate or provide a set of weighting factors, represented as a row of matrix B 302, for the same time K as the corresponding column of matrix A 300. Thus, as time passes, the matrices 300, 302 may be built up and traversed along the dimension K. The resultant matrix 304 may have dimensions of the sizes of dimensions M and N.

In some implementations, the cache block sizes for the dimensions M, N, and K of the matrices 300, 302 may be smaller than the dimensions M, N, and K themselves. The cache block sizes may be determined based on the amount of cache designated for, or available to, each dimension for executing the matrix multiplication.
The cache block sizes may limit the amount of data of each of the matrices 300, 302 that can be stored in the cache during execution of the matrix multiplication. A cache block size for any of the dimensions M, N, and K may result in multi-step processing for performing the matrix multiplication.

For example, in FIG. 3B, portion 306a of matrix A 300 and portion 308a of matrix B 302 indicate the cache block sizes for the dimensions M, N, and K. For matrix A 300, the cache block size for dimension M may be three units, and the cache block size for dimension K may be two units. Similarly, for matrix B 302, the cache block size for dimension N may be five units, and the cache block size for dimension K may be the same as the cache block size for dimension K of matrix A 300 (i.e., two units). The units of the cache block sizes and the units of the dimensions M, N, K of the matrices 300, 302 may be measured in various units, including bits, bytes, words, and the like. For ease and brevity of illustration, the ratio of each dimension M, N, and K to its cache block size is shown as 2:1. However, the ratio of each dimension M, N, and K to its cache block size may be any ratio, and the ratios may be the same as or different from one another. Additionally, the data of the matrices 300, 302 may be formatted as floating point data.

FIG. 3C illustrates an implementation of partial matrix multiplication using the blocks 306a, 308a of the matrices 300, 302, respectively. The blocks 306a, 308a of the matrices 300, 302 may be stored in a cache, and multiplication and addition/accumulation operations may use the information of the blocks 306a, 308a to implement the partial matrix multiplication. The example in FIG. 3C depicts an operation of the partial matrix multiplication showing the multiplication of row 310 of block 306a with column 312 of block 308a.
Using a general matrix multiplication technique, each unit of row 310 may be multiplied by the corresponding unit of column 312, and the results of the multiplications may be added to produce an intermediate result 314 for the operation. The multiplications and additions/accumulations producing the intermediate result 314 may be implemented with full precision for the fixed point data of the matrices 300, 302 converted from the floating point data of the matrices 300, 302. The intermediate result 314 may therefore be larger than any unit of row 310 or column 312. Because the intermediate result 314 is the result of a partial matrix multiplication, it is not the data to be output. In the example of FIGS. 3A-3F, to become a complete output portion, the intermediate result still lacks data from the remaining rows of matrix A 300 corresponding to the rows of block 306a and from the remaining columns of matrix B 302 corresponding to the columns of block 308a. Therefore, the intermediate result 314 must be stored.

The intermediate result 314 may be down-converted to a smaller fixed point value to reduce the amount of storage space required. When converting from a higher precision fixed point to a lower precision fixed point, the extra memory required to compute the matrix multiplication can be calculated by the following equation: extra storage = M_block * N_block * intermediate precision size. Replacing the intermediate precision size with a smaller, down-converted intermediate precision size reduces the amount of extra storage required. In various implementations, the amount of the size reduction may be determined based on the amount of available cache memory or the size of dedicated registers, and/or a specified level of accuracy. The smaller the size to which the intermediate result 314 is down-converted, the higher the probability of data error.
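The extra-storage equation above can be illustrated with a small arithmetic example. The byte sizes below are assumed values chosen for the sketch, not values prescribed by the described embodiments.

```python
# extra storage = M_block * N_block * intermediate precision size
M_block, N_block = 3, 5                  # cache block sizes as in FIG. 3B
full_precision_bytes = 4                 # assumed: 32-bit full precision intermediates
down_converted_bytes = 1                 # assumed: 8-bit down-converted intermediates

extra_full = M_block * N_block * full_precision_bytes   # 60 bytes
extra_down = M_block * N_block * down_converted_bytes   # 15 bytes
print(extra_full, extra_down)
```

Under these assumed sizes, storing down-converted intermediates needs one quarter of the extra storage that full precision intermediates would, at the cost of a higher probability of data error.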
Thus, for performance and accuracy, the size to which the intermediate results 314 are down-converted may be balanced, or in various applications biased toward one or the other. The down-converted intermediate precision size may be made equal to the output precision size used for the matrix multiplication, which eliminates the need for additional memory to store values that would later have to be down-converted from the fixed point intermediate result precision to the lower precision fixed point output.

To down-convert the full precision fixed point intermediate result 314, a portion of the intermediate result 314 may be removed from the lower end of the data to reach a representable size based on the amount of space available for storing the intermediate result 314. The down-converted intermediate results 316a, 316b, 316c may include the reserved portions 318a, 318b, 318c of the intermediate result, which remain after the discarded portions 320a, 320b, 320c of the intermediate result are removed. The larger the available cache or register space, or the greater the specified accuracy, the larger the reserved portions 318a, 318b, 318c of the intermediate result and the smaller the discarded portions 320a, 320b, 320c of the intermediate result. Similarly, the smaller the available cache or register space, or the lower the specified accuracy, the smaller the reserved portions 318a, 318b, 318c of the intermediate result and the larger the discarded portions 320a, 320b, 320c of the intermediate result. The down-converted intermediate precision size may be the size of the reserved portions 318a, 318b, 318c of the intermediate result. The down-conversion may include removing the discarded portions 320a, 320b, 320c of the intermediate result, which yields the truncated reserved portions 318a, 318b, 318c of the intermediate result.
Further, the down-conversion may also include rounding, by adding a binary bit set to "1" to the leftmost bit of the discarded portion 320a, 320b, 320c of the intermediate result. Adding "1" to a "0" results in a value of "1", which may be discarded along with the discarded portion 320a, 320b, 320c of the intermediate result, resulting in the reserved portion 318a, 318b, 318c of the intermediate result being rounded down. Adding "1" to a "1" results in a value of "0" with a carry of "1". The "0" bit may be discarded along with the discarded portion 320a, 320b, 320c of the intermediate result, and the carry "1" is added to the rightmost bit of the reserved portion 318a, 318b, 318c of the intermediate result, resulting in the reserved portion 318a, 318b, 318c of the intermediate result being rounded up. Rounding can reduce the amount of error compared with merely truncating the down-converted intermediate results 316a, 316b, 316c.

The process of partial matrix multiplication described herein may be repeated for the next available blocks 306b, 308b of the matrices 300, 302, respectively, as illustrated in FIG. 3D. The down-converted intermediate result of the partial matrix multiplication of the next available blocks 306b, 308b may likewise be stored in an available cache or dedicated register.

As illustrated in FIG. 3E, as further blocks 306c, 308c of the matrices 300, 302 become available, partial matrix multiplication may be implemented for them as well. The down-converted intermediate result of the partial matrix multiplication of blocks 306c, 308c may also be stored in an available cache or dedicated register. The stored down-converted intermediate results of the partial matrix multiplications of blocks 306a, 306c, 308a, 308c may be combined using saturation addition.
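As an illustration only, the round-then-truncate down-conversion described above can be sketched in software as follows. The function name, the non-negative fixed point representation, and the bit widths are assumptions for this sketch; the described embodiments may implement the same rounding in circuitry.

```python
def down_convert(value, total_bits, kept_bits):
    """Down-convert a non-negative fixed point intermediate result by keeping the
    top kept_bits and rounding via a binary "1" added at the leftmost discarded bit."""
    drop = total_bits - kept_bits        # width of the discarded portion
    if drop <= 0:
        return value
    # Adding 1 << (drop - 1) places a "1" at the leftmost discarded bit:
    # if that bit was "1", the addition carries into the reserved portion
    # (round up); if it was "0", the reserved portion is unchanged (round down).
    return (value + (1 << (drop - 1))) >> drop

# 0b101101 keeping 4 bits: leftmost discarded bit is 0 -> reserved portion rounds down
assert down_convert(0b101101, 6, 4) == 0b1011
# 0b101110 keeping 4 bits: leftmost discarded bit is 1 -> carry rounds reserved portion up
assert down_convert(0b101110, 6, 4) == 0b1100
```

The shift at the end removes the discarded portion (together with any bit produced by the rounding addition), leaving only the reserved portion.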
The sum obtained by saturation addition of the down-converted intermediate results of the partial matrix multiplications of blocks 306a, 306c, 308a, 308c may produce the output portion 322a of the resulting matrix 304. The saturation addition may limit the size of the output portion 322a to maintain the output precision.

As illustrated in FIG. 3F, the remaining blocks 306d, 308d of the matrices 300, 302 may be operated on to implement the final partial matrix multiplications of the matrix multiplication of the matrices 300, 302. Down-converted intermediate results are generated from these final partial matrix multiplications, and the last output portions 322b, 322c, 322d of the resulting matrix 304 may be produced by performing saturation addition using the down-converted intermediate results.

In various implementations, the saturation addition may be implemented based on the availability of the down-converted intermediate results that can be combined to produce an output portion of the resulting matrix 304. In various implementations, the saturation addition may be implemented based on the availability of all of the down-converted intermediate results for the matrix multiplication. In various implementations, the output portions of the resulting matrix 304 may be produced in any order.

FIG. 4 illustrates an embodiment method 400 for storage reduction for fixed point matrix multiplication, in accordance with various embodiments. Method 400 may be implemented in a computing device in software executing in a processor (e.g., processor 14 in FIGS. 1 and 2), in dedicated hardware or circuitry, or in a combination of a processor and dedicated hardware (e.g., a processor executing software in a machine learning device that includes other individual components).
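The saturation addition used to combine down-converted intermediate results can be sketched as a clamp to the output precision. The function name and the signed two's-complement range are assumptions for this sketch.

```python
def saturating_add(a, b, out_bits):
    """Signed saturating addition: the sum is clamped to the representable
    range of out_bits, limiting the output portion size to maintain precision."""
    lo = -(1 << (out_bits - 1))
    hi = (1 << (out_bits - 1)) - 1
    return max(lo, min(hi, a + b))

assert saturating_add(100, 50, 8) == 127     # clamped to the 8-bit maximum
assert saturating_add(-100, -50, 8) == -128  # clamped to the 8-bit minimum
assert saturating_add(10, 5, 8) == 15        # in range, unchanged
```

Unlike wrapping addition, saturation keeps an overflowing sum at the nearest representable extreme, which bounds the error introduced when many down-converted intermediates are accumulated.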
To encompass the alternative configurations enabled in the various embodiments, the hardware implementing method 400 is referred to herein as a "computing device."

In block 402, the computing device may receive, acquire, or generate data for matrix A and matrix B. For example, the data of matrix A may include floating point input data received, acquired, or generated by the computing device at any given time K. For example, the data of matrix B may include a set of floating point weighting factors received, generated, or acquired within the same time K. The floating point input data of matrix A and the floating point weighting factors of matrix B may be converted to a fixed point format.

In block 404, the computing device may implement partial matrix multiplication for data blocks of matrix A and matrix B with full precision. The data blocks may include one or more rows and one or more columns of matrix A and matrix B, but fewer than all of the rows and all of the columns of matrix A and matrix B. The number of rows and columns of matrix A and matrix B may be limited by the amount of cache space allocated for implementing the matrix multiplication of matrix A and matrix B; specifically, by the limits on the amount of space for the rows of one of matrix A and matrix B, for the columns of the other of matrix A and matrix B, and for the rows or columns of matrix A and matrix B related to the time dimension K. The computing device may implement the partial matrix multiplication in block 404 by implementing matrix multiplication of the blocks of matrix A and matrix B with full precision, such that each element of the resulting intermediate matrix is a full precision fixed point intermediate result of the partial matrix multiplication.

In block 406, the computing device may down-convert the full precision fixed point intermediate results of the partial matrix multiplication to lower precision fixed point down-converted intermediate results.
Referring now to FIG. 5, an embodiment method 500 for down-converting a full precision fixed point intermediate result to a lower precision fixed point down-converted intermediate result is described below.

In block 408, the computing device may store the down-converted intermediate results. The computing device may use dedicated cache space or dedicated registers to store the down-converted intermediate results. As described above, the amount of space available for storing the down-converted intermediate results may affect the accuracy of the down-converted intermediate results. Further, the amount of space available for storing the down-converted intermediate results may be related to a specified performance and/or accuracy. The more space available for storing the down-converted intermediate results, the higher the accuracy of the matrix multiplication result, but the slower the execution of the matrix multiplication. Similarly, the less space available for storing the down-converted intermediate results, the lower the accuracy of the matrix multiplication result, but the faster the execution of the matrix multiplication.

The computing device may continue to implement partial matrix multiplication for data blocks of matrix A and matrix B with full precision in block 404, or to receive, acquire, and/or generate data for matrix A and matrix B in block 402. Concurrently, in determination block 410, the computing device may determine whether the down-converted intermediate results are combinable. To be combinable, the down-converted intermediate results may be results of partial matrix multiplications involving blocks that together represent at least a complete row from one of matrix A and matrix B and at least a complete column from the other of matrix A and matrix B.
In this way, the down-converted intermediate results may represent the complete set of down-converted intermediate results of the matrix multiplication for at least a complete row from one of matrix A and matrix B and at least a complete column from the other of matrix A and matrix B.

In response to determining that the down-converted intermediate results are not combinable (i.e., determination block 410 = "No"), the computing device may continue to implement partial matrix multiplication for data blocks of matrix A and matrix B with full precision in block 404, or to receive, acquire, and/or generate data for matrix A and matrix B in block 402.

In response to determining that the down-converted intermediate results are combinable (i.e., determination block 410 = "Yes"), in block 412 the computing device may add the combinable down-converted intermediate results using saturation addition. The saturation addition may be configured to limit the size of the resulting output portion of the resulting matrix in order to maintain the output precision. In block 414, the computing device may output the output portion of the resulting matrix.

FIG. 5 illustrates an embodiment method 500 for down-converting intermediate results of partial matrix multiplication, in accordance with various embodiments. Method 500 may be implemented in a computing device in software executing in a processor (e.g., processor 14 in FIGS. 1 and 2), in general purpose or dedicated hardware, or in a combination thereof (e.g., a processor executing software in a machine learning device that includes other individual components). To encompass the alternative configurations enabled in the various embodiments, the hardware implementing method 500 is referred to herein as a "computing device." Method 500 may be implemented as part of the operations of block 406 described with reference to FIG. 4.
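The flow of blocks 402-414 can be sketched end to end as follows, under simplifying assumptions: the K-block size is the only blocking applied, every stored intermediate is immediately combinable, and the bit widths are illustrative. All names are hypothetical; this is a sketch of the described flow, not the patented implementation.

```python
def down_convert(value, drop):
    """Block 406: round by adding a "1" at the leftmost discarded bit, then truncate."""
    if drop <= 0:
        return value
    return (value + (1 << (drop - 1))) >> drop

def saturating_add(a, b, bits=8):
    """Block 412: combine stored intermediates, clamping to the output precision."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, a + b))

def method_400(A, B, k_block, drop):
    M, K, N = len(A), len(A[0]), len(B[0])
    out = [[0] * N for _ in range(M)]
    for k0 in range(0, K, k_block):                 # block 402: next data blocks
        k1 = min(k0 + k_block, K)
        for i in range(M):
            for j in range(N):
                # block 404: partial matrix multiplication at full precision
                intermediate = sum(A[i][k] * B[k][j] for k in range(k0, k1))
                # blocks 406/408: down-convert and store
                stored = down_convert(intermediate, drop)
                # blocks 410/412: combine combinable results with saturation addition
                out[i][j] = saturating_add(out[i][j], stored)
    return out                                       # block 414: output portions
```

Note the trade-off the text describes: larger `drop` values reduce the storage needed per intermediate but increase rounding error, while the saturation in block 412 bounds overflow at the output precision.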
In block 502, the computing device may determine a maximum representable size for the lower precision down-converted intermediate result. The representable size may depend on the amount of cache or register space dedicated to storing the down-converted intermediate results. The amount of cache or register space dedicated to storing the down-converted intermediate results may impose a limit on the size of the data stored therein, requiring the size of the down-converted intermediate result to be reduced to fit the cache or register space. The down-converted intermediate result may be divided into a reserved portion of the down-converted intermediate result and a discarded portion of the down-converted intermediate result. The reserved portion of the down-converted intermediate result may comprise the maximum representable size for the lower precision down-converted intermediate result and represents the portion of the intermediate result that can fit in the cache or register space. The discarded portion of the down-converted intermediate result represents the portion of the intermediate result that cannot fit in the cache or register space.

In optional block 504, the computing device may round the intermediate result at the maximum representable size for the lower precision down-conversion by adding a binary "1" bit to the leftmost bit of the discarded portion of the down-converted intermediate result. Adding "1" to a "0" results in a value of "1", which may be discarded along with the discarded portion of the intermediate result, causing the reserved portion of the intermediate result to be rounded down. Adding "1" to a "1" results in a value of "0" with a carry of "1".
The "0" bit may be discarded along with the discarded portion of the intermediate result, and the carry "1" is added to the rightmost bit of the reserved portion of the intermediate result, causing the reserved portion of the intermediate result to be rounded up. Rounding reduces the amount of error compared with merely truncating the down-converted intermediate result.

In optional determination block 506, the computing device may determine whether adding the binary "1" bit to the leftmost bit of the discarded portion of the down-converted intermediate result (in optional block 504) resulted in a carry.

In response to determining that adding the binary "1" bit to the leftmost bit of the discarded portion of the down-converted intermediate result (in optional block 504) resulted in a carry (i.e., optional determination block 506 = "Yes"), in optional block 508 the computing device may add the carry to the rightmost bit of the reserved portion of the intermediate result.

After adding the carry to the rightmost bit of the reserved portion in optional block 508, or in response to determining that adding the binary "1" bit to the leftmost bit of the discarded portion of the down-converted intermediate result (in optional block 504) did not result in a carry (i.e., optional determination block 506 = "No"), in block 510 the computing device may remove the discarded portion of the down-converted intermediate result. The removal of the discarded portion of the down-converted intermediate result may be accomplished by shifting out the bits of the discarded portion of the down-converted intermediate result.

FIG. 6 illustrates an example matrix multiplication component 600 for implementing various embodiments in dedicated hardware (e.g., circuitry or hardware components). In various embodiments, matrix multiplication component 600 may implement method 400 described with reference to FIG. 4 and method 500 described with reference to FIG. 5.
The matrix multiplication component 600 may be a hardware component or circuit that includes an input buffer 602, a full precision matrix multiplier 604, a down-converter 606, an intermediate result identifier 608, a saturation adder 610, and an output buffer 612.

The input buffer 602 may be configured to receive partial data of the matrices to be multiplied. The partial data of a matrix may represent all or part of one or more rows or columns of data of the matrix, for example, the portions 306a-306d of matrix A 300 and the portions 308a-308d of matrix B 302 described herein with reference to FIGS. 3A-3F. In various implementations, the input buffer 602 may be partitioned into portions designated for the data of a particular matrix. In various implementations, multiple input buffers 602 may be implemented and designated for the data of particular matrices. The input buffer 602 may hold these portions of data until the full precision matrix multiplier 604 is ready to operate on the portions of data held by the input buffer 602.

The full precision matrix multiplier 604 may be configured to perform the multiplications and additions of these data portions to produce full precision intermediate results, such as the intermediate result 314 described with reference to FIG. 3C. The matrix multiplication implemented by the full precision matrix multiplier 604 may represent a portion of the larger matrix multiplication of all of the data of the multiplied matrices.

The down-converter 606 may be configured to down-convert the intermediate result 314 to a lower precision relative to the full precision of the intermediate result 314. The down-converter 606 may remove portions of the intermediate result 314 (e.g., the discarded portions 320a-320c of the intermediate results described with reference to FIG. 3C) while retaining other portions of the intermediate result 314 (e.g., the reserved portions 318a-318c of the intermediate results described with reference to FIG. 3C).
The down-conversion may produce a down-converted intermediate result with the nearest representable value (e.g., the truncated reserved portions 318a-318c of the intermediate results). In addition, the down-converter 606 may also round the reserved portion of the intermediate result to produce the down-converted intermediate result.

The down-converted intermediate results may be stored in a portion of cache memory or in a work cache 614 (which may be part of the memory 16 described with reference to FIG. 1). The intermediate result identifier 608 may identify down-converted intermediate results that can be combined to generate an output portion of the matrix multiplication, such as the output portions 322a-322d described with reference to FIGS. 3E and 3F. The intermediate result identifier 608 may retrieve these down-converted intermediate results from the work cache 614. In other words, the intermediate result identifier 608 may obtain the down-converted intermediate results of the matrix multiplications of the various portions of the matrices that together represent at least a complete column of one of the matrices and a complete row of the other matrix.

The saturation adder 610 may receive the combinable down-converted intermediate results and implement saturation addition to produce an output portion. The output buffer 612 may hold the results of the saturation addition until the output portion is complete, so that a portion of the result matrix of the matrix multiplication can be constructed from the output portions.

In various implementations, the different components of the matrix multiplication component 600 may store partially completed or completed execution results in the work cache 614 and retrieve the stored execution results to complete an ongoing task or to implement a new task.
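Purely as a software analogy, the dataflow of component 600 (input buffering, full precision multiplication, down-conversion, the work cache of intermediates, and saturating accumulation into an output) can be sketched as a class. The class name, the parameter choices, and the list-based work cache are assumptions for this sketch; the described component 600 is hardware or circuitry.

```python
class MatrixMultiplicationComponent:
    """Software sketch of the hardware dataflow of matrix multiplication component 600."""

    def __init__(self, drop_bits=2, out_bits=8):
        self.drop = drop_bits            # discarded-portion width for down-converter 606
        self.out_bits = out_bits         # output precision enforced by saturation adder 610
        self.work_cache = []             # analog of work cache 614

    def full_precision_multiply(self, row, col):
        # full precision matrix multiplier 604: multiply-accumulate without loss
        return sum(a * b for a, b in zip(row, col))

    def down_convert(self, value):
        # down-converter 606: round at the leftmost discarded bit, then truncate
        if self.drop <= 0:
            return value
        return (value + (1 << (self.drop - 1))) >> self.drop

    def accept(self, row_part, col_part):
        # input buffer 602 feeds 604, whose result passes through 606 into 614
        self.work_cache.append(
            self.down_convert(self.full_precision_multiply(row_part, col_part)))

    def output(self):
        # intermediate result identifier 608 gathers combinable intermediates;
        # saturation adder 610 accumulates them into output buffer 612
        lo = -(1 << (self.out_bits - 1))
        hi = (1 << (self.out_bits - 1)) - 1
        total = 0
        for r in self.work_cache:
            total = max(lo, min(hi, total + r))
        self.work_cache.clear()
        return total
```

Feeding the component row and column fragments as they arrive mirrors the blockwise construction of the matrices along dimension K, with the output produced only once the combinable intermediates have accumulated.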
In various implementations, the different components of the matrix multiplication component 600 may include dedicated buffers or registers for storing execution results, and may retrieve execution results from these dedicated buffers or registers to complete an ongoing task or to implement a new task.

In various implementations, multiple matrix multiplication components 600 may be implemented in a processor, a system on chip, or a computing device to perform matrix multiplications of portions of the matrices in parallel. A matrix multiplication component 600 may use down-converted intermediate results from different matrix multiplication components 600 to produce an output portion associated with the data portions for which those matrix multiplication components 600 produced their down-converted intermediate results. For example, a first matrix multiplication component 600 may generate a first down-converted intermediate result for a first portion of a first column of data of a first matrix and a first portion of a first row of data of a second matrix. A second matrix multiplication component 600 may generate a second down-converted intermediate result for a second portion of the first column of data of the first matrix and a second portion of the first row of data of the second matrix.
To complete the matrix multiplication of the first column of the first matrix and the first row of the second matrix, the first matrix multiplication component 600 may use the first down-converted intermediate result and the second down-converted intermediate result to produce an output portion.

The various embodiments (including, but not limited to, the embodiments discussed above with reference to FIGS. 1-6) may be implemented in a wide variety of computing systems, using processors and/or dedicated hardware, which may include a variety of mobile computing devices. An example mobile computing device suitable for use with the various embodiments is illustrated in FIG. 7. The mobile computing device 700 may include a processor 702 coupled to a touch screen controller 704 and an internal memory 706. The processor 702 may be one or more multi-core integrated circuits designated for general or specific processing tasks. The internal memory 706 may be volatile or non-volatile memory, and may also be secure and/or encrypted memory, or unsecure and/or unencrypted memory, or any combination thereof. Examples of memory types that may be utilized include, but are not limited to, DDR, LPDDR, GDDR, WIDEIO, RAM, SRAM, DRAM, P-RAM, R-RAM, M-RAM, STT-RAM, and embedded DRAM. The touch screen controller 704 and the processor 702 may also be coupled to a touch screen panel 712, such as a resistive-sensing touch screen, a capacitive-sensing touch screen, an infrared-sensing touch screen, and the like. Additionally, the display of the computing device 700 need not have touch screen capability.

The mobile computing device 700 may have one or more radio signal transceivers 708 (e.g., Peanut, Bluetooth, Zigbee, Wi-Fi, RF radio) and antennas 710 for sending and receiving communications, coupled to each other and/or to the processor 702.
The transceivers 708 and antennas 710 may be used with the above-mentioned circuitry to implement various wireless transmission protocol stacks and interfaces. The mobile computing device 700 may include a cellular network wireless modem chip 716 that enables communication via a cellular network and is coupled to the processor.

The mobile computing device 700 may include a peripheral device connection interface 718 coupled to the processor 702. The peripheral device connection interface 718 may be singularly configured to accept one type of connection, or may be configured to accept various types of physical and communication connections, common or proprietary, such as USB, FireWire, Thunderbolt, or PCIe. The peripheral device connection interface 718 may also be coupled to a similarly configured peripheral device connection port (not shown).

The mobile computing device 700 may also include speakers 714 for providing audio outputs. The mobile computing device 700 may also include a housing 720, constructed of plastic, metal, or a combination of materials, for containing all or some of the components discussed herein. The mobile computing device 700 may include a power source 722 coupled to the processor 702, such as a disposable or rechargeable battery. The rechargeable battery may also be coupled to the peripheral device connection port to receive a charging current from a source external to the mobile computing device 700. The mobile computing device 700 may also include a physical button 724 for receiving user inputs.
The mobile computing device 700 may also include a power button 726 for turning the mobile computing device 700 on and off.

The various embodiments (including, but not limited to, the embodiments discussed above with reference to FIGS. 1-6) may be implemented in a wide variety of computing systems, using processors and/or dedicated hardware, which may include a variety of mobile computing devices, such as the laptop computer 800 illustrated in FIG. 8. Many laptop computers include a touchpad touch surface 817 that serves as the computer's pointing device and may receive drag, scroll, and flick gestures (similar to those implemented on computing devices equipped with a touch screen display, as described above). The laptop computer 800 typically includes a processor 811 coupled to volatile memory 812 and a large capacity non-volatile memory, such as a disk drive 813 of flash memory. Additionally, the computer 800 may have one or more antennas 808 for sending and receiving electromagnetic radiation, which may be connected to a wireless data link and/or a cellular telephone transceiver 816 coupled to the processor 811. The computer 800 may also include a floppy disc drive 814 and a compact disc (CD) drive 815 coupled to the processor 811. In a notebook configuration, the computer housing includes the touchpad 817, the keyboard 818, and the display 819, all coupled to the processor 811.
Other configurations of the computing device may include a computer mouse or trackball coupled to the processor (e.g., via a universal serial bus (USB) input), as are well known, which may also be used in conjunction with the various embodiments.

The various embodiments (including, but not limited to, the embodiments discussed above with reference to FIGS. 1-6) may be implemented in a wide variety of computing systems, using processors and/or dedicated hardware, which may include any of a variety of commercially available servers for compressing data in server cache memory. An example server 900 is illustrated in FIG. 9. Such a server 900 typically includes one or more multi-core processor assemblies 901 coupled to volatile memory 902 and a large capacity non-volatile memory, such as a disk drive 904. As illustrated in FIG. 9, multi-core processor assemblies 901 may be added to the server 900 by inserting them into the racks of the assembly. The server 900 may also include a floppy disc drive, compact disc (CD), or digital versatile disc (DVD) drive 906 coupled to the processor 901. The server 900 may also include network access ports 903 coupled to the multi-core processor assemblies 901 for establishing network interface connections with a network 905, such as a local area network coupled to other broadcast system computers and servers, the Internet, the public switched telephone network, and/or a cellular data network (e.g., CDMA, TDMA, GSM, PCS, 3G, 4G, LTE, or any other type of cellular data network).

Computer program code or "program code" for execution on a programmable processor for carrying out operations of the various embodiments may be written in a high level programming language such as C, C++, C#, Smalltalk, Java, JavaScript, Visual Basic, a structured query language (e.g., Transact-SQL), Perl, or various other programming languages.
Program code or programs stored on a computer readable storage medium as used in this application may refer to machine language code (such as object code) whose format is understandable by a processor.

The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the order of operations in the foregoing embodiments may be performed in any order. Words such as "thereafter," "then," "next," and the like are not intended to limit the order of the operations; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an," or "the," is not to be construed as limiting the element to the singular.

The various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the various embodiments may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the claims.

The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.

In one or more embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer readable medium or a non-transitory processor readable medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a non-transitory computer readable or processor readable storage medium. Non-transitory computer readable or processor readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer readable or processor readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer.
As used herein, the terms disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer readable and processor readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor readable medium and/or computer readable medium, which may be incorporated into a computer program product. The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the claims. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.
A method for conveying timing information across a communication link between a first processor and a second processor is described, wherein the communication link is in hibernation mode. The method comprises: scheduling a time event at the first processor to convey the timing information to the second processor; initiating a link wakeup by the first processor at the occurrence of the time event; and detecting the link wakeup at the second processor, and using the detected link wakeup timing to synchronize the first and second processors with respect to the conveyed timing information.
1. A method for conveying timing information across a communication link between a first processor and a second processor, wherein the communication link is in hibernation mode, comprising:
scheduling a time event at the first processor to convey the timing information to the second processor;
initiating a link wakeup by the first processor at the occurrence of the time event; and
detecting the link wakeup at the second processor, and using the detected link wakeup timing to synchronize the first and second processors with respect to the conveyed timing information.

2. The method of claim 1, wherein the communication link represents a Mobile Display Digital Interface, MDDI, link.

3. The method of claim 2, wherein the first and second processors represent MDDI client and MDDI host, respectively.

4. The method of claim 3, wherein the timing information represents a buffer refresh time associated with a display being controlled across the MDDI link.

5. The method of claim 1, further comprising:
(d) scheduling the first event by writing to a register to enable the triggering of an interrupt that causes the first event based on the second event; and
(e) triggering the second event at the second processor based on the read line position of the buffer.

6. The method of claim 5, wherein the first event represents a link wakeup event when the communication link is in hibernation mode.

7. A system for conveying timing information across a communication link between a first processor and a second processor, wherein the communication link is in hibernation mode, comprising:
means for scheduling a time event at the first processor to convey the timing information to the second processor;
means for initiating a link wakeup by the first processor at the occurrence of the time event; and
means for detecting the link wakeup at the second processor, and using the detected link wakeup timing to synchronize the first and second processors with respect to the conveyed timing information.

8. The system of claim 7, wherein the communication link represents a Mobile Display Digital Interface, MDDI, link.

9. The system of claim 8, wherein the first and second processors represent MDDI client and MDDI host, respectively.

10. The system of claim 9, wherein the timing information represents a buffer refresh time associated with a display being controlled across the MDDI link.

11. The system of claim 7, further comprising:
(d) means for scheduling the first event by writing to a register to enable the triggering of an interrupt that causes the first event based on the second event; and
(e) means for triggering the second event at the second processor based on the read line position of the buffer.

12. The system of claim 11, wherein the first event represents a link wakeup event when the communication link is in hibernation mode.

13. A computer program product, comprising: a computer-readable medium comprising: code for causing at least one computer to perform a method according to any of claims 1 to 6 when executed.
BACKGROUND Field of the Invention The present invention relates generally to methods and systems for updating a buffer. More particularly, the invention relates to methods and systems for updating a buffer across a communication link. Background of the Invention In the field of interconnect technologies, demand for ever increasing data rates, especially as related to video presentations, continues to grow.The Mobile Display Digital Interface (MDDI) is a cost-effective, low power consumption, transfer mechanism that enables very-high-speed data transfer over a short-range communication link between a host and a client. MDDI requires a minimum of just four wires plus power for bi-directional data transfer that delivers a maximum bandwidth of up to 3.2 Gbits per second.In one application, MDDI increases reliability and decreases power consumption in clamshell phones by significantly reducing the number of wires that run across a handset's hinge to interconnect the digital baseband controller with an LCD display and/or a camera. This reduction of wires also allows handset manufacturers to lower development costs by simplifying clamshell or sliding handset designs.In controlling an LCD display across an MDDI link, one problem that arises relates to image flickering when the display is refreshed. Typically, what is needed is either a long persistence conversion or a refresh rate that is higher than what the human eye can perceive. Long persistence conversion results in image smearing when images appear to move. Therefore, it is desirable for the display to have a high refresh rate. A typical problem that occurs, however, is image tearing. The problem is that while the display is being refreshed at a high rate, the frame buffer associated with the display is being filled at a slower rate. 
As a result, the display image may reflect both updated and old image information within the same frame of the display. In one solution, multiple buffers are used and image information is cycled through the multiple buffers to avoid the image tearing problem described above. This includes commonly known "double buffering" approaches. The drawback of such a solution, however, lies in the increased cost and chip space requirements of implementation. What is needed, therefore, are methods and systems to enable buffer update solutions that solve the above described problems while satisfying the cost and space requirements of MDDI applications. SUMMARY The present invention relates to methods and systems for updating a buffer. In one aspect, the present invention provides a method for updating a buffer, which includes strategically writing to the buffer to enable concurrent read and write to the buffer. The method eliminates the need for double buffering, thereby resulting in implementation cost and space savings compared to conventional buffering approaches. Among other advantages, the method prevents image tearing when used to update a frame buffer associated with a display, but is not limited to such applications. In another aspect, the present invention provides efficient mechanisms to enable buffer update across a communication link. In one example, the present invention provides a method for relaying timing information across a communication link. The method, however, is not limited to relaying timing information, and may be used in more general contexts as can be understood by persons skilled in the art(s) based on the teachings herein. Further embodiments, features, and advantages of the present invention, as well as the structure and operation of the various embodiments of the present invention, are described in detail below with reference to the accompanying drawings.
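The timing-relay mechanism summarized above can be sketched in miniature. The following Python model is purely illustrative: the class and function names are invented for this sketch, and a thread event stands in for a real MDDI link in hibernation. What it demonstrates is the key idea that the wakeup itself conveys the timing information by when it occurs, rather than by any payload.

```python
import threading
import time

class TimingLink:
    """Toy model of a hibernating link: a wakeup carries timing by *when*
    it occurs, not by any data payload (names are illustrative only)."""

    def __init__(self):
        self.wakeup_detected = threading.Event()
        self.wakeup_time = None

    def initiate_wakeup(self):
        # The first processor drives the link out of hibernation.
        self.wakeup_time = time.monotonic()
        self.wakeup_detected.set()

def first_processor(link, event_delay_s):
    # Schedule a time event, then initiate link wakeup when it fires.
    timer = threading.Timer(event_delay_s, link.initiate_wakeup)
    timer.start()
    return timer

def second_processor(link, timeout_s=1.0):
    # Detect the wakeup and use its timing as the synchronization point.
    if link.wakeup_detected.wait(timeout_s):
        return link.wakeup_time
    return None

link = TimingLink()
first_processor(link, event_delay_s=0.05)
sync_point = second_processor(link)
assert sync_point is not None
```

In a real MDDI system the detected wakeup instant could, per the claims, be treated as a buffer refresh time for the display being controlled across the link.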
BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention.FIG. 1 is a block diagram that illustrates an example environment using a Mobile Display Digital Interface (MDDI) interface.FIG. 1A is a diagram of a digital data device interface coupled to a digital device and a peripheral device.FIG. 2 is a block diagram that illustrates an MDDI link interconnection according to an embodiment of the example of FIG. 1 .FIG. 3 is an example that illustrates the image tearing problem.FIG. 4 is a process flowchart that illustrates a method for updating a buffer according to the present invention.FIG. 5 illustrates examples of the method of FIG. 4 .FIGs. 6A, 6B illustrate buffer read/write strategies.FIG. 7 is a process flowchart that illustrates a method for conveying timing information across a communication link according to the present invention.FIG. 8 illustrates an example signal timing diagram for initiating MDDI link wakeup to convey timing information.The present invention will be described with reference to the accompanying drawings. The drawing in which an element first appears is typically indicated by the leftmost digit(s) in the corresponding reference number. DETAILED DESCRIPTION This specification discloses one or more embodiments that incorporate the features of this invention. The disclosed embodiment(s) merely exemplify the invention. The scope of the invention is not limited to the disclosed embodiment(s). 
The invention is defined by the claims appended hereto.The embodiment(s) described, and references in the specification to "one embodiment", "an embodiment", "an example embodiment", etc., indicate that the embodiment(s) described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.Embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others. Further, firmware, software, routines, instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc. 
Mobile Display Digital Interface (MDDI) The Mobile Display Digital Interface (MDDI) is a cost-effective, low power consumption, transfer mechanism that enables very-high-speed serial data transfer over a short-range communication link between a host and a client.In the following, examples of MDDI will be presented with respect to a camera module contained in an upper clamshell of a mobile phone. However, it would be apparent to persons skilled in the relevant art(s) that any module having functionally equivalent features to the camera module could be readily substituted and used in embodiments of this invention.Further, according to embodiments of the invention, an MDDI host may comprise one of several types of devices that can benefit from using the present invention. For example, the host could be a portable computer in the form of a handheld, laptop, or similar mobile computing device. It could also be a Personal Data Assistant (PDA), a paging device, or one of many wireless telephones or modems. Alternatively, the host could be a portable entertainment or presentation device such as a portable DVD or CD player, or a game playing device. Furthermore, the host can reside as a host device or control element in a variety of other widely used or planned commercial products for which a high speed communication link is desired with a client. For example, a host could be used to transfer data at high rates from a video recording device to a storage based client for improved response, or to a high resolution larger screen for presentations. An appliance such as a refrigerator that incorporates an onboard inventory or computing system and/or Bluetooth connections to other household devices, can have improved display capabilities when operating in an internet or Bluetooth connected mode, or have reduced wiring needs for in-the-door displays (a client) and keypads or scanners (client) while the electronic computer or control systems (host) reside elsewhere in the cabinet. 
In general, those skilled in the art will appreciate the wide variety of modern electronic devices and appliances that may benefit from the use of this interface, as well as the ability to retrofit older devices with higher data rate transport of information utilizing limited numbers of conductors available in either newly added or existing connectors or cables. At the same time, an MDDI client may comprise a variety of devices useful for presenting information to an end user, or presenting information from a user to the host. For example, a micro-display incorporated in goggles or glasses, a projection device built into a hat or helmet, a small screen or even holographic element built into a vehicle, such as in a window or windshield, or various speaker, headphone, or sound systems for presenting high quality sound or music. Other presentation devices include projectors or projection devices used to present information for meetings, or for movies and television images. Another example would be the use of touch pads or sensitive devices, voice recognition input devices, security scanners, and so forth that may be called upon to transfer a significant amount of information from a device or system user with little actual "input" other than touch or sound from the user. In addition, docking stations for computers and car kits or desk-top kits and holders for wireless telephones may act as interface devices to end users or to other devices and equipment, and employ either clients (output or input devices such as mice) or hosts to assist in the transfer of data, especially where high speed networks are involved. However, those skilled in the art will readily recognize that the present invention is not limited to these devices, there being many other devices on the market, and proposed for use, that are intended to provide end users with high quality images and sound, either in terms of storage and transport or in terms of presentation at playback. 
The present invention is useful in increasing the data throughput between various elements or devices to accommodate the high data rates needed for realizing the desired user experience. FIG. 1A is a diagram of a digital data device interface 100 coupled to a digital device 150 and a peripheral device 180. Digital device 150 can include, but is not limited to, a cellular telephone, a personal data assistant, a smart phone or a personal computer. In general, digital device 150 can include any type of digital device that serves as a processing unit for digital instructions and the processing of digital presentation data. Digital device 150 includes a system controller 160 and a link controller 170. Peripheral device 180 can include, but is not limited to, a camera, a bar code reader, an image scanner, an audio device, and a sensor. In general, peripheral 180 can include any type of audio, video or image capture and display device in which digital presentation data is exchanged between a peripheral and a processing unit. Peripheral 180 includes control blocks 190. When peripheral 180 is a camera, for example, control blocks 190 can include, but are not limited to, lens control, flash or white LED control and shutter control. Digital presentation data can include digital data representing audio, image and multimedia data. Digital data interface device 100 transfers digital presentation data at a high rate over a communication link 105. In one example, an MDDI communication link can be used which supports bi-directional data transfer with a maximum bandwidth of 3.2 Gbits per second. Other rates of data transfer that are higher or lower than this example rate can be supported depending on the communications link.
Digital data interface device 100 includes a message interpreter module 110, a content module 120, a control module 130 and a link controller 140. Link controller 140, which is located within digital data interface 100, and link controller 170, which is located within digital device 150, establish communication link 105. Link controller 140 and link controller 170 may be MDDI link controllers. The Video Electronics Standards Association ("VESA") MDDI Standard, which is incorporated herein by reference in its entirety, describes the requirements of a high-speed digital packet interface that lets portable devices transport digital images from small portable devices to larger external displays. MDDI applies a miniature connector system and thin flexible cable ideal for linking portable computing, communications and entertainment devices to emerging products such as wearable micro displays. It also includes information on how to simplify connections between host processors and a display device, in order to reduce the cost and increase the reliability of these connections. Link controllers 140 and 170 establish communication path 105 based on the VESA MDDI Standard. U.S. Patent No. 6,760,772, entitled "Generating and Implementing a Communication Protocol and Interface for High Data Rate Signal Transfer," issued to Zou et al. on July 6, 2004 (the "'772 Patent"), describes a data interface for transferring digital data between a host and a client over a communication path using packet structures linked together to form a communication protocol for presentation data. Embodiments of the invention taught in the '772 Patent are directed to an MDDI interface.
The signal protocol is used by link controllers, such as link controllers 140 and 170, configured to generate, transmit, and receive packets forming the communications protocol, and to form digital data into one or more types of data packets, with at least one residing in the host device and being coupled to the client through a communications path, such as communications path 105.The interface provides a cost-effective, low power, bi-directional, high-speed data transfer mechanism over a short-range "serial" type data link, which lends itself to implementation with miniature connectors and thin flexible cables. An embodiment of link controllers 140 and 170 establishes communication path 105 based on the teachings of the '772 Patent. The '772 Patent is herein incorporated by reference in its entirety.In other embodiments, link controllers 140 and 170 can both be a USB link controller or they both can include a combination of controllers, such as for example, an MDDI link controller and another type of link controller, such as, for example, a USB link controller. Alternatively, link controllers 140 and 170 can include a combination of controllers, such as an MDDI link controller and a single link for exchanging acknowledgement messages between digital data interface device 100 and digital device 150. Link controllers 140 and 170 additionally can support other types of interfaces, such as an Ethernet or RS-232 serial port interface. 
Additional interfaces can be supported as will be known by individuals skilled in the relevant arts based on the teachings herein. Within digital data interface device 100, message interpreter module 110 receives commands from and generates response messages through communication link 105 to system controller 160, interprets the command messages, and routes the information content of the commands to an appropriate module within digital data interface device 100. Content module 120 receives data from peripheral device 180, stores the data and transfers the data to system controller 160 through communication link 105. Control module 130 receives information from message interpreter module 110, and routes information to control blocks 190 of peripheral device 180. Control module 130 can also receive information from control blocks 190 and route the information to the message interpreter module 110. FIG. 1 is a block diagram that illustrates an example environment using an MDDI interface. In the example of FIG. 1, MDDI is used to interconnect modules across the hinge of a clamshell phone 100. Referring to FIG. 1, a lower clamshell section 102 of clamshell phone 100 includes a Mobile Station Modem (MSM) baseband chip 104. MSM 104 is a digital baseband controller. An upper clamshell section 114 of clamshell phone 100 includes a Liquid Crystal Display (LCD) module 116 and a camera module 118. Still referring to FIG. 1, an MDDI link 110 connects camera module 118 to MSM 104. Typically, an MDDI link controller is integrated into each of camera module 118 and MSM 104. In the example of FIG. 1, an MDDI Host 122 is integrated into camera module 118, while an MDDI Client 106 resides on the MSM side of the MDDI link 110. Typically, the MDDI host is the master controller of the MDDI link. In the example of FIG. 1, pixel data from camera module 118 are received and formatted into MDDI packets by MDDI Host 122 before being transmitted onto MDDI link 110.
MDDI Client 106 receives the MDDI packets and re-converts them into pixel data of the same format as generated by camera module 118. The pixel data are then sent to an appropriate block in MSM 104 for processing. Still referring to FIG. 1, an MDDI link 112 connects LCD module 116 to MSM 104. In the example of FIG. 1, MDDI link 112 interconnects an MDDI Host 108, integrated into MSM 104, and an MDDI Client 120 integrated into LCD module 116. In the example of FIG. 1, image data generated by a graphics controller of MSM 104 are received and formatted into MDDI packets by MDDI Host 108 before being transmitted onto MDDI link 112. MDDI Client 120 receives the MDDI packets and re-converts them into image data for use by LCD module 116. Typically, image data is buffered using a frame buffer before being used to refresh the LCD display. FIG. 2 is a block diagram that illustrates MDDI link interconnection 112 according to the example of FIG. 1. As described above, one of the functions of MDDI link 112 is to transfer image data from MSM 104 to LCD Module 116. A frame interface (not shown in FIG. 2) connects MDDI link controller 120 to modules of LCD Module 116. Similarly, another frame interface (not shown in FIG. 2) connects MDDI link controller 108 to appropriate modules of MSM 104. Typically, MDDI link controller 108 represents the host controller of the MDDI link, while MDDI link controller 120 represents the client controller of the MDDI link. Other implementations, however, may reverse the roles of the two controllers. MDDI link 112 includes a minimum of four wires, comprising two wires for data signals 202 and 204 and two wires for strobe signals 206 and 208, in addition to two wires for power signals 210 and 211. Data signals 202 and 204 are bi-directional. Accordingly, data can be transmitted in either direction (from host to client and vice versa) using data signals 202 and 204.
Strobe signals 206 and 208 are unidirectional, and may only be driven by the host controller of the link. Accordingly, in the example of FIG. 2, only host controller 108 may drive strobe signals 206 and 208. Method and Systems for Updating a Buffer As described above, MDDI can be used to connect a baseband processor (MSM 104 in FIG. 2, for example) and a graphics controller (LCD module 116 in FIG. 2, for example). The baseband processor channels image information, typically received from a camera sensor, to the graphics controller, which uses the image information to create a display image. Typically, the graphics controller employs one or more frame buffers to store the image information received from the baseband processor before using it to generate the display image. As described above, image tearing is one problem that occurs. This happens when the image information is being read out of the frame buffer at a rate slower or faster than the rate at which it is being written to the frame buffer. Methods and systems for updating a buffer, which, among other advantages, solve the image tearing problem, will be described herein. It should be noted, however, that methods and systems according to the present invention are not limited to the specific exemplary embodiments in which they will be described, or to being used in an MDDI environment. Further, methods and systems of the present invention can be employed in various other applications that utilize buffering, and that may benefit from the advantages of the present invention. Image Tearing FIG. 3 illustrates two examples of image tearing that can occur while reading from and/or writing to a buffer. The diagram of FIG. 3 shows plots of read and write pointers as functions of buffer position and time. The read pointer represents the position in the buffer that is being read. The write pointer indicates the position in the buffer that is being written to. In the example of FIG.
3, the buffer position is defined in terms of pixel position in the buffer. In the first example in FIG. 3, the buffer is being read at a slower rate than it is written to. This is illustrated by the relative slopes of read and write pointer lines 302 and 304. Note that read and write pointer lines 302 and 304 intersect at time t0. Before time t0, pixels in the buffer are being read prior to being updated. After time t0, pixels are being updated prior to being read. Accordingly, within the same frame (from time 0 to time t1), pixels in positions 0 to p0 (which corresponds to the pixel position read at time t0) are read with older image information relative to pixels from position p0 to the last pixel in the buffer, which are read with updated image information. The result is image tearing, with a lower portion of the image reflecting newer image information relative to an upper portion of the image. In the second example in FIG. 3, the buffer is being read at a faster rate than it is written to. This is illustrated by the relative slopes of read and write pointer lines 302 and 306. Read and write pointer lines 302 and 306 intersect at time t2. Before time t2, pixels in the buffer are being updated prior to being read. After time t2, pixels are being read prior to being updated. Accordingly, within the same frame (from time t1 to time t3), pixels in positions 0 to p2 (which corresponds to the pixel position read at time t2) are read with newer image information relative to pixels from position p2 to the last pixel in the buffer, which are read with old image information. The result is image tearing, with an upper portion of the image reflecting newer image information relative to a lower portion of the image. Method for Updating a Buffer A method to strategically update a buffer will now be provided. The method prevents image tearing when used to update a frame buffer associated with a display.
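The tearing geometry of FIG. 3 can be captured in a short calculation. The sketch below models the read and write pointers as straight lines (pixels per unit time) and solves for their intersection within one frame; the function name and parameterization are illustrative, not taken from the patent.

```python
def tear_position(buffer_len, read_rate, write_rate, write_offset=0.0):
    """Return the pixel position where the read pointer crosses the write
    pointer within one frame, or None if no crossing occurs.

    Pointers are modeled as linear functions of time, matching the
    straight lines of FIG. 3:
        read(t)  = read_rate * t
        write(t) = write_rate * t + write_offset
    """
    if read_rate == write_rate:
        return None  # parallel lines: no crossing within the frame
    t_cross = write_offset / (read_rate - write_rate)
    frame_time = buffer_len / read_rate
    if 0 <= t_cross <= frame_time:
        return read_rate * t_cross  # pixel position p at the crossing
    return None

# Reading slower than writing: the write pointer overtakes the read
# pointer mid-frame, so the lower portion of the image is newer.
p = tear_position(buffer_len=352, read_rate=300.0, write_rate=400.0,
                  write_offset=-50.0)
```

A crossing inside the frame is exactly the tear point (p0 or p2 in FIG. 3): pixels before it and after it are read from different generations of image data.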
The method may also be used in other buffering applications based on its apparent advantages as will be described herein. FIG. 4 is a process flowchart 400 that illustrates a method for updating a buffer according to the present invention. Process flowchart 400 begins in step 410, which includes determining a read line position in the buffer. The read line position indicates a line currently being read from the buffer. Typically, step 410 is achieved by determining the value of a read pointer that points to the read line position in the buffer. Step 420 includes partitioning the buffer into at least a first section that is safe to update and a second section that must not be updated based on the read line position. It is noted that partitioning the buffer refers not to a physical but to a logical partitioning of the buffer. Further, a logical partition of the buffer is not fixed and may change as will be understood from the teachings herein. The first section of the buffer includes lines of the buffer that have been read within the current buffer reading cycle based on the read line position. The first section also includes lines of the buffer that can be updated based on the read line position. In other words, the first section includes lines whose content has just been read or lines that can be updated prior to the read line position reaching them based on the buffer read speed and the buffer write speed. Lines that cannot be updated prior to the read line position reaching them based on the buffer read speed and the buffer write speed belong to the second section of the buffer. In other words, lines of the second section of the buffer are those for which there is not sufficient time to update before they have to be read. Accordingly, lines of the second section of the buffer must have been updated during the last reading cycle of the buffer.
Step 430 includes updating the buffer by writing data at a line of the first section which follows the second section based on the read line position. Typically, the buffer is updated at a position which is both safe to update as described above and which has already been read during the last reading cycle of the buffer. In one embodiment, step 430 includes writing data at a line of the first section which immediately follows the last line of the second section. Other variations of step 430 may also be possible, as will be apparent to a person skilled in the art based on the teachings disclosed herein. Example Illustration FIG. 5 provides examples that illustrate the method described above in FIG. 4 . FIG. 5 shows three examples A, B, and C of reading a buffer 500. For purposes of illustration only, buffer 500 is shown to include 352 lines of data. A read pointer 510 indicates the read line position in the buffer. Sections labeled with the roman numeral "I" represent lines that belong to the first section of the buffer as described above. Sections labeled with the roman numeral "II" represent lines that belong to the second section of the buffer as described above. In example A, shaded area "I" represents lines of the first section of the buffer which have already been read during the current reading cycle of the buffer. In the example, this area includes lines 1 through m-1. Read pointer 510 indicates that line m is currently being read. Accordingly, area "II" in example A represents lines of buffer 500 that cannot be updated based on the current position of read pointer 510. In other words, there is not sufficient time to update lines in area "II" based on the current position of read pointer 510 and the read and write speeds to the buffer. Note that the first section of the buffer also includes an unshaded area "I" below area "II". 
This area "I" belongs to the first section as it is safe to update, but should not be updated given that it has not been read during the current reading cycle of the buffer. Updating unshaded area "I" prior to reading it would result in image tearing, as described in FIG. 3 , where the upper portion of the image reflects older image information relative to the lower portion of the image. In example B, the shaded area represents lines of the buffer which have already been read during the current reading cycle of the buffer. In the example, this area includes lines 1 through 351. Read pointer 510 indicates that line 352 is currently being read. Accordingly, area "II" in example B represents lines that must have been updated given the current read line position. Lines in area "II" cannot be updated based on the current read line position and the read and write speeds to the buffer, and belong to the second section of the buffer based on the description above. Lines in area "I" belong to the first section of the buffer, and are safe to update. To update the buffer, writing can begin in area "I". Data can be written at a line in area "I" that immediately follows area "II". This corresponds to line m in example B. Example C illustrates a scenario subsequent to the one shown in B. In example C, read pointer 510 has wrapped around and is reading line m of the buffer. Accordingly, lines preceding the read pointer in the buffer belong to the first section of the buffer, and may be updated. Lines in area "II" must have been updated during the last write cycle to the buffer given the current read line position. Lines in area "II" cannot be updated, and belong to the second section of the buffer as described above. In other words, lines in area "II" must contain updated information given the read line position, as there is not sufficient time to update them before they have to be read. 
Shaded area "I" represents lines of the first section of the buffer that are safe to update, but should not be updated given that they have not been read during the last reading cycle of the buffer. Buffer Read/Write Strategies Buffer read/write strategies to avoid image tearing or equivalent problems related to buffer update are described herein. Buffer update strategies according to the present invention further eliminate the need for the commonly adopted "double buffering" technique. Instead, a single buffer is used, which results in both implementation cost and space savings. The present invention is not limited to the exemplary strategies described herein, and variations which are apparent to persons skilled in the art(s) are also considered to be within the scope of the present invention. FIGs. 6A and 6B illustrate exemplary buffer read/write strategies according to the present invention. The diagrams of FIGs. 6A and 6B show plots of read pointer 612 and write pointers 614 and 616 as functions of buffer position and time. In the examples of FIGs. 6A and 6B , the buffer position is defined in terms of pixel position in the buffer, which may be equivalently replaced with any other measure of buffer position, such as line number, for example. Referring to FIG. 6A , an exemplary buffer read/write strategy is depicted over two reading cycles of the buffer. In the first reading cycle, from time t0 to time t1, the first half of the buffer is updated, while the entire buffer content is read. In the second reading cycle of the buffer, from time t1 to time t2, the second half of the buffer is updated, while the entire buffer content is read. Note that the first half of the buffer, during the second reading cycle, contains updated information that was written to the buffer during the first reading cycle. 
The second half of the buffer, during the second cycle, is updated prior to being read, as shown by write pointer 614 preceding read pointer 612 in time over the second reading cycle. Accordingly, over both reading cycles, data read from the buffer belongs to the same update cycle of the buffer, and no image tearing occurs. FIG. 6B illustrates another exemplary buffer read/write strategy over two reading cycles of the buffer. During the first reading cycle, the first half of the buffer is updated from time t0 to time t1. During the second reading cycle, the second half of the buffer is updated from time t1 to time t2. Note that writing to the buffer starts at a time t0 during the first cycle such that, during the first cycle, the entire buffer is read with an initial information content and not an updated content due to the writing process. On the other hand, writing to the buffer ends at a time t2 during the second cycle such that, during the second cycle, the entire buffer contains updated information content when it is read. This is shown by write pointer 616 preceding read pointer 612 in time over the second reading cycle. Accordingly, image tearing will not occur over both reading cycles in the example of FIG. 6B . Buffer Update Through a Communication Link Methods and systems for updating a buffer according to the present invention may be used in a variety of applications. In one application, as described above, the buffer update approach may be used to update a frame buffer associated with a display. In another application, the buffer is updated remotely, wherein it is written to by a first processor and is read by a second processor, and wherein the first and second processors communicate through a communication link. For example, the first and second processors represent an MSM baseband processor and an LCD module, respectively, that communicate through an MDDI link, as illustrated in FIG. 2 . 
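The alternating half-buffer strategy of FIG. 6A above can be simulated in a short sketch. The trailing/leading writer model and all names below are illustrative assumptions, not the claimed mechanism:

```python
# Sketch of the FIG. 6A strategy: over two reading cycles the writer
# updates one half of the buffer per cycle, trailing the reader in the
# first cycle and leading it in the second, so each full read returns
# data from a single update cycle (no tearing). Names are assumptions.

def simulate_two_cycle_update(old_frame, new_frame):
    half = len(old_frame) // 2
    buf = list(old_frame)
    # Reading cycle 1: the writer trails the reader and updates only the
    # first half, after each line has been read.
    cycle1_read = []
    for i in range(len(buf)):
        cycle1_read.append(buf[i])
        if i < half:
            buf[i] = new_frame[i]
    # Reading cycle 2: the writer leads the reader and updates the
    # second half before each line is read.
    cycle2_read = []
    for i in range(len(buf)):
        if i >= half:
            buf[i] = new_frame[i]
        cycle2_read.append(buf[i])
    return cycle1_read, cycle2_read
```

Each cycle reads a consistent frame: the first cycle returns the entire old frame, the second the entire new frame, so no read mixes the two update cycles.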
In certain applications, synchronization between the first and second processors will be required. Methods and systems related to synchronization to enable buffer update across a communication link will now be provided. As will be understood by a person skilled in the art(s) based on the teachings herein, certain aspects of the methods and systems that will be presented may be applicable to synchronization problems in general, and are not limited to synchronization for enabling remote buffer update. In one aspect, synchronization between the first and second processors includes scheduling a first event at the first processor that is triggered by a second event at the second processor. This is typically done by writing to a register to enable the triggering of an interrupt that causes the first event at the first processor whenever the second event occurs at the second processor. For example, in a remote buffer update application, where the buffer is updated by the first processor and read by the second processor, the first event may represent the need to start writing to the buffer, while the second event may represent that the read pointer has finished a complete reading cycle of the buffer. The second event may then be triggered at the second processor based on the read line position in the buffer. In another aspect, methods to convey synchronization information across the communication link are provided. The methods may be employed to relay synchronization information related to buffer update, as described above, for example. FIG. 7 is a process flowchart 700 that illustrates a method for conveying timing information across a communication link between a first processor and a second processor, when the communication link is in hibernation mode. Process flowchart 700 begins in step 710, which includes scheduling a time event at the first processor to convey timing information to the second processor. 
The time event may be a periodic event as required by the specific application. For example, in the case of a buffer update application, the time event may be related to the read line position in the buffer. Step 720 includes initiating a link wakeup by the first processor at the occurrence of the time event. For example, in the case of a buffer update across an MDDI link, where an MDDI client is located at the LCD module side of the interconnection, the MDDI client may initiate a link wakeup by driving the data signal to a logic one to notify the MDDI host that the buffer should be updated. Subsequently, step 730 includes detecting the link wakeup at the second processor (for example, an MDDI host on the MSM side of the MDDI interconnection), and using the detected link wakeup timing to synchronize the first and second processors with respect to the timing information that is being conveyed. For example, in the case of a buffer update across an MDDI link, when the MDDI host detects the link wakeup by the MDDI client, it can synchronize itself with the MDDI client with respect to the buffer update start time. It can be appreciated by a person skilled in the art based on the teachings herein that the method described in FIG. 7 may be extended to convey any kind of timing information across a communication link, and is not limited to buffer update synchronization purposes. The advantages of such a method include keeping the link in hibernation to save power while conveying information simply by waking the link up. FIG. 8 illustrates an example timing diagram 800 for initiating link wakeup to convey timing information across an MDDI interconnection. For example, the MDDI interconnection may be such as the one described above with reference to FIG. 2 , with an MDDI host located at the MSM and an MDDI client located at the LCD module. 
The MDDI client, accordingly, would initiate a link wakeup to convey buffer update information to the MDDI host, which, in turn, would start refreshing the buffer located in the LCD module. In the example of FIG. 8 , vsync_wake signal 802 represents a value written to a register at the MDDI host to enable a wakeup at the host based on vsync signal 806. Wakeup at the host occurs whenever the value of vsync_wake 802 is high. Vsync signal 806 represents a value of a signal "vertical sync", which occurs at the client and is related to buffer update time. For example, vsync 806 goes high whenever the read pointer has wrapped and is reading from the beginning of the buffer. Link_active signal 804 represents whether the data signal of the MDDI interconnection is active or in hibernation. Mddi_client_wakeup signal 808 represents a signal at the client, which responds to vsync 806 to wake up the client. In the example of FIG. 8 , vsync_wake 802 is set at the host at time A. At time B, the MDDI link goes into hibernation mode. At time C, vsync 806 goes high, indicating that the buffer needs to be refreshed by the host. As a result, mddi_client_wakeup 808 also goes high to wake the client up to initiate the link wakeup. The client initiates the link wakeup by driving the data signal of the interconnection, and the link goes active at time D. Subsequently, vsync_wake 802 and mddi_client_wakeup 808 return to zero, and the host detects the link wakeup and begins to refresh the buffer at the client. Conclusion While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. 
Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. Further Aspects In the following, further aspects are described to facilitate the understanding of the invention. In a first further aspect, a method for updating a buffer having a plurality of lines is described, comprising: (a) determining a read line position in the buffer, said read line position indicating a line currently being read from the buffer; (b) partitioning the buffer into at least a first section that is safe to update and a second section that must not be updated based on the read line position; and (c) writing data at a line of the first section to update the buffer, wherein the line follows the second section based on the read line position. Further, the read line position may be determined by determining a read pointer value. Also, the first section of the buffer may comprise at least one of: (i) lines of the buffer that may have been read in a last reading cycle of the buffer; and (ii) lines of the buffer that may be updated based on the read line position. Wherein (ii) may further comprise lines of the buffer that may be updated prior to the read line position reaching said lines based on a buffer read speed and a buffer write speed. Further, the second section of the buffer may comprise lines of the buffer that cannot be updated prior to the read line position reaching said lines based on a buffer read speed and a buffer write speed. Wherein the second section of the buffer may further comprise lines that must have been updated during a last reading cycle of the buffer. Also, the buffer may be written to by a first processor and may be read by a second processor. Also, the first and second processors may communicate remotely through a communication link. 
Wherein the first processor may update the buffer based on a first event at the first processor that is triggered by a second event at the second processor. The buffer updating may further comprise: (d) scheduling the first event by writing to a register to enable the triggering of an interrupt that causes the first event based on the second event; and (e) triggering the second event at the second processor based on the read line position of the buffer. Wherein the first event may represent a link wakeup event when the communication link is in hibernation mode. Also, the first and second processors may represent host and client controllers of a Mobile Display Digital Interface (MDDI) link. Further, the first controller may represent a Mobile Station Modem (MSM) baseband processor, and the second controller may represent an LCD controller. Wherein the buffer may represent a frame buffer used for refreshing an LCD display. Further, image tearing in the display may be substantially avoided. In another further aspect, a method for conveying timing information across a communication link between a first processor and a second processor is described, wherein the communication link might be in hibernation mode, comprising: scheduling a time event at the first processor to convey the timing information to the second processor; initiating a link wakeup by the first processor at the occurrence of the time event; and detecting the link wakeup at the second processor, and using the detected link wakeup timing to synchronize the first and second processors with respect to the conveyed timing information. Wherein the communication link may represent a Mobile Display Digital Interface (MDDI) link. Also, the first and second processors may represent MDDI client and MDDI host, respectively. Further, the timing information may represent a buffer refresh time associated with a display being controlled across the MDDI link. 
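The timing-conveyance aspect above (schedule a time event, wake the link, synchronize on the detected wakeup) can be sketched behaviorally. All class and attribute names and the integer-tick clock below are hypothetical illustrations, not the MDDI protocol itself:

```python
# Hedged sketch of conveying timing information over a hibernating link
# (FIG. 7): the client schedules a time event, wakes the link when it
# fires, and the host uses the detected wakeup time as the shared
# reference. All names and the integer-tick clock are assumptions.

class Link:
    def __init__(self):
        self.active = False        # link starts in hibernation mode
        self.wakeup_time = None

class Client:                      # first processor (e.g. display side)
    def __init__(self, link, event_time):
        self.link = link
        self.event_time = event_time   # step 710: scheduled time event

    def tick(self, now):
        if now >= self.event_time and not self.link.active:
            self.link.active = True    # step 720: initiate link wakeup
            self.link.wakeup_time = now

class Host:                        # second processor (e.g. the MSM side)
    def __init__(self, link):
        self.link = link
        self.sync_time = None

    def tick(self, now):
        if self.link.active and self.sync_time is None:
            # Step 730: detect the wakeup and synchronize to its timing.
            self.sync_time = self.link.wakeup_time

link = Link()
client, host = Client(link, event_time=3), Host(link)
for t in range(6):
    client.tick(t)
    host.tick(t)
```

After the simulated run, the host's synchronization time equals the client's event time, which is the whole content of the conveyed timing information: no data is transferred beyond the wakeup itself.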
In one embodiment, an apparatus comprises a plurality of bitwise multipliers, a bitwise multiplier of the plurality of bitwise multipliers to multiply a binary synapse weight value of a neural network by a binary activation state value of a neuron of the neural network. The apparatus further comprises a plurality of majority voters, a majority voter of the plurality of majority voters to receive outputs of a first group of bitwise multipliers and to generate a majority result to indicate whether a majority of outputs of the first group of bitwise multipliers are set to a first binary value or a second binary value. The apparatus also comprises a first plurality of reconfigurable connections coupled to outputs of the plurality of bitwise multipliers and inputs of the plurality of majority voters. |
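A functional sketch of the multiply-and-vote structure described in this abstract may help. It assumes the common binary-network convention that the bits 0/1 encode the values -1/+1, so that bitwise multiplication reduces to XNOR; this mapping and all function names are illustrative assumptions, not the claimed circuit:

```python
# Sketch of a binary-neural-network neuron built from the blocks in the
# abstract: bitwise multipliers feeding a majority voter. Assumes bits
# 0/1 encode -1/+1, making the product an XNOR of weight and activation.

def bitwise_multiply(weight_bit, activation_bit):
    # XNOR: output 1 when the bits agree ((+1)*(+1) or (-1)*(-1) = +1).
    return 1 - (weight_bit ^ activation_bit)

def majority_voter(bits):
    # Output the first binary value (1) if a majority of inputs are 1,
    # otherwise the second binary value (0).
    return 1 if 2 * sum(bits) > len(bits) else 0

def binary_neuron(weights, activations):
    products = [bitwise_multiply(w, a) for w, a in zip(weights, activations)]
    return majority_voter(products)
```

The reconfigurable connections of the abstract would correspond to choosing which multiplier outputs feed which voter; here that routing is simply the `zip` pairing.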
A method comprising:configuring a first plurality of reconfigurable connections to couple outputs of a plurality of bitwise multipliers to inputs of a majority voter;performing, by each bitwise multiplier of the plurality of bitwise multipliers, a bitwise multiplication of a binary synapse weight value of a neural network and a corresponding binary activation state value of a neuron of the neural network and providing the result to an output of the bitwise multiplier; anddetermining, by the majority voter, a majority result that indicates whether a majority of outputs of the bitwise multipliers are set to a first binary value or a second binary value.The method of Claim 1, further comprising setting a value of a function signal of a computational logic block to cause the computational logic block to implement at least one bitwise multiplier of the plurality of bitwise multipliers, wherein the computational logic block comprises a pair of bitwise multipliers and a full adder, wherein outputs of the computational logic block are to be coupled to the pair of bitwise multipliers when the function signal is a first value and to be coupled to the full adder when the function signal is a second value.The method of any of Claims 1-2 wherein the full adder and at least one bitwise multiplier of the computational logic block share an XOR or an XNOR gate.The method of Claim 3, wherein the majority voter comprises an adder tree including the computational logic block.The method of any of Claims 1-4, wherein the majority voter comprises an analog majority voter comprising a plurality of capacitors coupled to outputs of the plurality of bitwise multipliers.The method of any of Claims 1-5, wherein the plurality of reconfigurable connections comprise a plurality of switch blocks, wherein an input of the switch block is selectively coupled to an output of the switch block via a configurable control signal.The method of any of Claims 1-6, wherein the reconfigurable connections are 
to be set based on a configuration file to be loaded into a memory of the apparatus.The method of Claim 4, wherein the adder tree is to receive an additive bias value and output a bit as the majority result.The method of Claim 5, wherein the analog majority voter comprises a plurality of analog majority voters arranged in a plurality of stages.The method of Claim 5, wherein the analog majority voter comprises a first group of capacitors coupled to a single output of the first group of bitwise multipliers to implement a synapse weight magnitude based on the number of capacitors in the first group of capacitors.An apparatus comprising means for performing the method of any of Claims 1-10.Machine-readable storage including machine-readable instructions, when executed, to implement the method of any of Claims 1-10.An apparatus comprising logic, at least a portion of which is in hardware logic, the logic to perform the method of any of Claims 1-10. |
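The computational logic block recited in the claims above (a function signal selecting between a pair of bitwise multipliers and a full adder, with a shared XOR/XNOR term) can be modeled behaviorally. The function names, the bit assignments, and the particular gate sharing shown are illustrative assumptions:

```python
# Behavioral sketch of the claimed computational logic block: a function
# signal selects whether the block's outputs come from a pair of bitwise
# (XNOR) multipliers or from a full adder. The shared XOR term models
# the claimed gate sharing. Names and bit roles are assumptions.

def xor(a, b):
    return a ^ b

def computational_logic_block(func, a0, b0, a1, b1):
    if func == 0:
        # Pair of bitwise multipliers: XNOR of each (weight, activation).
        return (1 - xor(a0, b0), 1 - xor(a1, b1))
    else:
        # Full adder on bits a0, b0 with carry-in a1 (b1 unused here),
        # reusing the same XOR term as the multiplier path.
        s = xor(xor(a0, b0), a1)
        carry = (a0 & b0) | (a1 & xor(a0, b0))
        return (s, carry)
```

In multiplier mode the two outputs are independent products; in adder mode they are the sum and carry bits, as would be needed inside an adder-tree majority voter.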
FIELD The present disclosure relates in general to the field of computer development, and more specifically, to an architecture for a neural network. BACKGROUND A neural network may include a group of neurons loosely modeled after the structure of a biological brain which includes large clusters of neurons connected by axons. In a neural network, neurons are connected to other neurons via links which may be excitatory or inhibitory in their effect on the activation state of connected neurons. A neuron may perform a function utilizing the values of its inputs to update a membrane potential of the neuron. A neuron may propagate a signal to connected neurons based on its activation state. A neural network may be trained or otherwise adapted to perform various data processing tasks, such as computer vision tasks, speech recognition tasks, or other suitable computing tasks. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline in accordance with certain embodiments. Figure 1B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor in accordance with certain embodiments. Figures 2A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (potentially including other cores of the same type and/or different types) in a chip in accordance with certain embodiments. Figure 3 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics in accordance with certain embodiments. Figures 4-7 are block diagrams of exemplary computer architectures in accordance with certain embodiments. Figure 8 is a block diagram contrasting the use of a software 
instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set in accordance with certain embodiments. Figure 9 illustrates a portion of an example neural network in accordance with certain embodiments. Figure 10 illustrates an example field-programmable gate array (FPGA) in accordance with certain embodiments. Figure 11 illustrates an example computational logic block in accordance with certain embodiments. Figure 12 illustrates example circuitry of an example computational logic block in accordance with certain embodiments. Figure 13 illustrates example circuitry of an example switch block in accordance with certain embodiments. Figure 14 illustrates an example arrangement of computational logic blocks and switch blocks in accordance with certain embodiments. Figure 15 illustrates an example arrangement of computational logic blocks to perform an activation function operation of a binary neural network in accordance with certain embodiments. Figure 16 illustrates an example analog majority voter coupled to a plurality of bitwise multipliers in accordance with certain embodiments. Figure 17 illustrates an example analog majority voter coupled to a plurality of bitwise multipliers in accordance with certain embodiments. Figure 18 illustrates an example analog majority voter coupled to a plurality of bitwise multipliers in accordance with certain embodiments. Figure 19 illustrates an example multi-stage analog majority voter in accordance with certain embodiments. Figure 20 illustrates an example connection scheme for an analog majority voter in accordance with certain embodiments. Figure 21 illustrates an example flow for configuring a device to implement a neural network in accordance with certain embodiments. Like reference numbers and designations in the various drawings indicate like elements. 
DETAILED DESCRIPTION Various computer systems and components (e.g., processors, coprocessors, cores, and other components) in which various embodiments of the disclosure may be implemented and/or by which various functions described herein may be performed are described in Figures 1-8 . Specific examples further describing various embodiments associated with a gate array architecture for a binary neural network are described in Figures 9-21 . Although the drawings depict particular computer systems, the concepts of various embodiments are applicable to any suitable integrated circuits and other logic devices. Examples of devices in which teachings of the present disclosure may be used include desktop computer systems, server computer systems, storage systems, handheld devices, tablets, other thin notebooks, systems on a chip (SOC) devices, and embedded applications. Some examples of handheld devices include cellular phones, digital cameras, media players, personal digital assistants (PDAs), and handheld PCs. Embedded applications may include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. Various embodiments of the present disclosure may be used in any suitable computing environment, such as a personal computing device, a server, a mainframe, a cloud computing service provider infrastructure, a datacenter, a communications service provider infrastructure (e.g., one or more portions of an Evolved Packet Core), or other environment comprising a group of computing devices. Processor cores may be implemented in different ways, for different purposes, and in different processors. 
For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures. Figure 1A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the disclosure. Figure 1B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the disclosure. 
The solid lined boxes in Figures 1A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described. In Figure 1A , a processor pipeline 100 includes a fetch stage 102, a length decode stage 104, a decode stage 106, an allocation stage 108, a renaming stage 110, a scheduling (also known as a dispatch or issue) stage 112, a register read/memory read stage 114, an execute stage 116, a write back/memory write stage 118, an exception handling stage 122, and a commit stage 124. Figure 1B shows processor core 190 including a front end unit 130 coupled to an execution engine unit 150, and both are coupled to a memory unit 170. The core 190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 190 may be a special-purpose core, such as, for example, a network or communication core, compression and/or decompression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like. The front end unit 130 includes a branch prediction unit 132 coupled to an instruction cache unit 134, which is coupled to an instruction translation lookaside buffer (TLB) 136, which is coupled to an instruction fetch unit 138, which is coupled to a decode unit 140. The decode unit 140 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 140 may be implemented using various different mechanisms. 
Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 140 or otherwise within the front end unit 130). The decode unit 140 is coupled to a rename/allocator unit 152 in the execution engine unit 150. The execution engine unit 150 includes the rename/allocator unit 152 coupled to a retirement unit 154 and a set of one or more scheduler unit(s) 156. The scheduler unit(s) 156 represents any number of different schedulers, including reservations stations, central instruction window, etc. The scheduler unit(s) 156 is coupled to the physical register file(s) unit(s) 158. Each of the physical register file(s) units 158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 158 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 158 is overlapped by the retirement unit 154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). 
The retirement unit 154 and the physical register file(s) unit(s) 158 are coupled to the execution cluster(s) 160. The execution cluster(s) 160 includes a set of one or more execution units 162 and a set of one or more memory access units 164. The execution units 162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 156, physical register file(s) unit(s) 158, and execution cluster(s) 160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 164 is coupled to the memory unit 170, which includes a data TLB unit 172 coupled to a data cache unit 174 coupled to a level 2 (L2) cache unit 176. In one exemplary embodiment, the memory access units 164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 172 in the memory unit 170.
The instruction cache unit 134 is further coupled to a level 2 (L2) cache unit 176 in the memory unit 170. The L2 cache unit 176 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 100 as follows: 1) the instruction fetch unit 138 performs the fetch and length decoding stages 102 and 104; 2) the decode unit 140 performs the decode stage 106; 3) the rename/allocator unit 152 performs the allocation stage 108 and renaming stage 110; 4) the scheduler unit(s) 156 performs the schedule stage 112; 5) the physical register file(s) unit(s) 158 and the memory unit 170 perform the register read/memory read stage 114, and the execution cluster 160 performs the execute stage 116; 6) the memory unit 170 and the physical register file(s) unit(s) 158 perform the write back/memory write stage 118; 7) various units may be involved in the exception handling stage 122; and 8) the retirement unit 154 and the physical register file(s) unit(s) 158 perform the commit stage 124.

The core 190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
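The stage-to-unit mapping enumerated above can be summarized as a simple lookup. The following Python sketch is purely illustrative (the dictionary and its string labels are not part of the disclosed design); it pairs each stage of pipeline 100 with the unit or units of core 190 that perform it:

```python
# Illustrative mapping of pipeline stages (Figure 1A) to the core units
# (Figure 1B) that perform them, per the numbered list above.
PIPELINE_STAGE_UNITS = {
    "fetch":                     ["instruction fetch unit 138"],
    "length decode":             ["instruction fetch unit 138"],
    "decode":                    ["decode unit 140"],
    "allocation":                ["rename/allocator unit 152"],
    "renaming":                  ["rename/allocator unit 152"],
    "schedule":                  ["scheduler unit(s) 156"],
    "register read/memory read": ["physical register file(s) unit(s) 158",
                                  "memory unit 170"],
    "execute":                   ["execution cluster(s) 160"],
    "write back/memory write":   ["memory unit 170",
                                  "physical register file(s) unit(s) 158"],
    "exception handling":        ["various units"],
    "commit":                    ["retirement unit 154",
                                  "physical register file(s) unit(s) 158"],
}

for stage, units in PIPELINE_STAGE_UNITS.items():
    print(f"{stage}: {', '.join(units)}")
```

Note that several stages share a unit (fetch and length decode both map to the instruction fetch unit 138), while other stages span multiple units, reflecting the coupling shown in Figure 1B.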
In one embodiment, the core 190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 134/174 and a shared L2 cache unit 176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Figures 2A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (potentially including other cores of the same type and/or different types) in a chip.
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

Figure 2A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 202 and with its local subset of the Level 2 (L2) cache 204, according to various embodiments. In one embodiment, an instruction decoder 200 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 206 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 208 and a vector unit 210 use separate register sets (respectively, scalar registers 212 and vector registers 214) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 206, alternative embodiments may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 204 is part of a global L2 cache that is divided into separate local subsets (in some embodiments one per processor core). Each processor core has a direct access path to its own local subset of the L2 cache 204. Data read by a processor core is stored in its L2 cache subset 204 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 204 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip.
In a particular embodiment, each ring data-path is 1012-bits wide per direction.

Figure 2B is an expanded view of part of the processor core in Figure 2A according to embodiments. Figure 2B includes an L1 data cache 206A (part of the L1 cache 206), as well as more detail regarding the vector unit 210 and the vector registers 214. Specifically, the vector unit 210 is a 16-wide vector processing unit (VPU) (see the 16-wide arithmetic logic unit (ALU) 228), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 220, numeric conversion with numeric convert units 222A-B, and replication with replication unit 224 on the memory input. Write mask registers 226 allow predicating resulting vector writes.

Figure 3 is a block diagram of a processor 300 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to various embodiments.
The solid lined boxes in Figure 3 illustrate a processor 300 with a single core 302A, a system agent 310, and a set of one or more bus controller units 316; while the optional addition of the dashed lined boxes illustrates an alternative processor 300 with multiple cores 302A-N, a set of one or more integrated memory controller unit(s) 314 in the system agent unit 310, and special purpose logic 308.

Thus, different implementations of the processor 300 may include: 1) a CPU with the special purpose logic 308 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 302A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 302A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computation; and 3) a coprocessor with the cores 302A-N being a large number of general purpose in-order cores. Thus, the processor 300 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression and/or decompression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (e.g., including 30 or more cores), embedded processor, or other fixed or configurable logic that performs logical operations. The processor may be implemented on one or more chips. The processor 300 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

In various embodiments, a processor may include any number of processing elements that may be symmetric or asymmetric. In one embodiment, a processing element refers to hardware or logic to support a software thread.
Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element, which is capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.

A core may refer to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. A hardware thread may refer to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 306, and external memory (not shown) coupled to the set of integrated memory controller units 314. The set of shared cache units 306 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.
While in one embodiment a ring based interconnect unit 312 interconnects the special purpose logic (e.g., integrated graphics logic) 308, the set of shared cache units 306, and the system agent unit 310/integrated memory controller unit(s) 314, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 306 and cores 302A-N.

In some embodiments, one or more of the cores 302A-N are capable of multithreading. The system agent 310 includes those components coordinating and operating cores 302A-N. The system agent unit 310 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 302A-N and the special purpose logic 308. The display unit is for driving one or more externally connected displays.

The cores 302A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 302A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Figures 4-7 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable for performing the methods described in this disclosure. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.
Figure 4 depicts a block diagram of a system 400 in accordance with one embodiment of the present disclosure. The system 400 may include one or more processors 410, 415, which are coupled to a controller hub 420. In one embodiment the controller hub 420 includes a graphics memory controller hub (GMCH) 490 and an Input/Output Hub (IOH) 450 (which may be on separate chips or the same chip); the GMCH 490 includes memory and graphics controllers coupled to memory 440 and a coprocessor 445; the IOH 450 couples input/output (I/O) devices 460 to the GMCH 490. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 440 and the coprocessor 445 are coupled directly to the processor 410, and the controller hub 420 is a single chip comprising the IOH 450.

The optional nature of additional processors 415 is denoted in Figure 4 with broken lines. Each processor 410, 415 may include one or more of the processing cores described herein and may be some version of the processor 300.

The memory 440 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), other suitable memory, or any combination thereof. The memory 440 may store any suitable data, such as data used by processors 410, 415 to provide the functionality of computer system 400. For example, data associated with programs that are executed or files accessed by processors 410, 415 may be stored in memory 440.
In various embodiments, memory 440 may store data and/or sequences of instructions that are used or executed by processors 410, 415.

In at least one embodiment, the controller hub 420 communicates with the processor(s) 410, 415 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 495.

In one embodiment, the coprocessor 445 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression and/or decompression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 420 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 410, 415 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 410 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 410 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 445. Accordingly, the processor 410 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 445. Coprocessor(s) 445 accept and execute the received coprocessor instructions.

Figure 5 depicts a block diagram of a first more specific exemplary system 500 in accordance with an embodiment of the present disclosure. As shown in Figure 5, multiprocessor system 500 is a point-to-point interconnect system, and includes a first processor 570 and a second processor 580 coupled via a point-to-point interconnect 550. Each of processors 570 and 580 may be some version of the processor 300.
In one embodiment of the disclosure, processors 570 and 580 are respectively processors 410 and 415, while coprocessor 538 is coprocessor 445. In another embodiment, processors 570 and 580 are respectively processor 410 and coprocessor 445.

Processors 570 and 580 are shown including integrated memory controller (IMC) units 572 and 582, respectively. Processor 570 also includes as part of its bus controller units point-to-point (P-P) interfaces 576 and 578; similarly, second processor 580 includes P-P interfaces 586 and 588. Processors 570, 580 may exchange information via a point-to-point (P-P) interface 550 using P-P interface circuits 578, 588. As shown in Figure 5, IMCs 572 and 582 couple the processors to respective memories, namely a memory 532 and a memory 534, which may be portions of main memory locally attached to the respective processors.

Processors 570, 580 may each exchange information with a chipset 590 via individual P-P interfaces 552, 554 using point-to-point interface circuits 576, 594, 586, 598. Chipset 590 may optionally exchange information with the coprocessor 538 via a high-performance interface 539. In one embodiment, the coprocessor 538 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression and/or decompression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 590 may be coupled to a first bus 516 via an interface 596.
In one embodiment, first bus 516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

As shown in Figure 5, various I/O devices 514 may be coupled to first bus 516, along with a bus bridge 518 which couples first bus 516 to a second bus 520. In one embodiment, one or more additional processor(s) 515, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 516. In one embodiment, second bus 520 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 520 including, for example, a keyboard and/or mouse 522, communication devices 527, and a storage unit 528 such as a disk drive or other mass storage device which may include instructions/code and data 530, in one embodiment. Further, an audio I/O 524 may be coupled to the second bus 520. Note that other architectures are contemplated by this disclosure. For example, instead of the point-to-point architecture of Figure 5, a system may implement a multi-drop bus or other such architecture.

Figure 6 depicts a block diagram of a second more specific exemplary system 600 in accordance with an embodiment of the present disclosure. Similar elements in Figures 5 and 6 bear similar reference numerals, and certain aspects of Figure 5 have been omitted from Figure 6 in order to avoid obscuring other aspects of Figure 6.

Figure 6 illustrates that the processors 570, 580 may include integrated memory and I/O control logic ("CL") 572 and 582, respectively. Thus, the CL 572, 582 include integrated memory controller units and include I/O control logic.
Figure 6 illustrates that not only are the memories 532, 534 coupled to the CL 572, 582, but also that I/O devices 614 are coupled to the control logic 572, 582. Legacy I/O devices 615 are coupled to the chipset 590.

Figure 7 depicts a block diagram of a SoC 700 in accordance with an embodiment of the present disclosure. Similar elements in Figure 3 bear similar reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 7, an interconnect unit(s) 702 is coupled to: an application processor 710 which includes a set of one or more cores 202A-N and shared cache unit(s) 306; a system agent unit 310; a bus controller unit(s) 316; an integrated memory controller unit(s) 314; a set of one or more coprocessors 720 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 730; a direct memory access (DMA) unit 732; and a display unit 740 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 720 include a special-purpose processor, such as, for example, a network or communication processor, compression and/or decompression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.
Figure 8 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the disclosure. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 8 shows that a program in a high level language 802 may be compiled using an x86 compiler 804 to generate x86 binary code 806 that may be natively executed by a processor with at least one x86 instruction set core 816. The processor with at least one x86 instruction set core 816 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 804 represents a compiler that is operable to generate x86 binary code 806 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 816. Similarly, Figure 8 shows that the program in the high level language 802 may be compiled using an alternative instruction set compiler 808 to generate alternative instruction set binary code 810 that may be natively executed by a processor without at least one x86 instruction set core 814 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA).
The instruction converter 812 is used to convert the x86 binary code 806 into code that may be natively executed by the processor without an x86 instruction set core 814. This converted code is not likely to be the same as the alternative instruction set binary code 810 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 812 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 806.

Figure 9 illustrates an example portion of a neural network 900 in accordance with certain embodiments. The neural network 900 includes neurons X1-X9. Neurons X1-X4 reside in layer 1, neurons X5-X7 reside in layer 2, and neurons X8 and X9 reside in layer 3. The layers may represent any suitable abstraction or grouping of a neural network. For example, a layer may be an input layer, a hidden layer, an output layer, a convolutional layer, a pooling layer, or other suitable layer. In various embodiments, the layers may overlap. For example, a neuron of an input layer could also be an output neuron in some embodiments. In one example, the neurons of layer 1 are input neurons, the neurons of layer 2 are hidden neurons, and the neurons of layer 3 are output neurons. The input neurons may respectively receive primary inputs (which may be held constant while the neural network 900 processes an output). Any suitable primary inputs may be used. As one example, when neural network 900 performs image processing, a primary input value may be the value of a pixel from an image (and the value of the primary input may stay constant while the image is processed).
As another example, when neural network 900 performs speech processing, the primary input value applied to a particular input neuron may change over time based on changes to the input speech.

While a specific topology and connectivity scheme is shown in Figure 9, the teachings of the present disclosure may be used in neural networks having any suitable topology and/or connectivity. In the embodiment depicted, each link between two neurons has a synaptic weight indicating the strength of the relationship between the two neurons. The synapse weights are depicted as WXY, where X indicates the pre-synaptic neuron and Y indicates the post-synaptic neuron. Links between the neurons may be excitatory or inhibitory in their effect on the activation state of connected neurons. For example, a signal that propagates from X1 to X5 may increase or decrease the membrane potential of X5 depending on the value of W15. In various embodiments, the connections may be directed or undirected.

In general, during each time-step of the neural network, a neuron may receive any suitable inputs, such as a bias value or one or more activation signals from one or more other neurons. The bias value applied to a neuron may be a function of a primary input applied to an input neuron and/or some other value applied to a neuron (e.g., a constant value that may be adjusted during training or other operation of the neural network). In various embodiments, each neuron may be associated with its own bias value or a bias value could be applied to multiple neurons. An activation signal may be a function of the activation of a fan-in neuron and the synapse weight for the connection between the neuron receiving the activation signal and the fan-in neuron.

The neuron may perform a function utilizing the values of its inputs and its current membrane potential. For example, in some embodiments, the inputs may be added to the current membrane potential of the neuron to generate an updated membrane potential.
As another example, a non-linear function, such as a sigmoid transfer function, may be applied to the inputs and the current membrane potential. Any other suitable function may be used. The neuron then updates its membrane potential based on the output of the function. The neuron may send an activation signal to each of its fan-out neurons (i.e., the neurons connected to the output of the neuron) based on its membrane potential. For example, an activation signal from X1 may be propagated to X5, X6, and X7. As another example, an activation signal from X5 may be propagated to X8 and X9 (and in some embodiments to X2, X3, and X4).

In a particular embodiment, one or more memory arrays may comprise memory cells that store the synapse weights, membrane potentials, thresholds, outputs (e.g., the number of times that a neuron has spiked), bias amounts, or other values used during operation of the neural network 900. The number of bits used for each of these values may vary depending on the implementation. In the examples illustrated below, specific bit lengths and/or component sizes may be described with respect to particular elements, but in other embodiments any suitable bit lengths and/or component sizes may be used.

Various embodiments of the present disclosure provide an architecture for a binary neural network. Although various parameters of a neural network (e.g., membrane potentials/neuron activations, synapse weights, biases, etc.) may be stored using any suitable format, such as floating point numbers, integers, or other formats, various hardware efficiencies may be obtained by minimizing the lengths of such parameters. In a binary neural network, the neuron activations, synapse weights, and/or biases may be quantized as being either -1 or +1, thus requiring only a single bit for each parameter. Such a network may avoid complicated addition and/or multiplication used in neural networks that use, e.g., 8 or 16 bit integers to represent such parameters.
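The neuron update described above (bias and weighted fan-in signals accumulated into the membrane potential, optionally passed through a non-linear transfer function) can be sketched in a few lines of Python. This is a simplified illustrative model, not the disclosed hardware; the function name, the example activations, and the weight values are all hypothetical:

```python
import math

def update_neuron(potential, inputs, bias=0.0, transfer=None):
    """One time-step of the neuron model sketched above: accumulate
    the bias and incoming activation signals into the membrane
    potential, then optionally apply a non-linear transfer function."""
    potential += bias + sum(inputs)
    if transfer is not None:
        potential = transfer(potential)
    return potential

def sigmoid(v):
    """Sigmoid transfer function mentioned in the text."""
    return 1.0 / (1.0 + math.exp(-v))

# Each activation signal is the fan-in neuron's activation scaled by
# the synapse weight for that link (values here are made up).
activations = {"X1": 1.0, "X2": 0.5, "X3": -0.25}
weights     = {"X1": 0.8, "X2": -0.4, "X3": 0.6}
signals = [activations[n] * weights[n] for n in activations]
p = update_neuron(0.0, signals, bias=0.1, transfer=sigmoid)
```

With no transfer function supplied, the model reduces to the purely additive update described first; passing `sigmoid` demonstrates the non-linear variant.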
Though a binary neural network may be less accurate than a similarly sized (in terms of neurons) neural network that stores parameters using greater numbers of bits, the binary neural network drastically reduces network storage and computation resources on a per-neuron basis (in some cases by 60x to 100x). Accordingly, the width and depth of a binary neural network may be increased to compensate for or overcome the loss in accuracy without exceeding the resources used by a comparable neural network.

Various embodiments of the present disclosure provide binary neural network architectures using arrays of modular logic blocks coupled together via reconfigurable interconnects to form a desired neural network. The connectivity between the blocks may be reconfigurable. In one embodiment, the connectivity may be set based on a configuration file that is stored in a memory of a device that includes the arrays. In one embodiment, the device that includes the arrays of logic blocks is a field programmable gate array (FPGA) that includes arrays of logic blocks that are specially adapted to binary neural network computations, rather than the standard logic blocks used by typical FPGAs, thus improving the performance of the binary neural network on the device. In various embodiments, the basic computations are performed by XNOR and XOR operations, and the entire neural network may be synthesized as a network of XOR gates and inverters or XNOR gates and inverters (along with some basic supporting logic). Various embodiments provide an array of basic computation logic blocks and switch boxes that may be programmed in the field to implement a desired neural network.

Figure 10 illustrates an example block diagram of a field programmable gate array (FPGA) 1000 in accordance with certain embodiments.
In various embodiments, FPGA 1000 or other device with configurable logic as described herein may be included within and/or coupled to any of the computer systems or components described above (or other suitable systems or components). In a particular embodiment, the logic block arrays and interconnect logic described herein may be included on an FPGA 1000 or other computing device.

An FPGA may be a semiconductor device that includes configurable logic. An FPGA may be programmed via a configuration file (e.g., a bitstream) having any suitable format that defines how the logic of the FPGA is to be configured. In various embodiments, the configuration file may be provided to the FPGA via any suitable means (e.g., via a scan chain). An FPGA may be reprogrammed any number of times after the FPGA is manufactured to implement various functions based on the connectivity of the logic blocks of the FPGA.

In the depicted embodiment, FPGA 1000 includes configurable logic 1002, operational logic 1004, communication controller 1006, FPGA memory 1008, and memory controller 1010. In various embodiments, the configurable logic includes arrays of XOR or XNOR gates or computational logic blocks including XOR or XNOR gates. The configurable logic 1002 may include any other suitable logic, such as memory elements, inverters, capacitors, amplifiers, demultiplexers, majority voters, or other hardware elements. In various embodiments, the configurable logic 1002 may include any of the logic described with respect to the figures below or other similar logic.
In various embodiments, the logic is configured (at least in part) through programmable interconnects that connect logic components of the FPGA to each other.

Operational logic 1004 may utilize data from a configuration file stored in FPGA memory 1008 (e.g., nonvolatile flash memory, SRAM memory, DRAM memory, phase change memory, register files, or other suitable memory) defining the configuration of logic blocks and connectivity to configure the configurable logic 1002 according to the configuration file. Operational logic 1004 may perform other suitable operations of the FPGA. In various embodiments, control bits of the configuration file may operate to configure the logic (e.g., by activating or deactivating particular interconnects between portions of the configurable logic or by sending control signals to particular logic blocks). The operational logic 1004 may include any suitable logic (which may be implemented in configurable logic or fixed logic), such as one or more memory devices including any suitable type of memory (e.g., RAM), one or more transceivers, clocking circuitry, one or more processors located on the FPGA, one or more controllers, or other suitable logic.

Communication controller 1006 may enable FPGA 1000 to communicate with other components (e.g., a processor) of a computer system (e.g., to receive configuration files or operational commands or to communicate neural network inputs or outputs). Memory controller 1010 may enable the FPGA to read data (e.g., operands or results) from or write data to memory of a computer system. In various embodiments, memory controller 1010 may comprise a direct memory access (DMA) controller. Figure 11 illustrates an example computational logic block (CLB) 1100 in accordance with certain embodiments. Figure 12 illustrates an example circuit implementation 1100A of CLB 1100 in accordance with certain embodiments. CLB 1100 includes logic that may be used to perform an activation function for the neural network.
For example, a neuron in a binary neural network may have a non-linear activation function of:

A = sign(Σ_i x_i·w_i)

where A is the calculated activation state value of the neuron, x_i is the activation state value (the activation state values may have a value of +1 or -1) of fan-in neuron i, and w_i is a synapse weight value (which may have a value of +1 or -1 in a binary neural network) of the synapse between the neuron and the fan-in neuron i. The contribution of a bias (which may have a value of +1 or -1) to the neuron may be included in the summation by treating it as one of the incoming activation state values and setting the corresponding weight to +1. By way of example, a logic 1 may be used to represent the value of +1 and a logic 0 may be used to represent the value of -1.

Thus, calculation of the above activation function includes a series of bitwise multiplications (x_i·w_i) and a determination of the sign (i.e., whether the result is positive or negative) of the sum of the results of the multiplications. If the sign is negative, the activation state value of the neuron is -1; if the sign is positive, the activation state value of the neuron is +1.

A truth table that relies on the assumption that a logical 0 represents the value -1 and a logical 1 represents the value +1 is shown below. The output tracks the output of an XNOR gate: x·w = NOT(x XOR w). Accordingly, in a particular embodiment, each bitwise multiplication (x_i·w_i) may be performed by an XNOR gate that receives the respective synapse weight value and activation state value as inputs.

x | w | output (x·w)
-1 (0) | -1 (0) | +1 (1)
-1 (0) | +1 (1) | -1 (0)
+1 (1) | -1 (0) | -1 (0)
+1 (1) | +1 (1) | +1 (1)

In an embodiment, the sign determination of the activation function may be performed by a majority voter logic block which determines whether a majority of inputs to the logic block are logic 1s (corresponding to values of +1) or logic 0s (corresponding to values of -1).
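The activation computation just described can be sketched in a few lines (an illustrative behavioral model, not the patent's circuit): `activation` uses +/-1 arithmetic directly, while `activation_xnor` uses the bit encoding, XNOR products, and a majority count.

```python
def activation(xs, ws):
    """A = sign(sum_i x_i * w_i) on +/-1 values; returns +1 or -1."""
    total = sum(x * w for x, w in zip(xs, ws))
    return 1 if total > 0 else -1

def activation_xnor(x_bits, w_bits):
    """Same computation on bit encodings (1 = +1, 0 = -1); returns a bit."""
    products = [1 - (x ^ w) for x, w in zip(x_bits, w_bits)]
    ones = sum(products)
    # Majority vote: more 1s than 0s means a positive sign.
    return 1 if ones > len(products) - ones else 0

# With an odd number of terms (e.g., fan-in plus a bias term) no tie occurs.
xs, ws = [1, -1, 1, 1, -1], [1, 1, -1, 1, 1]
x_bits = [1 if v == 1 else 0 for v in xs]
w_bits = [1 if v == 1 else 0 for v in ws]
assert activation(xs, ws) == (1 if activation_xnor(x_bits, w_bits) else -1)
```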
In a particular embodiment, a majority voter logic block may be constructed from an adder tree.

In a particular embodiment, CLB 1100 includes two bitwise multipliers (which may each be used to multiply a weight and an activation state) and a full adder (which could be coupled together with other full adders to implement a majority voter logic block). The CLB depicted in Figures 11 and 12 may function as follows:

If (F == 0)
    S = NOT(A XOR B)
    C_O = NOT(C_I XOR D)
Else
    S = A XOR B XOR C_I
    C_O = A·B + B·C_I + C_I·A
Endif

In other words, if the control signal F is equal to 0 (i.e., not activated), then the CLB is configured as two bitwise multipliers (S equals A times B, C_O equals C_I times D). However, if the control signal F is equal to 1 (i.e., activated), then the CLB is configured as a full adder that outputs the sum of A, B, and C_I (where S is the least significant bit and C_O is the carry bit).

Figure 12 depicts one example circuit implementation of CLB 1100, although other embodiments may include any suitable circuit implementations that provide the functions described above. In the embodiment depicted, when F is low (i.e., logical 0), A and B are passed to XOR gate 1102A and the output of XOR gate 1102A is inverted by inverter 1108A and passed to output S. Similarly, C_I and D are passed to XOR gate 1102B and the output of XOR gate 1102B is inverted by inverter 1108B and passed to output C_O. When F is high (i.e., logical 1), the output of XOR gate 1102A is input (along with C_I) to XOR gate 1102B to produce the output S. The majority voter logic block 1106 also determines whether the carry bit should be set to logical 1 based on whether at least two of the three inputs (A, B, and C_I) have a value of logical 1. Figure 13 illustrates example circuitry of an example switch block 1300 in accordance with certain embodiments.
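Before turning to the switch block, the CLB's dual behavior can be modeled in a few lines (a behavioral sketch of the logic described above, not the Figure 12 transistor-level circuit):

```python
def clb(a, b, c_i, d, f):
    """Behavioral model of CLB 1100: F=0 -> two XNOR bitwise multipliers,
    F=1 -> a full adder over A, B, and C_I."""
    if f == 0:
        s = 1 - (a ^ b)        # XNOR: one-bit multiply of A and B
        c_o = 1 - (c_i ^ d)    # XNOR: one-bit multiply of C_I and D
    else:
        s = a ^ b ^ c_i                         # full-adder sum bit
        c_o = (a & b) | (b & c_i) | (c_i & a)   # carry = majority(A, B, C_I)
    return s, c_o

# F = 0: two independent one-bit multiplications.
assert clb(1, 1, 0, 1, 0) == (1, 0)
# F = 1: full adder; 1 + 1 + 1 = 3 -> sum bit 1, carry 1.
assert clb(1, 1, 1, 0, 1) == (1, 1)
```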
An array of switch blocks 1300 (or other suitable reconfigurable interconnects) may be used to couple the CLBs (or other logic blocks described herein) together in order to implement the activation function described above. A switch block may comprise a plurality of inputs I_i and a plurality of outputs O_j. A particular input I_i may be coupled to a particular output O_j of the switch block 1300 when an associated control signal C_ij is activated. The values of the various control signals may be stored in a memory of the device (e.g., FPGA 1000) that includes the CLBs and switch blocks 1300 and held constant during operation of the neural network. In one embodiment, the control signal values are specified in a configuration file (e.g., a bitstream) sent to the device. In particular embodiments, the configuration file may also specify values of the control signals F for each (or at least a subset) of the CLBs 1100.

Figure 14 illustrates an example arrangement of computational logic blocks (CLBs) 1100 and switch blocks (SBs) 1300 in accordance with certain embodiments. As depicted, the example arrangement includes an array of CLBs 1100 and an array of SBs 1300 with each CLB coupled to one or more SBs. A switch block 1300 may be configured (e.g., via a configuration file) to couple the output of a CLB 1100 to the input of another CLB 1100. In various embodiments, some switch blocks 1300 may be local switch blocks that couple neighboring CLBs to each other and other switch blocks 1300 may be global switch blocks that couple a CLB from one region of the device to another region of the device. The arrangement shown in Figure 14 is for explanation purposes only, as the present disclosure contemplates any suitable arrangement of the CLBs and the switch blocks. SBs 1300 may also couple other logic elements (e.g., majority voter logic blocks, XOR gates, XNOR gates, amplifiers, storage elements, or other logic elements of the FPGA) together.
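As an illustrative model (all names hypothetical), a switch block can be treated as a crossbar whose control bits come from the configuration bitstream and stay fixed during operation:

```python
def switch_block(inputs, control):
    """Route inputs[i] to output j wherever control[i][j] is 1.
    In hardware the control values are held constant while the
    neural network runs; at most one input drives each output."""
    n_out = len(control[0])
    outputs = [0] * n_out
    for i, value in enumerate(inputs):
        for j in range(n_out):
            if control[i][j]:
                outputs[j] = value
    return outputs

# Route input 0 to output 1 and input 2 to output 0.
control = [[0, 1],
           [0, 0],
           [1, 0]]
assert switch_block([1, 0, 0], control) == [0, 1]
```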
Figure 15 illustrates an example arrangement of computational logic blocks 1100 to perform an activation function operation of a binary neural network in accordance with certain embodiments. The arrangement depicted in Figure 15 demonstrates a portion of a full-connection layer of a neural network as an example (convolutional layers may be considered a special case of a full-connection layer). The arrangement performs the operation A = sign(Σ_{i=0..8} x_i·w_i), with weight values w_2 = 0 and w_7 = 0 (accordingly, no bitwise multiplications are performed for these weight values and they may be omitted from the XNOR connections of the depicted circuit). The operation depicted may be, e.g., a 9x1 full connection computation or a 3x3 convolution computation.

In the embodiment depicted, the control signal F supplied to CLBs 1100C-1100F is set to 0, thus configuring each of these CLBs as two one-bit multipliers (each implemented by an XNOR gate). As mentioned earlier, weight values w_2 = 0 and w_7 = 0, so these weight values and their corresponding activation state values may be ignored and are not supplied to the CLBs. The results of the bitwise multiplications are provided to a majority voter logic block 1502 that determines the sign of the summation of these results.

In the embodiment depicted, the majority voter logic block 1502 comprises a 3:2 adder tree. In other embodiments, where the number of fan-in neurons to a particular neuron (and thus the number of bitwise multiplications performed for the activation function) is a different size, an adder tree having another appropriate size may be used.

The adder tree is implemented using CLBs 1100G-1100M, which each have their control signal F set to 1; thus, each of these CLBs is configured as a full adder. In addition to summing the results of the bitwise multiplications, the majority voter logic block adds an additive bias value B (not to be confused with a bias value applied to a particular neuron).
The bias value B operates to cause the most significant bit (i.e., r3) of the sum produced by the adder to output the majority value. That is, if the sign of the result of the summation is positive, then more +1 values were supplied to the adder than -1 values, and the most significant bit will be a logical 1 (representing a positive sign). However, if the sign of the result of the summation is negative, then more -1 values were supplied to the adder than +1 values, and the most significant bit will be a logical 0 (representing a negative sign). Accordingly, the majority voter produces the majority bit of the input bits.

In a particular embodiment, the bias B = ⌈(2^⌈log2(n+1)⌉ - (n+1))/2⌉, and the position of the most significant bit is the bit at position ⌈log2(n+1)⌉, where n equals the number of fan-in neurons (plus one for a bias, if applicable) considered in the summation. For example, in the embodiment depicted, the summation was for i from 0 to 8, thus n equals 9, the bias equals 3, and the sign bit is the 4th bit. As another example, when n equals 8, the bias equals 4, and the sign bit is the 4th bit. As yet another example, when n equals 7, the bias equals 0, and the sign bit is the 3rd bit.

In the embodiment depicted, the additive bias equals 3. The additive bias is implemented by applying a bias control signal 1504 of logical 1 to the C_I input of CLB 1100I (indicating "b01") and the C_I input of CLB 1100L (indicating "b10"); thus, the adder tree is biased by "b11" = 3. Any unconnected inputs to the adders of the adder tree may be coupled to a bias control signal of logical 0.

Thus, in the example depicted, only four CLBs are used to implement 7 XNOR operations and six CLBs are used to implement the adder tree to add the 7 XNOR results together and compute the sign bit.
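The bias formula and the three worked examples can be checked directly (a verification sketch; `adder_bias` is an illustrative name):

```python
import math

def adder_bias(n):
    """Return (additive bias B, sign-bit position) for n summed inputs,
    per B = ceil((2**ceil(log2(n+1)) - (n+1)) / 2)."""
    width = math.ceil(math.log2(n + 1))
    bias = math.ceil((2 ** width - (n + 1)) / 2)
    return bias, width

assert adder_bias(9) == (3, 4)  # summation over i = 0..8: bias 3, 4th bit
assert adder_bias(8) == (4, 4)
assert adder_bias(7) == (0, 3)

# The bias centers the count so that the most significant bit is 1
# exactly when a majority of the n = 9 inputs are logical 1s.
bias, width = adder_bias(9)
for ones in range(10):
    msb = ((ones + bias) >> (width - 1)) & 1
    assert msb == (1 if ones >= 5 else 0)
```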
In this case, the bit "r3" of the adder tree result represents the activation function result (i.e., the sign bit), and the remaining three output bits (r0, r1, and r2) may be ignored.

In order to implement the connectivity shown (and other similar connectivities for neural network operations having various sizes), the connections between the CLBs can be configured by setting the control signals to the appropriate SBs (or other reconfigurable connection logic) to the appropriate values.

The example of Figure 15 merely depicts logic to perform an activation function for a single neuron. In an actual implementation of a neural network, the sign bit (which is the activation state value for a particular neuron) may be provided (along with an appropriate synapse weight value) to a fan-out neuron for a similar calculation to be performed for that neuron. Similarly, x_0 - x_8 may each be connected to an output (sign bit) of a similar logical implementation of a fan-in neuron. Thus, the CLBs may be arranged together to form a multilayered neural network having any suitable number of neurons and connectivity.

Figure 16 illustrates an example analog majority voter 1602 coupled to a plurality of bitwise multipliers 1604 in accordance with certain embodiments. In various embodiments, an analog majority voter 1602 may be used to compute the sign of the sum of the outputs of the bitwise multipliers 1604. In various embodiments, an analog majority voter 1602 may be more energy efficient, faster, and require less chip area than a comparable digital majority voter (e.g., an adder tree with an additive bias). Analog majority voter 1602 comprises a set of capacitors 1608 that are each coupled through a clocked buffer 1606 to a result of a bitwise multiplication performed by a respective XNOR gate 1604.
The capacitors 1608 are connected in parallel to each other and in series with an amplifier 1610 that is operable to generate an output indicative of the sign of the sum of the outputs of the bitwise multipliers.

The bitwise multipliers 1604 and the analog majority voter 1602 are operable to perform the activation function described above for n + 1 inputs (e.g., activation state values from n fan-in neurons and one neuron bias). Figure 16 depicts an example 3x3 convolution computation or 9x1 full connection operation for a binary neural network when n equals 8. For a general integer n, if more than half of the products x_i·w_i (for i = 0, 1, 2, ..., n) are '1' (indicating a +1 value), then the analog voter output ("adder sign") will be '1', but if more than half of the products are '0' (indicating a -1 value), then the analog voter output will be '0'.

Although a particular implementation is shown, other embodiments may include an analog majority voter (or comparable logic) that is implemented using any suitable circuitry. For example, a minority voter (e.g., in which XOR gates replace the XNOR gates) that produces the inverse of the sign could be used with an inverter in place of the majority voter. As another example, a dual rail system may be used in which both a minority voter and a majority voter are used (with each receiving the same inputs) and the outputs of the voters are coupled to respective inputs of a differential amplifier.

The example of Figure 16 merely depicts logic to perform an activation function for a single neuron. In an actual implementation of a neural network, the adder sign bit (which is the activation state value for a particular neuron) may be provided (along with an appropriate synapse weight value) to a fan-out neuron for a similar calculation to be performed for that neuron. Similarly, x_0 - x_n may each be connected to an output (sign bit) of a similar logical implementation of a fan-in neuron.
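An idealized charge-sharing model of the capacitive voter can illustrate the behavior (an assumption for illustration; real behavior depends on capacitor matching and amplifier offset):

```python
VDD = 1.0  # supply voltage, arbitrary units

def analog_majority(bits):
    """Each bit drives one equal-sized capacitor; the shared node settles
    at the average input voltage, and the amplifier thresholds it at
    Vdd/2 to produce the adder sign."""
    node_voltage = sum(b * VDD for b in bits) / len(bits)
    return 1 if node_voltage > VDD / 2 else 0

# Nine products (n = 8): five 1s out of nine -> adder sign is 1.
assert analog_majority([1, 1, 1, 1, 1, 0, 0, 0, 0]) == 1
assert analog_majority([1, 1, 0, 0, 0, 0, 0, 0, 0]) == 0
```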
Thus, XNOR (or XOR) gates and analog majority voters 1602 may be arranged together to form a multilayered neural network having any suitable number of neurons and connectivity.

While an analog majority voter may not produce the adder sign with 100% accuracy, the possibility of meaningful errors is very small in typical neural network implementations. Moreover, the analog majority voter could be included in the neural network training process in order to tune the weights of the neural network to reduce any negative effects the errors might cause.

When an analog majority voter is used, the logic blocks used to implement the bitwise multiplications of the weights and activation states may be simplified to XNOR or XOR gates. Thus, these logic blocks may be greatly simplified in comparison to computational logic blocks 1100, as they would not include the logic to implement the full adder and to switch between the full adder and the bitwise multipliers according to the value of the control input F. However, in some embodiments, computational logic blocks 1100 could implement bitwise multipliers that are then coupled to an analog majority voter 1602.

As with the computational logic blocks 1100 and switch boxes 1300, an FPGA may include an array of analog majority voters that each include similarly valued capacitances and the same number of inputs. Thus, the analog majority voter may be used as a standard logic cell in the FPGA. In some embodiments, the FPGA could include arrays of XOR gates and/or XNOR gates to implement the bitwise multipliers. For example, a standard logic cell in an FPGA could include one XNOR gate or XOR gate with two inputs and one output, or a collection of XNOR or XOR gates with a commensurate amount of inputs and outputs. Figure 17 illustrates an example analog majority voter 1702 coupled to a plurality of bitwise multipliers 1604 in accordance with certain embodiments.
The analog majority voter 1702 is similar to the analog majority voter 1602, but uses an inverter chain 1704 comprising two or more inverters in the amplifier stage to resolve the adder sign (e.g., to determine whether the input to the amplifier stage is greater than or less than Vdd/2).

Figure 18 illustrates an example analog majority voter 1800 coupled to a plurality of bitwise multipliers 1604 in accordance with certain embodiments. The analog majority voter 1800 may be used to implement multi-level weighted neural networks. For example, synapses with larger magnitude weights may be connected to multiple voter inputs (i.e., where each capacitor 1608 represents a voter input). Thus, in the embodiment depicted, w_0 may have a weight magnitude of 2, w_1 may have a weight magnitude of 3, w_2 may have a weight magnitude of 1, w_3 may have a weight magnitude of 2, and w_n may have a weight magnitude of 1. Such weight magnitudes assume that the capacitors 1608 have the same capacitance. Alternatively, a synapse with a higher magnitude weight value could be coupled to a single capacitor that is larger than the capacitor coupled to a synapse with a lower magnitude weight value. In various embodiments utilizing multi-level weighted neural networks, the synapse weight value w_i that is used in the bitwise multiplication may be the sign (i.e., -1 or +1) of the weight (while the magnitude of the weight is represented based on the amount of capacitance coupled to the result of the bitwise multiplication).

Implementations of neural networks utilizing majority voters built from computational logic blocks 1100 could similarly implement multi-level weights by connecting the same x_i·w_i to multiple inputs of the majority voter.

In various embodiments, an analog majority voter that implements multi-level weights as described above may be provided as a standard logic cell on an FPGA. The cell could have various inputs that correspond to various different weight magnitudes.
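Either realization follows the same arithmetic; replicating voter inputs per weight magnitude can be sketched as follows (an illustrative model; as noted above, the bitwise multiply uses only the weight's sign):

```python
def weighted_majority(x_bits, signed_weights):
    """Multi-level weights: XNOR each input bit with the weight's sign
    bit, then give the product a number of votes equal to the weight's
    magnitude (modeling one capacitor per vote)."""
    votes = []
    for x, w in zip(x_bits, signed_weights):
        sign_bit = 1 if w > 0 else 0
        product = 1 - (x ^ sign_bit)           # XNOR with the weight's sign
        votes.extend([product] * abs(int(w)))  # replicate per magnitude
    return 1 if sum(votes) * 2 > len(votes) else 0

# Weight magnitudes 3 and 1: the heavier synapse dominates the vote.
assert weighted_majority([0, 1], [3, 1]) == 0
assert weighted_majority([1, 1], [-3, 1]) == 0
```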
As one non-limiting example, the cell could include 100 inputs that each correspond to a weight having a magnitude of 1, 100 inputs that each correspond to weights having a magnitude of 1.5, 100 inputs that each correspond to weights having a magnitude of 2, and so on.

Alternatively, each input of an analog majority voter may represent the same weight, and a larger magnitude weight may be implemented by coupling the output of a bitwise multiplier to multiple input pins of the logic cell implementing the majority voter.

Figure 19 illustrates an example multi-stage analog majority voter 1900 in accordance with certain embodiments. Inputs y_i may represent any suitable inputs to a majority voter, such as an output of a bitwise multiplier (e.g., that multiplies a weight value times an activation state value). Majority voter 1900 includes a first stage of analog majority voters including majority voters 1902A-C that have outputs coupled to an analog majority voter 1902D of a second stage. Other embodiments may include any number of stages of analog majority voters coupled in like manner and any number of analog majority voters in each stage (with the last stage having a single majority voter to determine the adder sign).

When the number of inputs to an analog majority voter is very large (e.g., the number of fan-in neurons in a neural network could be in the thousands), the analog majority voter may require a very sensitive amplifier to distinguish whether the input majority bit is '1' or '0'. For example, there may be a system error (E) for an analog majority voter due to manufacturing process variations. Thus, an n-input analog majority voter may vote (n/2 + E) "1"s as "1" (instead of the optimal value of n/2). Accordingly, the system error E becomes more significant as n increases. By including multiple stages of majority voters arranged in a tree structure as shown, the number of inputs n is decreased for each analog voter, diminishing the effect of the system error E.
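A two-stage version of this tree can be sketched as follows (an illustrative model; note that a staged vote is an approximation and can differ from a single flat vote on some input patterns):

```python
def majority(bits):
    """Single-stage ideal majority vote."""
    return 1 if sum(bits) * 2 > len(bits) else 0

def two_stage_majority(bits, groups=3):
    """Split the inputs across smaller first-stage voters, then vote on
    their outputs; each stage sees fewer inputs, so a fixed systematic
    error E matters less per stage."""
    size = len(bits) // groups
    stage1 = [majority(bits[g * size:(g + 1) * size]) for g in range(groups)]
    return majority(stage1)

# Nine inputs split into three groups of three.
votes = [1, 1, 1, 0, 0, 0, 1, 1, 0]
assert two_stage_majority(votes) == majority(votes) == 1
```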
Figure 20 illustrates an example connection scheme for an analog majority voter 2002 in accordance with certain embodiments. When the number of meaningful inputs y_0 to y_(m-1) is less than the number of available voter inputs, the remaining input pins may be connected equally to '1's and '0's so that the result of the analog majority voter is not skewed. The binding may be performed during a programming stage (e.g., specified in a configuration file) when the analog voter is predesigned and the actual connectivity is not known during the chip design phase (e.g., because the connectivity is reconfigurable).

Figure 21 illustrates an example flow 2100 for configuring a device to implement a neural network in accordance with certain embodiments. At 2102, neural network parameters are received. The neural network parameters may include any suitable parameters. For example, the parameters may include size parameters, such as the number of neurons in the neural network, the number of layers in the neural network, the number of neurons per layer, and/or other suitable size parameters. As another example, the parameters may include synapse weight value parameters, such as the signs and magnitudes of synapse weights of the neural network. In various embodiments, the weight value parameters may be determined during a training process of a neural network that is performed, e.g., by one or more processors of a computer system and/or via other means. As another example, the parameters may include one or more neuron bias values to be applied to each neuron at each time-step of the neural network.
As another example, the parameters may include connectivity information specifying how the neurons are to be connected to each other (e.g., the connectivity information may specify or at least provide information allowing the determination of the fan-in and fan-out neurons of each neuron in the neural network).

At 2104, a logic block configuration is determined for a neural network based on the neural network parameters. This may include mapping the neural network parameters onto an available device architecture. For example, an architecture of a device (e.g., an FPGA) that is to implement the neural network may include groups of available logic blocks. In some embodiments, the logic blocks (and/or a group of a specific type of logic block) may be arranged in an array on the device (i.e., the logic blocks may be located in a structured order on the device). In some embodiments, the logic blocks may be placed in a repeated pattern on the device. The logic blocks may be coupled together using reconfigurable interconnects (e.g., switch blocks 1300 or other suitable reconfigurable interconnects).

In a particular embodiment, the device includes a group of computational logic blocks 1100 (or aggregations thereof) and switch blocks 1300 (with each block having any suitable number of inputs and outputs). In another embodiment, a group of majority voters (which may be analog or digital), a group of switch blocks, and a group of XOR and/or XNOR logic blocks are provided, where each XOR or XNOR logic block comprises any number of XOR or XNOR gates that are each to couple to two inputs of the logic block and to provide an output of the logic block (which may have one output for each XOR or XNOR gate).
In various embodiments, any suitable logic blocks may be provided.

The determination of the logic block configuration may include determining which logic blocks are to be used to implement the neural network and the connectivity that is to be enabled between the logic blocks (e.g., which control signals are to be supplied to switch blocks or other reconfigurable interconnect). In a particular embodiment, the determination of the logic block configuration may also include determining a plurality of configuration signals that are to be supplied to computational logic blocks in order to implement the desired functionality (e.g., to configure the computational logic blocks as bitwise multipliers or full adders, or to provide an equal number of '1's and '0's to unused inputs of a majority voter).

At 2106, a configuration file that specifies the determined logic block configuration is generated. The configuration file may be compatible with logic of the device and may cause the device to implement the logic block configuration by activating various control signals of the device. At 2108, the configuration file is loaded onto the device. For example, the configuration file may be loaded onto the device via a scan chain or other suitable means.

In various embodiments, a software program that is executed by a processor or other logic may receive the neural network parameters, determine the logic block configuration for the neural network, and generate the configuration file that specifies the logic block configuration. In other embodiments, a software program may perform any one or more of these operations (or other operations associated with the device that is to implement the neural network). In a particular embodiment, the software program may provide a user interface that allows a user to specify the neural network parameters and initiate the loading of the configuration file onto the device.

At 2110, the neural network operates on the device.
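The mapping at 2104-2106 can be sketched for one neuron of the Figure 15 example (a hypothetical mapping with illustrative names and dictionary keys, not the patent's bitstream format; the adder-CLB count is an assumed estimate). It reproduces the counts from that example: four multiplier CLBs for 7 XNORs, six adder CLBs, and an additive bias of 3.

```python
import math

def build_neuron_config(weights):
    """weights: per-synapse weights for one neuron (0 = omitted synapse).
    Returns an illustrative logic-block configuration for that neuron."""
    active = [i for i, w in enumerate(weights) if w != 0]
    n = len(weights)  # fan-in count used by the bias formula
    width = math.ceil(math.log2(n + 1))
    return {
        "active_synapses": active,
        "multiplier_clbs": (len(active) + 1) // 2,  # F=0 CLBs, two XNORs each
        "adder_clbs": max(len(active) - 1, 0),      # F=1 CLBs in the adder tree
        "additive_bias": math.ceil((2 ** width - (n + 1)) / 2),
        "sign_bit_position": width,
    }

# Nine synapses with w_2 = w_7 = 0, as in the Figure 15 arrangement.
cfg = build_neuron_config([1, -1, 0, 1, 1, -1, 1, 0, 1])
assert cfg["multiplier_clbs"] == 4
assert cfg["adder_clbs"] == 6
assert cfg["additive_bias"] == 3
assert cfg["sign_bit_position"] == 4
```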
Inputs may be applied to input neurons of the neural network (e.g., via communication ports of the device), the neural network may iterate through any suitable number of time-steps, and the device may provide the outputs of the neural network. The operations may be repeated any number of times to reconfigure the device for other implementations of neural networks.

Some of the blocks illustrated in Figure 21 may be repeated, combined, modified or deleted where appropriate, and additional blocks may also be added to the flow. Additionally, blocks may be performed in any suitable order without departing from the scope of particular embodiments.

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language (HDL) or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In some implementations, such data may be stored in a database file format such as Graphic Data System II (GDS II), Open Artwork System Interchange Standard (OASIS), or a similar format.

In some implementations, software-based hardware models, and HDL and other functional description language objects can include register transfer language (RTL) files, among other examples.
Such objects can be machine-parsable such that a design tool can accept the HDL object (or model), parse the HDL object for attributes of the described hardware, and determine a physical circuit and/or on-chip layout from the object. The output of the design tool can be used to manufacture the physical device. For instance, a design tool can determine configurations of various hardware and/or firmware elements from the HDL object, such as bus widths, registers (including sizes and types), memory blocks, physical link paths, and fabric topologies, among other attributes that would be implemented in order to realize the system modeled in the HDL object. Design tools can include tools for determining the topology and fabric configurations of system on chip (SoC) and other hardware devices. In some instances, the HDL object can be used as the basis for developing models and design files that can be used by manufacturing equipment to manufacture the described hardware. Indeed, an HDL object itself can be provided as an input to manufacturing system software to cause the manufacture of the described hardware.

In any representation of the design, the data representing the design may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage such as a disc may be the machine readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made.
Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.

Thus, one or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, often referred to as "IP cores", may be stored on a non-transitory tangible machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that manufacture the logic or processor.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as code 530 illustrated in Figure 5, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired.
In fact, the mechanisms described herein are not limited in scope to any particular programming language. In various embodiments, the language may be a compiled or interpreted language.

The embodiments of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable (or otherwise accessible) by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory media that may receive information therefrom.

Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media.
Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

Logic may be used to implement any of the functionality of the various components such as FPGA 1000, CLB 1100, SB 1300, and the majority voters (and the various logical components therein), or other component or system described herein. "Logic" may refer to hardware, firmware, software and/or combinations of each to perform one or more functions. For example, each of the components of the FPGA 1000, CLB 1100, SB 1300, and the majority voters described herein may be hardware elements (e.g., circuitry). As another example, logic may include hardware, such as a micro-controller or processor, associated with a non-transitory medium to store code adapted to be executed by the micro-controller or processor. Therefore, reference to logic, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium.
Furthermore, in another embodiment, use of logic refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term logic (in this example) may refer to the combination of the hardware and the non-transitory medium. In various embodiments, logic may include a microprocessor or other processing element operable to execute software instructions, discrete logic such as an application specific integrated circuit (ASIC), a programmed logic device such as a field programmable gate array (FPGA), a memory device containing instructions, combinations of logic devices (e.g., as would be found on a printed circuit board), or other suitable hardware and/or software. Logic may include one or more gates or other circuit components, which may be implemented by, e.g., transistors. In some embodiments, logic may also be fully embodied as software. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage medium. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. Often, logic boundaries that are illustrated as separate commonly vary and potentially overlap. For example, first and second logic may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware.Use of the phrase 'to' or 'configured to,' in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. 
In this example, an apparatus or element thereof that is not operating is still 'configured to' perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate 'configured to' provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term 'configured to' does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases 'capable of/to' and/or 'operable to,' in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of to, capable to, or operable to, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of an apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values.
However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.

Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e. reset, while an updated value potentially includes a low logical value, i.e. set. Note that any combination of values may be utilized to represent any number of states.

In at least one embodiment, an apparatus comprises a plurality of bitwise multipliers, a bitwise multiplier of the plurality of bitwise multipliers to multiply a binary synapse weight value of a neural network by a binary activation state value of a neuron of the neural network; a plurality of majority voters, a majority voter of the plurality of majority voters to receive outputs of a first group of bitwise multipliers and to generate a majority result to indicate whether a majority of the outputs of the first group of bitwise multipliers are set to a first binary value or a second binary value; and a first plurality of reconfigurable connections coupled to outputs of the plurality of bitwise multipliers and inputs of the plurality of majority voters.

In an embodiment, the plurality of bitwise multipliers and the plurality of majority voters are implemented by a plurality of computational logic blocks, a computational logic block of the plurality of computational logic blocks comprising a pair of bitwise multipliers and a full adder, wherein outputs of the computational logic
block are to be coupled to the pair of bitwise multipliers when a function signal coupled to the computational logic block is a first value and to be coupled to the full adder when the function signal is a second value. In an embodiment, the full adder and at least one bitwise multiplier of the computational logic block share an XOR or an XNOR gate. In an embodiment, the plurality of reconfigurable connections comprise a plurality of switch blocks, wherein an input of the switch block is selectively coupled to an output of the switch block via a configurable control signal. In an embodiment, the reconfigurable connections are to be set based on a configuration file to be loaded into a memory of the apparatus. In an embodiment, the majority voter comprises an adder tree including the computational logic block. In an embodiment, the adder tree is to receive an additive bias value and output a bit as the majority result. In an embodiment, the majority voter comprises an analog majority voter comprising a plurality of capacitors coupled to outputs of the first group of bitwise multipliers. In an embodiment, the analog majority voter comprises a plurality of analog majority voters arranged in a plurality of stages. 
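The dual-mode computational logic block just described can be sketched behaviorally. This is an illustrative model only: it assumes the common binarized-network convention that the bit values {0, 1} encode the signed values {-1, +1}, so a bitwise multiply becomes an XNOR, and it names the shared gate explicitly to show how the full adder and a bitwise multiplier can share an XOR/XNOR. The function name and encoding are assumptions for illustration, not taken from the claims.

```python
def clb(a1, b1, a2, b2, cin, function):
    """Behavioral sketch of a computational logic block (CLB).

    function == 0 -> act as a pair of bitwise multipliers: XNOR of
    (a1, b1) and XNOR of (a2, b2).
    function == 1 -> act as a full adder on (a1, b1, cin), returning
    (sum, carry-out).

    The XOR of a1 and b1 is the shared gate: inverted, it is the
    bitwise-multiplier (XNOR) output; uncombined, it feeds the
    full adder's sum path.
    """
    shared = a1 ^ b1                       # the shared XOR gate
    if function == 0:
        # Bitwise multiply of {-1,+1} values encoded as {0,1} is XNOR.
        return (1 - shared, 1 - (a2 ^ b2))
    s = shared ^ cin                       # full-adder sum bit
    cout = (a1 & b1) | (cin & shared)      # full-adder carry-out
    return (s, cout)
```

Routing the same physical outputs to either the multiplier pair or the adder, selected by a single function signal, is what lets one block serve both the multiply stage and the adder-tree stage of the majority voter.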
In an embodiment, the analog majority voter comprises a first group of capacitors coupled to a single output of the first group of bitwise multipliers to implement a synapse weight magnitude based on the number of capacitors in the first group of capacitors.

In at least one embodiment, a method comprises configuring a first plurality of reconfigurable connections to couple outputs of a plurality of bitwise multipliers to inputs of a majority voter; performing, by each bitwise multiplier of the plurality of bitwise multipliers, a bitwise multiplication of a binary synapse weight value of a neural network and a corresponding binary activation state value of a neuron of the neural network and providing the result to an output of the bitwise multiplier; and determining, by the majority voter, a majority result that indicates whether a majority of outputs of the bitwise multipliers are set to a first binary value or a second binary value.

In an embodiment, the method further comprises setting a value of a function signal of a computational logic block to cause the computational logic block to implement at least one bitwise multiplier of the plurality of bitwise multipliers, wherein the computational logic block comprises a pair of bitwise multipliers and a full adder, wherein outputs of the computational logic block are to be coupled to the pair of bitwise multipliers when the function signal is a first value and to be coupled to the full adder when the function signal is a second value. In an embodiment, the full adder and at least one bitwise multiplier of the computational logic block share an XOR or an XNOR gate. In an embodiment, the majority voter comprises an adder tree including the computational logic block. In an embodiment, the majority voter comprises an analog majority voter comprising a plurality of capacitors coupled to outputs of the plurality of bitwise multipliers.
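The method above — bitwise multiplication followed by a majority vote, optionally shifted by an additive bias — can be sketched end to end. As before, this is a minimal sketch assuming the {0, 1}-encodes-{-1, +1} convention; all names are illustrative and not drawn from the claims.

```python
def bitwise_multiply(weights, activations):
    # XNOR implements multiplication of {-1,+1} values encoded as {0,1}.
    return [1 - (w ^ a) for w, a in zip(weights, activations)]

def majority_vote(products, bias=0):
    # 1 when a majority of the product bits are 1, else 0. An additive
    # bias shifts the decision threshold, as with the adder-tree
    # variant that receives an additive bias value.
    return 1 if 2 * sum(products) + bias > len(products) else 0

def binary_neuron(weights, activations, bias=0):
    # One binarized neuron: multiply each weight/activation pair,
    # then take the majority of the products.
    return majority_vote(bitwise_multiply(weights, activations), bias)
```

In hardware the popcount-and-compare in `majority_vote` corresponds to either the digital adder tree or the capacitive analog voter described in the embodiments; the software sketch only fixes the input/output behavior.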
In an embodiment, the plurality of reconfigurable connections comprise a plurality of switch blocks, wherein an input of the switch block is selectively coupled to an output of the switch block via a configurable control signal. In an embodiment, the reconfigurable connections are to be set based on a configuration file to be loaded into a memory of the apparatus. In an embodiment, the adder tree is to receive an additive bias value and output a bit as the majority result. In an embodiment, the analog majority voter comprises a plurality of analog majority voters arranged in a plurality of stages. In an embodiment, the analog majority voter comprises a first group of capacitors coupled to a single output of the first group of bitwise multipliers to implement a synapse weight magnitude based on the number of capacitors in the first group of capacitors.

In at least one embodiment, at least one machine readable storage medium includes instructions stored thereon, the instructions when executed by a machine to cause the machine to generate a configuration file, the configuration file to be loaded onto a device to cause the device to configure a first plurality of reconfigurable connections to couple outputs of a plurality of bitwise multipliers to inputs of a majority voter; perform, by each bitwise multiplier of the plurality of bitwise multipliers, a bitwise multiplication of a binary synapse weight value of a neural network and a corresponding binary activation state value of a neuron of the neural network and providing the result to an output of the bitwise multiplier; and determine, by the majority voter, a majority result that indicates whether a majority of the outputs of the bitwise multipliers are set to a first binary value or a second binary value.

In an embodiment, the configuration file is further to cause the FPGA to set a value of a function signal of a computational logic block to cause the computational logic block to implement at least one bitwise
multiplier of the plurality of bitwise multipliers, wherein the computational logic block comprises a pair of bitwise multipliers and a full adder, wherein outputs of the computational logic block are to be coupled to the pair of bitwise multipliers when the function signal is a first value and to be coupled to the full adder when the function signal is a second value. In an embodiment, the full adder and at least one bitwise multiplier of the computational logic block share an XOR or an XNOR gate. In an embodiment, the majority voter comprises an adder tree including the computational logic block. In an embodiment, the majority voter comprises an analog majority voter comprising a plurality of capacitors coupled to outputs of the plurality of bitwise multipliers.

In at least one embodiment, a system comprises means for configuring a first plurality of reconfigurable connections to couple outputs of a plurality of bitwise multipliers to inputs of a majority voter; means for performing a bitwise multiplication of a binary synapse weight value of a neural network and a corresponding binary activation state value of a neuron of the neural network and providing the result to an output of a bitwise multiplier; and means for determining a majority result that indicates whether a majority of bitwise multiplication outputs are set to a first binary value or a second binary value.

In an embodiment, the system further comprises means for generating a configuration file specifying connectivity of the reconfigurable connections. In an embodiment, the system further comprises means for receiving neural network parameters and generating the configuration file based on the neural network parameters.
In an embodiment, the system further comprises means for setting a value of a function signal of a computational logic block to cause the computational logic block to implement at least one bitwise multiplier, wherein the computational logic block comprises a pair of bitwise multipliers and a full adder, wherein outputs of the computational logic block are to be coupled to the pair of bitwise multipliers when the function signal is a first value and to be coupled to the full adder when the function signal is a second value. In an embodiment, the full adder and at least one bitwise multiplier of the computational logic block share an XOR or an XNOR gate.

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.
There is disclosed an apparatus including a substrate (105, 115) defining an interior of the apparatus, a device exterior to the substrate including a gate electrode (130, 132), and a straining layer (213, 214) exterior to the gate electrode and exterior to the substrate. |
IN THE CLAIMS

What is claimed:

1. An apparatus comprising: a substrate; a device over the substrate including a gate electrode over a surface of the substrate; and a straining material disposed over the gate electrode.

2. The apparatus of claim 1, wherein the gate electrode is under a strain caused by at least one of a different lattice spacing of the straining material; a thermal expansion mismatch of the straining material and a material of the gate electrode; and an intrinsic strain in the straining material.

3. The apparatus of claim 1, wherein the gate electrode comprises a material having a first lattice spacing that comprises a different lattice spacing than a second lattice spacing of the straining material.

4. The apparatus of claim 1, wherein the gate electrode is under a compressive strain caused by the straining material having a first lattice spacing being a smaller lattice spacing than a second lattice spacing of the gate electrode.

5. The apparatus of claim 1, wherein the gate electrode is under a tensile strain caused by the straining material having a first lattice spacing being a larger lattice spacing than a second lattice spacing of the gate electrode material.

6. The apparatus of claim 1, wherein the substrate further comprises a channel region.

7. The apparatus of claim 6, wherein the channel region is under a strain caused by at least one of a different lattice spacing of the straining material; a thermal expansion mismatch of the straining material and a material of the gate electrode; and an intrinsic strain in the straining material.

8. The apparatus of claim 7, wherein the channel region is under a tensile strain.

9. The apparatus of claim 7, wherein the channel region is under a compressive strain.

10.
The apparatus of claim 1, wherein the substrate further comprises a channel region, and wherein the channel region comprises a material having a first lattice spacing that comprises a different lattice spacing than a second lattice spacing of the straining material.

11. The apparatus of claim 1, wherein the substrate further comprises a channel region, and wherein the channel region is under a compressive strain caused by a first lattice spacing of the straining material being a smaller lattice spacing than a second lattice spacing of the channel region.

12. The apparatus of claim 1, wherein the straining material comprises an epitaxial layer of a silicon alloy material.

13. The apparatus of claim 1, wherein the straining material comprises a material selected from the group consisting of silicon (Si), silicon germanium (Si1-xGex), silicon carbide (Si1-xCx), nickel silicide (NiSi), titanium silicide (TiSi2), and cobalt silicide (CoSi2).

14. The apparatus of claim 1, wherein the straining material comprises silicon doped with at least one of boron, carbon, nitrogen, and phosphorous.

15. The apparatus of claim 1, wherein the straining material comprises silicon doped with at least one of aluminum, gallium, germanium, arsenic, indium, tin, and antimony.

16. An apparatus comprising: a substrate; a device over the substrate including a gate electrode over a top surface of the substrate, and a first junction region and a second junction region in the substrate adjacent the gate electrode; and a straining material having at least one of a lattice spacing that is different than a lattice spacing of the gate electrode; a coefficient of linear thermal expansion that is different than a coefficient of linear thermal expansion of a material of the gate electrode; and an intrinsic stress; the straining material disposed over the gate electrode.

17.
The apparatus of claim 16, wherein the straining material comprises silicon germanium having a lattice spacing that is larger than a lattice spacing of the substrate adapted to impart a tensile strain in the gate electrode.

18. A method comprising: forming a device on a substrate, the device including: a gate electrode on a surface of the substrate; a first junction region and a second junction region in the substrate adjacent the gate electrode; and depositing a straining layer on the gate electrode.

19. The method of claim 18, wherein depositing the straining layer comprises depositing a sufficient thickness of straining layer having a different lattice spacing than a lattice spacing of the substrate to cause a strain in the substrate.

20. The method of claim 18, wherein depositing the straining layer comprises a chemical vapor deposition sufficient to form an epitaxial layer of a straining material.

21. An apparatus comprising: a substrate defining an interior of the apparatus; a device exterior to the substrate comprising a gate electrode; and a straining layer exterior to the device and exterior to the substrate.

22. The apparatus of claim 21, further comprising a gate dielectric exterior to the substrate, interior to the gate electrode, and interior to the straining layer.

23. The apparatus of claim 22, wherein the gate dielectric comprises at least one of an aluminum nitride, an aluminum oxide, a silicon nitride, and a silicon oxide.

24. The apparatus of claim 21, wherein the substrate further comprises a channel.

25. The apparatus of claim 24, wherein the channel is interior to the gate electrode, and interior to the straining layer.

26. The apparatus of claim 22, wherein the substrate further comprises a channel, wherein the channel is interior to the gate dielectric, interior to the gate electrode, and interior to the straining layer.

27.
The apparatus of claim 25, wherein the substrate further comprises at least two junction regions adjacent the channel. |
GATE-INDUCED STRAIN FOR MOS PERFORMANCE IMPROVEMENT

FIELD

[0001] Circuit devices and the manufacture and structure of circuit devices.

BACKGROUND

[0002] Increased performance of circuit devices on a substrate (e.g., integrated circuit (IC) transistors, resistors, capacitors, etc. on a semiconductor (e.g., silicon) substrate) is usually a major factor considered during design, manufacture, and operation of those devices. For example, during design and manufacture or forming of metal oxide semiconductor (MOS) transistor semiconductor devices, such as those used in a complementary metal oxide semiconductor (CMOS), it is often desired to increase movement of electrons in N-type MOS device (NMOS) channels and to increase movement of positive charged holes in P-type MOS device (PMOS) channels.

[0003] U.S. Patent Number 6,335,233 discloses a first conductive impurity ion that is implanted into a semiconductor substrate to form a well area on which a gate electrode is formed. A first non-conductive impurity is implanted into the well area on both sides of the gate electrode to control a substrate defect therein and to form a first precipitate area to a first depth. A second conductive impurity ion is implanted into the well area on both sides of the gate electrode, so that a source/drain area is formed to a second depth being relatively shallower than the first depth. A second non-conductive impurity is implanted into the source/drain area so as to control a substrate defect therein and to form a second precipitate area.

[0004] U.S. Patent Number 6,365,472 discloses a semiconductor device that includes a lightly doped drain (LDD) structure MOS transistor wherein the formation of defects due to ion implantation at the edge of the side wall of the gate electrode is suppressed.
In order to perform the ion implantation for forming the source and drain regions of the MOS transistor, impurity ions are implanted using the first and second side walls provided to the gate electrode as a mask, and then the heat treatment for impurity activation is performed after removing the second side wall near the source and drain regions doped with high-concentration impurity ions. By removing the second side wall prior to the heat treatment, the stress applied to the edges of the high-concentration impurity doped regions in an amorphous state is decreased.

[0005] U.S. Patent Number 6,455,364 discloses a method for fabricating a semiconductor device in which a collector layer of a first conductivity type is formed in a region of a semiconductor substrate sandwiched by device isolation. A collector opening is formed through a first insulating layer deposited on the semiconductor substrate so that the range of the collector opening covers the collector layer and part of the device isolation. A semiconductor layer of a second conductivity type as an external base is formed on a portion of the semiconductor substrate located inside the collector opening, while junction leak prevention layers of the same conductivity type as the external base are formed in the semiconductor substrate.

[0006] U.S. Patent Number 6,455,871 discloses a method for fabricating a SiGe device using a metal oxide film. There is disclosed growing a silicon buffer layer and a SiGe buffer layer on a silicon substrate by low-temperature process, so that defects caused by the mismatch of the lattice constants being applied to the epitaxial layer from the silicon substrate are constrained in the buffer layer formed by the low-temperature process.

[0007] U.S.
Patent Application Publication Number 2002/0140031 discloses a strained silicon on insulator (SOI) structure and a method for its fabrication, in which a strained silicon layer lies directly on an insulator layer, contrary to the prior requirement for strained-Si layers to lie directly on a strain-inducing (e.g., SiGe) layer. The method generally entails forming a silicon layer on a strain-inducing layer so as to form a multilayer structure, in which the strain-inducing layer has a different lattice constant than silicon so that the silicon layer is strained as a result of the lattice mismatch with the strain-inducing layer. The multilayer structure is then bonded to a substrate so that an insulating layer is between the strained silicon layer and the substrate, and so that the strained silicon layer directly contacts the insulating layer. The strain-inducing layer is then removed to expose a surface of the strained silicon layer and yield a strained silicon-on-insulator structure that comprises the substrate, the insulating layer on the substrate, and the strained silicon layer on the insulating layer.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] Various features, aspects, and advantages will become more thoroughly apparent from the following detailed description, appended claims, and accompanying drawings in which:

[0009] Figure 1 is a schematic cross-sectional view of a portion of a semiconductor substrate after forming a well, gate dielectric, and gate electrode of NMOS and PMOS devices.

[0010] Figure 2 shows a semiconductor substrate after forming straining layers on the NMOS and PMOS devices.

[0011] Figure 3 shows a small lattice spacing gate electrode and a straining layer.

[0012] Figure 4 shows a strained small lattice spacing gate electrode.

[0013] Figure 5 shows a large lattice spacing gate electrode and a straining layer.

[0014] Figure 6 shows a strained large lattice spacing gate electrode.
[0015] Figure 7 is a flow diagram of a process for forming a CMOS structure having a device with a straining layer deposited over the electrode.

DETAILED DESCRIPTION

[0016] Figure 1 is a cross-sectional view of a portion of a semiconductor substrate after forming a well, gate dielectric, and gate electrode of an NMOS device and a PMOS device. Apparatus 100 (e.g., such as one or more CMOS structures) includes semiconductor substrate 102, in one embodiment a silicon substrate, or epitaxial layer of a semiconductor substrate, having active areas or cell regions defined by isolation areas such as shallow trench isolation structures 110 formed in substrate or epitaxial layer 102. For example, substrate 102 may be formed or grown from single crystal silicon, and shallow trench isolation (STI) structures 110 may be formed by defining regions (through trench etching) and growing or depositing silicon dioxide (SiO2) dielectric in the trench openings (e.g., such as formed to height H 111 as shown in Figure 1). In another embodiment, STI structures 110 define active areas or cell regions for individual transistor devices (e.g., such as NMOS and PMOS devices of a CMOS structure).

[0017] Figure 1 includes P-type well 105 and N-type well 115 formed in the individual active area or cell region defined by STI structures 110. For example, P-type well 105 is formed in one region of substrate 102 while N-type well 115 is formed in a second region of substrate 102. P-type well 105 is formed, such as, by introducing a dopant, such as boron (B) and/or indium (In), into an area of substrate 102 designated for an N-type device. N-type well 115 is formed, such as, by introducing a dopant, such as arsenic (As), phosphorous (P), and/or antimony (Sb) in an area of substrate 102 designated for a P-type device.
P-type well 105 and N-type well 115 may have work functions corresponding to the work function of an NMOS device and PMOS device, respectively, of a CMOS circuit.

[0018] Figure 1 illustrates substrate 102 after forming a gate dielectric layer and gate electrode layer over the surface 136 of substrate 102, and subsequent patterning or removal of unwanted portions of the gate dielectric layer and/or gate electrode layer. For instance, as shown, gate dielectric 120 may be grown or deposited. An example of a suitable gate dielectric material that is typically grown by thermal techniques over substrate 102 is SiO2. It is to be appreciated that, in addition to SiO2, other gate dielectrics, such as silicon nitride (Si3N4), or aluminum oxide (Al2O3) may be used to further optimize the CMOS transistor devices. For example, gate dielectric materials having a high dielectric constant may be used, if desired, for example, to increase the capacitance of the gate.

[0019] Figure 1 shows a structure which includes gate electrodes 130 and 132 over the surface of substrate 102, such as by deposition onto gate dielectric 120. NMOS gate electrode 130 and PMOS gate electrode 132 may each be deposited to a thickness of, for example, about 150 to about 2000 angstroms (e.g., 15-200 nanometers (nm)). Accordingly, the thicknesses of NMOS gate electrode 130 and PMOS gate electrode 132 are each scalable and may be selected or chosen based on integration issues related to device performance. NMOS gate electrode 130 has a work function corresponding to the work function of an N-type device. PMOS gate electrode 132 has a work function corresponding to the work function of a P-type device.
In another embodiment, NMOS gate electrode 130 and PMOS gate electrode 132 may be silicon deposited by chemical vapor deposition (CVD) and then doped to form N-type and P-type materials, respectively, such as by doping as described above with respect to forming the N-type and P-type material of N-type well 115 and P-type well 105, respectively. For instance, NMOS gate electrode 130 may be doped at the same time that the corresponding NMOS junction regions are doped (e.g., NMOS junction regions 203, shown in Figure 2), and PMOS gate electrode 132 may be doped at the same time the PMOS junction regions are doped (e.g., PMOS junction regions 204, shown in Figure 2). [0020] Figure 1 further shows the substrate after removal of undesired portions of gate dielectric 120, NMOS gate electrode 130, and PMOS gate electrode 132, such as by patterning a mask over a defined area for NMOS gate electrode 130 and PMOS gate electrode 132 and etching away the undesired exposed portions not covered by the mask. For example, undesired portions of gate dielectric 120 and one or more types of gate electrode material may be patterned to form gate dielectric 120 and NMOS gate electrode 130 over NMOS device 103, and to form gate dielectric 120 and PMOS gate electrode 132 over PMOS device 104, such as by patterning using conventional techniques, such as a plasma etch, a sputter etch, and/or a chlorine-based etch chemistry. In another embodiment, NMOS gate electrode 130 and PMOS gate electrode 132 may be polysilicon deposited by CVD and then masked and etched. [0021] Figure 2 shows the semiconductor substrate of Figure 1 after forming straining layers and junction regions of the NMOS and PMOS devices.
Figure 2 shows NMOS straining layer 213 and PMOS straining layer 214, which may be formed of a suitable material having a lattice spacing different than NMOS gate electrode 130 and PMOS gate electrode 132, respectively, to strain the individual electrodes and/or channel regions of the transistor devices. For example, NMOS straining layer 213 may be formed by depositing a material on NMOS gate electrode 130, in one embodiment epitaxially, where NMOS straining layer 213 has a lattice spacing greater than NMOS gate electrode 130. NMOS straining layer 213 may be formed by patterning and etching the formed or deposited material. [0022] Similarly, PMOS straining layer 214 may be formed by depositing a material on PMOS gate electrode 132, in one embodiment epitaxially, where PMOS straining layer 214 has a lattice spacing less than PMOS gate electrode 132. PMOS straining layer 214 may be formed by patterning and etching the formed or deposited material. It is contemplated that NMOS straining layer 213 may be a different material than PMOS straining layer 214. [0023] Figure 2 illustrates NMOS junction regions 203 and PMOS junction regions 204 (also referred to as "source-drain regions" or "diffusion regions") that may be formed by a junction implant (e.g., implanting with arsenic, phosphorous, and/or antimony for N-type junction regions 203 and boron and/or indium for P-type junction regions 204), possibly with additional corresponding tip implants. In one embodiment, NMOS junction regions 203 may be formed by doping portions of P-type well 105 to form those junction regions. In another embodiment, NMOS junction regions 203 may be formed, in accordance with the characteristics of an NMOS device, by doping the material of P-type well 105 to form the N-type material in NMOS junction regions 203, as described above with respect to doping to form the N-type material of N-type well 115.
In another embodiment, PMOS junction regions 204 may be formed by doping portions of N-type well 115 to form those junction regions. In another embodiment, portions of N-type well 115 may be doped to form the P-type material in PMOS junction regions 204, in accordance with the characteristics of a PMOS device, by doping as described with respect to doping to form the P-type material of P-type well 105. [0024] Junction formation is generally known in the art. In one embodiment, junction regions 203 and 204 may be formed prior to deposition of straining layers 213 and 214. In another embodiment, straining layers 213 and 214 may be formed prior to the formation of junction regions 203 and 204. [0025] In another embodiment, formation of NMOS straining layer 213, PMOS straining layer 214, NMOS junction regions 203, and/or PMOS junction regions 204 may occur in any order as appropriate, such as in accordance with the characteristics of the desired device. [0026] Figure 2 illustrates NMOS channel 494 and PMOS channel 492. In one embodiment, the performance of NMOS channel 494 is increased by placing NMOS channel 494 in tensile strain. In another embodiment, the performance of PMOS channel 492 is increased by placing PMOS channel 492 in compressive strain. In one embodiment, straining layer 213 places NMOS gate electrode 130 and NMOS channel 494 in tensile strain. In another embodiment, straining layer 214 places PMOS gate electrode 132 and PMOS channel 492 in compressive strain. [0027] Figure 3 illustrates straining layer 313 and gate electrode 330. Straining layer 313 has a lattice spacing d2 208, while gate electrode 330 has a lattice spacing d1 206. As illustrated, straining layer 313 has lattice spacing d2 208 that is larger than lattice spacing d1 206 of gate electrode 330.
[0028] Referring now to Figure 4, straining layer 313 has been brought into contact with gate electrode 330, such that the lattice of gate electrode 330 has matched to the lattice of straining layer 313. As illustrated, the lattice spacing of straining layer 313 has decreased slightly to d2 208, while gate electrode 330 has had its lattice spacing d1 206 increased substantially to d3 210. The amount that lattice spacing d2 208 will decrease, and that lattice spacing d1 206 will increase, is dependent on the relative thicknesses of gate electrode 330 and straining layer 313. If straining layer 313 is relatively thicker or more massive than gate electrode 330, then d2 208 will hardly decrease at all, while d1 206 will increase substantially. Alternatively, if straining layer 313 is relatively thinner or less massive than gate electrode 330, then d1 206 will hardly increase at all, and d2 208 will decrease substantially. [0029] As illustrated in Figures 3 and 4, d2 208 has decreased slightly from Figure 3 to Figure 4, while the lattice spacing for gate electrode 330 has increased from d1 206 in Figure 3 to d3 210 in Figure 4. [0030] The strain placed on the lattice of gate electrode 330 equals: [0031] In one embodiment, the strain is less than about 10%. In another embodiment, the strain is less than about 5%. In another embodiment, the strain is less than about 2%. In another embodiment, the strain is less than about 1%. [0032] In one embodiment, gate electrode 330 is silicon, and straining layer 313 is a material having lattice spacing d2 208 between about 0.5% and about 10% larger than silicon. In one embodiment, if lattice spacing d2 208 is more than about 10% larger than lattice spacing d1 206, then gate electrode 330 may experience significant dislocations when gate electrode 330 is brought into contact with straining layer 313 as illustrated in Figure 4.
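The strain expressions referenced in paragraphs [0030] and [0041] are not shown in this text. As a hedged reconstruction, assuming the conventional definition of lattice-misfit strain (the patent's exact expression may differ), the strain on gate electrode 330 would take the form:

```latex
\varepsilon = \frac{d_3 - d_1}{d_1}
```

where d1 206 is the relaxed lattice spacing of gate electrode 330 (Figure 3) and d3 210 is its strained spacing after contact with straining layer 313 (Figure 4); the analogous expression for gate electrode 532 uses d1 306 and d3 310.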
[0033] In another embodiment, gate electrode 330 as shown in Figure 3 has a lattice spacing between about 0.5 and about 0.6 nm, and straining layer 313 has a larger lattice spacing than gate electrode 330, of about 0.51 to about 0.61 nm. [0034] In one embodiment, straining layer 313 may be made of silicon doped with an element having a covalent radius larger than silicon, which would cause the lattice spacing of the silicon to increase. Suitable dopants include one or more of aluminum (Al), gallium (Ga), germanium (Ge), arsenic (As), indium (In), tin (Sn), antimony (Sb), thallium (Tl), lead (Pb), and/or bismuth (Bi). The amounts of the dopants may be adjusted in order to compensate for the relative size of silicon compared to the various dopants. In one embodiment, silicon has a covalent radius of 1.11 Å, aluminum has a covalent radius of 1.18 Å, and antimony has a covalent radius of 1.40 Å. Since the covalent radius of aluminum is relatively close to the covalent radius of silicon, adding 1% of aluminum will not have a large effect on the lattice spacing of the silicon. In contrast, adding 1% of antimony to silicon will have a larger effect than adding 1% of aluminum to silicon, since the covalent radius of antimony is much larger than the covalent radius of silicon. [0035] For example, a large amount of aluminum is needed to dope silicon, compared to a very small amount of antimony, in order to achieve the same lattice spacing. In another embodiment, suitable dopants include arsenic (As), antimony (Sb), and/or bismuth (Bi). [0036] In another embodiment, a channel (not shown) may be provided adjacent to gate electrode 330, where the channel may also be strained by straining layer 313. In one embodiment, the channel defines an interior of the apparatus, gate electrode 330 is exterior to the channel, and straining layer 313 is exterior to gate electrode 330 and the channel.
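The covalent-radius comparison above can be sketched numerically. This is a hedged illustration only: the Vegard's-law-style linear mix of covalent radii, the helper name, and the silicon lattice constant are assumptions for illustration, not values or a model taken from the patent.

```python
# Hedged sketch of the covalent-radius argument in paragraph [0034]: estimate
# the doped lattice spacing by linearly mixing covalent radii (illustrative
# Vegard's-law-style model, not the patent's stated method).

SI_RADIUS = 1.11  # covalent radius of silicon in angstroms (per paragraph [0034])

def doped_lattice_spacing(d_undoped, dopant_radius, fraction):
    """Estimate lattice spacing after doping: scale by the mixed covalent radius."""
    r_mix = (1.0 - fraction) * SI_RADIUS + fraction * dopant_radius
    return d_undoped * (r_mix / SI_RADIUS)

d_si = 0.543  # nm, lattice constant of silicon (illustrative)
d_al = doped_lattice_spacing(d_si, 1.18, 0.01)  # 1% aluminum (radius 1.18 A)
d_sb = doped_lattice_spacing(d_si, 1.40, 0.01)  # 1% antimony (radius 1.40 A)

# Antimony's much larger radius expands the lattice more than aluminum's,
# matching the comparison in paragraph [0034].
assert d_sb > d_al > d_si
```

Under this simple model, far more aluminum than antimony is needed for the same spacing increase, consistent with paragraph [0035].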
[0037] Referring now to Figure 5, there is illustrated gate electrode 532 having lattice spacing d1 306, and straining layer 514 having lattice spacing d2 308. As shown in Figure 5, lattice spacing d1 306 of gate electrode 532 is larger than lattice spacing d2 308 of straining layer 514. [0038] Referring now to Figure 6, straining layer 514 has been brought into contact with gate electrode 532 so that the lattice of gate electrode 532 aligns with the lattice of straining layer 514. Lattice spacing d2 308 of straining layer 514 has slightly increased from Figure 5 to Figure 6, while lattice spacing d1 306 of gate electrode 532 has been greatly reduced from d1 306 in Figure 5 to d3 310 in Figure 6. Similar to the discussion above regarding Figure 4, the relative amount that d1 306 will be decreased and that d2 308 will be increased depends on the relative sizes and/or masses of gate electrode 532 and straining layer 514. The larger the relative size and/or mass of straining layer 514 as compared to gate electrode 532, the less d2 308 will increase, and the more d1 306 will decrease. [0039] In one embodiment, gate electrode 532 is silicon, and straining layer 514 is a material having a lattice spacing less than silicon. In one embodiment, suitable materials for straining layer 514 include silicon doped with an element having a covalent radius less than the covalent radius of silicon. Adding an element with a smaller covalent radius than silicon will tend to decrease the lattice spacing of the silicon. The smaller the covalent radius of the element as compared to silicon, the larger the effect that element will have on the lattice spacing of the silicon. For example, silicon has a covalent radius of 1.11 Å, phosphorous has a covalent radius of 1.06 Å, and boron has a covalent radius of 0.82 Å.
Adding 1% boron to silicon will make the lattice spacing smaller than adding 1% of phosphorous to silicon, since boron has a smaller covalent radius. [0040] In another embodiment, suitable dopants to add to silicon include one or more of boron (B), carbon (C), nitrogen (N), and/or phosphorous (P). As discussed above regarding Figures 3 and 4, in order to obtain a given lattice spacing for straining layer 514, less boron would be needed as a dopant for silicon than phosphorous, given their relative covalent radii. Since phosphorous has a covalent radius much closer in size to silicon, it will not affect silicon's lattice size as much as boron; therefore, more phosphorous would be needed to obtain a given lattice spacing. In another embodiment, suitable materials for straining layer 514 include an alloy of silicon and boron (B). [0041] In one embodiment, the strain experienced by gate electrode 532 from Figure 5 to Figure 6 is defined as: [0042] In one embodiment, the strain is less than about 10%. In another embodiment, the strain is less than about 5%. In another embodiment, the strain is less than about 2%. In another embodiment, the strain is less than about 1%. [0043] In one embodiment, if the strain is greater than about 10%, then there may be significant lattice dislocations in gate electrode 532 when brought into contact with straining layer 514. [0044] In another embodiment, gate electrode 532 has a lattice spacing of between about 0.5 nm and about 0.6 nm, and straining layer 514 has a smaller lattice spacing of between about 0.49 nm and about 0.59 nm. [0045] In another embodiment, a channel (not shown) may be located adjacent to gate electrode 532. The channel may also be strained by straining layer 514. In one embodiment, the channel defines an interior of the apparatus, gate electrode 532 is exterior to the channel, and straining layer 514 is exterior to gate electrode 532 and the channel.
[0046] In one embodiment, gate electrodes 330 and/or 532 have a thickness substantially less than straining layers 313 and/or 514. In another embodiment, straining layers 313 and/or 514 have a thickness about ten times greater than gate electrodes 330 and/or 532. [0047] Referring now to Figure 2, in one embodiment, NMOS straining layer 213 comprises silicon germanium (SiGe) (for example, about 20% to about 60% germanium) and NMOS electrode 130 and/or channel 494 comprise silicon (Si). In another embodiment, PMOS straining layer 214 comprises carbon-doped silicon, for example carbon-doped silicon having about 1% carbon and about 99% silicon, and PMOS electrode 132 and/or channel 492 comprise silicon (Si). [0048] In another embodiment, NMOS straining layer 213 comprises a first material having a first lattice spacing, and NMOS electrode 130 and/or channel 494 comprise a second material having a second lattice spacing, where the first lattice spacing is larger than the second lattice spacing. In one embodiment, the first lattice spacing is between about 0.2% and about 2% larger than the second lattice spacing. [0049] In another embodiment, PMOS straining layer 214 comprises a first material having a first lattice spacing, and PMOS electrode 132 and/or channel 492 comprise a second material having a second lattice spacing, where the first lattice spacing is smaller than the second lattice spacing. In one embodiment, the first lattice spacing is between about 0.2% and about 2% smaller than the second lattice spacing. [0050] In another embodiment, suitable materials that may be used for electrodes 130 and/or 132, channels 494 and/or 492, and/or straining layers 213 and/or 214 include one or more of the following: silicon (Si), silicon germanium (SiGe), silicon carbide (SiC), nickel silicide (NiSi), titanium silicide (TiSi2), cobalt silicide (CoSi2), and may optionally be doped with one or more of boron and/or indium.
For example, electrode 130 and channel 494 include materials having a lattice spacing that is different than the lattice spacing of straining layer 213. More specifically, in operation, PMOS straining layer 214 has, in one embodiment, a smaller lattice spacing than PMOS gate electrode 132 and/or channel 492 and may cause a compressive strain in gate electrode 132 and/or channel 492. This strain is caused by PMOS gate electrode 132 and PMOS channel 492 having a lattice spacing that is larger than the lattice spacing of PMOS straining layer 214. [0051] In another embodiment, straining layers may operate by way of thermal mismatch. For example, straining layer 213 may have a coefficient of linear thermal expansion that is less than the coefficient of linear thermal expansion of gate electrode 130. When gate electrode 130 and straining layer 213 are deposited at an elevated temperature, for example about 500°C to about 700°C, there is no strain. However, as gate electrode 130 and straining layer 213 cool, gate electrode 130 will try to shrink more than straining layer 213, since gate electrode 130 has a larger coefficient of linear thermal expansion than straining layer 213. This mismatch in coefficients will cause a tensile strain in gate electrode 130 and a compressive strain in straining layer 213. The relative amounts of the compressive and tensile strains will depend upon the relative thicknesses and/or masses of gate electrode 130 and straining layer 213. If straining layer 213 is much thicker than gate electrode 130, then the strain on straining layer 213 will be relatively small, while the tensile strain on gate electrode 130 will be relatively large. Channel 494 may also be strained. [0052] In operation, gate electrode 130 may be silicon having a coefficient of linear thermal expansion of about 2.6X10-6/°C, and straining layer 213 may be formed of a silicon oxide having a lesser coefficient of linear thermal expansion of about 0.5X10-6/°C. When silicon oxide straining layer 213 is deposited on silicon gate electrode 130 at an elevated temperature, for example about 800°C, there is no strain between the layers. When silicon oxide straining layer 213 and silicon gate electrode 130 are cooled to room temperature (of about 25°C), silicon oxide straining layer 213 will want to shrink less than silicon gate electrode 130 due to silicon oxide's lower coefficient of linear thermal expansion. This will cause a tensile strain in silicon gate electrode 130 and/or channel 494, and a compressive strain in silicon oxide straining layer 213. [0053] In another embodiment, gate electrode 132 may have a lower coefficient of thermal expansion than straining layer 214, to cause a compressive strain in gate electrode 132 and/or channel 492, and a tensile strain in straining layer 214. [0054] In operation, gate electrode 132 may be silicon having a coefficient of linear thermal expansion of about 2.6X10-6/°C, and straining layer 214 may be, for example, aluminum having a higher coefficient of linear thermal expansion of about 23X10-6/°C. When aluminum straining layer 214 is deposited on silicon gate electrode 132 at an elevated temperature, for example about 500°C, there is no strain between the layers. As the layers cool to room temperature (for example, about 25°C), silicon gate electrode 132 wants to shrink less than aluminum straining layer 214. This relative mismatch between the coefficients of linear thermal expansion causes a compressive strain in gate electrode 132 and/or channel 492, and a tensile strain in aluminum straining layer 214. [0055] In another embodiment, the tensile strain in gate electrode 130 may cause a tensile strain in channel 494. In another embodiment, the compressive strain in gate electrode 132 may cause a compressive strain in channel 492.
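The thermal-mismatch mechanism above can be sketched numerically. This is a hedged illustration: the formula strain = (alpha_electrode - alpha_layer) * (T_deposit - T_final) is the standard bimaterial estimate when the much thicker straining layer keeps the electrode from contracting freely; treating the straining layer as rigid is an illustrative simplification, not the patent's stated model.

```python
# Hedged sketch of the thermal-mismatch mechanism using the coefficient values
# and temperatures given for the silicon/oxide and silicon/aluminum examples.

def thermal_mismatch_strain(alpha_electrode, alpha_layer, t_deposit, t_final):
    """Positive result: tensile strain in the electrode on cooling
    (electrode wants to shrink more than the rigid straining layer)."""
    return (alpha_electrode - alpha_layer) * (t_deposit - t_final)

# Silicon electrode (~2.6e-6/C) under a silicon oxide layer (~0.5e-6/C),
# deposited at 800 C and cooled to 25 C.
eps_nmos = thermal_mismatch_strain(2.6e-6, 0.5e-6, 800.0, 25.0)

# Silicon electrode (~2.6e-6/C) under an aluminum layer (~23e-6/C),
# deposited at 500 C and cooled to 25 C.
eps_pmos = thermal_mismatch_strain(2.6e-6, 23e-6, 500.0, 25.0)

assert eps_nmos > 0  # tensile, as described for gate electrode 130
assert eps_pmos < 0  # compressive, as described for gate electrode 132
```

The signs reproduce the qualitative result in the text: a lower-expansion layer leaves the electrode in tensile strain, a higher-expansion layer leaves it in compressive strain.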
[0056] In another embodiment, strain may be caused by a straining layer having an intrinsic stress. For example, straining layer 213 may be formed of a material having an intrinsic tensile stress within the material, for example a silicon nitride. When straining layer 213 is deposited on the gate electrode, it may cause a compressive strain in gate electrode 130. In another embodiment, straining layer 214 may be a material having an intrinsic compressive stress, for example a silicon oxide, which, when straining layer 214 is deposited on gate electrode 132, may cause a tensile strain within gate electrode 132. Examples of materials having intrinsic stress include nitrides and oxides, which may cause a strain in gate electrodes 130 and/or 132 and/or channels 494 and/or 492. Typically, nitrides may have an intrinsic tensile strain and oxides may have an intrinsic compressive strain; however, a nitride could have a compressive strain, or an oxide a tensile strain, by various treatments known in the art. [0057] In another embodiment, gate electrode 130 and straining layer 213 may be deposited as the same material, and then straining layer 213 may be doped with a material to cause the straining layer to increase in size. For example, straining layer 213 and gate electrode 130 may be deposited as silicon, then straining layer 213 may be doped with one or more of aluminum, gallium, germanium, arsenic, indium, tin, and/or antimony. This doping, and optionally a subsequent heat and/or annealing treatment, may cause the lattice size of straining layer 213 to increase, which will cause a tensile strain in gate electrode 130 and/or channel 494. [0058] In another embodiment, gate electrode 132 and straining layer 214 may be deposited as the same material, for example silicon. Subsequently, straining layer 214 may be doped with one or more of boron, carbon, nitrogen, and/or phosphorous.
This doping, and an optional heat and/or annealing treatment, will cause the lattice spacing of straining layer 214 to decrease, which will cause a compressive strain in gate electrode 132 and/or channel 492. [0059] In another embodiment, gate electrode 132 is silicon, and straining layer 214 is carbon-doped silicon, with a transition layer (not shown) between gate electrode 132 and straining layer 214 having a gradually increasing percentage of carbon, to ease the growth of the carbon-doped silicon onto silicon gate electrode 132. [0060] In another embodiment, electrodes 130 and/or 132 and/or straining layers 213 and/or 214 may be formed or deposited by selective deposition, CVD deposition, and/or epitaxial deposition. For example, an epitaxial layer of single crystal semiconductor film may be formed upon a single crystal substrate, where the epitaxial layer has the same crystallographic characteristics as the substrate material but differs in type or concentration of dopant. In another embodiment, electrodes 130 and/or 132 and/or straining layers 213 and/or 214 may be formed by selective CVD deposition, possibly including epitaxial deposition of a single crystal silicon alloy with the same crystal structure as that of the material onto which the structure is deposited (e.g., a similar or the same crystal orientation, such as (100), (110), etc.). [0061] In another embodiment, a layer of Si1-xGex may be grown on top of a substrate of Si such that the silicon germanium has a bulk relaxed lattice constant that is larger (e.g., by about 0.5 to about 2 percent) than the silicon it is grown on. The resulting lattice misfits at the block or blocks where the silicon germanium bonds to the silicon may create a strain. In other words, a strain, such as a compressive strain, may result from the silicon lattice being stretched to fit the lattice of the silicon germanium.
[0062] Suitable processes for forming or growing silicon and silicon alloy materials include vapor phase epitaxy (VPE), liquid phase epitaxy (LPE), or solid phase epitaxy (SPE) silicon processing. For example, one such CVD process that is applicable to VPE of silicon includes: (1) transporting reactants to the substrate surface; (2) reactants being absorbed on the substrate surface; (3) a chemical reaction on the surface leading to formation of a film and reaction products; (4) reaction products being desorbed from the surface; and (5) transportation of the reaction products away from the surface. [0063] In addition, suitable forming of silicon and silicon alloys comprises selective epitaxial deposition, formation, or growth known in the art as Type 1 selective epitaxial deposition. Using Type 1 deposition, silicon alloy deposition occurs only on the gate material(s) within the openings of the oxide film, with minimal, if any, growth on the oxide. [0064] Suitable selective epitaxial formation also includes Type 2 selective epitaxial deposition, where selectivity of deposition is non-critical. Using Type 2 deposition, formation or growth of the silicon alloy occurs on the gate material(s) as well as on the oxide film, and thus, when this type of deposition is made, an interface is created between the epitaxial layer of silicon alloy formed on the gate material(s) and a polysilicon layer of silicon alloy formed on the oxide film. The angle of this interface relative to the film growth direction depends on the crystallographic orientation of the substrate. [0065] In another embodiment, Type 1 selective epitaxial deposition may be performed at suitable temperatures using a silicon source including one or more of the following: silicon, silicon germanium (SiGe), silicon carbide (SiC), nickel silicide (NiSi), titanium silicide (TiSi2), cobalt silicide (CoSi2). Also, SiH2Cl2 or SiH4 may be used as a silicon source if hydrogen chloride (HCl) or chlorine (Cl2) is present.
[0066] Figure 7 is a flow diagram of a process for forming a CMOS structure having a PMOS and/or an NMOS device with a straining layer deposited on at least one gate electrode such that the straining layer imparts a strain to at least one of the electrode and the channel. At 810, NMOS and/or PMOS devices of a CMOS structure are formed on a substrate having the appropriate wells, junction regions, gate dielectrics, gate electrodes, and straining layer. At 820, a straining material is deposited over at least one gate electrode. [0067] Suitable straining materials include, for example, silicon, silicon germanium, doped silicon germanium, silicon carbide, silicon carbon, or carbon-doped silicon with a lattice spacing different from the electrode, which can be deposited by an operation using one or more of CVD, epitaxial deposition, and/or selective deposition. Thus, for an NMOS device, a straining material having a lattice spacing larger than that of the NMOS electrode can be deposited to provide a tensile strain in the NMOS electrode and/or the NMOS channel. [0068] On the other hand, for a PMOS device, a straining material having a lattice spacing that is smaller than the PMOS electrode (such as, for example, boron-doped silicon, carbon-doped silicon, nitrogen-doped silicon, and/or phosphorous-doped silicon) can be deposited onto a PMOS electrode to cause a compressive strain in the PMOS electrode and/or in the channel of the PMOS device. [0069] Although Figures 1-7 describe formation of a CMOS structure having an NMOS device and a PMOS device therein, other embodiments include formation of a PMOS and/or NMOS device portion without the other PMOS and/or NMOS device.
Thus, formation of independent single NMOS or PMOS devices, single NMOS or PMOS devices coupled to form a device other than a CMOS structure, multiple coupled PMOS devices, or other appropriate circuit devices on a substrate, with a straining material formed or disposed on an electrode such that the electrode is strained, is also contemplated. [0070] Various embodiments are described above. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the claimed subject matter. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
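The NMOS/PMOS selection rule from the Figure 7 flow (larger-spacing material for tensile strain in the NMOS electrode, smaller-spacing material for compressive strain in the PMOS electrode) can be sketched as follows. The material names, spacing values, and helper name are illustrative assumptions, not data from the patent.

```python
# Hedged sketch of the straining-material selection rule: for NMOS pick a
# material whose lattice spacing exceeds the electrode's; for PMOS pick one
# whose spacing is smaller. Values below are illustrative only.

ELECTRODE_SPACING_NM = 0.543  # silicon gate electrode (illustrative)

CANDIDATES = {
    "silicon germanium": 0.551,     # larger spacing than silicon
    "carbon-doped silicon": 0.538,  # smaller spacing than silicon
}

def pick_straining_material(device_type):
    """Return the first candidate that strains the electrode the right way."""
    for name, spacing in CANDIDATES.items():
        if device_type == "NMOS" and spacing > ELECTRODE_SPACING_NM:
            return name  # induces tensile strain in electrode and/or channel
        if device_type == "PMOS" and spacing < ELECTRODE_SPACING_NM:
            return name  # induces compressive strain in electrode and/or channel
    raise ValueError("no suitable straining material for " + device_type)

assert pick_straining_material("NMOS") == "silicon germanium"
assert pick_straining_material("PMOS") == "carbon-doped silicon"
```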
Certain aspects of the present disclosure provide a semiconductor device. One example semiconductor device generally includes a semiconductor region, an insulative layer, a first terminal, and a first non-insulative region coupled to the first terminal, the insulative layer being disposed between the first non-insulative region and the semiconductor region. In certain aspects, the insulative layer is disposed adjacent to a first side of the semiconductor region. In certain aspects, the semiconductor device also includes a second terminal, and a first silicide layer coupled to the second terminal and disposed adjacent to a second side of the semiconductor region, the first side and the second side being opposite sides of the semiconductor region. |
CLAIMS

What is claimed is:

1. A semiconductor variable capacitor comprising: a semiconductor region having a first region, a second region, and a third region, the third region being disposed between the first and second regions and having at least one of a different doping type or a different doping concentration than at least one of the first region or the second region; an insulative layer; a first terminal; a first non-insulative region coupled to the first terminal, the insulative layer being disposed between the first non-insulative region and the semiconductor region, wherein the insulative layer is disposed adjacent to a first side of the semiconductor region; a second terminal; and a first silicide layer coupled to the second terminal and disposed adjacent to a second side of the semiconductor region, the first side and the second side being opposite sides of the semiconductor region.

2. The semiconductor variable capacitor of claim 1, wherein the semiconductor variable capacitor is a varactor, wherein the first terminal is an anode of the varactor, and wherein the second terminal is a cathode of the varactor.

3. The semiconductor variable capacitor of claim 1, wherein the first silicide layer is disposed adjacent to the first region.

4. The semiconductor variable capacitor of claim 3, further comprising: a third terminal; and a second silicide layer coupled to the third terminal and disposed adjacent to the second side of the semiconductor region.

5. The semiconductor variable capacitor of claim 4, wherein the second silicide layer is disposed adjacent to the second region.

6. The semiconductor variable capacitor of claim 3, wherein the first silicide layer is also disposed adjacent to the third region.

7.
The semiconductor variable capacitor of claim 1, further comprising: a third terminal; a second silicide layer connected to the third terminal and disposed adjacent to the first region and adjacent to the first side of the semiconductor region; a fourth terminal; and a third silicide layer connected to the fourth terminal and disposed adjacent to the second region and adjacent to the first side of the semiconductor region, wherein the first silicide layer is disposed adjacent to the third region.

8. The semiconductor variable capacitor of claim 7, wherein a capacitance between the first terminal and the second terminal is configured to be adjusted by applying a control voltage to at least one of the third terminal or the fourth terminal with respect to the first terminal or the second terminal.

9. The semiconductor variable capacitor of claim 7, further comprising: a first buried oxide (BOX) region disposed adjacent to the first region and adjacent to the second side of the semiconductor region; and a second BOX region disposed adjacent to the second region and adjacent to the second side of the semiconductor region.

10. The semiconductor variable capacitor of claim 7, wherein the first region and the second region have a different doping type than the third region.

11. The semiconductor variable capacitor of claim 1, wherein the semiconductor region comprises a monocrystalline semiconductor.

12. The semiconductor variable capacitor of claim 1, further comprising a BOX region disposed adjacent to the second side of the semiconductor region.

13.
A semiconductor variable capacitor comprising: a semiconductor region; an insulative layer; a first terminal; a first non-insulative region coupled to the first terminal, the insulative layer being disposed between the first non-insulative region and only a portion of the semiconductor region, wherein the insulative layer is disposed adjacent to a first side of the semiconductor region; a second terminal; and a first silicide layer coupled to the second terminal and disposed adjacent to a second side of the semiconductor region, the first side and the second side being opposite sides of the semiconductor region.

14. The semiconductor variable capacitor of claim 13, wherein the semiconductor variable capacitor is a varactor, wherein the first terminal is an anode of the varactor, and wherein the second terminal is a cathode of the varactor.

15. The semiconductor variable capacitor of claim 13, wherein the semiconductor region comprises: a first region; a second region; and a third region disposed between the first and second regions and having at least one of a different doping type or a different doping concentration than at least one of the first region or the second region.

16. The semiconductor variable capacitor of claim 15, wherein the first silicide layer is disposed adjacent to the first region.

17. The semiconductor variable capacitor of claim 16, further comprising: a third terminal; and a second silicide layer coupled to the third terminal and disposed adjacent to the second side of the semiconductor region.

18. The semiconductor variable capacitor of claim 17, wherein a capacitance between the first terminal and the second terminal is configured to be adjusted by applying a control voltage to the third terminal with respect to the first terminal or the second terminal.

19. The semiconductor variable capacitor of claim 17, wherein the second silicide layer is disposed adjacent to the second region.

20.
The semiconductor variable capacitor of claim 15, further comprising: a third terminal; a second silicide layer coupled to the third terminal and disposed adjacent to the first region and adjacent to the first side of the semiconductor region; a fourth terminal; and a third silicide layer coupled to the fourth terminal and disposed adjacent to the second region and adjacent to the first side of the semiconductor region, wherein the first silicide layer is disposed adjacent to the third region.

21. The semiconductor variable capacitor of claim 13, wherein the semiconductor region comprises a monocrystalline semiconductor.

22. The semiconductor variable capacitor of claim 13, further comprising a buried oxide (BOX) region disposed adjacent to the second side of the semiconductor region.

23. A semiconductor variable capacitor comprising: a semiconductor region; a buried oxide (BOX) region; a first non-insulative region, the BOX region being disposed between the first non-insulative region and the semiconductor region, wherein the BOX region is disposed adjacent to a first side of the semiconductor region; and a first silicide layer disposed adjacent to the first side of the semiconductor region.

24. The semiconductor variable capacitor of claim 23, wherein the semiconductor region comprises: a first region; a second region; and a third region, the third region being disposed between the first and second regions and having at least one of a different doping type or a different doping concentration than at least one of the first region or the second region.

25. The semiconductor variable capacitor of claim 24, wherein the first silicide layer is disposed adjacent to the first region.

26. The semiconductor variable capacitor of claim 24, further comprising: a first terminal coupled to the first non-insulative region; and a second terminal coupled to the first silicide layer.

27.
The semiconductor variable capacitor of claim 26, further comprising: a third terminal; and a second silicide layer coupled to the third terminal and disposed adjacent to the first side of the semiconductor region.

28. The semiconductor variable capacitor of claim 27, wherein a capacitance between the first terminal and the second terminal is configured to be adjusted by applying a control voltage to the third terminal with respect to the first terminal or the second terminal.

29. The semiconductor variable capacitor of claim 27, wherein the second silicide layer is disposed adjacent to the second region.

30. The semiconductor variable capacitor of claim 23, further comprising: a silicide blocking layer disposed adjacent to a second side of the semiconductor region, the first side and the second side being opposite sides of the semiconductor region.
BACK SILICIDED VARIABLE CAPACITOR DEVICES

CLAIM OF PRIORITY

[0001] The present Application for Patent claims priority to Application No. 15/957,484 entitled "BACK SILICIDED VARIABLE CAPACITOR DEVICES" filed April 19, 2018, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.

TECHNICAL FIELD

[0002] Certain aspects of the present disclosure generally relate to electronic circuits and, more particularly, to semiconductor devices.

BACKGROUND

[0003] Semiconductor capacitors are fundamental components for integrated circuits. A variable capacitor is a capacitor whose capacitance may be intentionally and repeatedly changed under the influence of a bias voltage. A variable capacitor is often used in inductor-capacitor (LC) circuits to set the resonance frequency of an oscillator, or as a variable reactance, e.g., for impedance matching in antenna tuners. One example type of variable capacitor is referred to as a transcap (TC) device, which is a metal-oxide semiconductor (MOS) based variable capacitor having at least three terminals, one of which is used to modulate the capacitance across two terminals of the TC device.

[0004] A voltage-controlled oscillator (VCO) is an example circuit that may use a varactor in which the size of a depletion region formed in a p-n junction diode is varied by changing a bias voltage to alter the junction capacitance. Any junction diode exhibits this effect (including p-n junctions in transistors), but devices used as variable capacitance diodes are designed with a large junction area and a doping profile specifically chosen to improve the device performance, such as quality factor and tuning range.

SUMMARY

[0005] Certain aspects of the present disclosure generally relate to a structure for a semiconductor device.

[0006] Certain aspects provide a semiconductor variable capacitor.
The semiconductor variable capacitor generally includes a semiconductor region having a first region, a second region, and a third region, the third region being disposed between the first and second regions and having at least one of a different doping type or a different doping concentration than at least one of the first region or the second region; an insulative layer; a first terminal; a first non-insulative region coupled to the first terminal, the insulative layer being disposed between the first non-insulative region and the semiconductor region, wherein the insulative layer is disposed adjacent to a first side of the semiconductor region; a second terminal; and a first silicide layer coupled to the second terminal and disposed adjacent to a second side of the semiconductor region, the first side and the second side being opposite sides of the semiconductor region.

[0007] Certain aspects provide a semiconductor variable capacitor. The semiconductor variable capacitor generally includes a semiconductor region; an insulative layer; a first terminal; a first non-insulative region coupled to the first terminal, the insulative layer being disposed between the first non-insulative region and only a portion of the semiconductor region, wherein the insulative layer is disposed adjacent to a first side of the semiconductor region; a second terminal; and a first silicide layer coupled to the second terminal and disposed adjacent to a second side of the semiconductor region, the first side and the second side being opposite sides of the semiconductor region.

[0008] Certain aspects provide a semiconductor variable capacitor.
The semiconductor variable capacitor generally includes a semiconductor region; a buried oxide (BOX) region; a first non-insulative region, the BOX region being disposed between the first non-insulative region and the semiconductor region, wherein the BOX region is disposed adjacent to a first side of the semiconductor region; and a first silicide layer disposed adjacent to the first side of the semiconductor region.

[0009] Certain aspects provide a method for fabricating a semiconductor variable capacitor. The method generally includes forming a semiconductor region having a first region, a second region, and a third region, the third region being formed between the first and second regions and having at least one of a different doping type or a different doping concentration than at least one of the first region or the second region; forming an insulative layer; forming a first non-insulative region, the insulative layer being formed between the first non-insulative region and the semiconductor region, wherein the insulative layer is formed adjacent to a first side of the semiconductor region; coupling a first terminal to the first non-insulative region; forming a first silicide layer adjacent to a second side of the semiconductor region, the first side and the second side being opposite sides of the semiconductor region; and coupling a second terminal to the first silicide layer.

[0010] Certain aspects provide a method for fabricating a semiconductor variable capacitor.
The method generally includes forming a semiconductor region; forming an insulative layer; forming a first non-insulative region, the insulative layer being formed between the first non-insulative region and only a portion of the semiconductor region, wherein the insulative layer is formed adjacent to a first side of the semiconductor region; coupling a first terminal to the first non-insulative region; forming a first silicide layer adjacent to a second side of the semiconductor region, the first side and the second side being opposite sides of the semiconductor region; and coupling a second terminal to the first silicide layer.

[0011] Certain aspects provide a method for fabricating a semiconductor variable capacitor. The method generally includes forming a buried oxide (BOX) region; forming a semiconductor region; forming a first non-insulative region, the BOX region being formed between the first non-insulative region and the semiconductor region, wherein the BOX region is formed adjacent to a first side of the semiconductor region; and forming a first silicide layer adjacent to the first side of the semiconductor region.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] So that the manner in which the above-recited features of the present disclosure can be understood in detail, a more particular description, briefly summarized above, may be had by reference to aspects, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only certain typical aspects of this disclosure and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects.

[0013] FIG. 1 illustrates a cross-sectional view of an example transcap (TC) device.

[0014] FIG. 2 illustrates an example TC device implemented using a back-gate configuration.

[0015] FIGs.
3A and 3B illustrate example TC devices implemented using a back silicide layer, in accordance with certain aspects of the present disclosure.

[0016] FIG. 4 illustrates an example TC device implemented using a back silicide layer for the well terminal, in accordance with certain aspects of the present disclosure.

[0017] FIG. 5 illustrates an example TC device implemented using a back-gate configuration and using back silicide layers, in accordance with certain aspects of the present disclosure.

[0018] FIGs. 6A and 6B illustrate example varactors using a back silicide layer, in accordance with certain aspects of the present disclosure.

[0019] FIG. 7 is a flow diagram of example operations for fabricating a semiconductor variable capacitor having a semiconductor region with regions having different doping concentrations, in accordance with certain aspects of the present disclosure.

[0020] FIG. 8 is a flow diagram of example operations for fabricating a semiconductor variable capacitor having a top gate region disposed above only a portion of a semiconductor region, in accordance with certain aspects of the present disclosure.

[0021] FIG. 9 is a flow diagram of example operations for fabricating a semiconductor variable capacitor implemented using a back-gate configuration, in accordance with certain aspects of the present disclosure.

DETAILED DESCRIPTION

[0022] Certain aspects of the present disclosure are generally directed to a semiconductor device structure implemented using a back silicide configuration in an effort, for example, to reduce parasitic coupling between terminals of the device.

[0023] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.
[0024] As used herein, the term "connected with" in the various tenses of the verb "connect" may mean that element A is directly connected to element B or that other elements may be connected between elements A and B (i.e., that element A is indirectly connected with element B). In the case of electrical components, the term "connected with" may also be used herein to mean that a wire, trace, or other electrically conductive material is used to electrically connect elements A and B (and any components electrically connected therebetween).

EXAMPLE TRANSCAP DEVICES

[0025] FIG. 1 illustrates an example structure of a transcap (TC) device 100. The TC device 100 includes a non-insulative region 112 coupled to a plate (P) terminal 101, a non-insulative region 106 coupled to a well (W) terminal 103, and a non-insulative region 108 coupled to a displacement (D) terminal 102. Certain implementations of a TC device use a plate oxide layer 110 disposed above a semiconductor region 114. The plate oxide layer 110 may isolate the W and P terminals, and thus, in effect, act as a dielectric for the TC device 100. The non-insulative region 106 (e.g., a heavily n-doped region) and the non-insulative region 108 (e.g., a heavily p-doped region) may be formed in the semiconductor region 114 and on two sides of the TC device 100 in order to create p-n junctions. As used herein, a non-insulative region generally refers to a region that may be conductive or semiconductive.

[0026] In certain aspects, a bias voltage may be applied between the D terminal 102 and the W terminal 103 in order to modulate the capacitance between the P and W terminals. For example, by applying a bias voltage to the D terminal 102, a depletion region 130 may be formed between the p-n junction of the non-insulative region 108 and the region 115 of the semiconductor region 114.
Based on the bias voltage, this depletion region 130 may widen under the plate oxide layer 110, reducing the area of the equivalent electrode formed by the semiconductor region 114, and with it, the effective capacitance area and capacitance value of the TC device 100. Furthermore, the bias of the W and P terminals may be set so as to avoid the formation of an inverted region underneath the oxide and operate the TC device 100 in deep depletion mode. By varying the voltage of the W terminal with respect to the P and D terminals, both vertical and horizontal depletion regions may be used to modulate the capacitance between the W and P terminals.

[0027] The work-function of the non-insulative region 112 above the plate oxide layer 110 may be chosen to improve the device performance. For example, an n-doped poly-silicon material may be used (instead of p-doped), even if the semiconductor region 114 underneath the plate oxide layer 110 is doped with n-type impurities. In some aspects, a metallic material (also doped if desired) may be used for the non-insulative region 112 with an opportune work-function, or a multi-layer stack of different metallic materials may be used to obtain the desired work-function. In certain aspects, the non-insulative region 112 may be divided into two sub-regions, one n-doped and one p-doped, or a different metallic material may be used for each sub-region.

[0028] In some cases, the semiconductor region 114 may be disposed above an insulator or region 116. The type of material for the region 116 may be chosen in order to improve the TC device 100 performance. For example, the region 116 may be an insulator, a semi-insulator, or an intrinsic/near-intrinsic semiconductor in order to decrease the parasitic capacitances associated with the TC device 100.
In some cases, the region 116 may be made of n-doped or p-doped semiconductor with an appropriate doping profile in order to increase the TC device Q and/or the control on the depletion region 130 that may be formed between the non-insulative region 108 and the region 115 of the semiconductor region 114 when applying a bias voltage to the D terminal 102. The region 116 may also be formed by multiple semiconductor layers or regions doped in different ways (n, p, or intrinsic). Furthermore, the region 116 may include semiconductors, insulating layers, and/or substrates or may be formed above semiconductors, insulating layers, and/or substrates.

[0029] To better understand the working principle of the TC device 100, it may be assumed that the D terminal 102 is biased with a negative voltage with respect to the W terminal 103. The width of the depletion region 130 in the semiconductor region 114 may be controlled by applying a control voltage to the D terminal 102 or to the W terminal 103. The capacitance between the W and P terminals may depend on the width of the depletion region 130 in the semiconductor region 114, and thus may be controlled by applying the control voltage to the D terminal 102. Furthermore, the variation of the bias voltage applied to the D terminal 102 may not alter the direct-current (DC) voltage between the W and P terminals, allowing for improved control of the device characteristics.

[0030] In some cases, it may be preferable to have the non-insulative region 106 and/or non-insulative region 108 a distance away from the plate oxide layer 110 in order to reduce the parasitic capacitance associated with the non-insulative region 108 and improve the isolation of the non-insulative region 106 for high control voltages.
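The capacitance control described in the working principle of paragraph [0029] — a bias-dependent depletion width in series with the plate oxide — can be roughly illustrated with a one-dimensional sketch. This is a textbook abrupt-junction approximation under assumed doping and oxide values, not a model from the disclosure:

```python
import math

Q = 1.602e-19              # elementary charge (C)
EPS_SI = 11.7 * 8.854e-12  # permittivity of silicon (F/m)
EPS_OX = 3.9 * 8.854e-12   # permittivity of SiO2 (F/m)

def depletion_width(psi_s, n_d=1e23):
    """Abrupt-junction depletion width (m) for surface potential psi_s (V)
    and an assumed donor density n_d (m^-3)."""
    return math.sqrt(2.0 * EPS_SI * psi_s / (Q * n_d))

def capacitance_per_area(t_ox, psi_s, n_d=1e23):
    """Oxide and depletion capacitances in series (F/m^2).

    With no depletion (psi_s = 0) the oxide capacitance alone is seen;
    deeper depletion adds a widening dielectric in series, lowering C.
    """
    c_ox = EPS_OX / t_ox
    if psi_s <= 0.0:
        return c_ox
    c_dep = EPS_SI / depletion_width(psi_s, n_d)
    return 1.0 / (1.0 / c_ox + 1.0 / c_dep)

# Driving the semiconductor deeper into depletion monotonically lowers
# the effective capacitance -- the basis of the tuning action.
c_values = [capacitance_per_area(35e-9, psi) for psi in (0.0, 0.5, 1.0, 2.0)]
assert all(a > b for a, b in zip(c_values, c_values[1:]))
```

The series-combination form makes clear why the tuning range is ultimately bounded by the fixed oxide capacitance.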
For example, the non-insulative region 106 may be partially overlapped with the plate oxide layer 110, or the non-insulative region 106 may be formed at a distance from the edge of the plate oxide layer 110 to increase the device tuning range and linearity. In the latter case, the voltage-withstanding capability of the device is improved, since a portion of a radio-frequency (RF) signal that may be applied to the P and W terminals drops between the oxide edge and the non-insulative region 106 instead of being applied entirely across the plate oxide layer 110. The non-insulative region 108 may be partially overlapped with the plate oxide layer 110, or the non-insulative region 108 may be spaced apart from the plate oxide layer 110 so as to reduce the parasitic capacitance between the P terminal 101 and the D terminal 102.

[0031] In certain aspects, the semiconductor region 114 may be implemented with a p-well region to improve the breakdown voltage of the p-n junction between the non-insulative region 108 and the region 115 of the semiconductor region 114, decreasing, at the same time, the parasitic capacitance between the P terminal 101 and the D terminal 102. Similarly, the semiconductor region 114 may be implemented with an n-doped region between the non-insulative region 106 and the region 115 of the semiconductor region 114 in order to regulate the doping concentration between the plate oxide layer 110 and the non-insulative region 106. In certain aspects of the present disclosure, the semiconductor region 114 may be implemented with two or more regions having different doping concentrations and/or different doping types. A junction between the two or more regions may be disposed below the plate oxide layer 110 to improve the Q of the TC device 100.

[0032] FIG. 2 illustrates an example TC device 200 implemented using a back-gate configuration.
For example, a non-insulative region 202 (e.g., a backside P terminal) may be formed below at least a portion of a buried oxide (BOX) region 204 of the TC device 200. Therefore, the BOX region 204 may be used as the plate oxide, and a backside cavity contact may be used as a P terminal, enabling the use of the TC device 200 in high voltage applications, for example.

[0033] While reducing the maximum control voltage is not a primary objective for this TC device configuration, the tuning-range-versus-Q performance of the TC device 200 may be improved by incorporating an intrinsic region 206. The configuration of the TC device 200 allows for the fabrication of thick oxide transcaps with oxide thicknesses in the range of 30-40 nm with operating voltages up to 15 V-20 V, for example. In certain aspects, a silicide-blocking layer 208 may be formed above at least a portion of the semiconductor region 114 to prevent the junctions between the different regions of the semiconductor region 114 from being shorted.

[0034] The TC devices 100 and 200 may be fabricated on the same wafer using substrate removal silicon-on-insulator (SOI) process technologies. While the TC device of FIG. 1 may use a polysilicon or metal gate as the P terminal and has a plate oxide with an operating voltage usually in the range of 2.5 V-3.3 V, the buried-oxide-based TC device 200 exploits a metal cavity underneath the structure as a P terminal. Therefore, the TC device 200 is capable of operating at much higher voltages (e.g., up to 20 V), as previously described.
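The thickness/voltage trade-off noted in paragraphs [0033] and [0034] — thicker plate oxide sustaining higher operating voltage at lower capacitance density — follows directly from the parallel-plate relation C/A = ε_ox/t_ox. A quick numeric sketch (the usable-field figure is an assumed, typical SiO2 value, not from the disclosure):

```python
EPS_OX = 3.9 * 8.854e-12  # permittivity of SiO2 (F/m)
E_FIELD = 5e8             # assumed usable oxide field (V/m), below SiO2 breakdown

def oxide_cap_density(t_ox):
    """Parallel-plate capacitance per unit area (F/m^2)."""
    return EPS_OX / t_ox

def max_voltage(t_ox):
    """Voltage the oxide can sustain at the assumed field (V)."""
    return E_FIELD * t_ox

# Thin-oxide vs. buried-oxide-class thicknesses
thin, thick = 3e-9, 35e-9
assert oxide_cap_density(thin) > oxide_cap_density(thick)  # higher density
assert max_voltage(thick) > max_voltage(thin)              # higher voltage
```

At 35 nm and the assumed field, `max_voltage` lands near 17.5 V, consistent with the 15 V-20 V operating range stated above.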
In terms of device performance, thin oxide devices may have higher tuning range and higher capacitance density but lower quality factor and linearity compared to the buried oxide TC device 200, making the latter an attractive solution for tuning RF front-end applications where the voltage amplitude may reach high voltages (e.g., 20-30 V).

EXAMPLE BACK SILICIDED VARIABLE CAPACITOR DEVICES

[0035] The performance of the TC device 100 is related to the parasitic capacitance of the metallization connecting the TC device to the other components in the circuit. For example, parasitic capacitances may exist between the W terminal 103 and the P terminal 101, and between the D terminal 102 and the P terminal 101, which degrade the performance of the TC device 100. Certain aspects of the present disclosure provide device solutions for mitigating the degradation in the device performance due to these parasitic capacitances. For example, certain aspects of the present disclosure provide techniques for reducing coupling capacitance between terminals of a TC device by manufacturing the plate metal interconnections on one side of the wafer and the well and/or displacement interconnections on the other side. For instance, a silicide layer may be formed on the bottom of the wafer, after flipping the wafer and etching away the buried oxide (BOX) dielectric, and used for the D and/or the W terminals, as described in more detail herein.

[0036] FIGs. 3A and 3B illustrate example TC devices 300 and 301 implemented using a back silicide layer, in accordance with certain aspects of the present disclosure. The non-insulative region 108 may be coupled to a silicide layer 302 for the D terminal 303. For example, the silicide layer 302 may be coupled to the D terminal 303. As illustrated, the silicide layer 302 and the plate oxide layer 110 are disposed adjacent to opposite sides of the semiconductor region 114 (e.g., a first side 330 and a second side 332).
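The degradation described in paragraph [0035] can be quantified with a simple ratio: a fixed parasitic capacitance in parallel with the variable element compresses the usable tuning range. A minimal sketch with illustrative values (not figures from the disclosure):

```python
def effective_tuning_ratio(c_max, c_min, c_par):
    """Tuning ratio seen at the device terminals when a fixed parasitic
    capacitance c_par appears in parallel with the variable capacitor."""
    return (c_max + c_par) / (c_min + c_par)

# An intrinsic 4:1 device compressed by 0.5 pF of terminal-to-terminal
# parasitic capacitance: (4 + 0.5) / (1 + 0.5) = 3:1.
ideal = effective_tuning_ratio(4e-12, 1e-12, 0.0)       # 4.0
loaded = effective_tuning_ratio(4e-12, 1e-12, 0.5e-12)  # 3.0
assert loaded < ideal
```

Moving the W and D interconnections to the opposite side of the wafer, as described here, shrinks `c_par` and pushes the effective ratio back toward the intrinsic one.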
The plate oxide layer 110 and the non-insulative region 112 are fabricated using SOI technology and are disposed above only a portion of the semiconductor region 114. In certain aspects, the semiconductor region 114 may be a monocrystalline semiconductor.

[0037] In addition, the non-insulative region 106 may be coupled to a silicide layer 304 for the W terminal 305. For example, the silicide layer 304 may be coupled to the W terminal 305, as illustrated. The silicide layer 304 and the plate oxide layer 110 may be disposed adjacent to opposite sides of the semiconductor region 114. By having the D terminal 303 and W terminal 305 on the opposite side of the semiconductor region 114 from the P terminal 101, the parasitic capacitance between the P terminal 101 and each of the D terminal 303 and the W terminal 305 is decreased compared to conventional transcap devices.

[0038] As illustrated in FIG. 3A, the silicide layer 304 may be disposed adjacent to the highly doped region (non-insulative region 106). In some cases, the silicide layer 304 may be extended to also be adjacent to the low doped n-well region (e.g., region 115), as illustrated in FIG. 3B, in order to increase the quality factor of the TC device at the expense of tuning range.

[0039] In certain aspects, the BOX region 306 may be disposed between the D terminal 303 and the W terminal 305. For example, during the fabrication of the TC device 300, the wafer on which the TC device 300 is fabricated may be flipped, and the BOX region 306 may be etched to allow formation of the silicide layers 302 and 304 for the D terminal 303 and the W terminal 305, respectively.

[0040] FIG. 4 illustrates an example TC device 400 implemented using a back silicide layer for the W terminal, in accordance with certain aspects of the present disclosure. As illustrated, the silicide layers 402 and 404 are disposed adjacent to the same side of the semiconductor region 114 as the non-insulative region 112 for the P terminal 101.
The silicide layers 402 and 404 are coupled to D terminals 403 and 405, respectively, as illustrated.

[0041] The TC device 400 also includes a silicide layer 406 coupled to a W terminal 407. The silicide layer 406 and the non-insulative region 112 are disposed adjacent to opposite sides of the region 115, reducing parasitic capacitance between the P terminal 101 and the W terminal 407.

[0042] As illustrated, the W terminal 407 is disposed between a BOX region 410 and a BOX region 412. For example, during the fabrication of the TC device 400, the wafer on which the TC device 400 is fabricated may be flipped, and a BOX region may be etched to allow formation of the silicide layer 406 for the well region (and W terminal 407), forming two separate BOX regions 410 and 412.

[0043] The structure of the TC device 400 leverages the back silicide process to double the displacement diffusions so as to increase the control of the depletion region under the plate oxide layer 110. In certain aspects, a shallow n-type implant may be formed between the silicide layer 406 and the region 115 to reduce the contact resistance of the W terminal 407.

[0044] FIG. 5 illustrates an example TC device 500 implemented using a back-gate configuration and using back silicide layers, in accordance with certain aspects of the present disclosure. The TC device 500 includes a silicide layer 502 coupled to the D terminal 503, and a silicide layer 504 coupled to the W terminal 505. The silicide layers 502 and 504 are disposed adjacent to the same side of the region 115 as the BOX region 204.

[0045] FIGs. 6A and 6B illustrate example varactors 600 and 601 using a back silicide layer, in accordance with certain aspects of the present disclosure. As illustrated, the varactor 600 includes an anode terminal 602 coupled to the non-insulative region 112, and a cathode terminal 604 coupled to a silicide layer 606.
The non-insulative region 112 and the silicide layer 606 are disposed adjacent to opposite sides of the region 115 to reduce the parasitic capacitance between the anode terminal 602 and the cathode terminal 604, thereby improving the varactor tuning range. In certain aspects, the varactor 600 may include non-insulative regions 608 and 610 (e.g., highly doped regions), as illustrated in FIG. 6A. As illustrated by the structure of varactor 601 in FIG. 6B, the varactor may be implemented without the non-insulative regions 608 and 610 to further decrease the parasitic capacitance between the anode terminal 602 and the cathode terminal 604.

[0046] For both varactors 600 and 601, a shallow implant region, having the same doping type as the region 115, may be disposed between the silicide layer 606 and the region 115 to reduce the contact resistance of the cathode terminal 604. In certain aspects, a series of cathode terminals and silicide layers may be disposed adjacent to the bottom side of the region 115 to reduce the cathode contact resistance.

[0047] FIG. 7 is a flow diagram of example operations 700 for fabricating a semiconductor variable capacitor, in accordance with certain aspects of the present disclosure. The operations 700 may be performed, for example, by a semiconductor processing chamber.

[0048] Operations 700 may begin, at block 702, by forming a semiconductor region (e.g., semiconductor region 114) having a first region (e.g., non-insulative region 108), a second region (e.g., non-insulative region 106), and a third region (e.g., region 115), the third region being formed between the first and second regions and having at least one of a different doping type or a different doping concentration than at least one of the first region or the second region.
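For context on the junction-capacitance tuning exploited by the varactors 600 and 601, the standard abrupt-junction C-V relation can be sketched as follows (C0, the built-in potential, and the grading exponent are illustrative assumed values, not parameters from the disclosure):

```python
def varactor_capacitance(v_r, c0=1.0e-12, v_bi=0.7, m=0.5):
    """Junction capacitance under reverse bias v_r (V), using the classic
    relation C = C0 / (1 + v_r / v_bi)**m; m = 0.5 models an abrupt
    junction, and C0 is the zero-bias capacitance."""
    return c0 / (1.0 + v_r / v_bi) ** m

# Increasing reverse bias widens the depletion region and lowers C,
# which is how a bias voltage tunes the resonance of an LC tank.
assert varactor_capacitance(3.0) < varactor_capacitance(1.0) < varactor_capacitance(0.0)
```

Reducing the anode-cathode parasitic capacitance, as the back silicide layer does here, keeps the terminal capacitance closer to this ideal bias-dependent term.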
At block 704, an insulative layer (e.g., plate oxide layer 110) is formed, and at block 706, a first non-insulative region (e.g., non-insulative region 112) is formed, the insulative layer being disposed between the first non-insulative region and the semiconductor region, wherein the insulative layer is formed adjacent to a first side of the semiconductor region. At block 708, a first terminal (e.g., P terminal 101) is coupled to the first non-insulative region, and at block 710, a first silicide layer (e.g., silicide layer 302) is formed adjacent to a second side of the semiconductor region, the first side and the second side being opposite sides of the semiconductor region. At block 712, a second terminal (e.g., D terminal 303) is coupled to the first silicide layer.

[0049] FIG. 8 is a flow diagram of example operations 800 for fabricating a semiconductor variable capacitor, in accordance with certain aspects of the present disclosure. The operations 800 may be performed, for example, by a semiconductor processing chamber.

[0050] The operations 800 begin, at block 802, by forming a semiconductor region (e.g., semiconductor region 114), and at block 804, by forming an insulative layer (e.g., plate oxide layer 110). At block 806, a first non-insulative region (e.g., non-insulative region 112) is formed, the insulative layer being formed between the first non-insulative region and only a portion of the semiconductor region, wherein the insulative layer is disposed adjacent to a first side (e.g., side 330) of the semiconductor region. At block 808, a first terminal (e.g., anode terminal 602) is coupled to the first non-insulative region, and at block 810, a first silicide layer (e.g., silicide layer 606) is formed adjacent to a second side (e.g., side 332) of the semiconductor region, the first side and the second side being opposite sides of the semiconductor region.
At block 812, a second terminal (e.g., cathode terminal 604) is coupled to the first silicide layer.

[0051] FIG. 9 is a flow diagram of example operations 900 for fabricating a semiconductor variable capacitor, in accordance with certain aspects of the present disclosure. The operations 900 may be performed, for example, by a semiconductor processing chamber.

[0052] The operations 900 begin, at block 902, by forming a BOX region (e.g., BOX region 204), and at block 904, forming a semiconductor region (e.g., semiconductor region 114). At block 906, a first non-insulative region (e.g., non-insulative region 202) is formed, the BOX region being formed between the first non-insulative region and the semiconductor region, wherein the BOX region is formed adjacent to a first side of the semiconductor region. At block 908, a first silicide layer (e.g., silicide layer 502) is formed adjacent to the first side of the semiconductor region.

[0053] The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application-specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering.

[0054] As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database, or another data structure), ascertaining, and the like. Also, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like.
Also, "determining" may include resolving, selecting, choosing, establishing, and the like.

[0055] As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c).

[0056] The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.

[0057] It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.
A plurality of capacitive proximity sensors on a substantially horizontal plane and in combination with a microcontroller are used to detect user gestures for Page Up/Down, Zoom In/Out, Move Up/Down/Right/Left, Rotation, etc., commands to a video display. The microcontroller is adapted to interpret the capacitive changes of the plurality of capacitive proximity sensors caused by the user gestures, and generate control signals based upon these gestures to control the visual content of the video display. |
CLAIMS What is claimed is: 1. A human interface device, comprising: a plurality of capacitive proximity sensors arranged in a pattern on a plane of a substrate; and a controller operable to measure a capacitance of each of the plurality of capacitive proximity sensors and to detect gestures by means of the plurality of capacitive proximity sensors. 2. The device according to claim 1, wherein the plurality of capacitive proximity sensors are six capacitive proximity sensors arranged in the pattern on the plane of the substrate. 3. The device according to claim 2, wherein the pattern comprises two of the capacitive proximity sensors arranged on a distal portion of the plane, another two of the capacitive proximity sensors arranged on a proximate portion of the plane, and still another two of the capacitive proximity sensors arranged on either side portions of the plane. 4. The device according to claim 1, wherein the controller is a microcontroller. 5. The device according to claim 4, wherein the microcontroller comprises: an analog front end and multiplexer coupled to the plurality of capacitive proximity sensors; a capacitance measurement circuit coupled to the analog front end and multiplexer; an analog-to-digital converter (ADC) having an input coupled to the capacitance measurement circuit; a digital processor and memory coupled to an output of the ADC; and a computer interface coupled to the digital processor. 6. 
The device according to claim 5, wherein the computer interface is a universal serial bus (USB) interface. 7. A method for detecting gestures with a human interface device comprising a plurality of capacitive proximity sensors, said method comprising the steps of: arranging the plurality of capacitive proximity sensors in a pattern within a sensing plane; detecting a movement of at least one hand of a user at a distance from the sensing plane with at least two of the capacitive proximity sensors; and decoding and associating the detected movement to a respective one of a plurality of commands. 8. The method according to claim 7, wherein the plurality of capacitive proximity sensors are six capacitive proximity sensors arranged in the pattern on the sensing plane. 9. The method according to claim 8, wherein top left and top right capacitive proximity sensors are arranged on a distal portion of the sensing plane, bottom left and bottom right capacitive proximity sensors are arranged on a proximate portion of the sensing plane, and left and right capacitive proximity sensors are arranged on either side portions of the sensing plane. 10. The method according to claim 9, wherein a page up command is detected when a hand moves from the right sensor to the left sensor in a sweeping motion, wherein capacitive changes in the right, bottom right, bottom left, and left sensors are detected. 11. The method according to claim 9, wherein a page down command is detected when a hand moves from the left sensor to the right sensor in a sweeping motion, wherein capacitive changes in the left, bottom left, bottom right, and right sensors are detected. 12. The method according to claim 9, wherein a left/right/up/down command is detected when a hand hovers over the sensors and moves in a desired direction of travel, wherein ratiometric changes in the capacitance values of the sensors are detected. 13. 
The method according to claim 9, wherein a zoom up/down command is detected when a hand hovers over the sensors and moves in or out of a desired direction of travel, wherein ratiometric changes in the capacitance values of the sensors are detected. 14. The method according to claim 9, wherein a clockwise rotation command is detected when at least one hand hovers over the top right/right sensors and the bottom left/left sensors, and then rotates clockwise to the bottom right/right sensors and the top left/left sensors, wherein changes in the capacitance values of the top right/right sensors to the right/bottom right sensors and the bottom left/left sensors to the top left/left sensors are detected. 15. The method according to claim 9, wherein a counter-clockwise rotation command is detected when at least one hand hovers over the bottom right/right sensors and the top left/left sensors, and then rotates counter-clockwise to the top right/right sensors and the bottom left/left sensors, wherein changes in the capacitance values of the bottom right/right sensors to the right/top right sensors and the top left/left sensors to the bottom left/left sensors are detected. |
CAPACITIVE PROXIMITY BASED GESTURE INPUT SYSTEM RELATED PATENT APPLICATION This application claims priority to commonly owned United States Provisional Patent Application Serial Number 61/570,530; filed December 14, 2011; entitled "Capacitive Proximity Based Gesture Input System," by Keith Edwin Curtis and Fanie Duvenhage; which is hereby incorporated by reference herein for all purposes. TECHNICAL FIELD The present disclosure relates to a method and apparatus for proximity detection, and, in particular, to a capacitive proximity based gesture input system. BACKGROUND Current document viewing software requires short-cut key combinations or pull-down menus plus a mouse to control the display of the document. Keyboard and mouse interfaces are not as intuitive as gesture based systems, requiring specialized knowledge about system operation and command structure. Gesture based systems do not require specialized commands, using hand gestures that are nearly identical to the handling of a paper hardcopy. SUMMARY Therefore there is a need for a gesture based system that may be used with many different information displays, such as, for example but not limited to, information (e.g., documents and data) kiosks at airports, office buildings, doctors' offices, museums, libraries, schools, zoos, government and post offices, and the like. The gesture based system may be independent of the visual display and may be easily interfaced with a computer associated with the visual display, according to the teachings of this disclosure. According to an embodiment, a human interface device may comprise: a plurality of capacitive proximity sensors arranged in a pattern on a plane of a substrate; and a controller operable to measure a capacitance of each of the plurality of capacitive proximity sensors and to detect gestures by means of the plurality of capacitive proximity sensors. 
According to a further embodiment, the plurality of capacitive proximity sensors may be six capacitive proximity sensors arranged in the pattern on the plane of the substrate. According to a further embodiment, the pattern comprises two of the capacitive proximity sensors arranged on a distal portion of the plane, another two of the capacitive proximity sensors arranged on a proximate portion of the plane, and still another two of the capacitive proximity sensors arranged on either side portions of the plane. According to a further embodiment, the controller may be a microcontroller. According to a further embodiment, the microcontroller may comprise: an analog front end and multiplexer coupled to the plurality of capacitive proximity sensors; a capacitance measurement circuit coupled to the analog front end and multiplexer; an analog-to-digital converter (ADC) having an input coupled to the capacitance measurement circuit; a digital processor and memory coupled to an output of the ADC; and a computer interface coupled to the digital processor. According to a further embodiment, the computer interface may be a universal serial bus (USB) interface. According to another embodiment, a method for detecting gestures with a human interface device comprising a plurality of capacitive proximity sensors may comprise the steps of: arranging the plurality of capacitive proximity sensors in a pattern within a sensing plane; detecting a movement of at least one hand of a user at a distance from the sensing plane with at least two of the capacitive proximity sensors; and decoding and associating the detected movement to a respective one of a plurality of commands. According to a further embodiment of the method, the plurality of capacitive proximity sensors may be six capacitive proximity sensors arranged in the pattern on the sensing plane. 
According to a further embodiment of the method, top left and top right capacitive proximity sensors may be arranged on a distal portion of the sensing plane, bottom left and bottom right capacitive proximity sensors may be arranged on a proximate portion of the sensing plane, and left and right capacitive proximity sensors may be arranged on either side portions of the sensing plane. According to a further embodiment of the method, a page up command may be detected when a hand moves from the right sensor to the left sensor in a sweeping motion, wherein capacitive changes in the right, bottom right, bottom left, and left sensors may be detected. According to a further embodiment of the method, a page down command may be detected when a hand moves from the left sensor to the right sensor in a sweeping motion, wherein capacitive changes in the left, bottom left, bottom right, and right sensors may be detected. According to a further embodiment of the method, a left/right/up/down command may be detected when a hand hovers over the sensors and moves in a desired direction of travel, wherein ratiometric changes in the capacitance values of the sensors may be detected. According to a further embodiment of the method, a zoom up/down command may be detected when a hand hovers over the sensors and moves in or out of a desired direction of travel, wherein ratiometric changes in the capacitance values of the sensors may be detected. According to a further embodiment of the method, a clockwise rotation command may be detected when at least one hand hovers over the top right/right sensors and the bottom left/left sensors, and then rotates clockwise to the bottom right/right sensors and the top left/left sensors, wherein changes in the capacitance values of the top right/right sensors to the right/bottom right sensors and the bottom left/left sensors to the top left/left sensors may be detected. 
According to a further embodiment of the method, a counter-clockwise rotation command may be detected when at least one hand hovers over the bottom right/right sensors and the top left/left sensors, and then rotates counter-clockwise to the top right/right sensors and the bottom left/left sensors, wherein changes in the capacitance values of the bottom right/right sensors to the right/top right sensors and the top left/left sensors to the bottom left/left sensors may be detected. BRIEF DESCRIPTION OF THE DRAWINGS A more complete understanding of the present disclosure may be acquired by referring to the following description taken in conjunction with the accompanying drawings, wherein: Figure 1 illustrates a schematic isometric diagram of a display kiosk, gesture input panel and computer, according to the teachings of this disclosure; Figure 2 illustrates a schematic plan view diagram of gestures for rotation of a document, according to the teachings of this disclosure; Figure 3 illustrates a schematic plan view diagram of gestures for Zoom In/Out of a document, according to the teachings of this disclosure; Figure 4 illustrates a schematic plan view diagram of gestures for X/Y positioning of a document, according to the teachings of this disclosure; Figure 5 illustrates a schematic plan view diagram of gestures for Page Up/Down positioning of a document, according to the teachings of this disclosure; and Figure 6 illustrates a schematic block diagram of a gesture input panel having a plurality of capacitive proximity sensors and a microcontroller interface, according to a specific example embodiment of this disclosure. While the present disclosure is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail. 
It should be understood, however, that the description herein of specific example embodiments is not intended to limit the disclosure to the particular forms disclosed herein; on the contrary, this disclosure is to cover all modifications and equivalents as defined by the appended claims. DETAILED DESCRIPTION All gesture systems currently in use either require contact with a touch screen, or require visual capture and differentiation of the user's hand by a camera system mounted to the display. A system according to various embodiments is instead based on the proximity of the user to a substantially horizontal sensor plate, which can be mounted, for example, approximately perpendicular to the visual display. This removes the gesture capture from the display system and makes it an independent peripheral adapted for easy interfacing with a computer. According to various embodiments, a method for using a combination of a plurality of capacitive proximity sensors to detect gestures for Page Up/Down, Zoom In/Out, Move Up/Down/Right/Left, and Rotation is disclosed herein. The proposed gestures disclosed herein cover common document/image viewer controls; however, they can be easily adapted for other human interface devices. The plurality of possible gestures are decodable using a simple data driven state machine. Thus, a single mixed signal integrated circuit or microcontroller may be used in such a human interface device. A detection state machine can also be implemented with 8-32 bit microprocessor systems requiring low program overhead. A respective system equipped with such a gesture recognition device can replace a mouse/trackball interface for information displays, personal computers, workstations and/or mobile devices, etc. This methodology allows the creation of intuitive gesture based user interface systems for any document or data display, e.g., an information kiosk. 
The plurality of capacitive proximity sensors may provide for up to about three (3) inches of proportional proximity detection. If combined with microcontrollers having integrated communications functionality, e.g., a universal serial bus (USB) interface, such a gesturing device can be beneficially used in a variety of human/machine interface devices. Referring now to the drawings, the details of specific example gesturing embodiments and hardware implementations therefor are schematically illustrated. Like elements in the drawings will be represented by like numbers, and similar elements will be represented by like numbers with a different lower case letter suffix. Referring to Figure 1, depicted is a schematic isometric diagram of a display kiosk, gesture input panel and computer, according to the teachings of this disclosure. A gesture based human interface input device 120, according to an embodiment disclosed herein, in combination with a visual display device 110 and a computer 140 may be used for many different information displays, such as, for example but not limited to, information (e.g., documents and data) kiosks at airports, office buildings, doctors' offices, museums, libraries, schools, zoos, government and post offices, and the like. The gesture based human interface input device 120 may be independent of the visual display device 110 and may be easily interfaced with a computer 140 associated with the visual display device 110, according to the teachings of this disclosure. As shown in Figure 1, the gesture based human interface input device 120 may be mounted with or independent from the visual display device 110, and positioned appropriately for human gesturing interaction with images displayed on the visual display device 110. 
The gesture based human interface input device 120 can be designed to detect the movement of one or both hands, and may interpret certain gestures as predefined commands that may be used interactively with the visual display device 110. The gesture based human interface input device 120 may be based upon six capacitive proximity sensors arranged as shown in Figure 1. These six capacitive proximity sensors may be further defined as a top left sensor 1, a top right sensor 2, a bottom left sensor 3, a bottom right sensor 4, a left sensor 5 and a right sensor 6. It is also contemplated and within the scope of this disclosure that more or fewer capacitive proximity sensors may be utilized according to the teachings of this disclosure. A microcontroller (see Figure 6), preferably with a computer interface, e.g., a universal serial bus (USB) interface, may be used to measure the capacitances of the individual capacitive proximity sensors and to evaluate changing patterns for interpreting respective gestures. Individual gestures are therefore detected and decoded based upon the movement of the user's hand while within the detection range of these six capacitive proximity sensors 1 through 6. Referring to Figure 2, depicted is a schematic plan view diagram of gestures for rotation of a document, according to the teachings of this disclosure. For rotation of a document the user places his/her hand above sensor 2, or alternately 2 and 6. The user then rotates his/her hand until it is over sensor 4, or alternately, 4 and 6. For a clockwise rotation command, two hands may hover over the top right/right (2, 6) and bottom left/left (3, 5) sensors, and then rotate clockwise to the bottom right/right (4, 6) and top left/left (1, 5) sensors. An associated recognition pattern may be: top right/right (2, 6) to right/bottom right (6, 4) plus bottom left/left (3, 5) to top left/left (1, 5). 
For a counter-clockwise rotation command, two hands may hover over the bottom right/right (4, 6) and top left/left (1, 5) sensors, and then rotate counter-clockwise to the top right/right (2, 6) and bottom left/left (3, 5) sensors. An associated recognition pattern may be: bottom right/right (4, 6) to right/top right (6, 2) plus top left/left (1, 5) to bottom left/left (3, 5). Referring to Figure 3, depicted is a schematic plan view diagram of gestures for Zoom In/Out of a document, according to the teachings of this disclosure. For Zoom In/Out the user moves his/her hand parallel to the plane of the sensors 1-6, until his/her hand is centered over all six sensors 1-6. The user then raises or lowers his/her hand to zoom in or out. When the desired level of zoom is reached, the user's hand is withdrawn horizontally. For a Zoom In command the hand hovers over the sensors and moves toward (moves into) the sensors 1-6. For a Zoom Out command the hand hovers over the sensors and moves away from the sensors 1-6. An associated recognition pattern may be: a ratiometric change in all of the sensor capacitance values. Referring to Figure 4, depicted is a schematic plan view diagram of gestures for X/Y positioning of a document, according to the teachings of this disclosure. For X/Y positioning the user moves his/her hand vertically, into the plane of the sensors 1-6, until his/her hand is within range of all six sensors 1-6. The user then moves his/her hand in the plane of the sensors 1-6 until the appropriate position is reached. The user then removes his/her hand vertically from the sensors 1-6. For a left/right/up/down command, a hand hovers over the sensors and moves in the direction of the desired movement of the document. An associated recognition pattern may be ratiometric changes in the sensor capacitance values. Referring to Figure 5, depicted is a schematic plan view diagram of gestures for Page Up/Down positioning of a document, according to the teachings of this disclosure. 
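The ratiometric decoding described above for Zoom and X/Y positioning can be sketched as a weighted centroid over the per-sensor capacitance changes. The following Python sketch is illustrative only: the sensor coordinates and the input deltas are assumptions for demonstration, not values from the disclosure.

```python
# Nominal (x, y) positions of the six sensors on the sensing plane:
# 1 = top left, 2 = top right, 3 = bottom left, 4 = bottom right,
# 5 = left, 6 = right.  Coordinates are illustrative assumptions.
SENSOR_POS = {
    1: (-1.0, 1.0), 2: (1.0, 1.0),
    3: (-1.0, -1.0), 4: (1.0, -1.0),
    5: (-1.5, 0.0), 6: (1.5, 0.0),
}

def hand_position(deltas):
    """Estimate the hand's (x, y) as the centroid of sensor positions,
    weighted by each sensor's capacitance change (ratiometric)."""
    total = sum(deltas.values())
    if total <= 0:
        return None  # no hand within detection range
    x = sum(SENSOR_POS[s][0] * d for s, d in deltas.items()) / total
    y = sum(SENSOR_POS[s][1] * d for s, d in deltas.items()) / total
    return (x, y)

# A hand hovering nearer the right side yields a positive x estimate.
pos = hand_position({1: 1, 2: 3, 3: 1, 4: 3, 5: 0, 6: 4})
```

For a Zoom gesture, the same deltas would instead be summed: the total capacitance change grows as the hand moves toward the plane and shrinks as it moves away.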
For Page Up/Down the user may move his/her hand parallel to the plane of the sensors 1-6, until his/her hand is centered over sensor 6 for Page Up, or sensor 5 for Page Down. The user may then flip his/her hand while moving horizontally over the sensors 1-6. This action approximates the flipping of a page in a book. Once this gesture is complete, the hand can be removed parallel to the plane of the sensors. A Page Up command may be detected when the hand moves from the right sensor 6 to the left sensor 5 in a sweeping motion. An associated sensor recognition pattern/sequence may be: right 6, bottom right 4, bottom left 3 and left 5. A Page Down command may be detected when the hand moves from the left sensor 5 to the right sensor 6 in a sweeping motion. An associated sensor recognition pattern/sequence may be: left 5, bottom left 3, bottom right 4 and right 6. Referring to Figure 6, depicted is a schematic block diagram of a gesture input panel having a plurality of capacitive proximity sensors and a microcontroller interface, according to a specific example embodiment of this disclosure. A gesture input panel, generally represented by the numeral 620, may comprise a plurality of capacitive proximity sensors 1-6 and a microcontroller 650 comprising a digital processor and memory 652, a computer interface 654, an analog-to-digital converter (ADC) 656, a capacitance measurement circuit 658, and an analog front end and multiplexer 660. The analog front end and multiplexer 660 couple each of the capacitive proximity sensors 1-6 to the capacitance measurement circuit 658. The capacitance measurement circuit 658 precisely measures the capacitance value of each of the plurality of capacitive proximity sensors 1-6 as an analog voltage. The ADC 656 converts analog voltages representative of the capacitance values of the capacitive proximity sensors 1-6 into digital representations thereof. 
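The sweep sequences above lend themselves to the simple data-driven state machine this disclosure mentions. The following Python sketch is a hedged illustration: the event encoding (a stream of sensor numbers, each reported as that sensor becomes the strongest responder) is an assumption, not part of the disclosure.

```python
# Expected sensor-activation sequences, using the sensor numbering
# defined above: a Page Up sweep crosses right (6), bottom right (4),
# bottom left (3), left (5); Page Down is the mirror image.
GESTURES = {
    "page_up": [6, 4, 3, 5],
    "page_down": [5, 3, 4, 6],
}

def decode_gesture(events):
    """Match a stream of sensor-activation events against the gesture
    tables, advancing one state per matching event (data-driven FSM)."""
    progress = {name: 0 for name in GESTURES}
    for sensor in events:
        for name, seq in GESTURES.items():
            if sensor == seq[progress[name]]:
                progress[name] += 1
                if progress[name] == len(seq):
                    return name
            else:
                # Restart, allowing this event to begin a new attempt.
                progress[name] = 1 if sensor == seq[0] else 0
    return None
```

Because each gesture is just a table entry, the rotation recognition patterns described above could be added as further rows without changing the matching loop, which is what makes the approach cheap enough for a small microcontroller.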
The digital processor and memory 652 reads these digital representations of the capacitance values and stores them in the memory for further processing to create commands to the computer 140 based upon the gesturing inputs described more fully hereinabove. A computer interface 654, e.g., USB, serial, PS-2, etc., may be adapted to communicate with a computer 140 that drives a visual display 110. The capacitance measurement circuit 658 may be any one or more capacitance measurement peripherals that have the necessary capacitance measurement resolution, for example, but not limited to, a Charge Time Measurement Unit (CTMU), a capacitive voltage divider (CVD) method, and a capacitive sensing module (CSM). The CTMU may be used for very accurate capacitance measurements. The CTMU is more fully described in Microchip application notes AN1250 and AN1375, available at www.microchip.com, and commonly owned U.S. Patent Nos. US 7,460,441 B2, entitled "Measuring a long time period;" and US 7,764,213 B2, entitled "Current-time digital-to-analog converter," both by James E. Bartling; wherein all of which are hereby incorporated by reference herein for all purposes. The capacitive voltage divider (CVD) method determines a capacitance value and/or evaluates whether the capacitive value has changed. The CVD method is more fully described in Application Note AN1208, available at www.microchip.com; and a more detailed explanation of the CVD method is presented in commonly owned United States Patent Application Publication No. US 2010/0181180, entitled "Capacitive Touch Sensing using an Internal Capacitor of an Analog-to-Digital Converter (ADC) and a Voltage Reference," by Dieter Peter; wherein both are hereby incorporated by reference herein for all purposes. 
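The measurement chain of Figure 6 reduces to a scan loop: select a sensor through the multiplexer, measure its capacitance as a voltage, digitize it, and hand the delta from baseline to the processor. The Python sketch below is a simulation of that loop; `read_adc` is a hypothetical stand-in for the mux/CTMU/ADC hardware path, not an actual Microchip API, and the counts are invented.

```python
def scan_sensors(read_adc, baselines, num_sensors=6):
    """One scan pass: read each sensor channel and report the change
    from its no-hand baseline (positive delta = hand approaching)."""
    deltas = {}
    for channel in range(1, num_sensors + 1):
        raw = read_adc(channel)          # mux select + measure + ADC
        deltas[channel] = raw - baselines[channel]
    return deltas

# Simulated hardware: channel 6 sees a hand, the others are near baseline.
baselines = {ch: 512 for ch in range(1, 7)}
readings = {1: 512, 2: 512, 3: 512, 4: 514, 5: 512, 6: 600}
deltas = scan_sensors(readings.get, baselines)
```

A firmware implementation would run this pass continuously and feed the resulting deltas (or the ordered sequence of strongest channels) into the gesture decoding stage.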
Capacitive sensing using the period method and a capacitive sensing module (CSM) are more fully described in Application Notes AN1101, AN1171, AN1268, AN1312, AN1334 and TB3064, available at www.microchip.com, and commonly owned U.S. Patent Application Publication No. US 2011/0007028 A1, entitled "Capacitive Touch System With Noise Immunity" by Keith E. Curtis, et al.; wherein all of which are hereby incorporated by reference herein for all purposes. The proposed gestures cover common document/image viewer controls; however, they can be easily adapted for other human interface devices. The plurality of possible gestures are decodable using a simple data driven state machine. Thus, a single mixed signal integrated circuit or microcontroller may be used in such a human interface device. A detection state machine can also be implemented on 8-32 bit microprocessor systems with low overhead. While embodiments of this disclosure have been depicted, described, and are defined by reference to example embodiments of the disclosure, such references do not imply a limitation on the disclosure, and no such limitation is to be inferred. The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent art and having the benefit of this disclosure. The depicted and described embodiments of this disclosure are examples only, and are not exhaustive of the scope of the disclosure. |
Technologies for connecting data cables in a data center are disclosed. In the illustrative embodiment, racks of the data center are grouped into different zones based on the distance from the racks in a given zone to a network switch. All of the racks in a given zone are connected to the network switch using data cables of the same length. In some embodiments, certain physical resources such as storage may be placed in racks that are in zones closer to the network switch and therefore use shorter data cables with lower latency. An orchestrator server may, in some embodiments, schedule workloads or create virtual servers based on the different zones and corresponding latency of different physical resources. |
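The zoning scheme this abstract describes can be sketched directly: measure each rack's distance to the network switch, match it against non-overlapping zone distance ranges, and stock one cable length per zone. The zone boundaries and cable lengths in the Python sketch below are illustrative assumptions, not values from the disclosure.

```python
# Illustrative zones: (min_distance_m, max_distance_m, cable_length_m).
# The distance ranges do not overlap, and every rack in a zone gets
# the same cable length.
ZONES = [
    (0.0, 5.0, 7.0),
    (5.0, 10.0, 12.0),
    (10.0, 20.0, 22.0),
]

def assign_zone(distance_m):
    """Return (zone_index, cable_length_m) for a rack at the given
    distance from the network switch, or None if out of range."""
    for index, (lo, hi, cable) in enumerate(ZONES):
        if lo <= distance_m < hi:
            return index, cable
    return None

def plan_cables(rack_distances):
    """Select one cable length per rack; racks in the same zone are
    assigned the same length."""
    return {rack: assign_zone(d)[1] for rack, d in rack_distances.items()}

plan = plan_cables({"rack-a": 3.2, "rack-b": 4.9, "rack-c": 11.0})
```

Racks in the zone nearest the switch end up with the shortest cables and therefore the lowest propagation latency, which is why latency-sensitive resources such as storage would be placed there.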
WHAT IS CLAIMED IS: 1. A data center comprising: a network switch; a plurality of racks, wherein each rack of the plurality of racks is located in a corresponding zone of a plurality of zones, wherein each zone of the plurality of zones is associated with a minimum threshold distance and a maximum threshold distance that define a distance range, wherein no two distance ranges of the plurality of zones overlap with each other, and wherein each rack of the plurality of racks is defined as located in a zone of the plurality of zones if the distance from the network switch to the corresponding rack is above the minimum threshold distance associated with the corresponding zone and below a maximum threshold distance associated with the corresponding zone; and a plurality of data cables, wherein each rack of the plurality of racks is connected to the network switch with one of the data cables of the plurality of data cables and wherein each data cable of the plurality of data cables that is connected to a corresponding rack located in the same zone has approximately the same length. 2. The data center of claim 1, wherein each rack of the plurality of racks comprises a plurality of sleds, wherein each data cable of the plurality of data cables is connected directly to a sled of the plurality of sleds of a rack of the plurality of racks. 3. The data center of claim 1, wherein each data cable of the plurality of data cables is a passive optical cable. 4. The data center of claim 3, further comprising at least 256 sleds, wherein each of the at least 256 sleds is included in a rack of the plurality of racks. 5. The data center of claim 3, further comprising at least 1,024 sleds, wherein each of the at least 1,024 sleds is included in a rack of the plurality of racks. 6. The data center of claim 1, wherein each data cable of the plurality of data cables is connected to a top-of-rack switch of a rack of the plurality of racks. 7. 
The data center of claim 1, further comprising at least 256 sleds, wherein each of the 256 sleds is included in a rack of the plurality of racks, wherein each rack connected to the network switch is in a zone of the plurality of zones, wherein the plurality of zones comprises at most 4 zones. 8. The data center of claim 1, further comprising a plurality of sleds, wherein each sled of the plurality of sleds is included in a rack of the plurality of racks, wherein at least half of the sleds in the zone closest to the network switch are storage sleds. 9. A data center comprising: a spine switch; a plurality of pods, each pod of the plurality of pods comprising a plurality of racks and a network switch, wherein each rack of a plurality of racks of a pod of the plurality of pods is connected to the corresponding network switch and wherein each pod of the plurality of pods is located in a corresponding zone of a plurality of zones, wherein each zone of the plurality of zones is associated with a minimum threshold distance and a maximum threshold distance that define a distance range, wherein no two distance ranges of the plurality of zones overlap with each other, and wherein each pod of the plurality of pods is defined as located in a zone of the plurality of zones if the distance from the spine switch to the corresponding network switch is above the minimum threshold distance associated with the corresponding zone and below a maximum threshold distance associated with the corresponding zone; and a plurality of data cables, wherein each network switch of the plurality of pods is connected to the spine switch with one of the data cables of the plurality of data cables and wherein each data cable of the plurality of data cables that is connected to a corresponding network switch located in the same zone has approximately the same length. 10. The data center of claim 9, wherein each data cable of the plurality of data cables is a passive optical cable. 11. 
The data center of claim 9, wherein the plurality of pods comprises at least 32 pods, wherein each network switch of a pod of the plurality of pods connected to the spine switch is in a zone of the plurality of zones, wherein the plurality of zones comprises at most 4 zones. 12. An orchestrator server for managing resources of a data center, the orchestrator server comprising: one or more processors; one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the orchestrator server to: receive a request for creation of a low-latency virtual machine; select, in response to the request for creation of the low-latency virtual machine, one or more sleds of the data center in a low-latency zone, wherein each sled in the low-latency zone is connected to the same network switch via a corresponding data cable of a plurality of data cables, wherein each data cable of the plurality of data cables has a length shorter than or approximately equal to the length of each other data cable of the plurality of data cables connected to the same network switch; and create, in response to the request for creation of the low-latency virtual machine, the low-latency virtual machine with use of the one or more sleds. 13. The orchestrator server of claim 12, wherein the low-latency zone comprises at least 128 sleds. 14. The orchestrator server of claim 12, wherein the plurality of instructions further cause the orchestrator server to: receive network utilization data of a plurality of workloads of the data center; analyze the plurality of workloads based on the network utilization data to determine one or more workloads with high network utilization; and transfer the one or more workloads with high network utilization to one or more additional sleds in the low-latency zone. 15. 
A method for configuring a data center, the method comprising: determining a distance from each rack of a plurality of racks to a network switch of the data center; assigning each rack of the plurality of racks to a zone of a plurality of zones associated with the network switch based on a determination that the corresponding distance from the rack to the network switch is above a minimum threshold distance associated with the assigned zone and below a maximum threshold distance associated with the assigned zone; selecting, for each rack of the plurality of racks, a length of a data cable to connect the rack to the network switch based on the assigned zone; and connecting, for each rack of the plurality of racks, a data cable with the selected length from the rack to the network switch, wherein the length selected for each data cable is approximately the same as the length of each other data cable selected for each rack of the plurality of racks assigned to the same zone. 16. The method of claim 15, wherein each rack of the plurality of racks comprises a plurality of sleds, wherein each data cable connected to a rack of the plurality of racks is connected directly to a sled of the plurality of sleds of a rack of the plurality of racks. 17. The method of claim 15, wherein each data cable of the plurality of data cables is a passive optical cable. 18. The method of claim 17, wherein connecting the data cable with the selected length from the rack to the network switch comprises connecting at least 256 sleds, wherein each of the at least 256 sleds is included in a rack of the plurality of racks. 19. 
A method for configuring a data center, the method comprising:

determining a length from each network switch of a plurality of network switches to a spine switch of the data center, wherein each network switch of the plurality of network switches is associated with a different pod comprising a plurality of racks;

assigning each network switch of the plurality of network switches to a zone of a plurality of zones associated with the spine switch based on a determination that the corresponding distance from the network switch to the spine switch is above a minimum threshold associated with the assigned zone and below a maximum threshold associated with the assigned zone;

selecting, for each network switch of the plurality of network switches, a length of a data cable to connect the network switch to the spine switch based on the assigned zone; and

connecting, for each network switch of the plurality of network switches, a data cable with the selected length from the network switch to the spine switch, wherein the length selected for each data cable is approximately the same as the length of each other data cable selected for each network switch of the plurality of network switches assigned to the same zone.

20. The method of claim 19, wherein each data cable of the plurality of data cables is a passive optical cable.

21.
A method for managing resources of a data center with an orchestrator server, the method comprising:

receiving, by the orchestrator server, a request for creation of a low-latency virtual machine;

selecting, by the orchestrator server and in response to the request for creation of the low-latency virtual machine, one or more sleds of the data center in a low-latency zone, wherein each sled in the low-latency zone is connected to the same network switch via a corresponding data cable of a plurality of data cables, wherein each data cable of the plurality of data cables has a length shorter than or approximately equal to the length of each other data cable of the plurality of data cables connected to the same network switch; and

creating, by the orchestrator server and in response to the request for creation of the low-latency virtual machine, the low-latency virtual machine with use of the one or more sleds.

22. The method of claim 21, wherein the low-latency zone comprises at least 128 sleds.

23. The method of claim 21, further comprising:

receiving network utilization data of a plurality of workloads of the data center;

analyzing the plurality of workloads based on the network utilization data to determine one or more workloads with high network utilization; and

transferring the one or more workloads with high network utilization to one or more additional sleds in the low-latency zone.

24. One or more computer-readable media comprising a plurality of instructions stored thereon that, when executed, cause a compute device to perform the method of any of claims 21-23.

25. A compute device for managing resources of a data center with an orchestrator server, the compute device comprising means for performing the method of any of claims 21-23.
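The zone-based cabling methods recited above can be illustrated with a short sketch: a rack (or network switch) is assigned to a zone when its distance to the switch falls between that zone's minimum and maximum thresholds, and every member of a zone receives the same cable length. All zone boundaries, rack names, and cable lengths below are assumed example values, not figures from the specification.

```python
# Illustrative sketch (not part of the claims) of threshold-based zone
# assignment and per-zone cable-length selection.

ZONES = [
    # (zone name, min threshold in meters, max threshold in meters, cable length in meters)
    ("zone-1", 0.0, 10.0, 10.0),
    ("zone-2", 10.0, 20.0, 20.0),
    ("zone-3", 20.0, 30.0, 30.0),
]

def assign_zone(distance_m):
    """Return (zone, cable_length) for the zone whose thresholds bracket the distance."""
    for name, minimum, maximum, cable_m in ZONES:
        if minimum <= distance_m < maximum:
            return name, cable_m
    raise ValueError(f"no zone covers a distance of {distance_m} m")

def plan_cabling(rack_distances):
    """Map each rack to its assigned zone and the single cable length used in that zone."""
    return {rack: assign_zone(d) for rack, d in rack_distances.items()}

plan = plan_cabling({"rack-1": 4.2, "rack-2": 12.7, "rack-3": 12.9})
# rack-2 and rack-3 land in the same zone, so both are connected with cables
# of the same length, equalizing their propagation delay to the network switch.
```

The same sketch applies unchanged to the spine-switch variant by substituting network switches for racks and the spine switch for the network switch.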
TECHNOLOGIES FOR DATA CENTER MULTI-ZONE CABLING

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The present application claims priority to U.S. Utility Patent Application Serial No. 15/395,995, entitled "TECHNOLOGIES FOR DATA CENTER MULTI-ZONE CABLING," which was filed on December 30, 2016 and which claims priority to U.S. Provisional Patent Application No. 62/365,969, filed July 22, 2016; U.S. Provisional Patent Application No. 62/376,859, filed August 18, 2016; and U.S. Provisional Patent Application No. 62/427,268, filed November 29, 2016.

BACKGROUND

[0002] A data center may include several racks of computing resources such as servers. The various servers and racks in the data center are typically connected to each other through one or more network switches using data cables, such as electrical data cables or optical data cables. Since every rack may be a different distance from the network switch to which it is connected, the data cable connecting a rack to a network switch may have a length that depends on the particular distance between the rack and the network switch.

[0003] A data center may also include one or more spine switches, which connect using data cables to the network switches that connect directly to the racks. Similar to the data cables connecting the network switches to the racks, the data cables connecting the spine switches to the network switches may have a length corresponding to the particular distance between a given network switch and a given spine switch.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.

[0005] FIG.
1 is a diagram of a conceptual overview of a data center in which one or more techniques described herein may be implemented according to various embodiments;

[0006] FIG. 2 is a diagram of an example embodiment of a logical configuration of a rack of the data center of FIG. 1;

[0007] FIG. 3 is a diagram of an example embodiment of another data center in which one or more techniques described herein may be implemented according to various embodiments;

[0008] FIG. 4 is a diagram of another example embodiment of a data center in which one or more techniques described herein may be implemented according to various embodiments;

[0009] FIG. 5 is a diagram of a connectivity scheme representative of link-layer connectivity that may be established among various sleds of the data centers of FIGS. 1, 3, and 4;

[0010] FIG. 6 is a diagram of a rack architecture that may be representative of an architecture of any particular one of the racks depicted in FIGS. 1-4 according to some embodiments;

[0011] FIG. 7 is a diagram of an example embodiment of a sled that may be used with the rack architecture of FIG. 6;

[0012] FIG. 8 is a diagram of an example embodiment of a rack architecture to provide support for sleds featuring expansion capabilities;

[0013] FIG. 9 is a diagram of an example embodiment of a rack implemented according to the rack architecture of FIG. 8;

[0014] FIG. 10 is a diagram of an example embodiment of a sled designed for use in conjunction with the rack of FIG. 9;

[0015] FIG. 11 is a diagram of an example embodiment of a data center in which one or more techniques described herein may be implemented according to various embodiments;

[0016] FIG. 12 is a diagram of an example embodiment of a data center in which the length of a data cable used to connect a network switch to a rack depends on a zone of the rack;

[0017] FIGS. 13A and 13B are a diagram of an example embodiment of a rack in the data center of FIG.
12 in which a data cable connects from a network switch to a top-of-rack switch;

[0018] FIG. 14 is a diagram of an example embodiment of a data center in which data cables connect from a network switch directly to sleds of a rack and each data cable connecting to sleds in the same rack is the same length;

[0019] FIG. 15 is a diagram of an example embodiment of a data center in which data cables connect from a network switch directly to sleds of a rack and the length of each data cable connecting to sleds in the same rack may not be the same length;

[0020] FIG. 16 is a diagram of an example embodiment of a data center in which the length of a data cable used to connect a network switch to a spine switch depends on a zone of the network switch;

[0021] FIG. 17 is a simplified block diagram of at least one embodiment of a system for orchestrating workloads assigned in a data center;

[0022] FIG. 18 is a simplified block diagram of at least one embodiment of an orchestrator server of the system of FIG. 17;

[0023] FIG. 19 is a simplified block diagram of at least one embodiment of an environment that may be established by the orchestrator server of FIG. 18;

[0024] FIG. 20 is a flowchart of at least one embodiment of a method for creating a virtual server that may be executed by the orchestrator server of FIG. 18; and

[0025] FIG. 21 is a flowchart of at least one embodiment of a method for managing workloads that may be executed by the orchestrator server of FIG. 18.

DETAILED DESCRIPTION OF THE DRAWINGS

[0026] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail.
It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

[0027] References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one A, B, and C" can mean (A); (B); (C); (A and B); (B and C); (A and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (B and C); (A and C); or (A, B, and C).

[0028] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors.
A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).

[0029] In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.

[0030] FIG. 1 illustrates a conceptual overview of a data center 100 that may generally be representative of a data center or other type of computing network in/for which one or more techniques described herein may be implemented according to various embodiments. As shown in FIG. 1, data center 100 may generally contain a plurality of racks, each of which may house computing equipment comprising a respective set of physical resources. In the particular non-limiting example depicted in FIG. 1, data center 100 contains four racks 102A to 102D, which house computing equipment comprising respective sets of physical resources 105A to 105D. According to this example, a collective set of physical resources 106 of data center 100 includes the various sets of physical resources 105A to 105D that are distributed among racks 102A to 102D. Physical resources 106 may include resources of multiple types, such as - for example - processors, co-processors, accelerators, field-programmable gate arrays (FPGAs), memory, and storage.
The embodiments are not limited to these examples.

[0031] The illustrative data center 100 differs from typical data centers in many ways. For example, in the illustrative embodiment, the circuit boards ("sleds") on which components such as CPUs, memory, and other components are placed are designed for increased thermal performance. In particular, in the illustrative embodiment, the sleds are shallower than typical boards. In other words, the sleds are shorter from the front to the back, where cooling fans are located. This decreases the length of the path that air must travel across the components on the board. Further, the components on the sled are spaced further apart than in typical circuit boards, and the components are arranged to reduce or eliminate shadowing (i.e., one component in the air flow path of another component). In the illustrative embodiment, processing components such as the processors are located on a top side of a sled while near memory, such as Dual In-line Memory Modules (DIMMs), is located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 102A, 102B, 102C, 102D, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other.
In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.

[0032] Furthermore, in the illustrative embodiment, the data center 100 utilizes a single network architecture ("fabric") that supports multiple other network architectures including Ethernet and Omni-Path. The sleds, in the illustrative embodiment, are coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center 100 may, in use, pool resources, such as memory, accelerators (e.g., graphics accelerators, FPGAs, Application Specific Integrated Circuits (ASICs), etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as-needed basis, enabling the compute resources to access the pooled resources as if they were local. The illustrative data center 100 additionally receives usage information for the various resources, predicts resource usage for different types of workloads based on past resource usage, and dynamically reallocates the resources based on this information.

[0033] The racks 102A, 102B, 102C, 102D of the data center 100 may include physical design features that facilitate the automation of a variety of types of maintenance tasks. For example, data center 100 may be implemented using racks that are designed to be robotically accessed, and to accept and house robotically manipulatable resource sleds. Furthermore, in the illustrative embodiment, the racks 102A, 102B, 102C, 102D include integrated power sources that receive a greater voltage than is typical for power sources.
The increased voltage enables the power sources to provide additional power to the components on each sled, enabling the components to operate at higher than typical frequencies.

[0034] FIG. 2 illustrates an exemplary logical configuration of a rack 202 of the data center 100. As shown in FIG. 2, rack 202 may generally house a plurality of sleds, each of which may comprise a respective set of physical resources. In the particular non-limiting example depicted in FIG. 2, rack 202 houses sleds 204-1 to 204-4 comprising respective sets of physical resources 205-1 to 205-4, each of which constitutes a portion of the collective set of physical resources 206 comprised in rack 202. With respect to FIG. 1, if rack 202 is representative of - for example - rack 102A, then physical resources 206 may correspond to the physical resources 105A comprised in rack 102A. In the context of this example, physical resources 105A may thus be made up of the respective sets of physical resources, including physical storage resources 205-1, physical accelerator resources 205-2, physical memory resources 205-3, and physical compute resources 205-4 comprised in the sleds 204-1 to 204-4 of rack 202. The embodiments are not limited to this example. Each sled may contain a pool of each of the various types of physical resources (e.g., compute, memory, accelerator, storage). By having robotically accessible and robotically manipulatable sleds comprising disaggregated resources, each type of resource can be upgraded independently of each other and at their own optimized refresh rate.

[0035] FIG. 3 illustrates an example of a data center 300 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. In the particular non-limiting example depicted in FIG. 3, data center 300 comprises racks 302-1 to 302-32.
In various embodiments, the racks of data center 300 may be arranged in such fashion as to define and/or accommodate various access pathways. For example, as shown in FIG. 3, the racks of data center 300 may be arranged in such fashion as to define and/or accommodate access pathways 311A, 311B, 311C, and 311D. In some embodiments, the presence of such access pathways may generally enable automated maintenance equipment, such as robotic maintenance equipment, to physically access the computing equipment housed in the various racks of data center 300 and perform automated maintenance tasks (e.g., replace a failed sled, upgrade a sled). In various embodiments, the dimensions of access pathways 311A, 311B, 311C, and 311D, the dimensions of racks 302-1 to 302-32, and/or one or more other aspects of the physical layout of data center 300 may be selected to facilitate such automated operations. The embodiments are not limited in this context.

[0036] FIG. 4 illustrates an example of a data center 400 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. As shown in FIG. 4, data center 400 may feature an optical fabric 412. Optical fabric 412 may generally comprise a combination of optical signaling media (such as optical cabling) and optical switching infrastructure via which any particular sled in data center 400 can send signals to (and receive signals from) each of the other sleds in data center 400. The signaling connectivity that optical fabric 412 provides to any given sled may include connectivity both to other sleds in a same rack and sleds in other racks. In the particular non-limiting example depicted in FIG. 4, data center 400 includes four racks 402A to 402D. Racks 402A to 402D house respective pairs of sleds 404A-1 and 404A-2, 404B-1 and 404B-2, 404C-1 and 404C-2, and 404D-1 and 404D-2. Thus, in this example, data center 400 comprises a total of eight sleds.
Via optical fabric 412, each such sled may possess signaling connectivity with each of the seven other sleds in data center 400. For example, via optical fabric 412, sled 404A-1 in rack 402A may possess signaling connectivity with sled 404A-2 in rack 402A, as well as the six other sleds 404B-1, 404B-2, 404C-1, 404C-2, 404D-1, and 404D-2 that are distributed among the other racks 402B, 402C, and 402D of data center 400. The embodiments are not limited to this example.

[0037] FIG. 5 illustrates an overview of a connectivity scheme 500 that may generally be representative of link-layer connectivity that may be established in some embodiments among the various sleds of a data center, such as any of example data centers 100, 300, and 400 of FIGS. 1, 3, and 4. Connectivity scheme 500 may be implemented using an optical fabric that features a dual-mode optical switching infrastructure 514. Dual-mode optical switching infrastructure 514 may generally comprise a switching infrastructure that is capable of receiving communications according to multiple link-layer protocols via a same unified set of optical signaling media, and properly switching such communications. In various embodiments, dual-mode optical switching infrastructure 514 may be implemented using one or more dual-mode optical switches 515. In various embodiments, dual-mode optical switches 515 may generally comprise high-radix switches. In some embodiments, dual-mode optical switches 515 may comprise multi-ply switches, such as four-ply switches. In various embodiments, dual-mode optical switches 515 may feature integrated silicon photonics that enable them to switch communications with significantly reduced latency in comparison to conventional switching devices.
In some embodiments, dual-mode optical switches 515 may constitute leaf switches 530 in a leaf-spine architecture additionally including one or more dual-mode optical spine switches 520.

[0038] In various embodiments, dual-mode optical switches may be capable of receiving both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance computing (HPC) link-layer protocol (e.g., Intel's Omni-Path Architecture, Infiniband) via optical signaling media of an optical fabric. As reflected in FIG. 5, with respect to any particular pair of sleds 504A and 504B possessing optical signaling connectivity to the optical fabric, connectivity scheme 500 may thus provide support for link-layer connectivity via both Ethernet links and HPC links. Thus, both Ethernet and HPC communications can be supported by a single high-bandwidth, low-latency switch fabric. The embodiments are not limited to this example.

[0039] FIG. 6 illustrates a general overview of a rack architecture 600 that may be representative of an architecture of any particular one of the racks depicted in FIGS. 1 to 4 according to some embodiments. As reflected in FIG. 6, rack architecture 600 may generally feature a plurality of sled spaces into which sleds may be inserted, each of which may be robotically accessible via a rack access region 601. In the particular non-limiting example depicted in FIG. 6, rack architecture 600 features five sled spaces 603-1 to 603-5. Sled spaces 603-1 to 603-5 feature respective multi-purpose connector modules (MPCMs) 616-1 to 616-5.

[0040] FIG. 7 illustrates an example of a sled 704 that may be representative of a sled of such a type. As shown in FIG. 7, sled 704 may comprise a set of physical resources 705, as well as an MPCM 716 designed to couple with a counterpart MPCM when sled 704 is inserted into a sled space such as any of sled spaces 603-1 to 603-5 of FIG. 6.
Sled 704 may also feature an expansion connector 717. Expansion connector 717 may generally comprise a socket, slot, or other type of connection element that is capable of accepting one or more types of expansion modules, such as an expansion sled 718. By coupling with a counterpart connector on expansion sled 718, expansion connector 717 may provide physical resources 705 with access to supplemental computing resources 705B residing on expansion sled 718. The embodiments are not limited in this context.

[0041] FIG. 8 illustrates an example of a rack architecture 800 that may be representative of a rack architecture that may be implemented in order to provide support for sleds featuring expansion capabilities, such as sled 704 of FIG. 7. In the particular non-limiting example depicted in FIG. 8, rack architecture 800 includes seven sled spaces 803-1 to 803-7, which feature respective MPCMs 816-1 to 816-7. Sled spaces 803-1 to 803-7 include respective primary regions 803-1A to 803-7A and respective expansion regions 803-1B to 803-7B. With respect to each such sled space, when the corresponding MPCM is coupled with a counterpart MPCM of an inserted sled, the primary region may generally constitute a region of the sled space that physically accommodates the inserted sled. The expansion region may generally constitute a region of the sled space that can physically accommodate an expansion module, such as expansion sled 718 of FIG. 7, in the event that the inserted sled is configured with such a module.

[0042] FIG. 9 illustrates an example of a rack 902 that may be representative of a rack implemented according to rack architecture 800 of FIG. 8 according to some embodiments. In the particular non-limiting example depicted in FIG. 9, rack 902 features seven sled spaces 903-1 to 903-7, which include respective primary regions 903-1A to 903-7A and respective expansion regions 903-1B to 903-7B.
In various embodiments, temperature control in rack 902 may be implemented using an air cooling system. For example, as reflected in FIG. 9, rack 902 may feature a plurality of fans 919 that are generally arranged to provide air cooling within the various sled spaces 903-1 to 903-7. In some embodiments, the height of the sled space is greater than the conventional "1U" server height. In such embodiments, fans 919 may generally comprise relatively slow, large diameter cooling fans as compared to fans used in conventional rack configurations. Running larger diameter cooling fans at lower speeds may increase fan lifetime relative to smaller diameter cooling fans running at higher speeds while still providing the same amount of cooling. The sleds are physically shallower than conventional rack dimensions. Further, components are arranged on each sled to reduce thermal shadowing (i.e., not arranged serially in the direction of air flow). As a result, the wider, shallower sleds allow for an increase in device performance because the devices can be operated at a higher thermal envelope (e.g., 250W) due to improved cooling (i.e., no thermal shadowing, more space between devices, more room for larger heat sinks, etc.).

[0043] MPCMs 916-1 to 916-7 may be configured to provide inserted sleds with access to power sourced by respective power modules 920-1 to 920-7, each of which may draw power from an external power source 921. In various embodiments, external power source 921 may deliver alternating current (AC) power to rack 902, and power modules 920-1 to 920-7 may be configured to convert such AC power to direct current (DC) power to be sourced to inserted sleds. In some embodiments, for example, power modules 920-1 to 920-7 may be configured to convert 277-volt AC power into 12-volt DC power for provision to inserted sleds via respective MPCMs 916-1 to 916-7.
The embodiments are not limited to this example.

[0044] MPCMs 916-1 to 916-7 may also be arranged to provide inserted sleds with optical signaling connectivity to a dual-mode optical switching infrastructure 914, which may be the same as - or similar to - dual-mode optical switching infrastructure 514 of FIG. 5. In various embodiments, optical connectors contained in MPCMs 916-1 to 916-7 may be designed to couple with counterpart optical connectors contained in MPCMs of inserted sleds to provide such sleds with optical signaling connectivity to dual-mode optical switching infrastructure 914 via respective lengths of optical cabling 922-1 to 922-7. In some embodiments, each such length of optical cabling may extend from its corresponding MPCM to an optical interconnect loom 923 that is external to the sled spaces of rack 902. In various embodiments, optical interconnect loom 923 may be arranged to pass through a support post or other type of load-bearing element of rack 902. The embodiments are not limited in this context. Because inserted sleds connect to an optical switching infrastructure via MPCMs, the resources typically spent in manually configuring the rack cabling to accommodate a newly inserted sled can be saved.

[0045] FIG. 10 illustrates an example of a sled 1004 that may be representative of a sled designed for use in conjunction with rack 902 of FIG. 9 according to some embodiments. Sled 1004 may feature an MPCM 1016 that comprises an optical connector 1016A and a power connector 1016B, and that is designed to couple with a counterpart MPCM of a sled space in conjunction with insertion of MPCM 1016 into that sled space. Coupling MPCM 1016 with such a counterpart MPCM may cause power connector 1016B to couple with a power connector comprised in the counterpart MPCM.
This may generally enable physical resources 1005 of sled 1004 to source power from an external source, via power connector 1016B and power transmission media 1024 that conductively couples power connector 1016B to physical resources 1005.

[0046] Sled 1004 may also include dual-mode optical network interface circuitry 1026. Dual-mode optical network interface circuitry 1026 may generally comprise circuitry that is capable of communicating over optical signaling media according to each of multiple link-layer protocols supported by dual-mode optical switching infrastructure 914 of FIG. 9. In some embodiments, dual-mode optical network interface circuitry 1026 may be capable both of Ethernet protocol communications and of communications according to a second, high-performance protocol. In various embodiments, dual-mode optical network interface circuitry 1026 may include one or more optical transceiver modules 1027, each of which may be capable of transmitting and receiving optical signals over each of one or more optical channels. The embodiments are not limited in this context.

[0047] Coupling MPCM 1016 with a counterpart MPCM of a sled space in a given rack may cause optical connector 1016A to couple with an optical connector comprised in the counterpart MPCM. This may generally establish optical connectivity between optical cabling of the sled and dual-mode optical network interface circuitry 1026, via each of a set of optical channels 1025. Dual-mode optical network interface circuitry 1026 may communicate with the physical resources 1005 of sled 1004 via electrical signaling media 1028. In addition to the dimensions of the sleds and arrangement of components on the sleds to provide improved cooling and enable operation at a relatively higher thermal envelope (e.g., 250W), as described above with reference to FIG.
9, in some embodiments, a sled may include one or more additional features to facilitate air cooling, such as a heat pipe and/or heat sinks arranged to dissipate heat generated by physical resources 1005. It is worthy of note that although the example sled 1004 depicted in FIG. 10 does not feature an expansion connector, any given sled that features the design elements of sled 1004 may also feature an expansion connector according to some embodiments. The embodiments are not limited in this context.

[0048] FIG. 11 illustrates an example of a data center 1100 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. As reflected in FIG. 11, a physical infrastructure management framework 1150A may be implemented to facilitate management of a physical infrastructure 1100A of data center 1100. In various embodiments, one function of physical infrastructure management framework 1150A may be to manage automated maintenance functions within data center 1100, such as the use of robotic maintenance equipment to service computing equipment within physical infrastructure 1100A. In some embodiments, physical infrastructure 1100A may feature an advanced telemetry system that performs telemetry reporting that is sufficiently robust to support remote automated management of physical infrastructure 1100A. In various embodiments, telemetry information provided by such an advanced telemetry system may support features such as failure prediction/prevention capabilities and capacity planning capabilities. In some embodiments, physical infrastructure management framework 1150A may also be configured to manage authentication of physical infrastructure components using hardware attestation techniques.
For example, robots may verify the authenticity of components before installation by analyzing information collected from a radio frequency identification (RFID) tag associated with each component to be installed. The embodiments are not limited in this context.[0049] As shown in FIG. 11, the physical infrastructure 1100A of data center 1100 may comprise an optical fabric 1112, which may include a dual-mode optical switching infrastructure 1114. Optical fabric 1112 and dual-mode optical switching infrastructure 1114 may be the same as - or similar to - optical fabric 412 of FIG. 4 and dual-mode optical switching infrastructure 514 of FIG. 5, respectively, and may provide high-bandwidth, low-latency, multi-protocol connectivity among sleds of data center 1100. As discussed above, with reference to FIG. 1, in various embodiments, the availability of such connectivity may make it feasible to disaggregate and dynamically pool resources such as accelerators, memory, and storage. In some embodiments, for example, one or more pooled accelerator sleds 1130 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of accelerator resources - such as co-processors and/or FPGAs, for example - that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114.[0050] In another example, in various embodiments, one or more pooled storage sleds 1132 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of storage resources that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114. In some embodiments, such pooled storage sleds 1132 may comprise pools of solid-state storage devices such as solid-state drives (SSDs).
In various embodiments, one or more high-performance processing sleds 1134 may be included among the physical infrastructure 1100A of data center 1100. In some embodiments, high-performance processing sleds 1134 may comprise pools of high-performance processors, as well as cooling features that enhance air cooling to yield a higher thermal envelope of up to 250W or more. In various embodiments, any given high-performance processing sled 1134 may feature an expansion connector 1117 that can accept a far memory expansion sled, such that the far memory that is locally available to that high-performance processing sled 1134 is disaggregated from the processors and near memory comprised on that sled. In some embodiments, such a high-performance processing sled 1134 may be configured with far memory using an expansion sled that comprises low-latency SSD storage. The optical infrastructure allows for compute resources on one sled to utilize remote accelerator/FPGA, memory, and/or SSD resources that are disaggregated on a sled located on the same rack or any other rack in the data center. The remote resources can be located one switch jump away or two switch jumps away in the spine-leaf network architecture described above with reference to FIG. 5. The embodiments are not limited in this context.[0051] In various embodiments, one or more layers of abstraction may be applied to the physical resources of physical infrastructure 1100A in order to define a virtual infrastructure, such as a software-defined infrastructure 1100B. In some embodiments, virtual computing resources 1136 of software-defined infrastructure 1100B may be allocated to support the provision of cloud services 1140. In various embodiments, particular sets of virtual computing resources 1136 may be grouped for provision to cloud services 1140 in the form of SDI services 1138.
Examples of cloud services 1140 may include - without limitation - software as a service (SaaS) services 1142, platform as a service (PaaS) services 1144, and infrastructure as a service (IaaS) services 1146.[0052] In some embodiments, management of software-defined infrastructure 1100B may be conducted using a virtual infrastructure management framework 1150B. In various embodiments, virtual infrastructure management framework 1150B may be designed to implement workload fingerprinting techniques and/or machine-learning techniques in conjunction with managing allocation of virtual computing resources 1136 and/or SDI services 1138 to cloud services 1140. In some embodiments, virtual infrastructure management framework 1150B may use/consult telemetry data in conjunction with performing such resource allocation. In various embodiments, an application/service management framework 1150C may be implemented in order to provide QoS management capabilities for cloud services 1140. The embodiments are not limited in this context.[0053] Referring now to FIGS. 12-21, in some embodiments, each rack 302 and/or each sled 204 may be grouped into a zone based on a distance from the rack 302 and/or sled 204 to the network switch 1204 to which it is attached. A data cable used to connect each rack 302 and/or sled 204 may have a certain length which is determined by which zone the rack 302 and/or sled 204 is in. It should be appreciated that such a configuration allows for the data cables used in the data center 300 to be manufactured with relatively few different lengths without significantly impacting the latency or cost of a given data cable.[0054] Referring now to FIG. 12, an illustrative data center 300 may arrange the racks 302 into one or more rows 1202, with each rack 302 including several sleds 204 (not shown in FIG. 12) and connected to a network switch 1204.
It should be appreciated that the network switch 1204 may be connected to an intermediate switch at the rack 302 (such as through a top-of-rack switch 1302 as shown below in FIG. 13A) or may be directly connected to the sleds 204 of the rack 302 (as shown below in FIGS. 14 & 15). Only one rack 302 in FIG. 12 is labeled in the interest of clarity, but each solid black rectangle represents a rack 302. The data center 300 may include any number of racks 302 and/or sleds 204. In the illustrative embodiment, the network switch 1204 may be connected to 64 racks 302, each with 16 sleds 204, for a total of 1,024 sleds 204 connected to the same network switch 1204. In other embodiments, the network switch 1204 may be connected to more than, fewer than, or equal to 2, 4, 8, 16, 32, 64, 128, or 256 racks 302, with each rack 302 having more than, fewer than, or equal to 2, 4, 8, 16, 32, or 64 sleds 204.[0055] The racks 302 of FIG. 12 are organized into zones based on a distance from the rack 302 to the network switch 1204. The racks 302 of FIG. 12 are grouped into a first zone 1206, a second zone 1208, a third zone 1210, and a fourth zone 1212. In some embodiments, different sleds 204 in the same rack 302 may be considered to be in different zones. In other embodiments, all of the sleds 204 in the same rack 302 may always be considered to be in the same zone. A data cable of the same length is used to connect the network switch 1204 to each rack 302 and/or sled 204 in the same zone. As shown in FIG. 12, a first length data cable 1214 is used to connect the racks 302 in the first zone 1206, a second length data cable 1216 is used to connect the racks 302 in the second zone 1208, a third length data cable 1218 is used to connect the racks 302 in the third zone 1210, and a fourth length data cable 1220 is used to connect the racks 302 in the fourth zone 1212. Each data cable of the same shade is the same type, but not every data cable in FIG.
12 is labeled in the interest of clarity. Each data cable 1214-1220 may be an electrical cable (e.g., copper cable) or optical cable. It should be appreciated that each data cable of a given type (e.g., each first length data cable 1214) is approximately the same length. As used herein, two data cables are considered to be approximately the same length if the length of the shorter data cable is within 1% of the length of the longer data cable or if the length of the shorter cable is within 5 centimeters of the length of the longer data cable, unless explicitly noted otherwise. The different data cables may be identified by, e.g., writing on the jacket specifying the length of the data cable, a tag connected to the data cable near one or both ends, and/or by using different colors for the jacket based on the cable length.[0056] In the illustrative embodiment, each data cable 1214-1220 is a passive optical cable (i.e., an optical cable that does not include electrical-to-optical and/or optical-to-electrical transceivers at one or both ends) and the network switch 1204 may employ silicon photonics (including silicon photonics integrated with silicon electronics on a single chip) to receive and generate optical signals to and from electrical signals for internal processing and routing, and may employ optical multiplexers, photodiodes, and other silicon photonics components. Once converted to an electrical signal, the illustrative network switch 1204 may determine the destination of the received signal using standard routing techniques.
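The "approximately the same length" criterion defined above (shorter cable within 1% of the longer, or within 5 centimeters of it) can be expressed as a simple predicate. The following sketch is illustrative only; the function name is an assumption and is not part of the specification:

```python
def approximately_same_length(a_m: float, b_m: float) -> bool:
    """True if two data cables are 'approximately the same length':
    the shorter is within 1% of the longer, or within 5 cm of it."""
    shorter, longer = sorted((a_m, b_m))
    diff = longer - shorter
    return diff <= 0.01 * longer or diff <= 0.05  # lengths in meters
```

For example, a 10.00 m cable and a 10.09 m cable qualify under the 1% rule, while a 5.0 m and a 5.5 m cable do not qualify under either rule.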
In some embodiments, the data cables 1214-1220 may be electrical cables or active optical cables, and the network switch 1204 may employ an all-electrical signal processing and routing approach.[0057] In the illustrative embodiment, each rack 302 and/or sled 204 may be defined to be part of one of the various zones 1206-1212 based on being a distance away from the network switch 1204 that is above a minimum threshold distance and/or below a maximum threshold distance. For example, in the illustrative embodiment, every rack 302 in the first zone 1206 is less than a threshold distance of 5 meters from the network switch 1204, every rack 302 in the second zone 1208 is more than a threshold distance of 5 meters but less than a threshold distance of 10 meters from the network switch 1204, every rack 302 in the third zone 1210 is more than a threshold distance of 10 meters but less than a threshold distance of 15 meters from the network switch 1204, and every rack 302 in the fourth zone 1212 is more than a threshold distance of 15 meters from the network switch 1204. Of course, different values may apply in different embodiments for a minimum or maximum threshold distance for any zone, such as any length between 1 and 50 meters. In the illustrative embodiments, the maximum threshold for one zone is the same as the minimum threshold for the next zone, so that ranges of data cable lengths associated with the various zones do not overlap and have no gaps between them. In some embodiments, the thresholds for the various zones may be such that the ranges of data cable lengths associated with the various zones overlap and/or have gaps between them. In the illustrative embodiment, the length of the data cable used to connect to all of the racks 302 in a given zone is at least the length of the threshold distance that defines the maximum extent of that zone. For example, the length of the first length data cable 1214 is at least 5 meters.
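The illustrative 5/10/15-meter zone thresholds above can be sketched as a simple lookup. The zone reference numerals 1206-1212 are taken from FIG. 12; the table and function names are assumptions for illustration, not part of the specification:

```python
# (maximum distance in meters, zone reference numeral), in increasing order
ZONE_BOUNDARIES_M = ((5.0, 1206), (10.0, 1208), (15.0, 1210))

def zone_for_rack(distance_to_switch_m: float) -> int:
    """Map a rack's distance from the network switch 1204 to a zone."""
    for max_distance_m, zone in ZONE_BOUNDARIES_M:
        if distance_to_switch_m < max_distance_m:
            return zone
    return 1212  # fourth zone: more than 15 meters from the network switch
```

Under these illustrative thresholds a rack 3 m from the switch falls in the first zone 1206, while a rack 12 m away falls in the third zone 1210.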
Of course, the length of the data cables may be longer than the threshold distance defining the maximum extent of the zone, since the data cables may not be routed directly to the racks 302 and/or sleds 204 in a straight line, the data cables may need to travel vertically (i.e., up or down) at some point, and may otherwise need to be somewhat longer than the distance between the network switch 1204 and the rack 302 in a given zone that is farthest away. In some embodiments, a zone of a given rack 302 may be determined based on the shortest data cable (e.g., the shortest of the data cables 1214-1220) that can be used to connect the rack 302 to the network switch 1204, subject to any restrictions in how the data cables should be routed or organized in the data center 300. It should be appreciated that, in some embodiments, there may not be any indication that a rack 302 is part of a particular zone, other than which data cable is used to connect to that rack 302 (e.g., which of data cables 1214-1220).[0058] It should be appreciated that, in some embodiments, the maximum length of the data cables 1214-1220 may depend on the particular type of data cable. For example, a particular type of high-bandwidth electrical data cable (e.g., a cable capable of carrying a 10 GHz signal) may have a maximum length of 10 meters due to signal loss, while a passive optical cable capable of carrying a signal with a similar digital bandwidth may have a maximum length of several hundred meters or longer.[0059] In the illustrative embodiment, each zone has a large number of racks 302 in it, such as 256 racks 302. 
In some embodiments, each zone may have any number of racks 302 in it, such as any number from 1-1000, or more than, fewer than, or equal to 2, 5, 10, 20, 50, 100, 200, 500 or 1000 racks 302, and the number of racks 302 in any given zone may be the same as or different from the number of racks 302 in other zones.[0060] In the illustrative embodiment, each data cable 1214-1220 runs from the network switch 1204 to a corresponding rack 302 and/or sled 204 as a separate data cable from any other data cable 1214-1220. In some embodiments, the data cables 1214-1220 that connect to the sleds 204 in the same rack 302 and/or to racks 302 in the same zone may be bundled together (such as with a jacket) at some point.[0061] In the illustrative embodiment, the switching latency of the network switch 1204 is substantially the same for a signal sent from any sled 204 to any other sled 204 (i.e., the time between when the signal reaches the network switch 1204 from the source sled 204 and when the signal leaves the network switch 1204 to the destination sled 204 is substantially the same, regardless of the source sled 204 and destination sled 204). The switching latency may be any value capable of reaching the required performance levels, such as more than, less than, or equal to 100 ns, 200 ns, 500 ns, 750 ns, 1,000 ns, 1,500 ns, or 2,000 ns. Of course, the overall latency for communication between any two sleds 204 may depend on the length of the particular data cables 1214-1220 used. In the illustrative embodiment, the latency in communicating between any two sleds 204 is less than 1,000 ns, even for the longest length data cable.[0062] It should be appreciated that, in some embodiments, the data center 300 may include sleds 204 and racks 302 that are not directly connected to the network switch 1204. For example, the data center 300 may include several network switches 1204, with each network switch 1204 connected to a large number of sleds 204.
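The sled-to-sled latency budget described in paragraph [0061] (switching latency plus propagation over each data cable) can be estimated as follows. The ~5 ns/m figure for propagation in optical fiber and the 500 ns switch latency are assumptions drawn for illustration; they are not values fixed by the specification:

```python
FIBER_DELAY_NS_PER_M = 5.0  # ~5 ns/m in standard optical fiber (assumption)
SWITCH_LATENCY_NS = 500.0   # one of the example switching latencies above

def sled_to_sled_latency_ns(src_cable_m: float, dst_cable_m: float,
                            switch_ns: float = SWITCH_LATENCY_NS) -> float:
    """Propagation over both data cables plus one switch traversal."""
    return (src_cable_m + dst_cable_m) * FIBER_DELAY_NS_PER_M + switch_ns
```

For two sleds in the fourth zone connected by 15 m cables, this estimate gives 30 * 5 + 500 = 650 ns, consistent with the illustrative sub-1,000 ns budget even for the longest cables.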
The sleds 204 connected directly to the same network switch 1204 may be grouped together as a unit called a pod. The data center 300 may include any number of pods, such as any number from 1-1000 or more than, fewer than, or equal to 1, 2, 5, 10, 20, 50, 100, 200, 500 or 1,000 pods. The data center 300 may also include additional computational resources organized in a different manner from the pods described above. Of course, the various network switches 1204 of the data center 300 may all be connected to each other, allowing for communication between a first sled 204 connected to a first network switch 1204 and a second sled connected to a second network switch 1204 (although such communication may have a higher latency and/or lower bandwidth than communication between sleds 204 connected to the same network switch 1204).[0063] In the illustrative embodiments, certain physical resources may be preferentially or exclusively placed in certain zones. For example, storage sleds may be preferentially or exclusively placed in the first zone 1206 in order to reduce latency to those storage sleds. The data center 300 may be arranged such that a certain type of physical resource (e.g., compute sleds, memory sleds, storage sleds, accelerator sleds, or other types of sled) outnumbers another type of physical resource (e.g., compute sleds, memory sleds, storage sleds, accelerator sleds, or other type of sled) in a given zone by a certain ratio, such as 3:2, 2:1, 5:1, 10:1, or 20:1. The data center 300 may also be arranged such that a given zone (such as the zone closest to the network switch 1204) may be composed of a certain portion of a certain type of physical resource (e.g., compute sleds, memory sleds, storage sleds, accelerator sleds, or other type of sled), such as more than, less than, or equal to 5%, 10%, 25%, 35%, 50%, 65%, 75%, 90%, 95%, or any other portion from 0-100%.
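The zone composition described in paragraph [0063] (the portion of a zone made up of a given sled type) can be checked with a short helper. The sled-type labels and function name are illustrative assumptions:

```python
from collections import Counter

def zone_composition(sled_types):
    """Fraction of each sled type within one zone, e.g. to verify that
    storage sleds make up a target portion of the zone nearest the switch."""
    counts = Counter(sled_types)
    total = len(sled_types)
    return {sled_type: n / total for sled_type, n in counts.items()}
```

For a zone holding three storage sleds and one compute sled, this reports a 75%/25% split, i.e. a 3:1 ratio of storage to compute sleds.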
In some embodiments, most or all of the sleds of a given type (e.g., compute sleds, memory sleds, storage sleds, accelerator sleds, or other sleds) may be placed in the same zone.[0064] Referring now to FIGS. 13A and 13B (which illustrate the same embodiment from a front-facing and top-down view, respectively), an illustrative rack 302 of the data center 300 includes a top-of-rack switch 1302 to which the network switch 1204 is connected, such as with a first length data cable 1214. The rack 302 includes two support posts 1304 and several support arms 1306. It should be appreciated that, in some embodiments, a support post 1304 may support more than one rack 302 by being the left support post 1304 for one rack 302 and the right support post 1304 for the adjacent rack 302. Each pair of support arms 1306 that are the same distance from the ground form a sled space between them, into which a sled 204 may be inserted, but no sled 204 is shown in FIGS. 13A and 13B for the purpose of clarity.[0065] A data cable 1308 runs from the top-of-rack switch 1302 to each sled space and ends with a connector 1310 which can mate with a corresponding component on the sled 204 to connect the sled 204 to the top-of-rack switch 1302. In the illustrative embodiment of FIG. 13, each data cable 1308 is an electrical cable and the first length data cable 1214 is an optical cable, but it should be appreciated that the data cable 1308 and/or the first length data cable 1214 may be an electrical cable or optical cable. As shown in the illustrative embodiment of FIGS. 13A and 13B, the data cables 1308 may run alongside the support post 1304. In some embodiments, the support posts 1304 may be hollow, allowing the data cables 1308 to be run inside of the support posts 1304.
In the illustrative embodiments, each data cable 1308 that runs from the top-of-rack switch 1302 to a connector 1310 is a separate data cable, but, in some embodiments, some or all of the data cables 1308 running to different connectors 1310 in the same rack 302 may be bundled together in some way. Of course, not every component of the rack 302 is shown in FIGS. 13A and 13B, and the rack 302 may include additional elements such as mechanical support for the connectors 1310, power supplies, additional cables, etc.[0066] Referring now to FIG. 14, an illustrative rack 302 of the data center 300 includes two support posts 1304 and several support arms 1306, as in FIGS. 13A and 13B. FIG. 14 is a front-facing view, similar to FIG. 13A. The embodiment of the rack 302 shown in FIG. 14 does not include a top-of-rack switch, so the data cables coming from the network switch 1204 (such as a first length data cable 1214) are connected to the rack 302 by running directly to the sleds 204. In the embodiment shown in FIG. 14, each sled 204 in the same rack 302 is considered to be in the same zone and has the same length data cable running to the network switch 1204 (e.g., the first length data cable 1214). The data cables 1214 running from the various sleds 204 from a single rack 302 to the same network switch 1204 may or may not be bundled together in some manner.[0067] Referring now to FIG. 15, an illustrative rack 302 of the data center 300 includes two support posts 1304 and several support arms 1306, as in FIGS. 13A and 13B. FIG. 15 is a front-facing view, similar to FIG. 13A. The embodiment of the rack 302 shown in FIG. 15, like in FIG. 14, does not include a top-of-rack switch, so the data cables coming from the network switch 1204 (such as a first length data cable 1214 or a second length data cable 1216) may run directly to the sleds 204. In the embodiment shown in FIG.
15, different sleds 204 in the same rack 302 may be in different zones, and so some of the lower sleds 204 may be connected by a longer data cable (e.g., a second length data cable 1216) as compared to the data cables used for some of the higher sleds 204 (e.g., a first length data cable 1214). In the illustrative embodiment, all of the first length data cables 1214 and all of the second length data cables 1216 are separate, independent cables. In some embodiments, some or all of the first length data cables 1214 and the second length data cables 1216 running to the same rack 302 may be bundled together, such as all of the first length data cables 1214 running to connectors 1310 in the same rack 302 and all of the second length data cables 1216 running to connectors 1310 in the same rack 302.[0068] Referring now to FIG. 16, an illustrative data center 300 may include several pods 1602 connected together through one or more spine switches 1604. As described above, a pod 1602 includes all of the racks 302 and sleds 204 that are connected together with a single network switch 1204 (with or without intermediate top-of-rack switches 1302 as shown in FIG. 13 and FIGS. 14 & 15, respectively). Each pod 1602 includes several rows 1202 of racks 302. Only one row 1202 is labeled in FIG. 16 in the interest of clarity, but each unlabeled solid black rectangle represents a row 1202. Similar to how racks 302 are grouped into zones in FIG. 12, the pods 1602 in FIG. 16 are grouped into zones based on a distance from the network switch 1204 to the spine switch 1604.[0069] The pods 1602 of FIG. 16 are grouped into a first zone 1606, a second zone 1608, and a third zone 1610. A data cable of the same length is used to connect the spine switch 1604 to each network switch 1204 in the same zone. As shown in FIG.
16, a first length data cable 1612 is used to connect the pods 1602 in the first zone 1606, a second length data cable 1614 is used to connect the pods 1602 in the second zone 1608, and a third length data cable 1616 is used to connect the pods 1602 in the third zone 1610. Each data cable of the same shade is the same type, but not every data cable in FIG. 16 is labeled in the interest of clarity. It should be appreciated that each data cable of a given type (e.g., each first length data cable 1612) is approximately the same length. Each data cable 1612-1616 may be an electrical cable (e.g., copper cable) or optical cable. In the illustrative embodiment, each data cable 1612-1616 is a passive optical cable. The spine switch 1604 may use a similar switching technology as the network switch 1204 (i.e., the spine switch 1604 may employ silicon photonics to interface with optical signals or may use all-electrical signal processing and routing).[0070] In the illustrative embodiment, each pod 1602 may be defined to be part of one of the various zones 1606-1610 based on the corresponding network switch 1204 being a distance away from the spine switch 1604 that is above a minimum threshold distance and/or below a maximum threshold distance. For example, in the illustrative embodiment, every network switch 1204 in the first zone 1606 is less than a threshold distance of 20 meters from the spine switch 1604, every network switch 1204 in the second zone 1608 is more than a threshold distance of 20 meters but less than a threshold distance of 60 meters from the spine switch 1604, and every network switch 1204 in the third zone 1610 is more than a threshold distance of 60 meters but less than a threshold distance of 100 meters from the spine switch 1604.[0071] Of course, different values may apply in different embodiments for a minimum or maximum threshold distance for any zone, such as any length between 1 and 500 meters. 
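The pod-level zones above imply stocking a small set of cable lengths (20/60/100 m in the illustrative embodiment) and picking the shortest stocked cable that covers a given spine-to-switch run. The stocked lengths mirror the illustrative thresholds; the function name is an assumption for illustration:

```python
POD_CABLE_LENGTHS_M = (20.0, 60.0, 100.0)  # first/second/third length cables

def shortest_usable_cable_m(run_length_m, stocked=POD_CABLE_LENGTHS_M):
    """Shortest stocked cable that can span the run from the spine switch
    1604 to a network switch 1204; raises if no stocked length suffices."""
    for length_m in sorted(stocked):
        if length_m >= run_length_m:
            return length_m
    raise ValueError("no stocked cable is long enough for this run")
```

A 45 m run would therefore use a second length (60 m) cable, which also determines the zone the corresponding pod is considered to be in.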
In the illustrative embodiments, the maximum threshold for one zone is the same as the minimum threshold for the next zone, so that ranges of data cable lengths associated with the various zones do not overlap and have no gaps between them. In some embodiments, the thresholds for the various zones may be such that the ranges of data cable lengths associated with the various zones overlap and/or have gaps between them. In the illustrative embodiment, the length of the data cable used to connect to all of the network switches 1204 in a given zone is at least the length of the threshold distance that defines the maximum extent of that zone. For example, the length of the first length data cable 1612 is at least 20 meters. Of course, the length of the data cables may be longer than the threshold distance defining the maximum extent of the zone, since the data cables may not be routed directly to the network switches 1204 in a straight line, the data cables may need to travel vertically (i.e., up or down) at some point, and may otherwise need to be somewhat longer than the distance between the spine switch 1604 and the network switch 1204 in a given zone that is farthest away. In some embodiments, a zone of a given network switch 1204 may be determined based on the shortest data cable (e.g., the shortest of the data cables 1612-1616) that can be used to connect the network switch 1204 to the spine switch 1604, subject to any restrictions in how the data cables should be routed or organized in the data center 300. It should be appreciated that, in some embodiments, there may not be any indication that a pod 1602 is part of a particular zone, other than which data cable is used to connect to the corresponding network switch 1204 (e.g., which of data cables 1612-1616).[0072] It should be appreciated that, in some embodiments, the maximum length of the data cables 1612-1616 may depend on the particular type of data cable.
For example, a particular type of high-bandwidth electrical data cable (e.g., a cable capable of carrying a 10 GHz signal) may have a maximum length of 10 meters due to signal loss, while a passive optical cable capable of carrying a signal with a similar digital bandwidth may have a maximum length of several hundred meters or longer. [0073] As shown in FIG. 17, an illustrative system 1710 for orchestrating workloads assigned in a data center includes an orchestrator server 1740 in communication with a set of managed nodes 1760. In the illustrative embodiment, the set of managed nodes 1760 includes managed nodes 1750, 1752, and 1754. While three managed nodes 1760 are shown for simplicity, it should be understood that, in the illustrative embodiment, the set includes many more managed nodes 1760 (e.g., tens of thousands of managed nodes 1760). The system 1710 may be located in a data center 300 and provide storage and compute services (e.g., cloud services) to a client device 1720 that is in communication with the system 1710 through a network 1730. The orchestrator server 1740 may support a cloud operating environment, such as OpenStack, and the managed nodes 1760 may execute one or more applications or processes (i.e., workloads), such as in virtual machines or containers, on behalf of a user of the client device 1720. As discussed in more detail herein, the orchestrator server 1740, in operation, is configured to receive availability data from each managed node 1760. The availability data may be embodied as any data indicative of the ability of the corresponding managed node to receive and execute a workload in addition to any workloads the managed node 1760 is presently executing. After receiving the availability data, which is generated by the managed nodes 1760, the orchestrator server 1740 performs analytics to determine how to assign or reassign workloads among the managed nodes 1760 that reported themselves as being available in the availability data.
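The availability-based filtering just described, in which assignment analytics consider only nodes that reported themselves available, can be sketched as follows. The node record fields and the least-utilization selection metric are illustrative assumptions; the specification does not prescribe a particular analytics method:

```python
def assign_workload(managed_nodes):
    """Restrict assignment analytics to nodes that reported availability;
    here the 'analytics' is a simple least-utilization pick (assumption)."""
    available = [n for n in managed_nodes if n["available"]]
    if not available:
        return None  # no node reported itself available
    return min(available, key=lambda n: n["utilization"])["node_id"]
```

With three nodes of which only two are available, the unavailable node is never considered, so the analytics run over a smaller candidate set, which is the efficiency gain described here.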
As such, in the illustrative embodiment, the orchestrator server 1740 focuses the data analytics for determining workload assignments and reassignments to the limited set of available managed nodes 1760, thereby enabling the orchestrator server 1740 to operate more efficiently.[0074] Referring now to FIG. 18, the orchestrator server 1740 may be embodied as any type of compute device capable of performing the functions described herein, including issuing a request to have cloud services performed, receiving results of the cloud services, assigning workloads to managed nodes 1760, analyzing telemetry data indicative of performance and conditions (e.g., resource utilization, one or more temperatures, fan speeds, etc.) as the workloads are executed, and adjusting the assignments of the workloads to increase resource utilization as the workloads are performed. For example, the orchestrator server 1740 may be embodied as a computer, a distributed computing system, one or more sleds (e.g., the sleds 204-1, 204-2, 204-3, 204-4, etc.), a server (e.g., stand-alone, rack-mounted, blade, etc.), a multiprocessor system, a network appliance (e.g., physical or virtual), a desktop computer, a workstation, a laptop computer, a notebook computer, or a processor-based system. As shown in FIG. 18, the illustrative orchestrator server 1740 includes a central processing unit (CPU) 1802, a main memory 1804, an input/output (I/O) subsystem 1806, communication circuitry 1808, and one or more data storage devices 1812. Of course, in other embodiments, the orchestrator server 1740 may include other or additional components, such as those commonly found in a computer (e.g., display, peripheral devices, etc.). Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component.
For example, in some embodiments, the main memory 1804, or portions thereof, may be incorporated in the CPU 1802.[0075] The CPU 1802 may be embodied as any type of processor capable of performing the functions described herein. The CPU 1802 may be embodied as a single or multi-core processor(s), a microcontroller, or other processor or processing/controlling circuit. In some embodiments, the CPU 1802 may be embodied as, include, or be coupled to a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. Similarly, the main memory 1804 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or nonvolatile memory or data storage capable of performing the functions described herein. In some embodiments, all or a portion of the main memory 1804 may be integrated into the CPU 1802. In operation, the main memory 1804 may store various software and data used during operation such as availability data, telemetry data, policy data, workload labels, workload classifications, workload adjustment data, operating systems, applications, programs, libraries, and drivers.[0076] The I/O subsystem 1806 may be embodied as circuitry and/or components to facilitate input/output operations with the CPU 1802, the main memory 1804, and other components of the orchestrator server 1740. For example, the I/O subsystem 1806 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations.
In some embodiments, the I/O subsystem 1806 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the CPU 1802, the main memory 1804, and other components of the orchestrator server 1740, on a single integrated circuit chip.[0077] The communication circuitry 1808 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over the network 1730 between the orchestrator server 1740 and another compute device (e.g., the client device 1720 and/or the managed nodes 1760). The communication circuitry 1808 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.[0078] The illustrative communication circuitry 1808 includes a network interface controller (NIC) 1810, which may also be referred to as a host fabric interface (HFI). The NIC 1810 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the orchestrator server 1740 to connect with another compute device (e.g., a managed node 1760 or the client device 1720). In some embodiments, the NIC 1810 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 1810 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 1810. In such embodiments, the local processor of the NIC 1810 may be capable of performing one or more of the functions of the CPU 1802 described herein. 
Additionally or alternatively, in such embodiments, the local memory of the NIC 1810 may be integrated into one or more components of the orchestrator server 1740 at the board level, socket level, chip level, and/or other levels.[0079] The one or more illustrative data storage devices 1812 may be embodied as any type of devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Each data storage device 1812 may include a system partition that stores data and firmware code for the data storage device 1812. Each data storage device 1812 may also include an operating system partition that stores data files and executables for an operating system.[0080] Additionally, the orchestrator server 1740 may include a display 1814. The display 1814 may be embodied as, or otherwise use, any suitable display technology including, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, a plasma display, and/or other display usable in a compute device. The display 1814 may include a touchscreen sensor that uses any suitable touchscreen input technology to detect the user's tactile selection of information displayed on the display including, but not limited to, resistive touchscreen sensors, capacitive touchscreen sensors, surface acoustic wave (SAW) touchscreen sensors, infrared touchscreen sensors, optical imaging touchscreen sensors, acoustic touchscreen sensors, and/or other types of touchscreen sensors.[0081] Additionally or alternatively, the orchestrator server 1740 may include one or more peripheral devices 1816. 
Such peripheral devices 1816 may include any type of peripheral device commonly found in a compute device such as speakers, a mouse, a keyboard, and/or other input/output devices, interface devices, and/or other peripheral devices.[0082] The client device 1720 may have components similar to those described in FIG. 18. The description of those components of the orchestrator server 1740 is equally applicable to the description of components of the client device 1720 and is not repeated herein for clarity of the description. In the illustrative embodiment, each of the managed nodes 1760 may be embodied as a sled 204 in a rack 304 of the data center 300. In other embodiments, each of the managed nodes 1760 may have components similar to those described in FIG. 18, like the client device 1720. Further, it should be appreciated that any of the client device 1720 and the managed nodes 1760 may include other components, sub-components, and devices commonly found in a computing device, which are not discussed above in reference to the orchestrator server 1740 and not discussed herein for clarity of the description.[0083] As described above, the client device 1720, the orchestrator server 1740, and the managed nodes 1760 are illustratively in communication via the network 1730, which may be embodied as any type of wired or wireless communication network, including global networks (e.g., the Internet), local area networks (LANs) or wide area networks (WANs), cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), or any combination thereof.[0084] Referring now to FIG. 19, in the illustrative embodiment, the orchestrator server 1740 may establish an environment 1900 during operation. 
The illustrative environment 1900 includes a network communicator 1902, a telemetry monitor 1904, a policy manager 1940, and a resource manager 1906. Each of the components of the environment 1900 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment 1900 may be embodied as circuitry or a collection of electrical devices (e.g., network communicator circuitry 1902, telemetry monitor circuitry 1904, resource manager circuitry 1906, etc.). It should be appreciated that, in such embodiments, one or more of the network communicator circuitry 1902, telemetry monitor circuitry 1904, or resource manager circuitry 1906 may form a portion of one or more of the CPU 1802, the main memory 1804, the I/O subsystem 1806, and/or other components of the orchestrator server 1740.[0085] In the illustrative environment 1900, the network communicator 1902, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the orchestrator server 1740, respectively. To do so, the network communicator 1902 is configured to receive and process data packets from one system or computing device (e.g., the client device 1720) and to prepare and send data packets to another computing device or system (e.g., the managed nodes 1760). 
Accordingly, in some embodiments, at least a portion of the functionality of the network communicator 1902 may be performed by the communication circuitry 1808, and, in the illustrative embodiment, by the NIC 1810.[0086] The telemetry monitor 1904, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to collect status data (e.g., telemetry data 1902 and managed node availability data 1917) from the managed nodes 1760 as the managed nodes 1760 execute the workloads assigned to them. The telemetry monitor 1904 may actively poll each of the managed nodes 1760 for updated status data on an ongoing basis or may passively receive the status data from the managed nodes 1760, such as by listening on a particular network port for updated status data. The telemetry monitor 1904 may further parse and categorize the status data, such as by separating the status data into an individual file or data set for each managed node 1760.[0087] The resource manager 1906, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is configured to generate data analytics from the telemetry data 1902, identify the workloads, classify the workloads, identify trends in the resource utilization of the workloads, predict future resource utilizations of the workloads, and adjust the assignments of the workloads to the managed nodes 1760 and the settings of the managed nodes 1760 to increase the resource utilization. The resource manager 1906 includes a network utilization monitor 1908, which may identify workloads that would benefit from low-latency communication between various sleds 204, and move those identified workloads to low-latency sleds 204 (i.e., to sleds in zones close to the network switch 1202). 
The resource manager 1906 may also include a virtual machine creator 1910, which is configured to receive requests for creation of virtual machines (such as from a client device 1720 through the network communicator 1902) which may specify whether or not a low-latency virtual machine is required. The virtual machine creator 1910 may then create a virtual machine using physical resources (e.g., sleds 204 and managed nodes 1760) with a latency corresponding to the request.[0088] Referring now to FIG. 20, in use, the orchestrator server 1740 may execute a method 2000 for creating a virtual server. As described above, each zone has a different cable length associated with it and, as such, may have a different latency. The orchestrator server 1740 may create low-latency virtual servers by assigning resources of racks located in zones which use short cables, such as the first zone 1206. The method 2000 begins in block 2002, in which the orchestrator server 1740 receives a request to create a virtual server, which may be a request to create a low-latency virtual server. If the received request is a request for a low-latency virtual server, the method 2000 in block 2004 proceeds to block 2006. In block 2006, the orchestrator server 1740 creates a low-latency virtual server by creating a virtual server with one or more physical resources in a low-latency zone, such as the first zone 1206 shown in FIG. 12. The orchestrator server 1740 may create the virtual server with a low-latency compute sled in block 2008, with a low-latency storage sled in block 2010, with a low-latency memory sled in block 2012, and/or with another low-latency sled in block 2014, such as an accelerator sled.[0089] Referring back to block 2004, if the received request is not a request for a low-latency virtual server, the method 2000 in block 2004 proceeds to block 2016. In block 2016, the orchestrator server 1740 creates a standard virtual server. 
The orchestrator server 1740 may create a standard virtual server by creating a virtual server without regard to the latency of the sleds 204 composing the virtual server in block 2018, or may create a high-latency virtual server with sleds 204 that have a high latency.[0090] Referring now to FIG. 21, in use, the orchestrator server 1740 may execute a method 2100 for managing workloads of the data center 300. The method 2100 begins in block 2102, in which the orchestrator server 1740 receives telemetry data from the managed nodes as workloads are performed. The orchestrator server 1740 may receive network utilization data in block 2104 and, in block 2106, may analyze the workloads to determine which workloads have a high network utilization.[0091] In block 2108, the orchestrator server 1740 may transfer workloads with high network utilization to low-latency sleds, such as by transferring workloads to a low-latency compute sled in block 2110, to a low-latency storage sled in block 2112, to a low-latency memory sled in block 2114, and/or to another low-latency sled in block 2116, such as an accelerator sled. It should be appreciated that, as part of transferring workloads with high network utilization to low-latency sleds, the orchestrator server 1740 may also similarly transfer workloads with low network utilization to high-latency sleds.EXAMPLES[0092] Illustrative examples of the devices, systems, and methods disclosed herein are provided below. An embodiment of the devices, systems, and methods may include any one or more, and any combination of, the examples described below. 
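The decision flow of methods 2000 and 2100 described above can be sketched in Python; the `Sled` record, the zone numbering, and the first-fit selection policy are illustrative assumptions for this sketch, not part of the specification.

```python
# Hypothetical sketch of methods 2000/2100; all names and policies are
# illustrative, not taken from the specification.
from dataclasses import dataclass

@dataclass
class Sled:
    sled_id: int
    kind: str   # e.g. "compute", "storage", "memory", "accelerator"
    zone: int   # zone 1 uses the shortest cables, hence the lowest latency

LOW_LATENCY_ZONE = 1

def create_virtual_server(sleds, kinds, low_latency):
    """Method 2000 sketch: assemble a virtual server from one sled of each
    requested kind. A low-latency request draws only from the low-latency
    zone (blocks 2006-2014); otherwise any matching sled is acceptable
    (block 2016)."""
    chosen = []
    for kind in kinds:
        pool = [s for s in sleds
                if s.kind == kind
                and (not low_latency or s.zone == LOW_LATENCY_ZONE)]
        if not pool:
            raise RuntimeError(f"no available {kind} sled")
        chosen.append(pool[0])  # first-fit; a real orchestrator would score
    return chosen

def high_network_workloads(utilization, threshold):
    """Method 2100 sketch: from telemetry-derived network utilization
    (block 2104), select the workloads that are candidates for migration
    to low-latency sleds (block 2108)."""
    return [w for w, u in sorted(utilization.items()) if u > threshold]
```

A request for a low-latency virtual server would then reduce to `create_virtual_server(sleds, ["compute", "storage"], low_latency=True)`, which only ever returns zone-1 sleds.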
[0093] Example 1 includes a data center comprising a network switch; a plurality of racks, wherein each rack of the plurality of racks is located in a corresponding zone of a plurality of zones, wherein each zone of the plurality of zones is associated with a minimum threshold distance and a maximum threshold distance that define a distance range, wherein no two distance ranges of the plurality of zones overlap with each other, and wherein each rack of the plurality of racks is defined as located in a zone of the plurality of zones if the distance from the network switch to the corresponding rack is above the minimum threshold distance associated with the corresponding zone and below a maximum threshold distance associated with the corresponding zone; and a plurality of data cables, wherein each rack of the plurality of racks is connected to the network switch with one of the data cables of the plurality of data cables and wherein each data cable of the plurality of data cables that is connected to a corresponding rack located in the same zone has approximately the same length.[0094] Example 2 includes the subject matter of Example 1, and wherein each rack of the plurality of racks comprises a plurality of sleds, wherein each data cable of the plurality of data cables is connected directly to a sled of the plurality of sleds of a rack of the plurality of racks.[0095] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein each data cable of the plurality of data cables is a passive optical cable.[0096] Example 4 includes the subject matter of any of Examples 1-3, and further including at least 256 sleds, wherein each of the at least 256 sleds is included in a rack of the plurality of racks.[0097] Example 5 includes the subject matter of any of Examples 1-4, and further including at least 1,024 sleds, wherein each of the at least 1,024 sleds is included in a rack of the plurality of racks.[0098] Example 6 includes the subject matter of any of 
Examples 1-5, and wherein each data cable of the plurality of data cables is connected to a top-of-rack switch of a rack of the plurality of racks.[0099] Example 7 includes the subject matter of any of Examples 1-6, and wherein each data cable of the plurality of data cables is the same color as each other data cable of the plurality of data cables that is approximately the same length as the corresponding data cable and is a different color from each other data cable of the plurality of data cables that is not approximately the same length as the corresponding data cable.[00100] Example 8 includes the subject matter of any of Examples 1-7, and further including at least 256 sleds, wherein each of the 256 sleds is included in a rack of the plurality of racks, wherein each rack connected to the network switch is in a zone of the plurality of zones, wherein the plurality of zones comprises at most 4 zones.[00101] Example 9 includes the subject matter of any of Examples 1-8, and further including a plurality of sleds, wherein each sled of the plurality of sleds is included in a rack of the plurality of racks, wherein at least half of the sleds in the zone closest to the network switch are storage sleds.[00102] Example 10 includes a data center comprising a spine switch; a plurality of pods, each pod of the plurality of pods comprising a plurality of racks and a network switch, wherein each rack of a plurality of racks of a pod of the plurality of pods is connected to the corresponding network switch and wherein each pod of the plurality of pods is located in a corresponding zone of a plurality of zones, wherein each zone of the plurality of zones is associated with a minimum threshold distance and a maximum threshold distance that define a distance range, wherein no two distance ranges of the plurality of zones overlap with each other, and wherein each pod of the plurality of pods is defined as located in a zone of the plurality of zones if the distance from the spine 
switch to the corresponding network switch is above the minimum threshold distance associated with the corresponding zone and below a maximum threshold distance associated with the corresponding zone; and a plurality of data cables, wherein each network switch of the plurality of pods is connected to the spine switch with one of the data cables of the plurality of data cables and wherein each data cable of the plurality of data cables that is connected to a corresponding network switch located in the same zone has approximately the same length.[00103] Example 11 includes the subject matter of Example 10, and wherein each data cable of the plurality of data cables is a passive optical cable.[00104] Example 12 includes the subject matter of any of Examples 10 and 11, and wherein each data cable of the plurality of data cables is the same color as each other data cable of the plurality of data cables that is approximately the same length as the corresponding data cable and is a different color from each other data cable of the plurality of data cables that is not approximately the same length as the corresponding data cable.[00105] Example 13 includes the subject matter of any of Examples 10-12, and wherein the plurality of pods comprises at least 32 pods, wherein each network switch of a pod of the plurality of pods connected to the spine switch is in a zone of the plurality of zones, wherein the plurality of zones comprises at most 4 zones.[00106] Example 14 includes an orchestrator server for managing resources of a data center, the orchestrator server comprising one or more processors; one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, causes the orchestrator server to receive a request for creation of a low-latency virtual machine; and select, in response to the request for creation of the low-latency virtual machine, one or more sleds of the data center in a low-latency zone, wherein 
each sled in the low-latency zone is connected to the same network switch via a corresponding data cable of a plurality of data cables, wherein each data cable of the plurality of data cables has a length shorter than or approximately equal to the length of each other data cable of the plurality of data cables connected to the same network switch; and create, in response to the request for creation of the low-latency virtual machine, the low-latency virtual machine with use of the one or more sleds.[00107] Example 15 includes the subject matter of Example 14, and wherein the one or more sleds comprises a storage sled.[00108] Example 16 includes the subject matter of any of Examples 14 and 15, and wherein the low latency zone comprises at least 128 sleds.[00109] Example 17 includes the subject matter of any of Examples 14-16, and wherein the plurality of instructions further cause the compute device to receive network utilization data of a plurality of workloads of the data center; analyze the plurality of workloads based on the network utilization data to determine one or more workloads with high network utilization; and transfer the one or more workloads with high network utilization to one or more additional sleds in the low-latency zone.[00110] Example 18 includes a method for configuring a data center, the method comprising determining a length from each rack of a plurality of racks to a network switch of the data center; assigning each rack of the plurality of racks to a zone of a plurality of zones associated with the network switch based on a determination that the corresponding distance from the rack to the network switch is above a minimum threshold distance associated with the assigned zone and below a maximum threshold distance associated with the assigned zone; selecting, for each rack of the plurality of racks, a length of a data cable to connect the rack to the network switch based on the assigned zone; and connecting, for each rack of the plurality 
of racks, a data cable with the selected length from the rack to the network switch, wherein the length selected for each data cable is approximately the same as the length of each other data cable selected for each rack of the plurality of racks assigned to the same zone.[00111] Example 19 includes the subject matter of Example 18, and wherein each rack of the plurality of racks comprises a plurality of sleds, wherein each data cable connected to a rack of the plurality of racks is connected directly to a sled of the plurality of sleds of a rack of the plurality of racks. [00112] Example 20 includes the subject matter of any of Examples 18 and 19, and wherein each data cable of the plurality of data cables is a passive optical cable.[00113] Example 21 includes the subject matter of any of Examples 18-20, and wherein connecting the data cable with the selected length from the rack to the network switch comprises connecting at least 256 sleds, wherein each of the at least 256 sleds is included in a rack of the plurality of racks.[00114] Example 22 includes the subject matter of any of Examples 18-21, and wherein connecting the data cable with the selected length from the rack to the network switch comprises connecting at least 1,024 sleds, wherein each of the at least 1,024 sleds is included in a rack of the plurality of racks.[00115] Example 23 includes the subject matter of any of Examples 18-22, and wherein connecting the data cable with the selected length from the rack to the network switch comprises connecting the data cable to a top-of-rack switch of the corresponding rack.[00116] Example 24 includes the subject matter of any of Examples 18-23, and wherein each data cable connected to a rack of the plurality of racks is the same color as each other data cable connected to a rack of the plurality of racks that is approximately the same length as the data cable and is a different color from each other data cable connected to a rack of the plurality of racks 
that is not approximately the same length as the data cable.[00117] Example 25 includes the subject matter of any of Examples 18-24, and wherein the plurality of racks comprises at least 32 racks and wherein the plurality of zones comprises at most 4 zones.[00118] Example 26 includes the subject matter of any of Examples 18-25, and wherein each rack of the data center comprises a plurality of sled spaces, further comprising inserting a storage sled to at least half of the sled spaces of the racks assigned to a zone closest to the network switch.[00119] Example 27 includes a method for configuring a data center, the method comprising determining a length from each network switch of a plurality of network switches to a spine switch of the data center, wherein each network switch of the plurality of network switches is associated with a different pod comprising a plurality of racks; assigning each network switch of the plurality of network switches to a zone of a plurality of zones associated with the spine switch based on a determination that the corresponding distance from the network switch to the spine switch is above a minimum threshold associated with the assigned zone and below a maximum threshold associated with the assigned zone; selecting, for each network switch of the plurality of network switches, a length of a data cable to connect the network switch to the spine switch based on the assigned zone; and connecting, for each network switch of the plurality of network switches, a data cable with the selected length from the network switch to the spine switch, wherein the length selected for each data cable is approximately the same as the length of each other data cable selected for each network switch of the plurality of network switches assigned to the same zone.[00120] Example 28 includes the subject matter of Example 27, and wherein each data cable of the plurality of data cables is a passive optical cable.[00121] Example 29 includes the subject matter 
of any of Examples 27 and 28, and wherein each data cable connected to a network switch of the plurality of network switches is the same color as each other data cable connected to a network switch of the plurality of network switches that is approximately the same length as the data cable and is a different color from each other data cable connected to a rack of the plurality of racks that is not approximately the same length as the data cable.[00122] Example 30 includes the subject matter of any of Examples 27-29, and wherein the plurality of pods comprises at least 32 pods and wherein the plurality of zones comprises at most 4 zones.[00123] Example 31 includes a method for managing resources of a data center with an orchestrator server, the method comprising receiving, by the orchestrator server, a request for creation of a low-latency virtual machine; selecting, by the orchestrator server and in response to the request for creation of the low-latency virtual machine, one or more sleds of the data center in a low-latency zone, wherein each sled in the low-latency zone is connected to the same network switch via a corresponding data cable of a plurality of data cables, wherein each data cable of the plurality of data cables has a length shorter than or approximately equal to the length of each other data cable of the plurality of data cables connected to the same network switch; and creating, by the orchestrator server and in response to the request for creation of the low-latency virtual machine, the low-latency virtual machine with use of the one or more sleds.[00124] Example 32 includes the subject matter of Example 31, and wherein the one or more sleds comprises a storage sled.[00125] Example 33 includes the subject matter of any of Examples 31 and 32, and wherein the low latency zone comprises at least 128 sleds.[00126] Example 34 includes the subject matter of any of Examples 31-33, and further including receiving network utilization data of a plurality of 
workloads of the data center; analyzing the plurality of workloads based on the network utilization data to determine one or more workloads with high network utilization; and transferring the one or more workloads with high network utilization to one or more additional sleds in the low-latency zone. [00127] Example 35 includes one or more computer-readable media comprising a plurality of instructions stored thereon that, when executed, causes a compute device to perform the method of any of Examples 31-34.[00128] Example 36 includes an orchestrator server for managing resources of a data center, the orchestrator server comprising means for receiving a request for creation of a low-latency virtual machine; means for selecting, in response to the request for creation of the low-latency virtual machine, one or more sleds of the data center in a low-latency zone, wherein each sled in the low-latency zone is connected to the same network switch via a corresponding data cable of a plurality of data cables, wherein each data cable of the plurality of data cables has a length shorter than or approximately equal to the length of each other data cable of the plurality of data cables connected to the same network switch; and means for creating, in response to the request for creation of the low-latency virtual machine, the low-latency virtual machine with use of the one or more sleds.[00129] Example 37 includes the subject matter of Example 36, and wherein the one or more sleds comprises a storage sled.[00130] Example 38 includes the subject matter of any of Examples 36 and 37, and wherein the low latency zone comprises at least 128 sleds.[00131] Example 39 includes the subject matter of any of Examples 36-38, and further including means for receiving network utilization data of a plurality of workloads of the data center; means for analyzing the plurality of workloads based on the network utilization data to determine one or more workloads with high network utilization; 
and means for transferring the one or more workloads with high network utilization to one or more additional sleds in the low-latency zone. 
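The configuration procedure of Examples 18-26 (measure each rack's distance to the network switch, assign the rack to one of at most four non-overlapping zones, and select a single cable length per zone) might be sketched as follows; the zone boundaries and cable lengths in the table are invented for illustration and are not from the specification.

```python
# Hypothetical zone table for the method of Examples 18-26: each zone is a
# (min_distance, max_distance, cable_length) triple in meters; no two
# distance ranges overlap, and there are at most 4 zones (Example 25).
ZONES = [
    (0.0,  5.0,  5.0),   # zone 1: shortest cables, lowest latency
    (5.0,  10.0, 10.0),  # zone 2
    (10.0, 15.0, 15.0),  # zone 3
    (15.0, 20.0, 20.0),  # zone 4
]

def assign_zone(distance):
    """Return the 1-based zone whose range contains the rack-to-switch
    distance (above the zone's minimum threshold; this sketch treats the
    maximum as inclusive so the table tiles without gaps)."""
    for i, (lo, hi, _) in enumerate(ZONES, start=1):
        if lo < distance <= hi:
            return i
    raise ValueError(f"distance {distance} m falls outside every zone")

def cable_length_for(distance):
    """Select the one cable length shared by every rack in the zone."""
    return ZONES[assign_zone(distance) - 1][2]
```

Because every rack in a zone receives the same length (and, per Example 24, could receive the same cable color), cabling each rack reduces to a single table lookup.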
A system may include a pre-formed portion of underfill material defining openings. The openings may be configured to pass electrical interconnects for coupling an integrated circuit die to a portion of a substrate. |
WHAT IS CLAIMED IS: 1. An apparatus comprising: a pre-formed portion of underfill material defining openings, the openings to pass electrical interconnects for coupling an integrated circuit die to a portion of a substrate.2. An apparatus according to Claim 1, further comprising: a second pre-formed portion of underfill material coupled to the portion of underfill material, the second portion of underfill material defining second openings, the second openings to pass second electrical interconnects for coupling a second integrated circuit die to a second portion of a substrate.3. An apparatus according to Claim 2, further comprising a pre-formed sheet of underfill material comprising the first portion and the second portion.4. An apparatus according to Claim 2, further comprising a pre-formed tape of underfill material comprising the first portion and the second portion.5. An apparatus according to Claim 1, further comprising: the portion of the substrate.6. An apparatus according to Claim 5, further comprising: the integrated circuit die.7. An apparatus according to Claim 1, further comprising: the integrated circuit die.8. An apparatus according to Claim 1, the underfill material comprising: no-flow underfill material.9. A method comprising: manufacturing a pre-formed portion of underfill material defining openings, the openings to pass electrical interconnects for coupling an integrated circuit die to a portion of a substrate.10. A method according to Claim 9, wherein manufacturing the portion comprises: pressing the underfill material against a template of the openings to create the openings.11. A method according to Claim 10, wherein manufacturing the portion further comprises: plasma etching the openings to further create the openings.12. A method according to Claim 9, further comprising: attaching the underfill material to the substrate.13. 
A method according to Claim 12, further comprising: coupling the substrate to the integrated circuit die using electrical interconnects passing through the openings.14. A system comprising: a microprocessor comprising: an integrated circuit die; a substrate; and a pre-formed portion of underfill material pre-formed to define openings, the openings passing electrical interconnects for coupling the integrated circuit die to the substrate; and a double data rate memory coupled to the microprocessor.15. A system according to Claim 14, the underfill material comprising: no-flow underfill material.16. A system according to Claim 14, the integrated circuit die comprising a first plurality of electrical contacts; the substrate comprising a second plurality of electrical contacts; and the electrical interconnects for coupling the first plurality of electrical contacts to the second plurality of electrical contacts. |
INTEGRATED CIRCUIT DIE AND SUBSTRATE COUPLINGBACKGROUNDAn integrated circuit (IC) die may include electrical devices that are integrated with a semiconductor substrate. The IC die may also include conductive paths that electrically couple the electrical devices to one another and to external connections. The die may include several layers of conductive paths, with each layer separated from adjacent layers by an inter-layer dielectric (ILD). The ILD may comprise material having an extremely low dielectric constant (k) in order to minimize capacitance coupling and crosstalk between the conductive paths. Low-k ILD materials often exhibit a coefficient of thermal expansion (CTE) that differs significantly from other elements to which they are coupled, such as the other elements of the IC die and elements of an IC substrate to which the IC die is coupled. Moreover, low-k ILD materials are often brittle. These two characteristics may cause low-k ILD materials to crack during IC die and/or IC package fabrication.BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a perspective view of two portions of underfill material according to some embodiments. FIG. 2 is a bottom view of an IC die according to some embodiments. FIG. 3 is a top view of an IC substrate according to some embodiments. FIG. 4 is a cutaway side elevation of a system according to some embodiments. FIG. 5 is a diagram of a process according to some embodiments. FIG. 6 is a side elevation of a portion of underfill material and a carrier according to some embodiments. FIG. 7 is a side elevation of a portion of underfill material, a carrier, and a template according to some embodiments. FIG. 8 is a cutaway side elevation of a portion of underfill material and a carrier according to some embodiments. FIG. 9 is a cutaway side elevation of a portion of underfill material and an IC substrate according to some embodiments. FIG. 10 is a diagram of a system according to some embodiments.DETAILED DESCRIPTION FIG. 
1 is a perspective view of tape 1 according to some embodiments. Tape 1 comprises underfill material portion 10 and underfill material portion 20. Underfill material portions 10 and 20 may comprise no-flow underfill material. No-flow underfill material may comprise low-viscosity, thermally-polymerizable, liquid resin systems that may or may not include fluxing additives. Non-exhaustive examples include 50% by weight silica-filled underfill material and STAYCHIP™ DP-0115 by Cookson Electronics - Semiconductor Products. Underfill material portions 10 and 20 may be in a non-cured, partially-cured, and/or fully cured state. Underfill material portion 10 defines openings 15. Openings 15 may be configured to pass electrical interconnects through underfill material portion 10. The electrical interconnects may in turn couple an IC die to an IC substrate. Such an arrangement might reduce ILD mechanical failures and/or provide high fabrication throughput. An IC die may be located on one side of openings 15 and an IC substrate to which the IC die is coupled may be located on an opposite side of openings 15. With reference to FIG. 1, the IC die may be located "above" portion 10 and the IC substrate may be located "below" portion 10. An example of the above-described arrangement according to some embodiments is described below. Underfill material portion 20 may define openings 25 that function similarly to openings 15 of underfill material portion 10. Openings 25 may therefore pass electrical interconnects through underfill material portion 20 for coupling an IC die to an IC substrate. The IC die and/or IC substrate may be identical to the IC die and/or IC substrate that is coupled by the electrical interconnects passed by openings 15. The embodiments described below and shown in FIGS. 4 and 10 include underfill material portion 10 disposed between a dedicated IC die and a dedicated portion of an IC substrate. 
Underfill material portion 10 is coupled to underfill material portion 20 via coupling 30. Coupling 30 may comprise a physical connection that provides efficient separation of portion 10 from portion 20, or may simply comprise a solid region of material. According to some embodiments, a material located between portion 10 and portion 20 is different from the material of which portion 10 and portion 20 are composed. In some embodiments, tape 1 includes additional underfill material portions coupled to underfill material portion 10 and/or to underfill material portion 20. For example, a portion of underfill material may be coupled to end 27 of portion 20 in the manner that portion 20 is coupled to portion 10. Accordingly, tape 1 may comprise a series of connected portions of underfill material that may be dispensed from a roll or other suitable dispensing system. FIG. 2 illustrates IC die 40 according to some embodiments. IC die 40 includes integrated electrical devices and may be fabricated using any suitable substrate material and fabrication techniques. IC die 40 may provide one or more functions. In some embodiments, IC die 40 comprises a microprocessor chip having a silicon substrate. Side 42 of IC die 40 includes electrical contacts 44. IC die 40 may comprise a flip chip arrangement in which electrical devices that are integrated therein reside between a substrate of IC die 40 and electrical contacts 44. In some embodiments, the substrate resides between the electrical devices and electrical contacts 44. Electrical contacts 44 may comprise copper or lead-based contacts fabricated upon IC die 40. Electrical contacts 44 may comprise Controlled Collapse Chip Connect (C4) solder bumps. In this regard, conductive contacts 44 may be recessed under, flush with, or extend above first side 42 of IC die 40. Electrical contacts 44 may be electrically coupled to the electrical devices that are integrated into IC die 40. FIG. 
3 is a view of a side of IC substrate 50 according to some embodiments. Substrate 50 may comprise any ceramic, organic, and/or other suitable material. Substrate 50 may be used to carry power and/or I/O signals between IC die 40 and external electrical components. Substrate 50 may also be used to transmit and receive signals directly to and from IC die 40 according to some embodiments. First side 52 of substrate 50 includes electrical contacts 54. Electrical contacts 54 may comprise C4 solder bumps or plated copper contacts. Electrical contacts 54 may be recessed under, flush with, or extend above first side 52 of substrate 50. Although the embodiments of FIGS. 2 and 3 show electrical contacts 44 and 54 as having substantially square or circular cross section, respectively, in other embodiments one or more of electrical contacts 44 and 54 have cross sections of different and/or varying shapes. FIG. 4 is a cutaway side elevation of system 60 according to some embodiments. System 60 includes underfill material portion 10, IC die 40 and IC substrate 50. System 60 also includes electrical interconnects 70, which pass through openings 15 of portion 10 and which couple electrical contacts 44 and electrical contacts 54. Underfill material portion 10 may encapsulate electrical interconnects 70 and may therefore protect electrical interconnects 70 from exposure to environmental hazards. Moreover, the CTE of IC die 40 may differ from the CTE of substrate 50 so as to cause undue stress on IC die 40 when system 60 is heated during the attachment of IC die 40 to substrate 50. Underfill material 10 may address this mismatch by absorbing some of the stress and/or distributing the stress away from IC die 40. FIG. 5 is a diagram of process 80 according to some embodiments. Process 80 may be executed by one or more fabrication devices, and all or a part of process 80 may be executed manually. Process 80 may be executed at any time prior to fabrication of system 60. 
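The CTE mismatch discussed above can be roughed out numerically. The following is an illustrative sketch only: the die and substrate CTE values, the temperature excursion, and the helper name are assumed example figures, not values from this disclosure.

```python
# Hypothetical back-of-the-envelope for the CTE mismatch discussed above.
# All numeric values are assumed examples, not figures from this disclosure.

def mismatch_strain(cte_die_ppm, cte_substrate_ppm, delta_t_c):
    """Free thermal-expansion mismatch strain (dimensionless) over a
    temperature excursion of delta_t_c degrees Celsius."""
    return abs(cte_die_ppm - cte_substrate_ppm) * 1e-6 * delta_t_c

# Assumed: silicon die ~2.6 ppm/C, organic substrate ~17 ppm/C,
# cooling ~200 C from solder attach back to room temperature.
strain = mismatch_strain(2.6, 17.0, 200.0)
print(f"mismatch strain: {strain:.4%}")  # roughly 0.29%
```

A strain on this order, concentrated at the interconnects, is the kind of stress that an underfill such as portion 10 may absorb or distribute away from the die.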
Initially, no-flow underfill material is dispensed on a carrier at 82. No-flow underfill material may be dispensed according to any currently- or hereafter-known system, including a linear pump and a positive rotary displacement pump. The dispensed no-flow underfill material may be uncured, partially-cured or fully cured according to various embodiments. Partially- or fully-cured material may be dispensed in a laminate, sheet and/or tape form. FIG. 6 is a side elevation of carrier 90 having underfill material portion 10 dispensed thereon according to some embodiments. Carrier 90 may comprise any surface on which no-flow underfill material may be dispensed. In some embodiments, underfill material portion 10 is dispensed as a bead and is flattened to the profile shown in FIG. 6 using a suitable tool. Underfill material 10 and/or carrier 90 may be precleaned using chemical and/or plasma-based techniques prior to 82. The underfill material is pressed against a template at 84 to create openings in the underfill material. FIG. 7 is a side cutaway view of template 100 approaching underfill material 10 in order to create openings according to some embodiments. Template 100 includes projections 110 to create the openings and lip 120 to establish the dimensions of underfill material 10 during 84. Carrier 90 may be moved toward template 100 and/or template 100 may be moved toward carrier 90 according to some embodiments of 84. Projections 110 may be hollow so as to collect portions of underfill material 10 that are "punched-out" during 84. Underfill material 10 may be partially cured prior to 84 to enable clean removal of material from the areas in which openings are to be created. According to some embodiments, underfill material 10 is heated just prior to 84 to establish a desired degree of curing. FIG. 8 is a cutaway side elevation of underfill material portion 10 and carrier 90 after 84. FIG. 8 shows openings 15 created by template 100. 
Openings 15 may be refined after 84 using etching techniques such as plasma etching. The portion of underfill material is attached to an IC substrate at 86. The IC substrate may be precleaned prior to 86. According to some embodiments, the portion of underfill material is laminated onto the IC substrate. The portion of underfill material and the IC substrate may again be cleaned after 86. FIG. 9 is a cutaway side view of underfill material portion 10 and IC substrate 50 after 86 according to some embodiments. FIG. 9 shows electrical interconnects 70, which may be formed on substrate 50 before or after 86. Electrical interconnects 70 pass through respective ones of openings 15 and are used to couple IC substrate 50 to an IC die. An IC die may be attached to the system of FIG. 9 after process 80. According to some embodiments, an IC die is placed thereon using a placement head of a pick-and-place machine. Such a machine may align electrical contacts of the IC die with respective ones of electrical interconnects 70 prior to placing the IC die. FIG. 4 illustrates one example of a resulting system. Such a system may then be heated in order to form integral electrical connections between the IC die and the IC substrate, and/or to fully cure underfill material portion 10. After curing, underfill material portion 10 may form an inert protective polymer. Underfill material portion 10 may also include fluxing additives to deoxidize the metal surfaces of the electrical contacts of the IC die and of electrical interconnects 70. In some embodiments, flux is also or alternatively placed on the metal surfaces prior to heating. FIG. 10 is a side elevation of system 200 according to some embodiments. System 200 may comprise components of a desktop computing platform. System 200 includes system 60 as described above, memory 220 and motherboard 230. System 60 of system 200 may comprise a microprocessor. 
IC substrate 50 of system 60 may comprise multiple layers of conductive traces that are separated by layers of dielectric material and electrically coupled by vias formed within the dielectric material. Such traces and vias may electrically couple through-hole pins 210 to electrical contacts 54. Accordingly, pins 210 may carry signals such as power and I/O signals between IC die 40 and external devices. Pins 210 may be mounted directly on motherboard 230 or onto a socket (not shown) that is in turn mounted directly to motherboard 230. Motherboard 230 may comprise a memory bus (not shown) that is electrically coupled to pins 210 and to memory 220. Motherboard 230 may therefore electrically couple memory 220 to IC die 40. Memory 220 may comprise any type of memory for storing data, such as a Single Data Rate Random Access Memory, a Double Data Rate Random Access Memory, or a Programmable Read Only Memory. The several embodiments described herein are solely for the purpose of illustration. The various features described herein need not all be used together, and any one or more of those features may be incorporated in a single embodiment. Some embodiments may include any currently or hereafter-known versions of the elements described herein. Therefore, persons skilled in the art will recognize from this description that other embodiments may be practiced with various modifications and alterations. |
In various embodiments described herein, a device comprising a light guiding layer (204, 212) optically coupled to a photocell (200) is described. A plurality of surface features (208, 216) are formed on one of the surfaces of the light guiding layer (204, 212). The surface features (208, 216) can comprise facets that are angled with respect to each other. Light (220, 224) incident on the surface of the light guide is redirected by the surface features and guided through the light guide (204, 212) by multiple total internal reflections. The guided light is directed towards a photocell (200). |
1.A device for collecting solar energy includes:A first light guide having a top and a bottom surface, in which the light guide guides light through multiple total internal reflections at the top and bottom surfaces;A first photovoltaic cell; andA plurality of prism features arranged to redirect ambient light received through the top surface such that the light is guided in the light guide by total internal reflection from the top and bottom surfaces to the first photovoltaic cell.2.The device of claim 1, wherein the first light guide comprises a sheet.3.The device according to claim 2, wherein said sheet comprises a plastic sheet.4.The device according to claim 2, wherein the plastic sheet comprises an acrylic resin or a polycarbonate.5.The device according to claim 2, wherein the sheet is at least 4 cm².6.The device of claim 1, wherein the first light guide is flexible.7.The device of claim 1, wherein the first light guide comprises a polymer.8.The device of claim 1, wherein the first light guide comprises a thin film.9.The device of claim 1, wherein the first photovoltaic cell comprises a photovoltaic cell.10.The device of claim 1, wherein the first photovoltaic cell is butt-coupled to an edge of the first light guide.11.The device of claim 1, wherein the first light guide includes a beveled surface and the first photovoltaic cell is disposed relative to the beveled surface to receive light reflected from the beveled surface.12.The device according to claim 11, wherein the first photovoltaic cell is disposed below the first light guide.13.The device according to claim 1, wherein the first photovoltaic cell is disposed at a corner of the first light guide.14.The device of claim 1, wherein the plurality of prismatic features include an elongated groove.15.The device according to claim 14, wherein said elongated groove is straight.16.The device according to claim 14, wherein said elongated groove is curved.17.The device of claim 1, wherein the plurality of 
prismatic features include planar facets that are angled relative to each other.18.The device of claim 17, wherein the planar facets are oriented with respect to each other at an angle between 15 and 85 degrees.19.The device of claim 1, wherein the prismatic feature comprises a recess.20.The device according to claim 19, wherein said recess is tapered.21.The device according to claim 19, wherein the dimple has at least three sides including an inclined surface portion.22.The device of claim 1, wherein the prism features have the same shape.23.The device of claim 1, wherein at least some of the prism features have different shapes.24.The device of claim 1, wherein the plurality of prismatic features are formed in a substrate.25.The device of claim 1, wherein the first light guide further comprises a prism film disposed on a substrate and the film includes the plurality of prism features therein.26.The device of claim 1, wherein the prismatic feature is located at the bottom surface of the first light guide.27.The device of claim 1, wherein the prismatic feature extends along a plurality of parallel linear paths.28.The device of claim 1, wherein the prismatic feature extends along a plurality of concentric circular curved paths.29.The device of claim 1, wherein the prismatic feature extends along a plurality of elliptical curved paths.30.The device of claim 1, wherein the plurality of prismatic features are shaped to redirect ambient light received at an angle between 1 and 40 degrees with respect to a normal of the first light guide such that the light is guided in the first light guide by total internal reflection from the top and bottom surfaces to the first photovoltaic cell.31.The device of claim 1, wherein the plurality of prismatic features are shaped to redirect ambient light received at an angle between about 40 degrees and 90 degrees with respect to a normal to the first light guide, so that the light is guided in the first light guide by total internal 
reflection from the top and bottom surfaces to the first photovoltaic cell.32.The apparatus of claim 1, wherein the first light guide comprises:A first layer including the first set of prismatic features; andA second layer that includes a second set of prismatic features.33.The device of claim 32, wherein at least some of the prismatic features in the first layer are laterally offset relative to some of the prismatic features in the second layer.34.The device of claim 32, wherein at least some of the prismatic features in the first layer are shaped differently than some of the prismatic features in the second layer.35.The apparatus of claim 1, wherein the first light guide comprises:A first section including the first set of prismatic features; andA second section, which includes a second set of prismatic features,Wherein the first and second sections are arranged transversely with respect to each other and the prismatic features in the first section have a different shape or orientation than the prismatic features in the second section.36.The device of claim 35, wherein the prismatic features in the first section have a different orientation than the prismatic features in the second section.37.The device of claim 35, wherein the prismatic features in the first section have a different shape from the prismatic features in the second section.38.The device of claim 35, wherein the first and second sections are part of an array of different sections of the first light guide and a plurality of the sections have prismatic features with a shape or orientation different from those of other sections.39.The apparatus of claim 1, wherein the first light guide is disposed on a car, an aircraft, a spacecraft, or a marine vessel.40.The apparatus of claim 1, wherein the first light guide is mounted on a bicycle, a cart, or a trailer.41.The device of claim 1, wherein the first light guide is disposed on a piece of clothing.42.The device of claim 41, wherein the first light 
guide is disposed on a shirt, pants, shorts, coat, jacket, vest, hat, or shoes.43.The apparatus of claim 1, wherein the first light guide is disposed on a computer, a cellular phone, or a personal digital assistant.44.The apparatus of claim 1, wherein the first light guide is disposed on a building structure.45.The apparatus according to claim 44, wherein the first light guide is disposed on a house or a building.46.The device of claim 1, wherein the first light guide is disposed on an electrical device.47.The apparatus according to claim 46, wherein said first light guide is mounted on a lamp, a telephone, or a motor.48.The device of claim 1, wherein the first light guide is located on a tent or sleeping bag.49.The device of claim 1, wherein the first light guide is rolled or folded.50.A device for collecting ambient light includes:A first light guide having a top and a bottom surface in which the first light guide directs light through multiple total internal reflections at the top and bottom surfaces; andA plurality of prism features arranged to receive ambient light through the top surface at a first angle greater than 45 degrees with respect to the normal of the first light guide and redirect the ambient light at a second angle such that total internal reflection from the top and bottom surfaces guides the light in the first light guide.51.The apparatus of claim 50, wherein the first angle is greater than 50 degrees.52.The apparatus of claim 50, wherein the first angle is greater than 60 degrees.53.The apparatus of claim 50, wherein the first angle is greater than 70 degrees.54.The apparatus of claim 50, wherein the first angle is greater than 80 degrees.55.The device of claim 1, wherein the first light guide includes a plurality of edges between the top and bottom surfaces,Wherein the first photocell is disposed relative to one of the edges of the first light guide so that the light guided in the first light guide is incident on the first photocell.56.The 
device of claim 55, further comprising a second light guide having top and bottom surfaces and a plurality of edges located therebetween, the second light guide including a plurality of prismatic features, the plurality of prismatic features redirecting light received through one of the top or bottom surfaces such that light is guided in the second light guide through total internal reflection from the top and bottom surfaces toward the first photovoltaic cell.57.The apparatus of claim 56, wherein the first light guide is configured to receive and direct ambient light incident through the top surface.58.The apparatus of claim 56, wherein the second light guide is configured to receive and guide light reflected from a substrate disposed relative to the bottom surface of the second light guide.59.The apparatus of claim 56, wherein the first light guide is configured to receive and direct both ambient light incident through the top surface and light reflected from a substrate disposed relative to the bottom surface of the second light guide.60.The device of claim 55, further comprising a second photovoltaic cell, the second photovoltaic cell being positioned relative to the other of the edges of the first light guide such that the light guided in the first light guide is incident on the second photovoltaic cell.61.The apparatus of claim 56, wherein the second light guide is disposed below the first light guide.62.The apparatus of claim 56, wherein the prismatic features included in the first light guide are offset from the prismatic features included in the second light guide.63.The apparatus of claim 1, further comprising a substrate.64.The device of claim 63, wherein the substrate comprises a smart glass.65.The device of claim 64, wherein the smart glass comprises an electrochromic device.66.The device of claim 64, wherein the smart glass comprises a suspended particle device.67.The device of claim 64, wherein the smart glass comprises a polymer 
dispersed liquid crystal device.68.The device of claim 64, wherein the smart glass is configured to change its transparency in response to an applied electric field.69.A device for collecting solar energy includes:A first device for guiding light, said first device having first and second devices for reflecting light such that multiple total internal reflections at said first and second light reflecting devices guide light in said device for guiding light;Means for converting light energy into alternative forms of energy; andMeans for redirecting the ambient light received through the first or second light reflecting device so that the light is guided in the first light guiding device toward the means for converting light energy into an alternative form of energy.70.The device of claim 69, wherein said means for converting light energy into an alternative form of energy comprises a photovoltaic cell.71.The apparatus of claim 69, wherein the means for redirecting ambient light includes a plurality of prismatic features.72.The apparatus according to claim 69, wherein said first means for guiding light comprises a light guide.73.The apparatus according to claim 69, wherein said first means for guiding light comprises a sheet.74.The device according to claim 69, wherein said first means for guiding light comprises a film.75.The device according to claim 69, wherein said first means for guiding light comprises a polymer.76.The apparatus of claim 69, wherein the means for redirecting ambient light is shaped to redirect ambient light received at an angle between about 1 degree and 40 degrees with respect to a normal of the first means for guiding light, so that the light is guided in the first means for guiding light by total internal reflection from the first and second light reflecting means toward the means for converting light energy into an alternative form of energy.77.The device of claim 69, 
wherein the means for redirecting ambient light is shaped to redirect ambient light received at an angle between about 40 degrees and 90 degrees with respect to a normal of the first means for guiding light, so that the light is guided in the first means for guiding light by total internal reflection from the first and second light reflecting means toward the means for converting light energy into an alternative form of energy.78.The device according to claim 69, further comprising a second device for guiding light, said second device having first and second devices for reflecting light, said second device for guiding light including means for redirecting light incident on the second device for guiding light.79.The device according to claim 69, wherein the first light reflecting device includes a top surface of the first light guiding device and the second light reflecting device includes a bottom surface of the first light guiding device.80.The device according to claim 79, wherein the first light guiding device further comprises a plurality of edges interposed between the top surface and the bottom surface of the first light guiding device, andWherein said means for converting light energy into an alternative form of energy is arranged adjacent to one of said edges.81.The apparatus according to claim 69, wherein said means for converting light energy into an alternative form of energy is disposed below said first means for guiding light.82.The apparatus according to claim 69, wherein said means for converting light energy into an alternative form of energy is disposed at a corner of said first means for guiding light.83.A method of manufacturing a device for collecting solar energy, the method comprising:Providing a first light guide having a top and a bottom surface; andPlacing a photovoltaic cell such that the first light guide is optically coupled to the photovoltaic cell,Wherein the first light guide includes a 
plurality of prismatic features on one of the top or bottom surfaces of the first light guide.84.The method of claim 83, further comprising forming the plurality of prismatic features by embossing.85.The method of claim 83, further comprising placing the first light guide on a substrate.86.The method of claim 85, wherein a first light guide layer is attached to the substrate using an adhesive.87.The method according to claim 85, wherein the first light guide layer is laminated on the substrate.88.The method of claim 83, wherein the method includes providing a second light guide layer disposed below the first light guide layer.89.The method of claim 88, further comprising forming a plurality of prismatic features on the second light guide layer.90.A device for collecting ambient light includes:A first device for guiding light, having first and second devices for reflecting light, the first light guiding device guiding light by multiple total internal reflections at the first and second light reflecting devices; andA plurality of devices for redirecting ambient light received through the top surface of the first light guiding device at a first angle greater than 45 degrees relative to a normal of the first light guiding device, the redirecting devices refracting the ambient light at a second angle such that light is guided in the first light guiding device by total internal reflection from the first and second light reflecting devices.91.The apparatus according to claim 90, wherein said first light guiding means comprises a first light guide.92.The device of claim 90, wherein the plurality of redirection devices include a prism feature.93.The device according to claim 90, wherein the first light reflecting device includes a top surface of the first light guiding device and the second light reflecting device includes a bottom surface of the first light guiding device.
Thin film solar concentrator / collectorTechnical fieldThe present invention relates to the field of light concentrators and light collectors, and more particularly to the use of microstructured films to collect and concentrate solar radiation.This application claims priority to U.S. Patent Application No. 11/941,851 (Attorney Docket No. QMRC.001A), entitled "THIN FILM SOLAR CONCENTRATOR / COLLECTOR" and filed on November 16, 2007, which is expressly incorporated herein by reference in its entirety.BackgroundSolar energy is a renewable energy source that can be converted into other forms of energy, such as heat and electricity. The main disadvantages of using solar energy as a reliable renewable energy source are the inefficiency of converting light energy into heat or electricity and the dependence of the available solar energy on the time of day and the month of the year.Photovoltaic (PV) cells, based on the principle of converting light energy into electricity, can be used to convert solar energy into electricity. Systems using PV cells can have conversion efficiencies between 10% and 20%. PV cells can be made extremely thin and are not as large and bulky as other devices that use solar energy. PV cells can range in size from a few millimeters to tens of centimeters. The individual electrical output from a PV cell can range from a few milliwatts to a few watts. Several PV cells can be electrically connected and packaged to generate sufficient power.Solar concentrators can be used to collect and focus solar energy to achieve higher conversion efficiency in PV cells. For example, parabolic mirrors can be used to collect and focus light on devices that convert light energy into heat and electricity. Other types of lenses and mirrors can also be used to significantly increase conversion efficiency, but they do not overcome changes in the amount of solar energy received depending on the time of day, month of the year, or weather conditions. 
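The output scale quoted above (10-20% conversion efficiency, a few milliwatts to a few watts per cell) can be checked with a one-line estimate. This is an illustrative sketch only; the irradiance, area, and efficiency figures and the helper name are assumptions, not values from this disclosure.

```python
# Hypothetical PV output estimate; all inputs are assumed example values.

def pv_output_w(irradiance_w_m2, area_cm2, efficiency):
    """Electrical output in watts for a cell of the given area and efficiency."""
    return irradiance_w_m2 * (area_cm2 * 1e-4) * efficiency

# A 1 cm^2 cell at 15% efficiency under ~1000 W/m^2 peak sunlight:
print(pv_output_w(1000.0, 1.0, 0.15))  # ~0.015 W, i.e. tens of milliwatts
```

A concentrator raises this output by increasing the light delivered per unit cell area, which is why collection optics can matter more than marginal gains in cell efficiency.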
In addition, systems using lenses / reflectors are often bulky and heavy because the lenses and mirrors needed to effectively collect and focus sunlight must be large.PV cells can be used in a wide range of applications, such as powering satellites and spacecraft, powering residential and commercial property, charging car batteries and powering navigation instruments. The performance of a PV cell depends on sunlight, so similar to other devices that use solar energy, the conversion efficiency of a PV cell depends on the time of day, the month of the year, and the weather conditions of the day. To overcome these shortcomings, it is advantageous to use light concentrators and collectors that collect and focus light on PV cells and track the movement of the sun throughout the day. In addition, it is advantageous to have the ability to collect stray light during cloudy days. These systems are complex, often bulky and heavy. For many applications, these light concentrators and/or light collectors are also required to be small in size.Summary of the inventionVarious embodiments described herein include a light guide for collecting / concentrating ambient light and directing the collected light to a photovoltaic cell. The light guide may include surface relief features to redirect incident light and propagate incident light through the light guide through multiple total internal reflections. The surface relief features may include facets that reflect light. In some embodiments, the facets may be angled relative to each other. The photovoltaic cell is optically coupled to the light guide. In some embodiments, the photovoltaic cell may be disposed adjacent to the light guide. In some other embodiments, the photovoltaic cell may be disposed at a corner of the light guide. In still other embodiments, the photovoltaic cell may be disposed below the light guide. In some embodiments, the light guide may be disposed on a substrate. 
The substrate may include glass, plastic, electrochromic glass, smart glass, and the like.In one embodiment, a device for collecting solar energy is disclosed. The device includes a first light guide having top and bottom surfaces, wherein the light guide directs light therein through multiple total internal reflections at the top and bottom surfaces. The device further includes a photocell optically coupled to the first light guide. In some embodiments, a plurality of prism features are disposed on the first light guide to redirect the ambient light received through the top surface so that the light is guided in the light guide by total internal reflection from the top and bottom surfaces toward the photovoltaic cell. In one embodiment, the prismatic feature may include an elongated groove. In some embodiments, the elongated grooves may be straight. In other embodiments, the elongated groove may be curved. In one embodiment of the device, the prismatic feature may include a dimple. In one embodiment, the dimples may be tapered.In one embodiment, the device may include a first light guide, the first light guide further including a prism film disposed on a substrate and the film including the plurality of prism features therein. In some embodiments, the prismatic feature may be located at the bottom surface of the first light guide. In some other embodiments, the prismatic features may extend along multiple parallel linear paths. In other embodiments, the prismatic features may extend along multiple concentric circular curved paths. In still other embodiments, the prismatic features extend along multiple elliptical curved paths.In one embodiment of the device, the first light guide includes: a first layer including the first set of prismatic features; and a second layer including the second set of prismatic features. 
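The guiding condition that the prism features must satisfy follows from Snell's law: a ray stays in the guide only if it meets the top and bottom surfaces beyond the critical angle. The sketch below is illustrative, assuming an acrylic guide with refractive index about 1.49; the index and the facet-geometry reasoning are assumptions, not values from this disclosure.

```python
import math

def critical_angle_deg(n_guide, n_outside=1.0):
    """Angle from the surface normal beyond which light is totally
    internally reflected at the guide/outside interface."""
    return math.degrees(math.asin(n_outside / n_guide))

theta_c = critical_angle_deg(1.49)            # assumed acrylic guide, n ~ 1.49
print(f"critical angle: {theta_c:.1f} deg")   # about 42.2 deg

# A reflective facet tilted at angle a to the guide plane turns a normally
# incident ray to 2a from the surface normal, so TIR requires 2a > theta_c:
print(f"minimum facet tilt: {theta_c / 2:.1f} deg")  # about 21.1 deg
```

Under these assumptions, facet tilts of a few tens of degrees suffice to trap near-normal sunlight, which is consistent with facet pairs angled tens of degrees relative to each other as in the embodiments above.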
In some embodiments, at least some of the prismatic features in the first layer are laterally offset relative to some of the prismatic features in the second layer. In another embodiment, at least some of the prismatic features in the first layer are shaped differently than some of the prismatic features in the second layer. In another embodiment of the device, a second light guide having a top and bottom surface and including a plurality of edges between the top and bottom surfaces is disposed below the first light guide. The second light guide includes a plurality of prismatic features to redirect light received through the bottom surface such that light is guided in the second light guide by total internal reflection from the top and bottom surfaces. The light guided in the second light guide is guided toward the photovoltaic cell.In one embodiment of the invention, a device for collecting solar energy is disclosed. The device includes a first device for guiding light, the first device having first and second devices for reflecting light such that multiple total internal reflections at the first and second light reflecting devices guide light in the device for guiding light. The device further includes: a device for converting light energy into an alternative form of energy; and a device for redirecting ambient light received through the first and second light reflecting devices so that the light is guided in the means for guiding light by total internal reflection from the first and second light reflecting means toward the means for converting light energy into an alternative form of energy. In one embodiment, the first and second light reflecting devices may include the top and bottom surfaces of the light guiding device. A plurality of edges may be disposed between the top and bottom surfaces of the light guide.In one embodiment of the invention, a method for manufacturing a device for collecting solar energy is disclosed. 
The method includes providing a first light guide having top and bottom surfaces. The method further includes providing a photovoltaic cell to optically couple the first light guide to the photovoltaic cell; and forming a plurality of prismatic features on one of the top or bottom surfaces of the first light guide. In one embodiment, a device for collecting ambient light is disclosed. The device includes a first device for guiding light, the first device having first and second devices for reflecting light so that light is guided within the first light guiding device through multiple total internal reflections at the first and second light reflecting devices; and a device for redirecting ambient light received through the top surface of the first light guiding device at a first angle greater than 45 degrees with respect to a normal of the first light guiding device, wherein the redirecting device refracts the ambient light at a second angle so that total internal reflection from the first and second light reflecting devices guides the light in the first light guiding device. BRIEF DESCRIPTION OF THE DRAWINGS The exemplary embodiments disclosed herein are illustrated in the accompanying schematic drawings, which are for illustrative purposes only. FIG. 1A illustrates a side view of a prism light guide including a plurality of prism features to collect and direct light to a photovoltaic cell. FIG. 1B illustrates a perspective view of a prismatic light guide including a plurality of prismatic features to collect and direct light to a photovoltaic cell. FIG. 1C shows a perspective view of the embodiment described in FIG. 1A. FIG. 2 illustrates an embodiment that includes two layers of prism light guides stacked with offset prism features to collect and direct light into photovoltaic cells with higher efficiency. FIG. 3 illustrates the distribution of light rays incident on a light guide coupled into a guided mode. FIG.
4 illustrates a lobe along which incident radiation is coupled into a guided mode in the case of a prism film with wide angled facets. FIG. 5 illustrates the lobes along which incident radiation is coupled into a guided mode in the case of a prism film with narrow angled facets. FIG. 6 illustrates an embodiment of a prismatic light guide having two layers including narrow and wide angled facets to maximize the collection angle. FIG. 7 illustrates an embodiment in which narrow and wide angled facets are formed on the same prism light guide. FIG. 8A illustrates an embodiment composed of several prism features arranged concentrically with a photovoltaic cell placed at the center. FIG. 8B illustrates an embodiment composed of several curved prism features and a photovoltaic cell placed at one edge. FIG. 9 illustrates a microstructure pattern matrix. FIG. 10 illustrates an embodiment in which a photovoltaic cell is beveled with respect to a prism film. FIG. 11 illustrates a side view of an embodiment including a collector lens, a prism film, and a reflector disposed on a photovoltaic cell array. FIG. 12A illustrates a top view of a thin film including conical features bounded by a reflector on two sides to direct light into two photovoltaic cells placed at the other two edges. FIG. 12B is a side view of the embodiment illustrated in FIG. 12A with a tapered facet. FIG. 13A illustrates a side view of an embodiment including two light collection films and a photovoltaic cell. FIG. 13B illustrates a side view of an embodiment including two light collection films and two photovoltaic cells. FIG. 13C illustrates a side view of an embodiment including one light collection film and two photovoltaic cells. FIG. 14 shows a light collection plate, sheet or film optically coupled to a photovoltaic cell placed on the roof and windows of a residential home. FIG.
15 shows an embodiment in which a light collecting plate, sheet or film optically coupled to a photovoltaic cell is placed on the roof of a car. FIG. 16 illustrates the attachment of a light collection plate, sheet or film optically coupled to a photovoltaic cell to the body of a laptop computer. FIG. 17 shows an example of attaching a light collection plate, sheet or film optically coupled to a photovoltaic cell to a piece of clothing. FIG. 18 shows an example of placing a light collecting plate, sheet, or film optically coupled to a photovoltaic cell on a shoe. FIG. 19 shows an embodiment in which a light collection plate, sheet or film optically coupled to a photovoltaic cell is attached to the wings and windows of an aircraft. FIG. 20 shows an embodiment in which a light collection plate, sheet or film optically coupled to a photovoltaic cell is attached to a sailing boat. FIG. 21 shows an embodiment in which a light collection plate, sheet or film optically coupled to a photovoltaic cell is attached to a bicycle. FIG. 22 shows an embodiment in which a light collection plate, sheet or film optically coupled to a photovoltaic cell is attached to a satellite. FIG. 23 shows an embodiment in which a light-collecting sheet that is flexible enough to be rolled up is optically coupled to a photovoltaic cell. DETAILED DESCRIPTION The following detailed description is directed to certain specific embodiments of the invention. However, the invention can be embodied in many different ways. As will be apparent from the following description, the embodiments may be implemented in any device configured to collect, capture, and concentrate radiation from a source.
More specifically, it is contemplated that the embodiments described herein may be implemented in or associated with a wide variety of applications, such as powering residential and commercial property, and providing power to computers, PDAs, watches, calculators, cellular phones, camcorders, cameras, mp3 players and other electronic devices. In addition, the embodiments described herein can be incorporated into wearable clothing, shoes and accessories. Some of the embodiments described herein can be used for charging car batteries, powering navigation instruments, and pumping water. The embodiments described herein are also applicable in aerospace and satellite applications. In various embodiments described herein, a solar collector and/or light collector is coupled to a photovoltaic cell. The solar collector and/or light collector includes a light guide, such as a plate, sheet, or film on which prismatic turning features are formed. Ambient light incident on the light guide is diverted into the light guide by the prism features and guided through the light guide by total internal reflection. Photocells are positioned along one or more edges of the light guide, and light emitted from the light guide is coupled into the photocells. The use of a light guide to collect, concentrate, and direct ambient light to a photovoltaic cell can realize a photovoltaic device that converts light energy into heat and electricity with increased efficiency and reduced cost. The light guide may be formed as a plate, sheet or film. Light guides can be made from rigid or semi-rigid materials. In some embodiments, the light guide may be formed from a flexible material. In still other embodiments, the light guide may include a thin film. The light guide may include grooves arranged in a linear manner. In alternative embodiments, the prismatic features may have a non-linear extent. For example, in some embodiments, the prism features may be arranged along a curve.
Alternative embodiments may be composed of a thin-film light guide having conical reflective features dispersed throughout the light-guiding medium. One embodiment of a prism light guide for coupling ambient light into a photovoltaic cell is shown in FIG. 1A. Photocells can be photovoltaic cells or photodetectors. The prism light guide collector is based on the principle of reciprocity. FIG. 1A illustrates a side view of an embodiment including a light guide 104 disposed relative to the photovoltaic cell 100. In some embodiments, the light guide 104 may further include a substrate 105 and a plurality of prism features 108 disposed on the substrate. The light guide 104 may include top and bottom surfaces with a plurality of edges therebetween. Light incident on the light guide may be redirected into the light guide by the plurality of prism features and guided within the light guide through multiple total internal reflections at the top and bottom surfaces. The light guide 104 may include an optically transmissive material that is transparent to radiation at one or more wavelengths to which the photovoltaic cell is sensitive. For example, in one embodiment, the light guide 104 may be transparent to wavelengths in the visible and near infrared regions. In other embodiments, the light guide 104 may be transparent to wavelengths in the ultraviolet or infrared region. The light guide 104 may be formed of a rigid or semi-rigid material such as glass or acrylic to provide structural stability. Alternatively, the light guide 104 may be formed from a flexible material, such as a flexible polymer. The light guide 104 includes two surfaces: an upper surface configured to receive ambient light, and a bottom surface disposed below the upper surface. The periphery of the light guide 104 is bounded by edges. Generally, the length and width of the light guide 104 are substantially larger than the thickness of the light guide 104.
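The role of total internal reflection described here can be sketched numerically. The following illustrative Python snippet is not part of the disclosure; the refractive index is an assumed typical value for acrylic (n ≈ 1.49). It shows why the prism features are needed at all: Snell refraction at the flat top surface keeps every externally incident ray just inside the critical angle, so light is only trapped once a facet redirects it past that angle.

```python
import math

def critical_angle_deg(n_guide: float, n_surround: float = 1.0) -> float:
    """Minimum internal angle (from the surface normal) for TIR."""
    return math.degrees(math.asin(n_surround / n_guide))

def internal_angle_deg(incidence_deg: float, n_guide: float) -> float:
    """Snell's law at the top surface: sin(theta_i) = n * sin(theta_t)."""
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n_guide))

n = 1.49  # acrylic-like index; assumed value, not from the disclosure
print(round(critical_angle_deg(n), 1))        # 42.2
print(round(internal_angle_deg(89.0, n), 1))  # 42.1
# Even near-grazing light refracts to just inside the critical angle, so the
# flat surfaces alone cannot trap it; a prism facet must turn it past 42.2 deg.
```

Higher-index guide materials lower the critical angle and so admit a larger range of guided directions, which is the numerical-aperture dependence noted later in this description.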
The thickness of the light guide 104 may vary from 0.5 to 10 mm. The area of the light guide 104 can vary from 0.01 to 10,000 cm2. In some embodiments, the refractive index of the material constituting the light guide 104 may be significantly higher than that of the surroundings in order to guide a large portion of the ambient light within the light guide 104 through total internal reflection (TIR). In one embodiment, as shown in FIG. 1A, the light guide is composed of prism features 108 disposed on a bottom surface of a substrate 105. The prism feature is generally an elongated groove formed on the bottom surface of the substrate 105. The groove may be filled with an optically transmissive material. The prism features 108 may be formed on the bottom surface of the substrate 105 by molding, embossing, etching, or other alternative techniques. Alternatively, the prism features 108 may be disposed on a film that can be laminated on the bottom surface of the substrate 105. In some embodiments that include a prism film, light may be guided only within the prism film. In such embodiments, the substrate 105 may merely provide structural stability. The prism features 108 may include a variety of shapes. For example, the prism feature 108 may be a linear v-groove. Alternatively, the prism feature 108 may include a curved groove or a non-linear shape. FIG. 1B shows an enlarged view of the prismatic feature 108 in the form of a linear v-groove 116. The v-shaped groove 116 includes two planar facets F1 and F2 arranged at an angle α with respect to each other, as shown in FIG. 1B. The angle α between the facets can vary from 15 degrees to 120 degrees. In some embodiments, the facets F1 and F2 may have equal lengths. Alternatively, in other embodiments, the length of one of the facets may be greater than that of the other. The distance 'a' between two consecutive v-grooves can vary between 0.01 and 0.5 mm.
The width of the v-groove, indicated by 'b', may vary between 0.001 and 0.100 mm, and the depth of the v-groove, indicated by 'd', may vary between 0.001 and 0.5 mm. FIG. 1C shows a perspective view of the embodiment described in FIG. 1A. As shown in FIG. 1C, the embodiment is composed of several rows of linear v-shaped grooves arranged along the bottom surface of the substrate 105. Referring to FIGS. 1A and 1C, the photovoltaic cell 100 is disposed laterally with respect to the light guide 104. The photovoltaic cell is configured to receive light guided through the light guide 104 by the prismatic features 108. The photovoltaic cell 100 may include a single-layer or multilayer p-n junction and may be formed of silicon, amorphous silicon, or other semiconductor materials such as cadmium telluride. In some embodiments, a photovoltaic cell 100 based on a photoelectrochemical cell, polymer, or nanotechnology may be used. The photovoltaic cell 100 may further include a thin multi-spectral layer. The multi-spectral layer may further include nanocrystals dispersed in a polymer. Several multi-spectral layers can be stacked to increase the efficiency of the photovoltaic cell 100. FIGS. 1A and 1B show an embodiment in which the photovoltaic cell 100 is disposed along one edge of the light guide 104 (for example, to the left of the light guide 104). However, another photovoltaic cell may be placed at the other edge of the light guide 104 (for example, to the right of the light guide 104). Other configurations for positioning the photovoltaic cell relative to the light guide 104 are also possible. Ambient light incident on the upper surface of the light guide 104 is transmitted through the light guide 104 as indicated by the light path 112. Upon striking a facet of a prism feature 108, the light is redirected and then guided within the light guide 104 by multiple total internal reflections from the upper and bottom surfaces.
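Returning to the v-groove dimensions given above: for a symmetric groove, the included angle α, opening width 'b', and depth 'd' are not independent. The following sketch (illustrative only, assuming a symmetric groove with sample values inside the disclosed ranges) makes the geometric link explicit.

```python
import math

def vgroove_depth_mm(width_mm: float, included_angle_deg: float) -> float:
    # For a symmetric v-groove, each facet is inclined alpha/2 from the
    # groove's bisector, so tan(alpha/2) = (width / 2) / depth.
    return (width_mm / 2.0) / math.tan(math.radians(included_angle_deg / 2.0))

# A 90-degree groove is half as deep as it is wide; narrower included
# angles give deeper grooves for the same opening width.
print(round(vgroove_depth_mm(0.1, 90.0), 3))  # 0.05
print(round(vgroove_depth_mm(0.1, 30.0), 3))  # 0.187
```

This is one reason the narrow-facet designs discussed later trade off against manufacturability: at the same opening width they require proportionally deeper grooves.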
After striking the edge of the light guide 104, the light rays exit the light guide 104 and are optically coupled to the photovoltaic cell 100. A lens or light pipe may be used to optically couple light from the light guide 104 to the photovoltaic cell 100. In one embodiment, for example, the end of the light guide 104 closer to the photovoltaic cell 100 may be free of prism features 108. The portion of the light guide 104 that does not have any prismatic features may serve as a light pipe. The amount of light that can be collected and guided through the light guide will depend on the geometry, type, and density of the prismatic features. The amount of light collected will also depend on the refractive index of the light guide material, which determines the numerical aperture. Light is guided through the light guide 104 by TIR. Guided light can suffer losses due to absorption in the light guide and scattering from other facets. To reduce this loss of guided light, the length of the light guide 104 may need to be limited to tens of inches to reduce the number of reflections. However, limiting the length of the light guide 104 can reduce the area over which light is collected. Therefore, in some embodiments, the length of the light guide 104 may be increased to more than several tens of inches. In some other embodiments, an optical coating may be deposited on the surface of the light guide 104 to reduce Fresnel losses. When a light ray hits a portion of the light guide without a prismatic feature 108, it can transmit through the light guide without being redirected into the light guide. To reduce the amount of light that escapes the light guide in this manner, it may be advantageous to stack several light guide layers including prismatic features, where the prismatic features are offset relative to each other, as illustrated in FIG. 2. FIG.
2 illustrates an exemplary embodiment including a first light guide layer 204 having prismatic features 208 and a second light guide layer 212 having prismatic features 216. The photovoltaic cell 200 is disposed laterally with respect to the two light guiding layers 204 and 212. The prism features 208 and 216 are offset relative to each other. The light ray 220 is turned and guided through the light guide 204, as described above. The light ray 224 passing through the light guide 204 at the point A is turned and guided through the light guide 212. Offsetting the prism features 208 and 216 in this manner reduces the space between the features and increases the effective density of the prism features. Offsetting the features may increase the amount of light optically coupled to the photovoltaic cell, thereby increasing the electrical output of the photovoltaic cell. Since the light guide layers 204 and 212 may be thin, multiple light guide layers may be stacked to increase the amount of light coupled to the photovoltaic cell. The number of layers that can be stacked together depends on the size and/or thickness of each layer and the Fresnel loss at the interface of each layer. In some embodiments, at least ten light guiding layers may be stacked together. The advantage of using a prismatic light guide plate, sheet, or film to collect, focus, and direct light toward the photovoltaic cell is that a smaller number of photovoltaic cells may be required to achieve the desired electrical output. Therefore, this technology may reduce the cost of generating energy through photovoltaic cells. FIG. 3 shows the distribution of light rays incident on a light guide that are coupled into the light guide by the prism features. The distribution of the incident light includes two lobes 312 and 316. The incident lobes 312 are close to the normal to the surface of the light guide.
The incident lobes 312 can extend from near the normal to the light guide 104 to approximately 45 degrees from the normal to the light guide 104. The incident lobes 316 are oriented substantially parallel to the surface of the light guide. The angular spread of the incident lobes 316 may range from approximately 45 degrees with respect to the surface of the light guide 104 to approximately grazing angles with respect to the surface of the light guide 104. The physical characteristics of the prism features can be varied to change the size, shape, and angle of the incident lobes. For example, FIG. 4 illustrates an embodiment including a light guide 404. A prism feature 408 is disposed on the bottom surface of the light guide 404. The light incident on the upper surface of the light guide 404 is diverted into the light guide 404 by the prism feature 408 and guided through the light guide 404 by TIR. The angle α between the facets of the prism feature 408 is large (for example, greater than 90 degrees), resulting in a wide prism feature. The wide prism feature can divert light incident at generally grazing incidence angles (e.g., about 5 to 45 degrees from the surface of the light guide 404). In contrast, in the embodiment shown in FIG. 5, the angle α between the facets of the prism feature 508 is small (e.g., less than 90 degrees), resulting in narrow angled facets. The narrow prism feature can divert light incident at angles close to the normal to the surface (e.g., about 5 to 45 degrees from the normal to the surface of the light guide 504). FIG. 6 shows an embodiment composed of two light guides 604 and 608 arranged laterally with respect to the edge of the photovoltaic cell 600. The light guide 604 is composed of relatively narrow angled prism features 612 and the light guide 608 is composed of relatively wide angled facets 616.
Light rays 620 incident close to the normal (e.g., 5 to 45 degrees from the normals of the surfaces of the two light guides 604 and 608) are efficiently collected and guided by the light guide 604 with its relatively narrow angled facets, while light rays 624 incident at grazing angles (e.g., about 5 to 45 degrees from the surfaces of the two light guides 604 and 608) are efficiently collected and guided by the light guide 608 with its relatively wide angled facets. One advantage of this design is that light can be efficiently collected over a variety of angles without mechanically rotating the film. Therefore, the dependence of the performance of the photovoltaic cell on the time of day and the month of the year can be significantly reduced. For example, light from the sun may be incident on the light guide at grazing angles in the morning and evening, and near normal at about noon. The above embodiments including multiple light guiding layers with relatively narrow and wide angled facets will be able to collect light with approximately equal efficiency in the morning, at noon, and in the evening. FIG. 7 illustrates an alternative embodiment including both narrow and wide angled facets on the same light guide. FIG. 8A illustrates an embodiment using a concentric arrangement. In this embodiment, the elongated prismatic features or v-grooves do not have a linear extent. The embodiment illustrated is composed of a light guide plate, sheet, or film 800 formed of an optically transmissive material. The grooves are arranged along concentric circles 804 on the surface of the light guide plate 800. In some embodiments, the grooves may be disposed along an elliptical path. Such grooves may be v-shaped grooves, as indicated by section 812. Concentric v-grooves can be made using a process similar to that for linear v-grooves. This light guide will receive light at various azimuth angles Φ relative to the plane of the light guide 800.
The v-shaped grooves divert the light, which then travels to the center of the concentric pattern (as indicated by the light ray 808) and is incident on a photocell 816 placed at the center of the concentric pattern. The embodiment described in FIG. 8A may advantageously collect diffuse ambient light, such as in cloudy conditions. In an alternative embodiment, as indicated in FIG. 8B, the photovoltaic cell 820 may be positioned at one corner of the light guide plate, sheet, or film 824. The light guide plate, sheet, or film may have a rectangular, square, or some other geometric shape. A groove may be formed on the light guide plate, sheet, or film along a curve 828. The center of the curve 828 is not at the center of the light guide plate, sheet, or film 824; it is closer to the corner with the photovoltaic cell 820 than to the other corners. The groove is concave and faces the photovoltaic cell 820. A light guide plate, sheet, or film 824 including a curved groove 828 may collect light, turn it toward the concave side, and direct the light to the photovoltaic cell 820. This design including curved prism features or grooves can be more effective at light collection than a design including a photovoltaic cell disposed along one edge of a linear prism film. As mentioned above, in some embodiments, the length of the light guide may be limited to tens of inches to reduce losses due to reflections. However, limiting the length of the light guide can reduce the area over which light is collected. In some applications, it may be advantageous to collect light over a large area. In these cases, the microstructure matrix pattern shown in FIG. 9 may be beneficial. The embodiment shown in FIG. 9 illustrates a plurality of elements 900 arranged in a matrix pattern. The matrix pattern may be composed of a plurality of rows and columns. The number of rows can be equal to the number of columns.
The number of elements in any two rows can be different. Similarly, the number of elements in any two columns may be different. In some embodiments, the matrix pattern may be irregular. Each element in the matrix includes a light guide plate, sheet, or film having a plurality of v-groove patterns 904 formed thereon. Groove patterns other than v-grooves can also be used. The elements in the matrix may contain the same or different microstructure patterns. For example, the microstructure patterns in different elements may differ in size, shape, and type. Therefore, different elements in the matrix can collect light incident at different angles. Photocells 908 may be distributed within and along the perimeter of the matrix. The approach disclosed above can advantageously produce large panels composed of light collectors coupled to photovoltaic cells, which can be fixed, for example, to the roofs of residential and commercial buildings. In the embodiment shown in FIG. 1A, the photovoltaic cell 100 abuts against the edge of the light guide plate, sheet, or film 105. The light guide plate, sheet, or film may also be advantageously beveled at its edges to redirect light out of the light guide plate, sheet, or film toward the photovoltaic cell, as shown in FIG. 10. FIG. 10 illustrates an embodiment with a beveled light guide plate, sheet, or film 1004 that includes prismatic features 1008. A side view of the embodiment shown in FIG. 10 indicates a light guide having an upper surface S1 and a lower surface S2. The upper and lower surfaces S1 and S2 are bounded on the left by the edge surface E1 and on the right by the edge surface E2. The edge surfaces E1 and E2 are inclined with respect to the upper and lower surfaces S1 and S2; the inclination angles of the edge surfaces E1 and E2 with respect to the upper and lower surfaces S1 and S2 are not equal to 90 degrees.
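The effect of the beveled edges can be sketched with simple 2D reflection geometry. This is an illustrative model, not from the disclosure: a mirror tilted by some angle rotates a reflected ray by twice that tilt, so a bevel inclined from the surface plane turns a guided ray toward the normal of the bottom surface, where a cell placed behind the guide can receive it.

```python
def exit_angle_from_normal(guided_angle_deg: float, bevel_deg: float) -> float:
    # 2D sketch: reflection off a mirror rotates the ray by twice the mirror
    # tilt, so a bevel inclined `bevel_deg` from the surface plane turns a
    # ray guided at `guided_angle_deg` from the surface normal to
    # |guided_angle_deg - 2 * bevel_deg| from the normal.
    return abs(guided_angle_deg - 2.0 * bevel_deg)

# A 45-degree bevel turns a grazing guided ray (90 degrees from the normal)
# to exit along the normal, toward a cell placed behind the guide.
print(exit_angle_from_normal(90.0, 45.0))  # 0.0
print(exit_angle_from_normal(60.0, 45.0))  # 30.0
```

In this simplified picture, choosing the bevel angle near half the typical guided angle centers the exit beam on the cell, which is consistent with the alignment simplification described for FIG. 10.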
The light ray 1012 is guided along the beveled light guide by total internal reflection and is incident on a photocell 1000 disposed behind the light guide plate or film 1004. Beveling the edges of the light guide plate, sheet, or film 1004 can simplify the alignment between the photovoltaic cell 1000 and the light guide plate, sheet, or film 1004. The light ray 1012 incident on the upper surface of the light guide plate, sheet, or film 1004 is turned into the light guide 1004 by the prism feature 1008 and guided within the light guide 1004 by total internal reflection from the upper and lower surfaces S1 and S2. Upon striking the inclined edge E1, the guided light ray 1012 is directed out of the light guide, near the normal of the lower surface S2, toward the photovoltaic cell 1000 disposed behind the light guide 1004. Multiple beveled light guides including prismatic features may be arranged in a matrix pattern similar to the embodiment described in FIG. 9. The photovoltaic cell in this embodiment may be disposed under the matrix pattern. Ambient light incident on the upper surface of the matrix pattern is guided through the beveled edges of the light guides toward a photocell disposed behind the matrix pattern. In some embodiments, light may be advantageously collected through the edges of a light guide plate or film, or a stack of light guide plates or films, including prismatic features, as shown in FIG. 11. FIG. 11 illustrates an embodiment including a light guide plate, sheet, or film 1100. The light guide includes four surfaces S1, S2, S3, and S4. The light is collected by the collection lens 1104 and is incident on one surface S1 of the light guide 1100. The prismatic features 1103 are disposed on the adjacent surface S2 of the light guide 1100. The light entering the light guide plate, sheet, or film 1100 is redirected by the prism features 1103 and guided through the light guide plate, sheet, or film 1100 by total internal reflection.
The light ray indicated by 1112 is guided within the light guide 1100 by total internal reflection from the two surfaces S2 and S3 adjacent to the input surface S1 until it strikes one of the facets of the prismatic features 1103. Upon striking the facet, the light ray 1112 is directed out of the light guide 1100 toward the photocell 1108 disposed away from the surface including the prismatic features 1103, as indicated in FIG. 11. However, light rays such as the ray indicated by 1116, which do not hit a prism feature and therefore are not directed out of the light guide plate, sheet, or film 1100, are coupled back into the light guide plate, sheet, or film 1100 by a reflector 1120 located at the end away from the collection lens 1104. FIG. 12A indicates a top view of a thin-film solar light collector 1200. The thin-film solar collector 1200 is formed of an optically transmissive material and includes two surfaces. The thin-film solar collector has tapered cavities 1204, instead of elongated grooves, formed on the surface of the thin-film solar collector remote from the surface through which light is incident. FIG. 12B indicates a side view of a thin-film solar collector with tapered cavities. Referring again to FIG. 12A, the tapered cavities 1204 are distributed throughout the light guide film in a random or ordered manner. The thin-film solar collector 1200 further includes photovoltaic cells 1208 placed along two edges of the thin-film solar collector 1200. In the embodiment shown in FIG. 12A, reflectors 1212 are placed along the remaining edges of the thin-film solar collector 1200 to increase light trapping efficiency. However, in alternative embodiments, the reflectors 1212 may be replaced by photovoltaic cells 1208. The tapered cavity indicated in FIG. 12B has a circular cross section. However, it is also possible to form a conical cavity with an oval cross section.
The light incident on the surface of the thin-film solar collector 1200 is totally internally reflected by the conical cavities 1204 and guided toward the photovoltaic cells 1208. The tapered cavity is a three-dimensional structure and therefore can receive light from multiple directions and reflect light in multiple directions. The embodiment described in FIGS. 12A and 12B can collect light over a full solid angle and thus has a large light collection capability. In some embodiments, two light guiding layers with prismatic features can be stacked to collect ambient and reflected light, as shown in FIGS. 13A-13C. The embodiment illustrated in FIG. 13A includes a top light guide layer 1305 and a bottom light guide layer 1307. (The terms top and bottom are relative to the drawing only, as the structure can be reoriented.) The light guide layers 1305 and 1307 each include a top surface S1 and a bottom surface S2. The top light guide layer 1305 further includes prism features disposed on its bottom surface S2. The bottom light guide layer 1307 includes prism features disposed on its top surface S1. In some embodiments, the prism features located on the two light guiding layers may be offset relative to each other. In some embodiments, for example, where the two light guiding layers 1305 and 1307 are diffuse, the prism features may not be offset relative to each other. In some embodiments, the offset distance between the prism features in the top light guide layer 1305 and the bottom light guide layer 1307 is configured to reduce or avoid visual artifacts. The two light guide layers 1305 and 1307 may be bonded together by an adhesive. In some embodiments, the two light guiding layers 1305 and 1307 may be laminated together. In some embodiments, the two light guiding layers may include a gap therebetween. The two light guide layers 1305 and 1307 may be disposed on a substrate 1301.
The substrate 1301 may be a transparent substrate, a partially reflective surface, a display device, a display device including an interferometric modulator (IMOD), or another suitable material. In some embodiments, the substrate 1301 may include smart glass or switchable glass. Smart glass or switchable glass is a glass or glazing that changes its transparency in response to an applied voltage. The smart glass or switchable glass may include an electrochromic device, a suspended particle device, or a polymer dispersed liquid crystal device. In an electrochromic device, the smart glass is formed of an electrochromic material. In other embodiments, the electrochromic material layer may be disposed on an outer surface or an inner surface of a transparent medium. The electrochromic material can change its transparency between opaque, translucent, and transparent in response to a voltage or current. Once the change has been effected, the electrochromic material will maintain its state even after the voltage or current is removed. In embodiments that include smart glass formed with suspended particle devices, a thin layer of particles in the form of a laminate, film, or sheet may be placed between two layers of transparent material, such as glass or plastic. When no voltage is applied, the particles are arranged in a random manner and can absorb or block the passage of light. However, in response to an applied voltage, the particles align and allow light to pass through. In a polymer dispersed liquid crystal device, a liquid crystal material layer may be disposed between two transparent layers of glass or plastic. Similar to a suspended particle device, when no voltage is applied, the liquid crystal is oriented in a random manner and thus blocks light. In response to a voltage, the liquid crystal orients in one direction and allows light to pass through.
The two light guide layers 1305 and 1307 may be attached to the substrate 1301 by an adhesive. In some embodiments, the two light guide layers 1305 and 1307 may be laminated to the substrate 1301. In some embodiments, the substrate 1301 may be diffusive; for example, the substrate 1301 may have a diffusely reflective surface. The photovoltaic cell 1303 is disposed to one side of the two light guide layers 1305 and 1307 (for example, to the left or to the right). In FIG. 13A, the photovoltaic cell is disposed to the left of the light guide layers 1305 and 1307. An incident light beam 1309 that strikes a facet on the bottom surface S2 of the top light guide layer 1305 is reflected by the facet and guided through the top light guide layer 1305 toward the photovoltaic cell 1303. The top light guide layer 1305 can therefore capture a portion of the incident light. Some incident light may not strike a facet of the top light guide layer 1305. Some of this light can pass through the bottom light guide layer 1307 and the substrate 1301, as indicated by ray 1311. Other incident light (for example, ray 1313) that does not strike a facet of the top light guide layer 1305 may reflect from the interface of the bottom light guide layer 1307 and the substrate 1301 back toward the top light guide layer 1305, as indicated by ray 1315. Some of this reflected light may strike a facet on the top surface of the bottom light guide layer 1307 (as indicated by ray 1317) and be guided through the bottom light guide layer toward the photovoltaic cell 1303. (As described above, the substrate 1301 may be diffusive, for example with a diffusely reflective surface. In some embodiments, a diffusive layer can be placed on the substrate. Other designs can also be used.)
The bottom light guide layer 1307 can therefore capture light that was not collected by the top light guide layer 1305 and that reflects from the substrate 1301. Other configurations are also possible. In some embodiments, the light collection capability of a light guide layer may vary linearly with the density of its features. Therefore, to increase the amount of light captured by the two light guide layers 1305 and 1307 and to reduce the amount of light exiting through the substrate or the top light guide layer, the density of the prism features may be increased. In some embodiments, the surface area of the prism features may be about 5%-10% of the total surface area of the light guide layer. In some embodiments, the feature density may be greater than 10% of the total surface area of the film. Other configurations are also possible. In some embodiments, photovoltaic cells may be placed on both sides of the light guide layers 1305 and 1307, as shown in FIG. 13B. In FIG. 13B, the incident light 1309 is collected by the top light guide layer 1305 and guided toward the photovoltaic cell 1303b disposed to the right of the light guide layers. In addition, the bottom light guide layer 1307 collects light reflected from the substrate 1301 and guides the collected light toward the photovoltaic cell 1303a disposed to the left of the light guide layers. As described above, the substrate 1301 may be diffusive, such as having a diffusely reflective surface; in some embodiments, a diffusing layer may be disposed on the substrate. Other configurations are also possible. In some embodiments, the top light guide layer 1305 may be omitted, as shown in FIG. 13C. In FIG. 13C, two photovoltaic cells 1303a and 1303b are disposed on the left and right sides, respectively, of the light guide layer 1307.
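The linear dependence on feature density noted above can be illustrated with a first-order model (an illustrative sketch, not from the patent; the facet density, the substrate reflectance, and the single-pass capture assumption are all assumptions):

```python
def captured_fraction(density, substrate_reflectance):
    """First-order estimate for the stacked-guide geometry of FIG. 13A:
    the top layer captures a fraction ~density of incident light; the
    remainder passes down, a fraction substrate_reflectance returns from
    the diffuse substrate, and the bottom layer captures ~density of that."""
    top = density
    bottom = (1.0 - density) * substrate_reflectance * density
    return top + bottom

# Facet area of ~8% of the guide surface (within the 5-10% range cited
# above) and an assumed diffuse substrate reflectance of 0.9.
print(f"captured ~= {captured_fraction(0.08, 0.9):.3f} of incident light")
```

The model makes the trade-off explicit: raising the facet density raises the captured fraction roughly linearly, at the cost of transparency through the stack.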
The incident light beam 1309 that strikes a facet of a prism feature disposed on the top surface S1 of the light guide layer 1307 is reflected by the prism feature and guided through the light guide layer 1307 toward the photovoltaic cell 1303b. The light beam 1313 that does not strike a facet enters the light guide layer 1307 and is reflected by the substrate 1301. As described above, the substrate 1301 may be diffusive, such as having a diffusely reflective surface; in some embodiments, a diffusing layer may be disposed on the substrate. The reflected ray 1317 strikes a facet of a prism feature disposed on the top surface S1 of the light guide layer 1307 and is guided through the light guide layer toward the photovoltaic cell 1303a. In this way, a single light guide layer can be used to collect both incident and reflected light. A variety of other configurations are also possible. A method of collecting, focusing, and directing light to a photovoltaic cell using a light collection plate, sheet, or film including prismatic features can be used to achieve a solar cell that has increased efficiency and can be inexpensive, thin, and lightweight. Solar cells including a light collection plate, sheet, or film coupled to a photovoltaic cell may be arranged to form a solar cell panel. These solar cell panels can be used in a wide variety of applications. For example, a solar cell panel 1404 including a plurality of light collection light guides optically coupled to photovoltaic cells may be installed on the roof of a residential or commercial building, or placed on doors and windows, as illustrated in FIG. 14, to provide supplementary power. The light collection plate, sheet, or film may be formed of a transparent or translucent plate, sheet, or film. For aesthetic purposes, the prismatic light collection plate, sheet, or film may be colored (e.g., red or brown). The light collection plate, sheet, or film may be rigid or flexible.
In some embodiments, the light collection plate, sheet, or film may be flexible enough to be rolled up. A solar cell panel composed of such sheets 1408 may be attached to a window glass, as shown in FIG. 14. The light collection sheet may be transparent so that one can see through the window, or alternatively it may be colored to block light. In other embodiments, the prism sheet may have a wavelength filtering property to filter out ultraviolet radiation. In other applications, a light collection plate, sheet, or film can be installed on cars and laptop computers to provide power, as shown in FIGS. 15 and 16, respectively. In FIG. 15, a light collection plate, sheet, or film 1504 is mounted on the roof of a car. Photovoltaic cells 1508 may be disposed along the edge of the light collector 1504. The electricity generated by the photovoltaic cells can be used, for example, to recharge the battery of a vehicle powered by gasoline, electricity, or both, or to operate electrical components. In FIG. 16, a light collection plate, sheet, or film 1604 may be attached to the body (e.g., the external housing) of a laptop computer. This facilitates powering the laptop in the absence of an electrical connection. Alternatively, a light guide collector optically coupled to a photovoltaic cell can be used to recharge the laptop computer's battery. In alternative embodiments, a light collection plate, sheet, or film optically coupled to a photovoltaic cell may be attached to a garment or shoe. For example, FIG. 17 illustrates a jacket or vest including a light collection plate, sheet, or film 1704 optically coupled to photovoltaic cells 1708 positioned around the lower periphery of the jacket or vest. In alternative embodiments, the photovoltaic cells 1708 can be placed anywhere on the jacket or vest. The light collection plate, sheet, or film 1704 may collect, focus, and direct ambient light to the photovoltaic cells 1708.
The power generated by the photovoltaic cells 1708 can be used to power handheld devices (such as PDAs, mp3 players, cellular phones, etc.). Alternatively, the power generated by the photovoltaic cells 1708 can be used to illuminate vests and jackets worn by airline ground crews, police, firefighters, and emergency workers in the dark to increase their visibility. In another embodiment, illustrated in FIG. 18, a light collection plate, sheet, or film 1804 may be placed on a shoe. Photovoltaic cells 1808 may be disposed along an edge of the light collection plate, sheet, or film 1804. Solar cell panels consisting of prismatic light collection plates, sheets, or films coupled to photovoltaic cells can also be mounted on aircraft, trucks, trains, bicycles, sailboats, and satellites. For example, as shown in FIG. 19, a light collection plate, sheet, or film 1904 may be attached to an aircraft wing or an aircraft window glass. Photovoltaic cells 1908 may be placed along the edge of the light collection plate, sheet, or film, as illustrated in FIG. 19. The generated electricity can be used to power components of the aircraft. FIG. 20 illustrates the use of a light collector coupled to a photovoltaic cell to power navigation instruments or other devices in a boat (e.g., refrigerators, televisions, and other electrical equipment). The light collection plate, sheet, or film 2004 is attached to the sail of a sailboat. The photovoltaic cell 2008 is disposed at an edge of the light collection plate, sheet, or film 2004. In an alternative embodiment, the light collection plate, sheet, or film 2004 may be attached to the body of the sailboat, such as a hatch or deck. A light collection plate, sheet, or film 2104 may be mounted on a bicycle, as indicated in FIG. 21. FIG. 22 illustrates another application of a light collection plate, sheet, or film optically coupled to a photovoltaic cell: powering communications, meteorology, and other types of satellites. FIG.
23 illustrates a light collection sheet 2304 that is flexible enough to be rolled up. The light collection sheet is optically coupled to a photovoltaic cell. The embodiment illustrated in FIG. 23 can be rolled up and carried while camping or backpacking to generate electricity outdoors and in remote locations where electrical connections are scarce. In addition, light collection plates, sheets, or films optically coupled to photovoltaic cells can be attached to a wide variety of other structures and products to provide power. A light collection plate, sheet, or film optically coupled to a photovoltaic cell may have the added advantage of being modular. For example, depending on the design, a photovoltaic cell may be configured to be selectively attached to and detached from the light collection plate, sheet, or film. Existing photovoltaic cells can therefore be periodically replaced by newer, more efficient photovoltaic cells without replacing the entire system. This ability to replace the photovoltaic cells can significantly reduce maintenance and upgrade costs. A variety of other variations are also possible. Films, layers, components, and/or elements may be added, removed, or rearranged. In addition, process steps may be added, removed, or reordered. Furthermore, although the terms film and layer have been used herein, these terms as used herein include film stacks and multilayers. Such film stacks and multilayers may be adhered to other structures using an adhesive, or may be deposited or otherwise formed on other structures. The above examples are merely illustrative, and those skilled in the art may make numerous uses of, and departures from, the above examples without departing from the inventive concepts disclosed herein.
Various modifications to these examples may be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other examples without departing from the spirit and scope of the novel aspects described herein. Therefore, the scope of the present invention is not intended to be limited to the examples shown herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein. The word "exemplary" is used exclusively herein to mean "serving as an example, instance, or illustration." Any example described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other examples.
A pinned photodiode is disclosed having a surface layer of a first conductivity type laterally displaced from an electrically active area of a gate structure, and a charge collection region of a second conductivity type formed by an angled implant. The angle of the charge collection region implant may be tailored so that the charge collection region contacts an adjacent edge of the transfer gate of the pixel sensor cell, thereby minimizing the gate overlap region and an undesirable barrier potential.
1. A photodiode for an imaging device, the photodiode comprising: a first layer of a first conductivity type formed in a substrate and laterally displaced from the electrically active portion of the gate of a charge transfer transistor by a distance of from about 0 angstroms to about 5000 angstroms; and a charge collection region of a second conductivity type formed under the first layer for accumulating photo-generated charge, the charge collection region being adjacent to the transistor gate, the gate transferring the charge accumulated in the charge collection region to a doped region of the second conductivity type.
2. The photodiode of claim 1, wherein said first layer is laterally displaced from said electrically active portion by between about 300 angstroms and about 3000 angstroms.
3. The photodiode of claim 1, wherein said first layer is a surface layer.
4. The photodiode of claim 1, wherein said first layer is in contact with an isolation region formed within said substrate.
5. The photodiode of claim 1, wherein said first conductivity type is p-type and said second conductivity type is n-type.
6. The photodiode of claim 5, wherein said first layer is doped with a p-type dopant at an implant dose of from about 1×10^12 to about 1×10^14 atoms/cm^2.
7. The photodiode of claim 5, wherein said charge collection region is doped with an n-type dopant at an implant dose of from about 1×10^11 to about 1×10^14 atoms/cm^2.
8. The photodiode of claim 1, wherein said photodiode is a pnp photodiode.
9. The photodiode of claim 1, wherein said imaging device is one of a 3T, 4T, 5T, or 6T imaging device.
10. The photodiode of claim 1, wherein said imaging device is a CCD imaging device.
11. An image pixel comprising: a gate structure of a transistor formed on a semiconductor substrate; and a photodiode adjacent to the gate, the photodiode comprising a pinning layer of a first conductivity type and a doped region of a second conductivity type under the pinning layer, the pinning layer being laterally displaced from the electrically active portion of the gate by a distance of from about 0 angstroms to about 5000 angstroms, and the doped region being separated from the electrically active portion of the gate by a gate sidewall.
12. The image pixel of claim 11, wherein said pinning layer is laterally displaced from said gate by a distance of between about 300 angstroms and about 3000 angstroms.
13. The image pixel of claim 11, wherein said pinning layer is adjacent to and in contact with an isolation region formed within said semiconductor substrate.
14. The image pixel of claim 11, wherein said first conductivity type is p-type and said second conductivity type is n-type.
15. The image pixel of claim 11, wherein the pinning layer is doped with a dopant selected from the group consisting of boron, bismuth, indium, and magnesium.
16. The image pixel of claim 11, wherein said pinning layer is doped with boron at an implant dose of from about 1×10^12 to about 1×10^14 atoms/cm^2.
17. The image pixel of claim 11, wherein said photodiode is a pnp photodiode.
18. The image pixel of claim 11, wherein said photodiode is part of a CMOS imager.
19. The image pixel of claim 11, wherein said photodiode is part of a CCD imager.
20. A photodiode for an image sensor, comprising: a surface layer of a first conductivity type adjacent to a gate of a transfer transistor, the gate being formed on a silicon substrate; and a doped region of a second conductivity type located under the surface layer, at least a portion of the doped region being located between the gate and the surface layer, the surface layer being laterally displaced from the gate by a distance of from about 0 angstroms to about 5000 angstroms.
21. The photodiode of claim 20, wherein said surface layer is laterally displaced from said gate by a distance of from about 300 angstroms to about 3000 angstroms.
22. The photodiode of claim 20, wherein said surface layer is adjacent to and in contact with an isolation region formed within said silicon substrate.
23. The photodiode of claim 20, wherein said surface layer and said doped region are both within a doped layer of said first conductivity type.
24. The photodiode of claim 20, wherein said surface layer is doped with phosphorus at an implant dose of from about 1×10^12 to about 1×10^14 atoms/cm^2.
25. The photodiode of claim 20, wherein said image sensor is a CMOS imager.
26. The photodiode of claim 20, wherein said image sensor is a CCD imager.
27. A CMOS imager system comprising: (i) a processor; and (ii) a CMOS imaging device coupled to the processor, the CMOS imaging device comprising: a field isolation region formed in a substrate; and a pixel adjacent to the field isolation region, the pixel including a pnp photodiode adjacent to a transfer transistor gate, the pnp photodiode further comprising a p-type surface layer and an n-type doped region under the p-type surface layer, the p-type surface layer being laterally displaced from the electrically active portion of the gate by a distance of from about 0 angstroms to about 5000 angstroms.
28. The system of claim 27, wherein said p-type surface layer is laterally displaced from said electrically active portion of said gate by between about 300 angstroms and about 3000 angstroms.
29. The system of claim 27, wherein said p-type surface layer is adjacent to and in contact with said field oxide region.
30. The system of claim 27, wherein said p-type surface layer and said n-type doped region are both located within a p-type doped region.
31. The system of claim 27, wherein said p-type surface layer is doped with boron at an implant dose of from about 1×10^12 to about 1×10^14 atoms/cm^2.
32. A method of forming a photodiode of a pixel sensor cell, the method comprising: forming a gate of a transistor on a substrate; forming a first doped layer of a first conductivity type in the substrate, the first doped layer being laterally displaced from the electrically active portion of the gate by a predetermined distance; and implanting ions of a second conductivity type into a first region of the substrate, under the first doped layer, in a first direction and at a non-zero angle of incidence relative to the substrate, to form a doped region of the second conductivity type in the substrate under the first doped layer.
33. The method of claim 32, wherein the first doped layer is formed by implanting ions of the first conductivity type at a non-zero angle of incidence relative to the substrate.
34. The method of claim 32, wherein the first doped layer is formed by implanting ions of the first conductivity type at an angle of incidence of about zero degrees relative to the substrate.
35. The method of claim 32, wherein said first direction is a direction from right to left, relative to said gate, and into said substrate.
36. The method of claim 32, wherein said first doped layer has an implant dose in a range from about 1×10^12 to about 1×10^14 atoms/cm^2.
37. The method of claim 32, wherein said first doped layer is formed to be laterally displaced from said electrically active portion of said gate by between about 0 angstroms and about 5000 angstroms.
38. The method of claim 37, wherein said first doped layer is formed to be laterally displaced from said electrically active portion of said gate by between about 300 angstroms and about 3000 angstroms.
39. The method of claim 32, wherein forming said first doped layer further comprises forming a photoresist layer on said substrate and said gate, and patterning and etching said photoresist layer to expose a second region of the substrate, the second region being between the gate and at least one isolation region and spaced apart from the gate by said predetermined distance.
40. The method of claim 32, wherein forming said doped region of said second conductivity type further comprises forming a photoresist layer on said substrate and said gate, and patterning and etching the photoresist layer to expose the first region of the substrate between a sidewall of the gate and at least one isolation region.
41. The method of claim 32, wherein implanting ions of the second conductivity type further comprises directing the dopant into the first region of the substrate, between the gate and the at least one isolation region, at said non-zero angle of incidence.
42. The method of claim 32, wherein said doped region has an implant dose in a range from about 1×10^11 to about 1×10^14 atoms/cm^2.
43. The method of claim 32, wherein said first conductivity type is p-type and said second conductivity type is n-type.
44. The method of claim 32, wherein said photodiode is a pnp photodiode.
45. The method of claim 32, wherein said photodiode is part of a CMOS imager.
46. The method of claim 32, wherein said photodiode is part of a CCD imager.
47. A method of forming a photodiode, the method comprising: forming at least one shallow trench isolation region in a silicon substrate; forming a transistor gate on the silicon substrate, spaced apart from the at least one shallow trench isolation region; forming a first doped layer of a first conductivity type in the silicon substrate; forming a second doped layer of the first conductivity type in the first doped layer by implanting ions in a first direction and at a non-zero angle of incidence relative to the silicon substrate, the second doped layer contacting the isolation region and being laterally displaced a predetermined distance from an electrically active region of the transistor gate; and forming a doped region of a second conductivity type in the first doped layer by implanting ions in a second direction and at a non-zero angle of incidence relative to the silicon substrate.
48. The method of claim 47, wherein said second doped layer has an implant dose in a range from about 1×10^12 to about 1×10^14 atoms/cm^2.
49. The method of claim 47, wherein said second doped layer is laterally displaced from said electrically active region of said transistor gate by between about 0 angstroms and about 5000 angstroms.
50. The method of claim 49, wherein said second doped layer is laterally displaced from said electrically active region of said transistor gate by between about 300 angstroms and about 3000 angstroms.
51. The method of claim 47, wherein forming the doped region further comprises forming at least a portion of the doped region between the second doped layer and the transfer gate.
52. The method of claim 47, wherein the doped region has an implant dose ranging from about 1×10^11 to about 1×10^14 atoms/cm^2.
53. The method of claim 47, wherein said first direction is opposite said second direction.
54. The method of claim 47, wherein said photodiode is part of a CMOS imager.
55. The method of claim 47, wherein said photodiode is part of a CCD imager.
56. A method of forming a pnp photodiode, the method comprising: forming at least one field oxide region within a substrate; forming a transistor gate on the substrate, spaced apart from the at least one field oxide region; forming a first p-type doped layer in the substrate; forming a photoresist layer on the transistor gate and the field oxide region; patterning the photoresist layer to form a first opening extending between a first location, corresponding to a first point on the photodiode region, and a second location, corresponding to a second point on the field oxide region; performing a first angled implant through the first opening to form a p-type surface layer in the first p-type doped layer, the p-type surface layer being laterally displaced from an electrically active area of a gate structure formed on the substrate; and performing a second angled implant to form an n-type doped region in the first p-type doped layer, the n-type doped region being located under the p-type surface layer.
57. The method of claim 56, wherein said p-type surface layer is laterally displaced a predetermined distance from said electrically active region of said transistor gate.
58. The method of claim 56, wherein said predetermined distance is from about 0 angstroms to about 5000 angstroms.
59. The method of claim 56, wherein said p-type surface layer has an implant dose in a range from about 1×10^12 to about 1×10^14 atoms/cm^2.
60. The method of claim 56, wherein said n-type doped region has an implant dose in a range from about 1×10^11 to about 1×10^14 atoms/cm^2.
61. The method of claim 56, wherein said pnp photodiode is part of a CMOS imager.
62. The method of claim 56, wherein said pnp photodiode is part of a CCD imager.
Tilted pinned photodiode for high quantum efficiency and method of forming the same

Technical Field

This invention relates to the field of semiconductor devices and, in particular, to improved photodiodes for high quantum efficiency.

Background Art

The semiconductor industry currently uses different types of semiconductor-based imagers, such as charge coupled devices (CCDs), photodiode arrays, charge injection devices, and hybrid focal plane arrays. Because of the inherent drawbacks and expense of CCD technology, CMOS imagers have come into use as low cost imaging devices. A CMOS imager circuit includes a focal plane array of pixel cells, each of which includes a photodiode, photogate, or photoconductor positioned over a doped region of the substrate to accumulate photo-generated charge in an underlying portion of the substrate. A readout circuit is coupled to each pixel cell and includes a charge transfer section formed adjacent the photodiode, photogate, or photoconductor on the substrate, having a charge sensing node, typically a floating diffusion node, connected to the gate of a source follower output transistor. The imager may include at least one transistor for transferring charge from the charge accumulation region of the substrate to the floating diffusion node, and a transistor for resetting the diffusion node to a predetermined charge level prior to charge transfer. In a conventional CMOS imager, the active elements of a pixel cell perform the following necessary functions: (1) photon-to-charge conversion; (2) accumulation of image charge; (3) transfer of the accumulated charge, with charge amplification, to the floating diffusion node; (4) resetting the floating diffusion node to a known state before the transfer of charge to it; (5) selection of a pixel for readout; and (6) output and amplification of a signal representing the pixel charge.
The charge on the floating diffusion node is converted to a pixel output voltage by the source follower output transistor. The photosensitive element of a CMOS imager pixel is typically a depletion-mode pn junction photodiode or a field-induced depletion region beneath a photogate. Exemplary CMOS imaging circuits and various CMOS components of imaging circuits are described in U.S. Patent No. 6,204,524 to Rhodes et al., U.S. Patent No. 6,310,366 to Rhodes et al., and U.S. Patent No. 6,326,652 to Rhodes et al., the disclosures of which are incorporated herein by reference. FIG. 1 illustrates a schematic top view of a semiconductor wafer fragment of an exemplary CMOS sensor pixel four-transistor (4T) cell 10. As described below, the CMOS sensor pixel cell 10 includes a photo-generated charge accumulation region 21 in an underlying portion of the substrate. This region 21 is formed as a pinned photodiode 11, shown in FIG. 2, which is formed as part of a pnp structure in the substrate 20. The photodiode is termed "pinned" because, when the photodiode is fully depleted, the potential in the photodiode is pinned to a constant value. It should be understood, however, that the CMOS sensor pixel cell 10 may include a photogate, photoconductor, or other image-to-charge converting device, in lieu of a pinned photodiode, as the initial accumulation region 21 for photo-generated charge. The CMOS image sensor 10 of FIG. 1 has a transfer gate 30 for transferring the photoelectric charges generated in the charge accumulation region 21 to a floating diffusion region (sensing node) 25. The floating diffusion region 25 is in turn connected to the gate 50 of a source follower transistor. The source follower transistor provides an output signal to a row select access transistor having a gate 60 for selectively gating the output signal to terminal 32.
A reset transistor having a gate 40 resets the floating diffusion region 25 to a specified charge level before each charge transfer out of the charge accumulation region 21. The charge accumulation region 21 is formed as a pinned photodiode 11 having a p-type surface layer 24, an n-type region 26, and a p-type substrate 20. The pinned photodiode 11 thus includes two p-type regions 20, 24 and an n-type photodiode region 26 that is fully depleted at the pinning voltage. Impurity-doped source/drain regions 22 (FIG. 1), preferably of n-type conductivity, are provided on either side of the transistor gates 40, 50, 60. The floating diffusion region 25 adjacent the transfer gate 30 is also preferably n-type. Exemplary pinned photodiodes and various photodiode elements are described, for example, in U.S. Patent No. 6,320,617 and U.S. Patent No. 6,306,676, whose disclosures describe these elements and their functions in detail. FIG. 2 also illustrates a trench isolation region 15 formed in the active layer 20 adjacent the charge accumulation region 21. The trench isolation region 15 is typically formed using a conventional STI process or a local oxidation of silicon (LOCOS) process. A translucent or transparent insulating layer 55 formed over the CMOS image sensor 10 is also illustrated in FIG. 2. Contacts 32 (FIG. 1) are formed in the insulating layer 55 by conventional processing methods to provide electrical connections to the source/drain regions 22, the floating diffusion region 25, and other wiring to the gates and other connections in the CMOS image sensor 10. Typically, in a CMOS image sensor such as the CMOS image sensor cell 10 of FIGS. 1-2, incident light causes electrons to collect in region 26. The maximum output signal produced by the source follower transistor having gate 50 is proportional to the number of electrons extracted from region 26.
The maximum output signal increases with the electron capacity, or ability of region 26 to collect electrons. The electron capacity of a pinned photodiode generally depends on the doping level of the image sensor and the dopants implanted into the active layer. In the manufacture of CMOS image sensors, it is important to minimize the dark current in the photodiode. Dark current is generally attributed to leakage in the charge collection region 21 of the pinned photodiode 11, which depends strongly on the doping implants of the CMOS image sensor. A high dopant concentration in the electrical connection region 23 (FIG. 2) generally increases the dark current. In addition, defects and trap sites in or near the photodiode depletion region strongly influence the magnitude of the dark current generated. Dark current arises from current generated at trap sites in or near the depletion region of the photodiode; from band-to-band tunneling-induced carrier generation caused by the high electric field in the depletion region; from junction leakage at the lateral sidewalls of the photodiode; and from leakage at isolation corners, for example stress-induced and trap-assisted tunneling. A common problem associated with the pinned photodiode 11 of FIG. 2 is dark current generation as a result of gate-induced drain leakage (GIDL) in the transfer gate overlap region 27 (FIG. 2). The transfer gate overlap region 27 lies under the gate 30 and allows electrical connection between the n-type photodiode depletion region 26 and the diffusion node 25. As a result of the transfer gate overlap region 27 (FIG. 2), an undesirable barrier potential arises in this region, which impedes the complete transfer of charge from the photodiode 11 when the photodiode 11 is fully depleted. CMOS imagers also generally suffer from poor signal-to-noise ratio and poor dynamic range as a result of insufficient collection and storage of the charge collected in region 26.
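The relationship among collected electrons, output signal, and noise described above can be made concrete with back-of-the-envelope numbers (illustrative only; the floating diffusion capacitance and the electron count are assumptions, not values from the patent):

```python
import math

Q_E = 1.602e-19  # electron charge, coulombs

def conversion_gain_uV(c_fd_farads):
    """Conversion gain of the floating diffusion node, in microvolts per
    electron: each transferred electron changes the node voltage by q/C."""
    return Q_E / c_fd_farads * 1e6

def shot_noise_snr_db(n_electrons):
    """Photon-shot-noise-limited SNR: signal of N electrons against a
    noise floor of sqrt(N) electrons."""
    return 20 * math.log10(math.sqrt(n_electrons))

cg = conversion_gain_uV(2e-15)   # assumed 2 fF floating diffusion
snr = shot_noise_snr_db(10_000)  # assumed 10,000 collected electrons
print(f"conversion gain ~= {cg:.0f} uV/e-, shot-noise SNR ~= {snr:.0f} dB")
```

These numbers illustrate why insufficient charge collection in region 26 degrades both signal-to-noise ratio and dynamic range: the SNR grows only with the square root of the collected charge.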
Because the pixel electrical signal, produced from the photon-generated electrons collected in region 26, is very small, the signal to noise ratio and dynamic range of the pixel should be as high as possible.

Accordingly, there is a need for an improved active pixel photosensor for use in a CMOS imager that exhibits reduced dark current and a reduced undesirable barrier potential in the overlap region below the gate structure adjacent to the photodiode. There is also a need for a method of fabricating an active pixel photosensor that exhibits these improvements.

SUMMARY OF THE INVENTION

In one aspect, the present invention provides a pinned photodiode in which a pinned layer is laterally displaced a predetermined distance from an electrically active region of a transfer gate of a pixel sensor unit. The pinned layer is in contact with a charge collection region formed by an angled implant. The angle at which the charge collection region is implanted can be tailored such that the charge collection region contacts the adjacent edge of the transfer gate of the pixel sensor unit, thereby minimizing the gate overlap region and the undesired barrier potential.

In another aspect, the present invention provides a method of forming a pinned photodiode by implanting a dopant of a first conductivity type in a region of the substrate that is laterally shifted by a predetermined distance from an electrically active portion of a transfer gate of the pixel sensor unit, thereby forming a pinned surface layer of the first conductivity type. A doped region of a second conductivity type is formed by an angled implant and contacts the laterally displaced pinned layer. 
The desired dopant of the second conductivity type is implanted at a non-zero angle, where 0 degrees is defined as perpendicular to the silicon substrate.

These and other features and advantages of the present invention will become more apparent from the following detailed description of the embodiments of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a top plan view of an exemplary CMOS image sensor pixel;

FIG. 2 is a schematic cross-sectional view of the CMOS image sensor of FIG. 1, taken along line 2-2';

FIG. 3 is a schematic cross-sectional view of a CMOS image sensor pixel illustrating an initial stage of fabrication of a pinned photodiode in accordance with the present invention;

FIG. 4 is a schematic cross-sectional view of the CMOS image sensor pixel of FIG. 3 at a stage of processing subsequent to that shown in FIG. 3;

FIG. 5 is a top plan view of the CMOS image sensor pixel of FIG. 4;

FIG. 6 is a schematic cross-sectional view of the CMOS image sensor pixel of FIG. 3 at a stage of processing subsequent to that shown in FIG. 4;

FIG. 7 is a schematic cross-sectional view of the CMOS image sensor pixel of FIG. 3 at a stage of processing subsequent to that shown in FIG. 6;

FIG. 8 is a top plan view of the CMOS image sensor pixel of FIG. 7;

FIG. 9 is a schematic cross-sectional view of the CMOS image sensor pixel of FIG. 3 at a stage of processing subsequent to that shown in FIG. 7;

FIG. 10 is a schematic cross-sectional view of the CMOS image sensor pixel of FIG. 4 at a stage of processing subsequent to that shown in FIG. 4, in accordance with another embodiment of the present invention;

FIG. 11 is a schematic cross-sectional view of the CMOS image sensor pixel of FIG. 4 at a stage of processing subsequent to that shown in FIG. 10;

FIG. 12 is a schematic cross-sectional view of the CMOS image sensor pixel of FIG. 4 at a stage of processing subsequent to that shown in FIG. 11;

FIG. 13 is a schematic cross-sectional view of the CMOS image sensor pixel of FIG. 4 at a stage of processing subsequent to that shown in FIG. 12;

FIG. 14 is a top plan view of a 3T pixel sensor unit fabricated in accordance with an embodiment of the present invention, at a stage of processing similar in part to that illustrated in FIG. 5;

FIG. 15 is a top plan view of the 3T pixel sensor unit of FIG. 14 at a stage of fabrication subsequent to that of FIG. 14 and similar in part to that illustrated in FIG. 8;

FIG. 16 is a schematic cross-sectional view of the 3T pixel sensor unit of FIG. 14, taken along line 2-2', at a fabrication stage subsequent to that illustrated in FIG. 15;

FIG. 17 illustrates a schematic diagram of a computer processor system including a CMOS image sensor fabricated in accordance with the present invention;

FIG. 18 is a schematic top plan view of a CCD image sensor at a fabrication stage similar to that shown in FIG. 5;

FIG. 19 is a schematic partial view of the CCD image sensor of FIG. 18 at a fabrication stage similar to that shown in FIG. 8.

DETAILED DESCRIPTION

In the following detailed description, reference is made to the accompanying drawings, which form a part hereof and illustrate specific embodiments in which the invention may be practiced. The embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. It is to be understood that other embodiments may be utilized, and that structural, logical, and electrical changes may be made, without departing from the spirit and scope of the invention.

The terms "wafer" and "substrate" are to be understood as including semiconductor-based materials such as silicon, silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technology, doped and undoped semiconductors, and epitaxial layers of silicon supported by a base semiconductor foundation, as well as other semiconductor structures. Additionally, when "wafer" or "substrate" is referred to in the following description, previous process steps may have been utilized to form regions or junctions in or on the base semiconductor structure or foundation. In addition, the semiconductor need not be silicon based, but may be based on silicon-germanium, germanium, or gallium arsenide, among others.

The term "pixel" refers to a picture element unit that includes a photosensor and transistors for converting electromagnetic radiation into an electrical signal. For purposes of illustration, a representative pixel is illustrated and described herein in the drawings, and the fabrication of all pixels in a typical imager proceeds in a similar manner.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring now to the drawings, in which like reference numerals designate like elements, FIGS. 9 and 13 illustrate pixel sensor units 100 (FIG. 9) and 200 (FIG. 13) of two exemplary embodiments, each having a pinned photodiode 199, 299 with a pinned surface layer 188, 288 laterally displaced from the active area of the gate structure 130 and in contact with a charge collection region 126, 226 formed by an angled implant.

The process of forming the structure illustrated in FIG. 9 will now be described with reference to FIGS. 3-9. FIG. 3 shows the substrate 110 in a cross-sectional view corresponding to the view of FIG. 2. For illustrative purposes, substrate 110 is a silicon substrate. 
However, as indicated above, the invention is equally applicable to other semiconductor substrates.

FIG. 3 also illustrates an isolation region 155 formed in the substrate 110 and filled with a dielectric material, which may be an oxide material such as SiO or SiO2, an oxynitride, a nitride material such as silicon nitride, silicon carbide, a high temperature polymer, or another suitable dielectric material. In a preferred embodiment, however, the isolation region 155 is a shallow trench isolation region and the dielectric material is a high density plasma (HDP) oxide, a material having a high capacity for effectively filling narrow trenches. Thus, for simplicity, the isolation region 155 is referred to in this application as shallow trench isolation region 155. The shallow trench isolation region 155 has a depth of about 1000 to 4000 angstroms, preferably about 2000 angstroms.

A multilayer transfer gate stack 130 formed on the silicon substrate 110 is also illustrated in FIG. 3. The transfer gate stack 130 includes a first gate oxide layer 131 of grown or deposited silicon oxide on the silicon substrate 110, a conductive layer 132 of doped polysilicon or other suitable conductive material, and a second insulating layer 133, which may be formed of, for example, silicon oxide (silicon dioxide), a nitride (silicon nitride), an oxynitride (silicon oxynitride), ON (oxide-nitride), NO (nitride-oxide), or ONO (oxide-nitride-oxide). The first and second insulating layers 131, 133 and the conductive layer 132 may be formed by conventional deposition methods, such as chemical vapor deposition (CVD) or plasma enhanced chemical vapor deposition (PECVD), among others.

Although embodiments of the present invention are described below with reference to a transfer gate stack 130 on which no sidewall spacers are formed, it should be understood that the present invention is not limited to such embodiments. 
Accordingly, the present invention also contemplates a gate stack having insulating sidewall spacers formed on each side of the transfer gate. The sidewall spacers may be formed, for example, of silicon dioxide, silicon nitride, silicon oxynitride, ON, NO, ONO, or TEOS, among others, as desired.

Additionally, if desired, a silicide layer (not shown) may be formed between the conductive layer 132 and the second insulating layer 133 within the multilayer gate stack 130. The gate structures of all other transistors in the imager circuit design may also have a similarly formed silicide layer. The silicide layer may be titanium silicide, tungsten silicide, cobalt silicide, molybdenum silicide, or tantalum silicide. This additional conductive layer may also be a barrier/refractory metal such as TiN/W or WNx/W, or it may be formed entirely of WNx.

An insulating layer 121 may be formed on the substrate 110, including over the STI region 155 and the transfer gate 130, as also shown in FIG. 3. The insulating layer 121 is preferably an oxide layer formed by an oxidation or deposition method and has a thickness of about 10 angstroms to 3000 angstroms, more preferably about 20 angstroms to about 1000 angstroms. Although embodiments of the present invention are described below with reference to an insulating layer 121 formed on the substrate 110 and over the transfer gate 130, it should be understood that the present invention also contemplates the embodiments described below without formation of the insulating layer 121.

A doped layer or well 120 of a first conductivity type, for example p-type, is also illustrated in FIG. 3. As is known in the art, the p-type well 120 can be formed within the substrate 110 by implanting a p-type dopant in a region of the substrate directly below the active area of the pixel cell. The p-type well 120 may be formed after the shallow trench isolation (STI) region 155 and the gate stack 130 are formed. 
It should be understood, however, that the p-well 120 can also be formed prior to formation of the shallow trench isolation (STI) region 155 and/or the gate stack 130. The implant dose of the p-type well 120 is in the range of about 1×10^11 to about 3×10^14 atoms/cm^2, and preferably in the range of about 1×10^12 to about 3×10^13 atoms/cm^2.

After the STI region 155 and the transfer gate 130 are formed, a first photoresist layer 177 is formed over the structure of FIG. 3 to a thickness of about 1000 angstroms to about 20,000 angstroms, as shown in FIG. 4. The first photoresist layer 177 is patterned to obtain a first opening 178 over the region of the substrate 110 between approximately the edge of the gate structure 130 and the isolation region 155, where a charge accumulation region will be formed.

As illustrated in FIG. 4, the first photoresist layer 177 is patterned such that, on one side of the opening 178, the first photoresist layer 177 completely covers the isolation region 155 and extends over the photodiode region 101, where a photodiode will be formed. On the other side of the opening 178, the first photoresist layer 177 only partially covers the gate structure 130. Thus, the first photoresist layer 177 does not cover the gate structure 130 within a predetermined first offset distance D1 (FIG. 4) from the sidewall of the gate conductor 132, which represents the electrically active portion of the gate structure 130. The predetermined first offset distance D1 is from about 100 angstroms to about 6000 angstroms, more preferably from about 300 angstroms to about 2000 angstroms.

FIG. 5 illustrates a top plan view of the structure of FIG. 4.

A first angled dopant implant 179 (FIG. 4) is performed using a dopant of a second conductivity type, for example an n-type dopant, to implant ions through the first opening 178 (FIG. 4) into the photodiode region 101 of the substrate 110 directly below the active area of the pixel cell, forming n-type region 126, as illustrated in FIG. 6. The implanted n-doped region 126 is aligned with the edge of the transfer gate 130 and forms a photosensitive charge storage region for collecting photo-generated electrons.

For the purposes of the present invention, the term "angled implant" is defined as an implant conducted at a non-zero angle of incidence with respect to the substrate 110, with 0 degrees being perpendicular to the silicon substrate. Thus, the term "angled implant" refers to implantation conducted at an angle of incidence of between 0 degrees and 90 degrees relative to the substrate normal.

The first angled ion implant 179 (FIG. 4) can be performed by placing the substrate 110 in an ion implanter and implanting an appropriate n-type dopant through the first opening 178 (FIG. 4) into the substrate 110 at an energy of 10 keV to 1 MeV, preferably 30 keV to 300 keV, to form the n-doped region 126. As illustrated in FIG. 4, an n-type dopant such as arsenic, antimony, or phosphorus may be implanted from right to left with respect to the gate structure 130, for example in the (x, y) plane. The implant dose of the n-doped region 126 (FIG. 6) is in the range of about 1×10^11 to about 1×10^14 atoms/cm^2, and preferably in the range of about 5×10^11 to about 1×10^13 atoms/cm^2. Multiple energy implants may be employed, as desired, to tailor the profile of the n-doped region 126.

The angle of the first dopant implant 179 can be designed such that the n-type region 126 substantially coincides with the edge of the gate structure 130 and is separated from the STI region 155 by a second offset distance D2 (FIG. 6). The second offset distance D2 is from about 0 angstroms to about 5000 angstroms, more preferably from about 500 angstroms to about 3000 angstroms.

The angle at which the first dopant implant 179 is conducted is a function of the implant energy and the first offset distance D1 (FIG. 4). 
Accordingly, the first offset distance D1 can be tightly controlled through the implant angle and the implant energy. The first angled implant 179 can be conducted at an angle of incidence of from 0 degrees to about 60 degrees with respect to the substrate 110 normal, more preferably from about 3 degrees to about 30 degrees.

After the first angled implant 179 (FIG. 4), the first photoresist layer 177 is removed by conventional techniques such as, for example, an oxygen plasma. The resulting structure is depicted in FIG. 6.

A second photoresist layer 167 (FIG. 7) is then formed over the insulating layer 121 to a thickness of about 1000 angstroms to about 20,000 angstroms. The second photoresist layer 167 (FIG. 7) is patterned with a mask to obtain a second opening 168. On one side of the second opening 168, the second photoresist layer 167 overlaps the gate 130. On the other side of the second opening 168, the second photoresist layer 167 extends a distance D3 over the STI region 155 (the rightmost STI region in FIG. 7). The third offset distance D3 (FIG. 7) may be from about 0 angstroms to about 5000 angstroms, more preferably from about 300 angstroms to about 1500 angstroms. As a result of the angled implant, the p-type implant 189 is displaced from the gate edge of the transistor 130 by a distance x = D4 = t + H tan θ, where "t" is the sidewall thickness of the insulating layer 121 and "H" is the height of the gate stack, including the thickness of the insulating layer 121 above the transistor gate stack 130. The distance D4 is from about 0 angstroms to about 5000 angstroms, more preferably from about 300 angstroms to about 3000 angstroms.

FIG. 8 illustrates a top plan view of the structure of FIG. 7.

A second angled dopant implant 189 (FIG. 7) is performed using a dopant of the first conductivity type, for example a p-type dopant, to implant ions through the second opening 168 (FIG. 7) into the active region of the pixel cell directly below the substrate surface. 
The implant forms a p-type pinned surface layer 188 in a substrate region laterally spaced a distance D3 from the STI region 155, as illustrated in FIG. 9. The second angled implant 189 can be conducted in a left-to-right direction with respect to the gate structure 130, for example in the (x, y) plane, at an angle of incidence of from about 0 degrees to about 60 degrees with respect to the substrate 110 normal, preferably from about 0 degrees to about 30 degrees.

As shown in FIG. 9, the implanted p-type pinned surface layer 188 is aligned with and in contact with the edge of the isolation region 155 and is laterally displaced from the gate stack 130 by an offset distance D4, which depends on the implant angle of the implant 189. Thus, by being laterally shifted from the gate structure 130, the p-type pinned layer 188 avoids the formation of any barrier near the transfer gate region and eliminates the occurrence of any transfer gate overlap region that would affect the transfer of charge from the charge collection region 126 to the floating diffusion region 125, while also ensuring a good electrical connection to the substrate through the p-well 120.

The ion implantation can be carried out by placing the substrate 110 in an ion implanter and implanting the p-type dopant through the second opening 168 (FIG. 7) into the substrate 110 at an energy of 500 eV to 100 keV, preferably 1 keV to 30 keV, to form the p-type pinned surface layer 188. A p-type dopant such as boron, germanium, indium, or magnesium can be used for the second implant. The implant dose of the p-type pinned surface layer 188 (FIG. 9) is in the range of about 1×10^12 to about 1×10^14 atoms/cm^2, more preferably about 4×10^12 to about 4×10^13 atoms/cm^2.

After the second angled implant 189 of FIG. 7, the second photoresist layer 167 is removed by conventional techniques such as, for example, an oxygen plasma, completing a p-n-p photodiode 199 comprising regions 188 and 126, as illustrated in FIG. 9. 
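The lateral displacement relation x = D4 = t + H tan θ given above can be checked numerically. The following sketch is only a geometry check; the layer thicknesses used are hypothetical illustration values chosen within the ranges stated in this description, not values taken from the specification:

```python
import math

def implant_offset(t, H, theta_deg):
    """Lateral displacement x = D4 = t + H * tan(theta) of an angled
    implant from the gate edge: t is the sidewall thickness of the
    insulating layer, H the gate-stack height (including the insulating
    layer above the gate), and theta the implant angle measured from
    the substrate normal. All lengths are in angstroms."""
    return t + H * math.tan(math.radians(theta_deg))

# Hypothetical geometry for illustration only: a 300-angstrom sidewall
# oxide and a 2500-angstrom gate stack.
d4_straight = implant_offset(300, 2500, 0)  # 0-degree implant: offset = t
d4_tilted = implant_offset(300, 2500, 7)    # 7-degree angled implant
```

Note that at 0 degrees the offset collapses to the sidewall thickness t alone, which is the displacement used for the straight-implanted pinned layer 288 of the second embodiment described below.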
A floating diffusion region 125 is also formed, opposite the charge collection region 126 and adjacent to the gate structure 130, by methods known in the art.

As a result of the angled implantation of the charge collection region 126 and the pinned surface layer 188, ion implantation channeling effects are reduced in the photodiode 199, with its angle-implanted, laterally shifted pinned surface layer 188 and angle-implanted charge collection region 126, as compared to conventional 0 degree implants. In addition, the n-type doped region 126 formed by the angled implantation is aligned with the edge of the transfer gate 130, eliminating the transfer gate overlap region which, as described above, generally occurs under the transfer gate 130. Thus, any undesired barrier potential that would affect the transfer of charge from the n-type charge collection region 126 to the floating diffusion region 125 is eliminated.

The devices of the pixel sensor unit 100, including the reset transistor, the source follower transistor, and the row select transistor, are then formed by well-known methods. Contacts and wiring can also be formed using conventional process steps to connect the gate lines and other connections in the pixel unit 100. For example, the entire surface may be covered with a passivation layer such as silicon dioxide, BSG, PSG, or BPSG, which is planarized by CMP and etched to provide contact holes, which are then metallized to provide contacts, as needed, for the reset gate, transfer gate, and other pixel gate structures. Conventional multiple layers of conductors and insulators may also be used to interconnect the structures of the pixel sensor unit with other circuit structures.

FIGS. 10-13 illustrate another embodiment of the present invention, in which only the charge collection region 226 (FIG. 13) is formed by angled implantation. 
The structure of FIG. 10 is similar to the structure of FIG. 7; however, the structure of FIG. 10 is subjected to a straight surface p-type implant (defined as an implant conducted at an angle of about 0 degrees) to form pinned layer 288 (FIG. 11), unlike the angled implant 189 conducted in the first embodiment.

A straight implant 169 (FIG. 10) is performed to implant p-type ions, such as boron or indium, into a region of the substrate 110 directly below the substrate surface and laterally displaced from the gate structure 130 by a distance "t", which, as shown in FIG. 10, corresponds to the thickness of the sidewall insulator 121. The p-type dopant is ion implanted through the opening 168 (FIG. 10) into the substrate 110 at an energy of 500 eV to about 100 keV, more preferably from about 1 keV to about 30 keV, to form a p-type pinned surface layer 288 laterally shifted from the electrically active region of the gate stack 130 by an offset distance "t" of about 10 to about 3000 angstroms, preferably about 20 angstroms to about 1000 angstroms. This offset is achieved by adjusting the thickness of the deposited insulating layer 121. The implant dose of the p-type pinned layer 288 (FIG. 11) is in the range of about 1×10^12 to about 1×10^14 atoms/cm^2, more preferably about 4×10^12 to about 4×10^13 atoms/cm^2.

FIGS. 12-13 illustrate the formation of an n-type region 226 by a method similar to that described above for forming the n-type doped region 126 (FIGS. 6-9). Thus, an angled dopant implant 179a (FIG. 12), directed from right to left with respect to the gate 130, is conducted through an opening 178 formed in a second photoresist layer 177 (FIG. 12). The angled implant 179a is performed using a dopant of the second conductivity type, for example an n-type dopant, to implant ions into the active region of the pixel unit, in the substrate region directly below the laterally displaced pinned layer 288, forming n-type doped region 226, as illustrated in FIG. 13. 
As in the first embodiment, the implanted n-doped region 226 is aligned with the transfer gate 130 and forms a photosensitive charge storage region for collecting photo-generated electrons.

The dopant implant 179a (FIG. 12) can be performed by placing the substrate 110 in an ion implanter and implanting an appropriate n-type dopant through the opening 178 (FIG. 12) into the substrate 110 at an energy of 10 keV to 1 MeV, preferably about 30 keV to 300 keV, to form the n-doped region 226 underlying the p-type pinned layer 288. An n-type dopant such as arsenic, antimony, or phosphorus is implanted from right to left with respect to the gate structure 130. The implant dose of the n-doped region 226 (FIG. 13) is in the range of about 1×10^11 to about 1×10^14 atoms/cm^2, and preferably in the range of about 5×10^11 to about 1×10^13 atoms/cm^2. If desired, multiple energy implants may also be used to tailor the profile of the n-doped region 226.

As in the above embodiment, after the angled dopant implant 179a, the photoresist layer 177 is removed by conventional techniques to complete the formation of the p-n-p photodiode 299 formed by regions 288 and 226, as illustrated in FIG. 13.

Although the above embodiments have been described with reference to the formation of p-n-p photodiodes, such as the p-n-p photodiodes 199 (FIG. 9) and 299 (FIG. 13) having n-type charge collection regions formed adjacent the respective pinned layers 188, 288, it should be understood that the invention is not limited to these embodiments. Accordingly, the present invention is equally applicable to n-p-n photodiodes including a p-type charge collection region formed by angled implantation. Of course, the dopants and conductivity types of all structures would change accordingly, with the transfer gate corresponding to a PMOS transistor. 
The invention is also applicable to p-n or n-p photodiodes, that is, photodiodes that do not include a "pinning" or "surface" layer.

In addition, although the present invention has been described above with reference to four-transistor (4T) pixel units, such as the pixel sensor units 100 (FIG. 9) and 200 (FIG. 13), the present invention is equally applicable to a three-transistor (3T) unit, a five-transistor (5T) unit, or a six-transistor (6T) unit. As is known in the art, a 3T pixel unit differs from a 4T pixel unit in that the transfer transistor is omitted, and a 5T pixel unit differs from a 4T pixel unit by the addition of a photogate transistor or a CMOS shutter transistor. For example, FIGS. 14-16 illustrate the formation of a 3T pixel cell 300 (FIG. 16) having a pinned photodiode 399 that includes a pinned surface layer 388 laterally displaced from the active region of the reset transistor gate 40 and in contact with a charge collection region 326 formed by angled implantation. The formation of the pinned surface layer 388 and the charge collection region 326 is conducted by methods similar to those used to form the pinned surface layers 188, 288 and the charge collection regions 126, 226, described above with reference to FIGS. 3-13. FIG. 14 is similar in part to FIG. 5 and illustrates a schematic top plan view of the opening 378 in photoresist layer 177 prior to formation of the charge collection region 326. FIG. 15 is similar in part to FIG. 8 and illustrates a schematic top plan view of the opening 368 in photoresist layer 167 after the charge collection region 326 is formed and before the surface layer 388 is formed.

A typical processor-based system 600 incorporating a CMOS imager having pixels constructed in accordance with the present invention is illustrated in FIG. 17. A processor-based system is an exemplary system of digital circuits that can include a CMOS imager. 
Such a system may include, but is not limited to, computer systems, camera systems, scanners, machine vision systems, vehicle navigation systems, video telephones, surveillance systems, autofocus systems, star tracker systems, motion detection systems, image stabilization systems for high definition television, and data compression systems, all of which can utilize the present invention.

A processor-based system, for example a computer system, generally comprises a central processing unit (CPU) 644, such as a microprocessor, that communicates with an input/output (I/O) device 646 over a bus 652. The CMOS imager 642 communicates with the system over the bus 652. The computer system 600 also includes random access memory (RAM) 648 and may include peripheral devices such as a floppy disk drive 654 and a compact disk (CD) ROM drive 656 or a flash memory card 657, which also communicate with the CPU 644 over the bus 652. It may also be desirable to integrate the CPU 644, the CMOS image sensor 642, and the memory 648 on a single IC chip.

Although the invention has been described above with reference to a 4T pixel unit forming part of a CMOS imager, the invention is equally applicable to photodiodes forming part of a CCD imager, such as the p-n-p photodiodes 199 (FIG. 9) and 299 (FIG. 13) having n-type charge collection regions formed adjacent the respective pinned layers 188, 288. For example, FIG. 18 illustrates a top plan view of a CCD imager 700 showing a photodiode n-type implant region 178, and is similar to FIG. 5. FIG. 19 illustrates a portion of the CCD imager 700 of FIG. 18 and a photodiode p-type implant region 168, and is similar to FIG. 8.

The above description and drawings are only to be regarded as illustrative of exemplary embodiments. Modifications and substitutions to specific process conditions and structures may be made without departing from the spirit and scope of the invention. 
Accordingly, the invention is not limited by the foregoing description and drawings, but only by the scope of the appended claims. |
A method, an apparatus, and a system have been disclosed. An embodiment of the method includes an autonomous memory device receiving a set of instructions, the memory device executing the set of instructions, combining the set of instructions with any data recovered from the memory device in response to the set of instructions into a packet, and transmitting the packet from the memory device. |
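The disclosed sequence — receive a set of instructions at an autonomous memory device, execute them in the device, combine the instructions with any recovered data into a packet, and transmit that packet — can be sketched as follows. This is an illustrative toy model only; the class and method names are hypothetical and the instruction format is assumed, not taken from the disclosure:

```python
class AutonomousMemoryDevice:
    """Toy model of an autonomous memory device that executes a
    received instruction set locally and replies with one packet."""

    def __init__(self, contents):
        self.contents = contents  # local memory, keyed by address

    def execute(self, instructions):
        # Execute each instruction in the device; READ operations
        # recover data from local memory.
        recovered = []
        for opcode, address in instructions:
            if opcode == "READ" and address in self.contents:
                recovered.append(self.contents[address])
        return recovered

    def handle(self, instructions):
        recovered = self.execute(instructions)
        # Combine the instruction set and any recovered data into a
        # single packet, then "transmit" it (here, simply return it).
        return {"instructions": instructions, "data": recovered}

device = AutonomousMemoryDevice({0x10: "alpha", 0x20: "beta"})
packet = device.handle([("READ", 0x10), ("READ", 0x30)])
```

Note that a read of an absent address (0x30 above) simply recovers nothing, while the instruction set itself still travels back in the packet, matching the "any data recovered" wording of the abstract.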
What is claimed is:

1. A method comprising:
receiving a set of instructions at an autonomous memory device;
executing the set of instructions in the memory device;
combining, into a packet, the set of instructions with any data recovered from the memory device in response to the set of instructions; and
transmitting the packet from the memory device.

2. The method of claim 1 wherein receiving the set of instructions at the memory device and transmitting the packet from the memory device respectively comprise receiving the set of instructions from a network coupled to the memory device and transmitting the packet to the network.

3. The method of claim 1 wherein receiving the set of instructions comprises receiving a packet comprising the set of instructions and the method further comprising parsing the received packet comprising:
loading a program counter with an initial program counter value associated with the received set of instructions;
loading an instruction memory with the set of instructions; and
loading a register file with a set of initial conditions associated with the set of instructions.

4. The method of claim 3 wherein executing the set of instructions comprises:
calculating a new program counter value after executing a first instruction of the set of instructions; and
storing the new program counter value in the program counter.

5. The method of claim 1 wherein executing the set of instructions comprises executing a first instruction in a first execution unit and a second instruction in a second execution unit wherein the execution of the first and second instructions is substantially in parallel. 
6. The method of claim 1 wherein the memory device is a first node of a plurality of nodes and transmitting the packet from the memory device comprises transmitting the packet to a second node of the plurality of nodes.

7. The method of claim 6 and further comprising:
receiving initial conditions from a third node of the plurality of nodes; and
storing the initial conditions in a file register.

8. The method of claim 1 wherein the set of instructions comprise a fence flag and storing the set of instructions comprises:
storing one or more instructions prior to the fence flag in instruction memory and one or more instructions succeeding the fence flag in the instruction memory.

9. The method of claim 8 and further comprising:
executing the one or more instructions prior to the fence flag in a first execution unit; and
executing the one or more instructions after the fence flag in a second execution unit.

10. The method of claim 9 wherein executing the one or more instructions prior to the fence flag is performed substantially simultaneously with executing the one or more instructions after the fence flag.

11. The method of claim 1 wherein executing the set of instructions comprises:
providing a plurality of operands to a program counter execution unit;
providing an operator to the program counter execution unit; and
generating an updated program counter value in response to results from the execution of the operator on the plurality of operands. 
12. An apparatus comprising:
a packet parser configured to receive a packet comprising instructions and a starting location;
instruction memory coupled to the packet parser and configured to receive the instructions;
a program counter coupled to the instruction memory and the packet parser, the program counter configured to initially receive the starting location from the packet parser and retrieve an instruction from the instruction memory at the starting location;
a plurality of execution units coupled to the instruction memory for executing the instructions;
a parser coupled to the plurality of execution units, the parser configured to control reading of data from a local memory;
a register file coupled to the parser and the instruction memory and configured to store the data from the parser and the packet parser; and
a packet generator coupled to the instruction memory and the register file, the packet generator configured to generate a packet for transmission, that comprises the set of instructions and the data.

13. The apparatus of claim 12 wherein each of the plurality of execution units comprise:
a plurality of arithmetic logic units (ALUs); and
a multiplexing function coupled between outputs of at least two of the plurality of the arithmetic logic units.

14. The apparatus of claim 13 wherein the plurality of ALUs comprise an ALU associated with each instruction from the instructions.

15. The apparatus of claim 13 wherein each of the plurality of execution units implements an if-then-else statement. 
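Claims 4, 11, and 15 describe an execution unit that computes an updated program-counter value by applying an operator to a plurality of operands, so that each unit behaves as an if-then-else statement. A minimal sketch of that control flow follows; the operator names and encoding are hypothetical illustrations, not taken from the claims:

```python
# Illustrative comparison operators an execution unit might apply
# to its operands (hypothetical encoding, for demonstration only).
OPERATORS = {
    "EQ": lambda a, b: a == b,
    "LT": lambda a, b: a < b,
}

def updated_pc(pc, operator, operands, branch_target):
    """Generate the updated program-counter value: take the branch
    target when the operator holds over the operands (the "then"
    path), otherwise fall through to the next sequential instruction
    (the "else" path)."""
    if OPERATORS[operator](*operands):
        return branch_target
    return pc + 1

taken = updated_pc(4, "EQ", (7, 7), 12)        # condition true: branch
fallthrough = updated_pc(4, "LT", (9, 2), 12)  # condition false: pc + 1
```

In this sketch the conditional selection between the two candidate program-counter values plays the role the claims assign to the multiplexing function coupled between ALU outputs.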
METHODS AND SYSTEMS FOR AUTONOMOUS MEMORY

PRIORITY APPLICATION

[0001] This application claims the benefit of priority to U.S. Application Serial No. 14/094,273, filed December 2, 2013, which is incorporated herein by reference in its entirety.

BACKGROUND

[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory including random-access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), and non-volatile (e.g., flash) memory.

[0003] A number of non-volatile memory devices can be combined to make a solid state drive (SSD) that can emulate a mechanically-operated hard disk drive in a computer system. Solid state drives can provide faster access with greater reliability than mechanical hard drives due to the lack of moving parts.

[0004] Due at least in part to the increasing performance of computer systems, memory and solid state drive manufacturers can be under constant pressure to increase the performance of their memory in order to try to keep pace with computer system performance increases. There are general needs to make reading and writing to memory more efficient to relieve any operations burden on computer systems.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG. 1 illustrates a functional block diagram of an embodiment of an autonomous memory processing apparatus.

[0006] FIG. 2 illustrates a block diagram of an embodiment of a packet parser in accordance with the embodiment of FIG. 1.

[0007] FIG. 3 illustrates a block diagram of an embodiment of a program counter in accordance with the embodiment of FIG. 1.

[0008] FIG. 4 illustrates a block diagram of an embodiment of an instruction memory in accordance with the embodiment of FIG. 1.

[0009] FIG. 5 illustrates a block diagram of an embodiment of decode logic in accordance with the embodiment of FIG. 1.

[0010] FIG. 6 illustrates a block diagram of an embodiment of a register file in accordance with the embodiment of FIG. 1.

[0011] FIGs. 7A and 7B illustrate block diagrams of an embodiment of execution units in accordance with the embodiment of FIG. 1.

[0012] FIG. 8 illustrates a block diagram of an embodiment of a parser in accordance with the embodiment of FIG. 1.

[0013] FIG. 9 illustrates a block diagram of an embodiment of a packet generator in accordance with the embodiment of FIG. 1.

[0014] FIG. 10 illustrates a diagram of an embodiment of a format for instruction execution in accordance with the embodiment of FIG. 1.

[0015] FIG. 11 illustrates a block diagram of an embodiment of a memory system.

[0016] FIG. 12 illustrates a flowchart of an embodiment of operation of the autonomous memory processing apparatus in an autonomous memory device.

DETAILED DESCRIPTION

[0017] In the following detailed description, reference is made to the accompanying drawings that form a part hereof and in which is shown, by way of illustration, specific embodiments. In the drawings, like numerals describe substantially similar components throughout the several views. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present disclosure. The following detailed description is, therefore, not to be taken in a limiting sense.

[0018] The present disclosure is not limited to any one type of memory. The autonomous memory processing apparatus can be associated with any type of memory device, group of memory devices, or memory technology including semiconductor memory, optical memory, or magnetic memory.
For example, the memory might include non-volatile (e.g., NAND Flash, NOR Flash, phase change memory (PCM)) or volatile (e.g., DRAM, SRAM) memory.

[0019] As used herein, a node can include a packet parser for parsing received packets, a packet generator for generating packets to be transmitted from the node to a network, and a network port that can interface the node with any network. The node can additionally include a processing element for controlling operation of the node as well as memory for storing data. In other embodiments, the node can include additional hardware and/or software/firmware for additional functions. An autonomous memory device having the autonomous processing apparatus can be considered a node.

[0020] FIG. 1 illustrates a functional block diagram of an embodiment of an autonomous memory processing apparatus. Such an apparatus can be associated with memory 100 and can be used to relieve a memory bandwidth bottleneck in central processing unit (CPU)-based computing systems. The autonomous memory processing apparatus can be located in an autonomous memory device.

[0021] The autonomous memory processing apparatus can include a packet parser 101, a program counter 107, instruction memory 105, decode logic 103, a register file 109, a parser 115, a packet generator 111, one or more execution units (EUs) 113, and a page buffer 117. The elements and the architecture of FIG. 1 are for purposes of illustration only as other embodiments can use other elements and other architectures.

[0022] FIG. 2 illustrates a block diagram of the packet parser 101. The packet parser 101 can be coupled to and accept data packets from a network (e.g., a network external to the memory 100). The packet parser 101 can also be coupled to an input of the program counter 107 so that the packet parser 101 can load the program counter 107 with a program count (e.g., instruction memory location) that was received in a packet from the network.
The packet parser 101 can also be coupled to an output of the program counter 107 so that the program counter 107 can load its present program count (e.g., instruction memory location) into the packet parser 101. The packet parser 101 can further be coupled to inputs of the instruction memory 105 and the register file 109 to enable loading of data (e.g., instructions) received in packets from the network into the instruction memory 105 and the register file 109.

[0023] FIG. 3 illustrates a block diagram of the program counter 107. For purposes of illustration, the program counter 107 is shown as a 32-bit counter. However, other embodiments might use other program counter sizes.

[0024] The program counter 107 can have inputs from the packet parser 101 and a program counter execution unit (PCEU) 114 that can be part of the one or more execution units 113. The program counter 107 can have an output coupled to the instruction memory 105.

[0025] The program counter 107 can contain program count values (e.g., instruction memory locations) to access particular instruction locations in the instruction memory 105 that can contain a program (e.g., executable instructions). The program count values can be set from particular data fields in incoming packets, as determined by and received from the packet parser 101, or calculated values from the program counter execution unit 114. The program counter 107 can then output the value of the program count (e.g., a 32-bit register) to the instruction memory 105.

[0026] FIG. 4 illustrates a block diagram of the instruction memory 105. The instruction memory 105 can include a number of registers for storing a program (e.g., executable instructions). The packet parser 101 can be coupled to a write port of the instruction memory 105.
The instruction memory 105 can be written to by the packet parser 101 such that instructions received within incoming packets, as determined by the packet parser 101, can be loaded from the packets into the instruction memory 105.

[0027] The instruction memory 105 can include two address ports that can each accept an address for accessing a particular location within the instruction memory 105. One address can come from the program counter 107. The other address can come from the packet generator 111.

[0028] During one operation, the instruction memory 105 can output an instruction (e.g., on its data port) from a location indicated by the address of the program counter 107. This instruction can be decoded and executed by the execution units 113 in order to instruct the execution units 113 as to an operation to perform. This instruction can give the execution units 113 operands as well as an index into the register file 109 to instruct the register file 109 as to what data to output to the execution units 113 for processing.

[0029] FIG. 5 illustrates a block diagram of the decode logic 103. The decode logic 103 can include execution unit decode logic 501, parser decode logic 502, and a demultiplexing function 503 (e.g., a demultiplexer).

[0030] An input to the demultiplexing function 503 can be coupled to an instruction stream from the output of the instruction memory 105. One or more control bits in the instruction stream can be used to select the destination (e.g., EU decode logic 501, parser decode logic 502) of a particular instruction in the instruction stream.

[0031] If the instruction is sent to the EU decode logic 501, the EU decode logic 501 can process the instruction in order to send the instruction to one of the execution units 113. The instruction can instruct one of the execution units 113 as to what type of operation to perform as well as to give one of the execution units 113 an operand to be used during execution of the instruction.
The operand can index into a register of the register file 109 and instruct that register as to what data to output so that one of the execution units 113 can process that data.

[0032] The demultiplexing function 503 can also send the instruction to the parser decode logic 502 that is coupled to the parser 115. The instruction can control the parser decode logic 502, which in turn instructs the parser 115 as to which segments of the page buffer 117 to access in order to read data from a particular segment of the page buffer 117 into one of the execution units 113 for processing.

[0033] FIG. 6 illustrates the block diagram of the register file 109. The register file 109 can include inputs from the packet parser 101, the packet generator 111, one or more of the execution units 113, and a memory read indication. The memory read indication can be a signal that is generated by the parser 115 indicating when a memory operation has been completed. The register file 109 can include outputs to the packet generator 111, the execution units 113, and the parser 115.

[0034] The register file 109 can include memory (e.g., a plurality of registers) to store variables while processing by the execution units 113 is occurring. These variables can include data retrieved from the memory in response to one or more instructions. The register file 109 can be written to by the packet parser 101 in order to set initial conditions within the registers and can be read from by the packet generator 111. Each of the execution units 113 can receive arguments from the register file 109 through multiplexing functions. The output to the packet generator 111 can be used to bundle data stored in a register of the register file 109 into a packet for transmission to the network.

[0035] FIG. 7A illustrates a block diagram of an embodiment of the execution units 113 (e.g., execution units 0-N) in general while FIG. 7B illustrates a block diagram of an embodiment of the program counter execution unit 114 in particular.
The PCEU 114 can be considered to be part of the group of execution units 113 but can have a different architecture than the other execution units 113.

[0036] There is no requirement for a specific number of execution units 113 that can be included in a particular autonomous memory processing apparatus. One apparatus might have a single execution unit 113 while another apparatus might have multiple (e.g., hundreds of) execution units.

[0037] FIG. 7A illustrates that the execution units 113 can include four arithmetic logic units (ALUs) 701-704. The outputs of ALU1 703 and ALU2 704 can be input to a multiplexing function 706. Which ALU 703, 704 output is selected can be determined by the output of Comp ALU 702, which can be used as the selection signal for the multiplexing function 706. The fourth ALU, ALU Out 701, can have an output as a register address Rd to the register file 109 that can indicate to the register file 109 where to store the result of the operation performed by the execution units 113.

[0038] The lower three ALUs 702-704 and multiplexing function 706 can perform if-then-else operations. The multiplexing function 706 can provide the "if some condition" where the condition is determined by the Comp ALU 702.
Thus, if a condition is true, then the output of one ALU (e.g., ALU1 703) is selected by the output of the Comp ALU 702; otherwise the output of the other ALU (e.g., ALU2 704) is selected by the output of the Comp ALU 702.

[0039] For example, if it is assumed that ALU1 703 has operand inputs OPERAND1 (R1) and OPERAND2 (R2) and command input OPERATOR1, and ALU2 704 has operand inputs OPERAND3 (R3) and OPERAND4 (R4) and command input OPERATOR2, the if-then-else statement can look like:

if (Condition)
then
    Operand1 OPERATOR1 Operand2
else
    Operand3 OPERATOR2 Operand4

where "Operand1 OPERATOR1 Operand2" can be provided by ALU1 703, "Operand3 OPERATOR2 Operand4" can be provided by ALU2 704, and "if (Condition)" can be provided by Comp ALU 702 and the multiplexing function 706.

[0040] As described subsequently with reference to the format of instructions of FIG. 10, the operands and operators can either be provided by instructions or the instructions can indicate which register holds the operand value. For example, OPERAND1 might be located in register R1, OPERAND2 in register R2, OPERAND3 in register R3, and OPERAND4 in register R4.

[0041] ALU1 703 and ALU2 704 can perform the same operation or different operations. In other words, OPERATOR1 can be the same as OPERATOR2 or OPERATOR1 can be different than OPERATOR2.

[0042] ALU Out 701 can have operand inputs R5 and R6 (e.g., registers R5 and R6) and command input OPERATOR3. The result, Rd, as generated by ALU Out 701 performing the command OPERATOR3 on values from R5 and R6, determines where the results of the execution unit 113 are stored.

[0043] Comp ALU 702 can have operand inputs R7 and R8 (e.g., registers R7 and R8) and command input OPERATOR4.
As previously discussed, the result of performing command OPERATOR4 on values from R7 and R8 determines the selection of the multiplexing function 706.

[0044] Typical operations that can be used as commands (e.g., OPERATOR1, OPERATOR2, OPERATOR3, OPERATOR4) in the above ALUs 701-704 can include addition, subtraction, logical AND, logical OR, logical NOT, logical NOR, equal to, less than or equal to, less than, not equal to, greater than or equal to, or greater than. These operations are for purposes of illustration only as other embodiments can use other operations.

[0045] FIG. 7B illustrates the architecture of the program counter execution unit (PCEU) 114. This architecture can be similar to that of the execution units 0-N 113 but without the ALU Out 701. Since the PCEU 114 can be dedicated to determining a new address for the program counter 107, ALU Out 701 is not included; the location to store the results of the PCEU 114 operation will be the program counter 107.

[0046] The PCEU 114 can include Comp ALU 710 with operand inputs R9 and R10 and command input OPERATOR5. ALU1 711 can include operand inputs R11 and R12 and command input OPERATOR6. ALU2 712 can include operand inputs R13 and R14 and command input OPERATOR7.

[0047] The outputs of ALU1 711 and ALU2 712 can be input to the multiplexing function 714. The output of Comp ALU 710 can provide the selection signal for the multiplexing function 714. Thus, as in the previously described execution units 113, the PCEU 114 can provide an if-then-else statement where the multiplexing function 714 provides the "if some condition" where the condition is determined by the Comp ALU 710. Thus, if a condition is true, then the output of one ALU (e.g., ALU1 711) is selected by the output of the Comp ALU 710; otherwise the output of the other ALU (e.g., ALU2 712) is selected.
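The execution-unit organization described above can be sketched in software: three ALUs plus a multiplexing function form the if-then-else, while a fourth ALU (ALU Out) computes the destination register address. This is a minimal illustrative model, not the patented hardware; the operator names, the register-file list, and the tuple encoding of each ALU's inputs are assumptions of this sketch.

```python
# Illustrative model of one execution unit from FIG. 7A.
# Each ALU is described by a (operator, left-register, right-register) tuple;
# these names and the list-based register file are assumptions, not the
# patent's actual encoding.

OPS = {
    "ADD": lambda a, b: a + b,
    "SUB": lambda a, b: a - b,
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "EQ":  lambda a, b: a == b,
    "LT":  lambda a, b: a < b,
}

def alu(op, a, b):
    # A single arithmetic logic unit applying one of the typical commands.
    return OPS[op](a, b)

def execution_unit(regs, comp, then_, else_, out):
    """comp  -> Comp ALU 702: produces the mux selection signal.
    then_ -> ALU1 703: the "then" result.
    else_ -> ALU2 704: the "else" result.
    out   -> ALU Out 701: computes destination register address Rd."""
    condition = alu(comp[0], regs[comp[1]], regs[comp[2]])
    then_val = alu(then_[0], regs[then_[1]], regs[then_[2]])
    else_val = alu(else_[0], regs[else_[1]], regs[else_[2]])
    rd = alu(out[0], regs[out[1]], regs[out[2]])
    # Multiplexing function 706: select one ALU output, store it at Rd.
    regs[rd] = then_val if condition else else_val
    return regs
```

All four ALUs evaluate in the same cycle in hardware; the sequential Python here only models the data flow, with the conditional expression standing in for the multiplexing function.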
The result can be loaded into the program counter 107.

[0048] As in the previously described execution units 113, the operators and commands to be used in the PCEU 114 can either be loaded from an instruction from the instruction memory or the instruction can indicate which register contains the value.

[0049] FIG. 8 illustrates the block diagram of the parser 115. The parser 115 can include a memory write port that includes the address to be written to as well as the data. A memory read address port can provide the address to the memory to read from such that the read data can be read into a memory read data port. The parser 115 can also output the memory read indication signal when the memory read operation has been completed. The parser 115 can further include an output to the execution units 113, an input from the register file 109, and a configuration input from the parser decode logic 502.

[0050] The parser 115 can have direct access to the memory 100 so that it can directly read from or write to the page buffer 117 of memory 100. The parser 115 has access to the entire length of the page buffer 117 so, to make processing more manageable, it can subdivide the page buffer 117 into smaller segments (e.g., regularly defined segments). For example, the parser 115 might operate on the first 100 bytes of the page buffer, then the next 100 bytes, and continue until the entire page buffer 117 has been read/written. To accomplish this, the parser 115 can be given an address from the packet parser 101 that determines which segment of the page buffer 117 to read from.

[0051] The parser 115 can receive a configuration input from the register file 109 that can instruct the parser 115 how to parse the contents of the page buffer 117. The parser 115 can generate the memory read indication signal that instructs the executing program that new content is available in the register file 109.

[0052] FIG. 9 illustrates the block diagram of an embodiment of the packet generator 111. The packet generator can include inputs from the instruction memory 105 and the register file 109 and outputs to the instruction memory 105 and the register file 109. The packet generator 111 additionally has an output to the network in order to output any generated packets.

[0053] The packet generator 111 can generate an address for the instruction memory 105 and an address for the register file 109 in order to read data from these elements 105, 109. The packet generator 111 can then use the read data (e.g., instructions from the instruction memory 105 and context (e.g., data, results from memory reads, results from performed operations) from the register file 109), bundle this data, and generate a packet to be transmitted over the network.

[0054] FIG. 10 illustrates an embodiment of a format of instruction execution in accordance with the embodiment of FIG. 1. Each instruction 1000-1003 can be stored in the instruction memory for execution by the execution units 113.

[0055] The illustrated embodiment of the instruction includes four instructions 1000-1003. Each instruction can be associated with a different ALU of the execution units 113. Thus, if the execution units 113 included a different quantity of ALUs, the execution format could include a different quantity of instructions 1000-1003. Reference is made to both FIG. 10 and the ALUs of FIG. 7A in the following discussion.

[0056] The first instruction 1000 (e.g., Instruction D) can represent the destination register (e.g., Rd) of a result of an operation by one of the execution units 113. As discussed previously, the ALU Out 701 can generate an address of the destination register Rd in which to store the results of the execution unit 113.
Thus, the ALU Out 701 can be associated with the first instruction 1000 for generating register Rd.

[0057] The second instruction 1001 (e.g., Instruction C) can represent the condition of the if-then-else statement represented by the execution unit 113. In the illustrated embodiment, the condition is represented by comparison value Vc. As discussed previously, the Comp ALU 702 can generate the condition used as the select signal for the multiplexing function 706. Thus, the Comp ALU 702 can be associated with the second instruction 1001 for comparison of whether Vc is true.

[0058] The third instruction 1002 (e.g., Instruction T) can represent the "then" result of the if-then-else statement represented by the execution unit 113. In the illustrated embodiment, the "then" result is represented by Vt (value if true). As discussed previously, the ALU1 703 can generate the "then" result. Thus, the ALU1 703 can be associated with the third instruction 1002 for the "then" result being Vt.

[0059] The fourth instruction 1003 (e.g., Instruction F) can represent the "else" result of the if-then-else statement represented by the execution unit 113. In the illustrated embodiment, the "else" result is represented by Vf (value if false). As discussed previously, the ALU2 704 can generate the "else" result. Thus, the ALU2 704 can be associated with the fourth instruction 1003 for the "else" result of Vf.

[0060] Using the condition of Vc, the "then" result of Vt, the "else" result of Vf, and the result register of Rd, the if-then-else statement can be represented by:

if (Vc == TRUE)
then
    Reg[Rd] := Vt
else
    Reg[Rd] := Vf

[0061] FIG. 11 illustrates a block diagram of an embodiment of a memory system that can incorporate the autonomous memory processing apparatus 130 of FIG. 1. The memory system can include a controller 1100 (e.g., CPU) that can communicate over a network 1120 with one or more memory devices (e.g., SSD) 1101, 1102.
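The four-slot instruction format of FIG. 10 can be modeled as below. This is a hedged sketch: the ("imm", x) / ("reg", i) operand tagging is an assumption of this illustration and not the patent's encoding, but the evaluation rule is exactly the Reg[Rd] := Vt-or-Vf statement above.

```python
# Illustrative model of the FIG. 10 bundle:
# [Instruction D | Instruction C | Instruction T | Instruction F]
# evaluated as: if (Vc == TRUE) then Reg[Rd] := Vt else Reg[Rd] := Vf.

def value(reg, operand):
    # Per paragraph [0040], an operand can be an immediate supplied by the
    # instruction, or a reference to the register that holds the value.
    kind, v = operand
    return reg[v] if kind == "reg" else v

def execute_bundle(reg, rd, vc, vt, vf):
    # rd: destination index (Instruction D); vc: condition (Instruction C);
    # vt / vf: value-if-true / value-if-false (Instructions T and F).
    reg[rd] = value(reg, vt) if value(reg, vc) else value(reg, vf)
    return reg
```

For example, with register 1 holding 7 and register 2 holding 9, a bundle with a true condition writes register 1's value to the destination, and a false condition writes register 2's value.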
The network 1120 might be a wired bus or wireless communications (e.g., WiFi).

[0062] The memory device 1101 can include local memory 100 (e.g., RAM, DRAM, SRAM, NAND Flash, NOR Flash, phase change memory (PCM)) that makes up the storage portion of the memory device 1101 as well as the autonomous memory processing apparatus 130 of FIG. 1. The autonomous memory processing apparatus 130 can be located relatively close to the memory 100 (e.g., same die, same die stack, same memory module). For example, the autonomous memory processing apparatus 130 might be included in circuitry at the bank level of the memory 100. Each bank might have a different autonomous memory processing apparatus 130 so that one memory chip might have multiple instances of the autonomous memory processing apparatus 130 operating substantially simultaneously. As used herein, local memory 100 can be memory that is connected to the autonomous memory processing apparatus 130 without going over the network.

[0063] Each of the devices of the system of FIG. 11 can be considered a node. Each node can communicate over the network 1120 with the other nodes. Each of the nodes might be substantially similar, or one or more of the nodes can have a different architecture. For example, the first memory device 1101 might have only a single execution unit 113 in addition to the program counter execution unit 114 while the second memory device 1102 might have more than one execution unit 113 in addition to the program counter execution unit 114.

[0064] Thus, as subsequently described, the controller 1100 (e.g., source node) can send messages (e.g., packets) containing instructions and the current processing state of the source node to the memory device 1101 (e.g., target node). In another embodiment, the first memory device 1101 might be the source node while the second memory device 1102 might be the target node.

[0065] The instructions can include a command (e.g., search, sort, compare) to the memory device 1101.
The memory device 1101 can perform the task instructed by the command without intervention by the controller. The autonomous memory processing apparatus 130 can send and receive messages to and from other nodes 1100, 1102, send and receive processing instructions and states to and from other nodes 1100, 1102, restore and save program states, execute processing instructions, read and write local memory, and/or support multiple processing contexts in a single node.

[0066] The autonomous memory processing apparatus 130 architecture can provide dynamic, seamless flexibility of adding and removing execution units 113 (e.g., comprising ALUs), thus giving nodes additional processing power as needed. The dynamic adding and removal of execution units 113 in an autonomous memory processing apparatus 130 can be illustrated in the following example of operation.

[0067] A typical prior art program can be generated as follows:

Instruction1 (ADD Register1, Register2, Register3)
Instruction2 (SUB Register2, Register3, Register4)

[0068] As in a typical prior art CPU system, there are implied dependencies in these instructions. For example, Instruction2 may not be able to execute before (or in the same cycle as) Instruction1 because the value in Register2 would be overwritten before Instruction1 has had a chance to execute.

[0069] In the autonomous memory processing apparatus architecture, a more complex execution unit (EU) architecture can be used in order to reduce the number of cycles required to execute a program. Each EU can contain a number of different ALUs (e.g., four ALUs) that each perform distinct tasks. Thus, programs written for the autonomous memory processing apparatus can be generated as the following (assuming an architecture with one EU plus the PCEU):

[PCEU Instruction1] [EU1 Instruction1]
[PCEU Instruction2] [EU1 Instruction2]

[0070] Each [EU# Instruction#] can appear as the following, as illustrated in FIG. 10:

[Destination Instruction] [Comparison Instruction] [If-true Instruction] [If-false Instruction]

[0071] Also, as part of the autonomous memory processing apparatus architecture, processors can have a different number of EUs embedded within them. This can enable an architecture that has four EUs and one PCEU, for instance:

[PCEU Instruction1] [EU1 Instruction1] [EU2 Instruction1] [EU3 Instruction1] [EU4 Instruction1]
[PCEU Instruction2] [EU1 Instruction2] [EU2 Instruction2] [EU3 Instruction2] [EU4 Instruction2]

[0072] Any one of these EU instruction slots may be empty due to the fact that there may not be additional work to perform in that cycle. This may be due to the lack of parallelism in a particular stage of a program.

[0073] The autonomous memory processing apparatus architecture can enable interaction between a heterogeneous set of autonomous memory processing apparatus engines in a system (e.g., one apparatus, "A", may have one EU plus the PCEU, while another apparatus, "B", in the same interconnected system, may have four EUs plus the PCEU). If it is assumed that, in this scenario, apparatus "A" needs to send its context to apparatus "B", the program can be packaged into a sequential stream of instructions and shipped to apparatus "B". Apparatus "B" can then schedule them in the same way on its hardware as follows:

[PCEU Instruction1] [EU1 Instruction1] [EMPTY] [EMPTY] [EMPTY]
[PCEU Instruction2] [EU1 Instruction2] [EMPTY] [EMPTY] [EMPTY]

[0074] This can lead to lost parallelism resulting in inefficiencies in a system since every program would eventually approach that of the narrowest autonomous memory processing apparatus.

[0075] The instructions may not be bundled into the parallel EUs without ensuring that there are not any dependencies between the instructions. Since this kind of comparison could be computationally expensive in a typical prior art system, the autonomous memory processing apparatus can use the concept of an instruction "fence" flag.
The "fence" flag enables an application writer or compiler to mark where an instruction stream no longer has any dependencies on the previous instructions in that stream. This information can enable an instruction stream to be passed around and scheduled on a heterogeneous set of processors without significant processing overhead.

[0076] For example, the following instruction stream:

[PCEU Instruction] [EU Instruction1] [EU Instruction2] [EU Instruction3] [Fence Marker/Instruction] [EU Instruction4] [EU Instruction5] [EU Instruction6] [EU Instruction7] [Fence Flag/Instruction]

could be scheduled in the following way on the autonomous memory processing apparatus "A" (where [F] indicates a "fence" marker):

[PCEU] [1] [PCEU] [2] [F] [PCEU] [3] [PCEU] [4] [PCEU] [5] [PCEU] [6] [F] [PCEU] [7]

and could be scheduled in the autonomous memory processing apparatus "B" as:

[PCEU] [1] [2] [3] [X] [F] [PCEU] [4] [5] [6] [7]

[0077] The "fence" instruction can be processed by packet-in logic while it is being loaded into the instruction memory of the given autonomous memory processing apparatus (e.g., "A" or "B"). The presence of a "fence" flag can be stored in the instruction memory, but may be meaningless outside the context of scheduling. However, it is stored as a flag in the instruction memory so that packet-out logic can reconstruct the original stream.

[0078] As an example of operation of the autonomous memory processing apparatus (e.g., a memory search), a packet can be received by the packet parser 101 from a network (e.g., memory network). The packet parser 101 can parse the packet into segments. Some segments can be context in that they may contain register contents that represent a state a previous node was in when the packet left the previous node.

[0079] The packet may contain a starting location in the instruction memory 105 for the program to be executed. This starting point can be loaded into the program counter 107.
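The fence-based packing described above can be sketched as follows. This is a simplified illustration under stated assumptions: it treats instructions as opaque tokens, ignores the PCEU's actual instruction content, and performs no real dependency analysis; it only shows how fence-delimited groups of independent instructions fill the EU slots of apparatuses with different widths.

```python
# Illustrative fence-based scheduler: instructions between "F" fence markers
# are assumed mutually independent, so each fence-delimited group can be
# packed into however many parallel EU slots the target apparatus provides.

def schedule(stream, num_eus):
    """stream: list of instruction tokens with "F" fence markers.
    Returns one row per cycle: ["PCEU", eu_slot_1, ...], "EMPTY"-padded."""
    rows = []
    group = []

    def flush():
        # Pack the current independent group into rows of num_eus slots.
        for i in range(0, len(group), num_eus):
            chunk = group[i:i + num_eus]
            chunk += ["EMPTY"] * (num_eus - len(chunk))
            rows.append(["PCEU"] + chunk)
        group.clear()

    for instr in stream:
        if instr == "F":
            flush()
        else:
            group.append(instr)
    flush()
    return rows
```

Scheduling the example stream on a four-EU apparatus ("B") packs instructions 1-3 into one cycle (one slot empty) and 4-7 into the next, while a one-EU apparatus ("A") issues one instruction per cycle, matching the two schedules shown above.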
The packet can also contain a set of instructions to be loaded into the instruction memory 105 and a set of initial conditions that can be loaded into the register file 109. The initial conditions can be variables being sent by instructions from a previous node. The initial conditions can also be constants for use by the currently executing program.

[0080] The value in the program counter 107 determines which instruction is read from the instruction memory 105 to be executed. The next value in the program counter 107 might be an increment from the previous value or a calculated value as determined by the program counter execution unit 114.

[0081] The instructions can set the configuration of the parser 115. The parser 115 can be configured, through execution of the instructions, to remove variables from the page buffer 117 and eventually to perform a memory read operation.

[0082] When the memory read operation occurs, the variables can be removed from the page buffer 117 content in real time and presented to the execution units 113 as inputs. Other potential inputs can be read from the register file, as determined by program instructions, and can be presented to the execution units 113 for processing. As described previously, the "fence" can provide the ability to execute several consecutive instructions in parallel. The instructions that cannot be executed in parallel can be held off and executed during a subsequent cycle.

[0083] The execution units 113 can process those input arguments as a plurality of sets of input arguments, each set being processed in parallel. Thus, multiple execution units 113 can generate output variables that can then either be transferred back to the register file, transferred to the parser 115 to eventually be written to the page buffer 117 as data for one or more memory write operations, or go into the register file to generate some particular action.
The action might be to generate a packet by the packet generator 111 or to initiate a new memory read or memory write operation.

[0084] The page buffer 117 content (e.g., the result of a search command) might be presented to the packet generator 111 to be included in a packet to be transmitted over the network to a requesting node. The packet might include a message to the requesting node indicating that the task (e.g., search) has been completed and the results are included in the packet.

[0085] As a broader example of operation, a network might include a fabric of autonomous memory devices, each including at least one autonomous memory processing apparatus. A group of data can be stored across the fabric of memory devices. When it is desired to search the entire group of data for a particular list of data, a search program can be pushed into one autonomous memory device to search that device for the particular list of data. When the program determines that the data stored within that particular autonomous memory device has been searched and all of the data from the list is not present, the program can be bundled into one or more packets and transferred to another autonomous memory device where the autonomous memory processing apparatus of that device can continue the search. This bundling of the program can continue until the entire fabric of autonomous memory devices has been searched or the list of data has been completed. In some embodiments, the data found in a particular autonomous memory device can also be bundled into the packet(s) with the program to be transferred.

[0086] Such an embodiment is illustrated in the flowchart of FIG. 12. The illustrated method can be executed in the system of FIG. 11 by the autonomous memory processing apparatus 130 in the autonomous memory device 1101.

[0087] The memory device 1101 can receive a packet 1201 that is provided to the autonomous memory processing apparatus 130.
The apparatus 130 can parse the packet 1203 to remove the instructions, program counter, and data as discussed previously. The instructions can then be executed 1205 to perform the desired task on the data stored in the memory 100. The instructions, and any data generated by the instructions, can then be bundled into a packet 1207 for transmission on the network 1209.[0088] An apparatus may be defined as circuitry, an integrated circuit die, a memory device, a memory array, or a system.
CONCLUSION
[0089] One or more embodiments of the autonomous memory processing apparatus within an autonomous memory device can perform processing of instructions to relieve memory bandwidth bottlenecks of traditional CPU-based computing systems. Packets containing a set of instructions (e.g., the program) and/or data can be transferred amongst nodes so that the data in the memory in those nodes can be operated on by the instructions independent of control from the source node or the CPU.[0090] Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement that is calculated to achieve the same purpose may be substituted for the specific embodiments shown. Many adaptations will be apparent to those of ordinary skill in the art. Accordingly, this application is intended to cover any adaptations or variations.
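The search-and-forward behavior described above can be sketched in a few lines of code. This is an illustrative model only, not code from the patent: the fabric is modeled as a list of per-node data lists, and all names (`search_fabric`, `remaining`, `found`) are hypothetical.

```python
# Illustrative sketch only (not from the patent): how a search program might
# hop across a fabric of autonomous memory devices, bundling itself and any
# partial results forward until the list is completed or the fabric exhausted.

def search_fabric(fabric, targets):
    """fabric: list of per-node data lists; targets: values to find."""
    remaining = set(targets)   # the "particular list of data" still unfound
    found = {}                 # value -> node where it was located
    for node_id, node_data in enumerate(fabric):
        # The node's processing apparatus executes the program locally.
        hits = remaining.intersection(node_data)
        for value in hits:
            found[value] = node_id
        remaining -= hits
        if not remaining:
            break  # list completed: no need to forward the packet further
        # Otherwise the program state would be bundled into a packet and
        # transferred to the next node; this loop iteration models that hop.
    return found, remaining

fabric = [[1, 5, 9], [2, 7], [3, 5, 8]]
found, missing = search_fabric(fabric, [5, 7, 4])
```

In this toy model, 5 and 7 are located on the first two nodes, while 4 is reported missing after the whole fabric has been traversed.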
A method and apparatus for controlling the power supply for a system. In one embodiment, a power supply apparatus comprises a rechargeable battery; and a battery charger coupled to the battery and comprising a first circuit to generate an output that controls whether the battery is to provide power to the system to supplement the power provided by the power source when an input voltage from the power source of undetermined output power is less than a predetermined level. |
CLAIMS
We claim:
1. A power supply apparatus for use with a power source of undetermined output power that provides power to a system load, the power supply apparatus comprising: a rechargeable battery; and a battery charger coupled to the battery and comprising a first circuit to generate an output that controls whether the battery is to provide power to the system to supplement the power provided by the power source when an input voltage from the power source of undetermined output power is less than a predetermined level. 2. The power supply apparatus defined in Claim 1 wherein the first circuit comprises a first operational amplifier to amplify a difference between a first reference voltage value and the input voltage from the power source to produce a first output, wherein the first reference voltage represents a lower voltage limit below which the battery charger causes the battery to provide power to the system to supplement the power provided by the power source.3. The power supply apparatus defined in Claim 2 wherein the battery charger further comprises: a second operational amplifier to amplify a difference between a second reference voltage value and output voltage of the battery to produce a second output, wherein the second reference voltage represents an upper voltage limit above which the battery charger prevents the battery from being charged by power from the power source; a third operational amplifier to amplify a difference between a third reference voltage value and a value representing input current from the power source to produce a third output, wherein the third reference voltage represents an upper current limit below which the battery charger holds the input current from the power source to prevent the input current from the power source from crashing the system; and a fourth operational amplifier to amplify a difference between a fourth reference voltage value and a value representing charger current output from the charger to produce a fourth output,
wherein the fourth reference voltage represents an upper current limit above which the battery charger prevents the charging of the battery.4. The power supply apparatus defined in Claim 3 further comprising a second circuit coupled to the first, second, third and fourth operational amplifiers to combine the first, second, third and fourth outputs to produce a combined voltage.5. The power supply apparatus defined in Claim 4 wherein the second circuit comprises an integrator to integrate the first, second, third, and fourth outputs into the combined voltage. 6. The power supply apparatus defined in Claim 3 wherein the battery charger is operable to throttle the charger duty cycle in response to any one of the first, second, third or fourth outputs indicating that corresponding inputs of the first, second, third and fourth operational amplifiers exceed the first, second, third or fourth reference values, respectively.7. The power supply apparatus defined in Claim 3 further comprising a diode connected to each of the first, second, third and fourth outputs.8. The power supply apparatus defined in Claim 2 further comprising a diode connected to the first output.9.
A battery charger for coupling to a rechargeable battery, a system load, and a power source of undetermined output power that provides power to the system load, the battery charger comprising: a first circuit to generate an output that controls whether the battery is to provide power to the system to supplement the power provided by the power source when an input voltage from the power source of undetermined output power is less than a predetermined level, wherein the first circuit comprises a first operational amplifier to amplify a difference between a first reference voltage value and the input voltage from the power source to produce a first output, wherein the first reference voltage represents a lower voltage limit below which the battery charger causes the battery to provide power to the system to supplement the power provided by the power source.10. The battery charger defined in Claim 9 wherein the battery charger further comprises: a second operational amplifier to amplify a difference between a second reference voltage value and output voltage of the battery to produce a second output, wherein the second reference voltage represents an upper voltage limit above which the battery charger prevents the battery from being charged by power from the power source; a third operational amplifier to amplify a difference between a third reference voltage value and a value representing input current from the power source to produce a third output, wherein the third reference voltage represents an upper current limit below which the battery charger holds the input current from the power source to prevent the input current from the power source from crashing the system; and a fourth operational amplifier to amplify a difference between a fourth reference voltage value and a value representing charger current output from the charger to produce a fourth output, wherein the fourth reference voltage represents an upper current limit above which the battery charger prevents the charging
of the battery.11. The battery charger defined in Claim 10 further comprising a second circuit to combine the first, second, third and fourth outputs to produce a combined voltage from the first, second, third and fourth operational amplifiers.12. The battery charger defined in Claim 11 wherein the second circuit comprises an integrator to integrate the first, second, third, and fourth outputs into the combined voltage.13. The battery charger defined in Claim 10 wherein the battery charger is operable to limit, throttle or control the charger duty cycle in response to any one of the first, second, third or fourth outputs indicating that corresponding inputs of the first, second, third and fourth operational amplifiers exceed the first, second, third or fourth reference values, respectively.14. The battery charger defined in Claim 10 further comprising a diode connected to each of the first, second, third and fourth outputs.15. The battery charger defined in Claim 9 further comprising a diode connected to the first output.16. A method for controlling a battery charger that is coupled to a rechargeable battery, a system load, and a power source of undetermined output power that provides power to the system load, the method comprising: receiving at least one reference voltage value from a reference source, the at least one reference voltage value representing a lower voltage limit below which the battery charger causes the battery to provide power to the system to supplement the power provided by the power source; and generating a first output that controls whether the battery is to provide power to the system to supplement the power provided by the power source when an input voltage from the power source of undetermined output power is less than a predetermined level, including generating the first output by amplifying, with an operational amplifier, a difference between a first reference voltage value and the input voltage from the power source.17.
The method defined in Claim 16 wherein the method further comprises: generating a second output, using a second operational amplifier, by amplifying a difference between a second reference voltage value and output voltage of the battery, wherein the second reference voltage represents an upper voltage limit above which the battery charger prevents the battery from being charged by power from the power source; generating a third output, using a third operational amplifier, by amplifying a difference between a third reference voltage value and a value representing input current from the power source, wherein the third reference voltage represents an upper current limit below which the battery charger holds the input current from the power source to prevent the input current from the power source from crashing the system; and generating a fourth output, using a fourth operational amplifier, by amplifying a difference between a fourth reference voltage value and a value representing charger current output from the charger, wherein the fourth reference voltage represents an upper current limit above which the battery charger prevents the charging of the battery.18. The method defined in Claim 17 further comprising combining the first, second, third and fourth outputs to produce a combined voltage from the first, second, third and fourth operational amplifiers.19. The method defined in Claim 18 wherein combining the first, second, third and fourth outputs to produce the combined voltage is performed, at least in part, by an integrator that integrates the first, second, third, and fourth outputs into the combined voltage.20.
The method defined in Claim 16 wherein the battery charger is operable to throttle the charger duty cycle in response to any one of the first, second, third or fourth outputs indicating that corresponding inputs of the first, second, third and fourth operational amplifiers exceed the first, second, third or fourth reference values, respectively.
POWER SUPPLY CONTROL SYSTEM
FIELD OF THE INVENTION
Embodiments of the present invention relate to the field of power supply control systems; more particularly, embodiments of the present invention relate to battery chargers that control a power supply system that provides power to a system load when the power source provides an undetermined output power to a system load.
BACKGROUND OF THE INVENTION
Today, the mobile industry is moving towards using power sources with potentially unreliable, fluctuating, or time-dependent power capabilities. These sources include wireless power, solar power, as well as Universal Serial Bus (USB) Power Delivery (PD) power supplies. Also, there is a continued drive for higher relative turbo power by System-on-a-Chips (SoCs) and other integrated circuits used in computing systems. Even while this occurs, there are still demands for smaller adapter sizes. More specifically, with respect to wireless energy, one of the issues that worries power delivery engineers is the fact that the amount of power transmitted from the power source to the charger of the mobile device varies depending on the distance between the power source and the receiver, the device orientation, and the like. With respect to solar panels, the energy that can be captured and converted to electrical energy is strongly dependent on the time of day, the strength of solar radiation, and the like.
Thus, in both of these cases, a computing system being powered by such energy sources may not know the amount of power it will receive. Some customers may also experience issues with USB adapters when the actual output power of the power adapter may not be necessarily known beforehand, or when a universal USB adapter is made by a second-tier supplier and may be designed for lower temperature or may even have less than the required output power capability. For all of the cases discussed above, the system being powered (i.e., the system load) will in some cases require too much power from the power source to support system operation and/or charge the battery, and a number of corner cases arise when the system does not know the actual maximum output power capability of the source. This includes the situation in which the power adapter is designed for limited periods of peak power but cannot handle a longer-duration high load. A real life situation would be for the power source to be periodically shut off because its power capability is exceeded, and the system may cut off a battery charger in the power system from the input power. The result of this corner case could be damage to the system circuitry or the power source, audio noise irritating to the end-users, as well as the system drawing less power from the power source than possible.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the invention, which, however, should not be taken to limit the invention to the specific embodiments, but are for explanation and understanding only.
Figure 1 is a block diagram of a mobile computing system connected to a power source with limited power capability.
Figure 2 illustrates a simplified schematic for a charger controller for a charger integrated circuit (IC).
Figure 3 illustrates one embodiment of a control system for a charger IC.
Figure 4 is a flow diagram of one embodiment of a process for controlling a power supply system.
Figure 5 depicts a block diagram of a system load.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
In the following description, numerous details are set forth to provide a more thorough explanation of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. Figure 1 is a block diagram of a power supply system providing power to a system load. Referring to Figure 1, a power source 101 provides power through a resistance to a charger 102 to a system load 104. In one embodiment, the system load is a mobile computing system, such as a smartphone, tablet, or laptop computer. Such systems often have a processor and a memory that are powered by power from power source 101 and rechargeable battery 103. In one embodiment, rechargeable battery 103 is a lithium-ion battery pack. Note that the embodiments described herein are not limited to use of a lithium-ion battery pack and other rechargeable batteries may be used.
The actual system may also have elements of system 104, or all of it, connected directly to the output of power source 101. Power source 101 comprises a power source of undetermined output power. In one embodiment, power source 101 is a wireless power source. In another embodiment, power source 101 is a solar power source. In another embodiment, power source 101 is a Universal Serial Bus (USB) Power Delivery (PD) power supply. Power source 101 provides a voltage that may change due to changes in conditions (for example, the user may change the position of the wireless power, the cloud may change the amount of power that the solar panel captures, etc.), or its power capability may vary due to the same factors. In one embodiment, charger 102 comprises a narrow voltage direct current (NVDC)-type charger. In some situations, a charger will not know that power source 101 has less available power (which means that the voltage source is lower voltage or higher resistance). In this case, if the system/battery power consumption exceeds the capability of power source 101, a traditional charger will attempt to draw full power from power source 101, and this will result in power source 101 shutting down or charger 102 turning off switch 115 (e.g., pass field effect transistors (FETs)) because the input voltage to charger 102 is too low. Note that in one embodiment an electronic lossless resistor, such as resistor 111, i.e., the load line, may be used in order to protect power source 101 from over-current. In one embodiment, the power source uses current protection/current limiting in order to prevent over-current. Figure 2 shows a simplified part of a schematic for a typical charger controller for a charger such as could be used as charger 102 of Figure 1. Referring to Figure 2, charger 200 includes three operational amplifiers 201-203, each with two inputs and an output. The output of operational amplifier 201 is coupled to the input of diode 204.
The output of operational amplifier 202 is coupled to the input of diode 205. The output of operational amplifier 203 is coupled to the input of diode 206. The outputs of diodes 204-206 are coupled to compensator 210 that combines the outputs of diodes 204-206, which represent the outputs of operational amplifiers 201-203, into output EOA. The charger duty cycle is controlled based on three system variables: battery voltage (EA1), input current (EA2) and the charger/battery current (EA3). If any of the variables exceed a reference level (VrefReg for battery voltage, Vrefjac for input current and Dim for battery current), the duty cycle will be throttled and the charger will consume less power from the power source. VrefReg represents the predetermined upper voltage limit below which the battery is held when charging to avoid damaging the battery. Operational amplifier 201 compares the battery voltage to the reference value VrefReg and if the battery voltage exceeds VrefReg, the battery charger prevents the battery from being charged by power from the power source. Vrefjac represents an upper current limit below which operation of the power source is held to avoid crashing the system. Operational amplifier 202 compares the input current from the power source to the reference value Vrefjac and if the input current exceeds Vrefjac, the battery charger prevents the input current from the power source from reaching the system, and thereby avoids crashing the system, by cutting the charge current to the battery or the charger output current. Dim represents an upper battery current limit below which the current used when charging the battery is held to avoid damaging the battery.
Operational amplifier 203 compares the battery charge current or the charger output current to the reference value Dim for the battery current and if the battery current exceeds Dim, the battery charger prevents the battery charger from charging the battery with power from the power source. In one embodiment, all the limits, including the battery current and voltage as well as the power source maximum output, are preset in the system. In one embodiment, an additional comparison is added to the control system which allows the system load to efficiently operate from a power source of undetermined output power capability, or when the output capability of the power source is less than expected. Figure 3 shows one possible embodiment and illustrates a simplified control diagram for such a charger with an added control loop to accommodate the case when the power source output power may not be known. Referring to Figure 3, one additional amplifier, operational amplifier 301, is added to the circuit, along with a corresponding diode 302. In order to prevent the power source from being overpowered by the charger (or system current in the case of traditional or Hybrid Power Boost chargers), the charger uses an extra control loop for the input voltage. If the input voltage droops below a pre-determined level, referred to herein as reference value VinRef (this situation will occur if the power drawn from the power source is too high), the charger goes into a duty-cycle limiting mode, lowers the output of the charger, and lowers the input power of the charger (i.e., lowers the power taken from the power source). The charger may also use the battery to supplement the power source. This results in the stabilization of the output voltage of the variable power source at the minimum level, thereby preventing it from shutting down.
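The control behavior just described can be modeled in a few lines. This is a behavioral sketch, not the patent's analog circuit: the gains, reference values, and function names below are invented for illustration; the `diode` function stands in for the diode-OR of the amplifier outputs, and the subtraction stands in for the compensator/integrator that throttles the duty cycle.

```python
# Hedged discrete-time model of the four control loops: input voltage droop
# (the added loop), battery over-voltage, adapter over-current, and charge
# over-current. All numeric values here are made-up illustration values.

def diode(x):
    return max(x, 0.0)  # ideal diode: passes only a positive error signal

def charger_step(duty, vin, vbat, i_in, i_chg,
                 vin_ref=4.5, vbat_max=8.4, iin_max=3.0, ichg_max=2.0,
                 gain=0.05):
    errors = [
        diode(gain * (vin_ref - vin)),    # added loop: input voltage droop
        diode(gain * (vbat - vbat_max)),  # battery over-voltage
        diode(gain * (i_in - iin_max)),   # input (adapter) over-current
        diode(gain * (i_chg - ichg_max)), # battery charge over-current
    ]
    # Compensator: integrate the combined error, throttling the duty cycle.
    duty -= sum(errors)
    return min(max(duty, 0.0), 1.0)

# When the input voltage droops below VinRef, the duty cycle is reduced,
# lowering the power taken from the source; otherwise it is left alone.
throttled = charger_step(0.8, vin=4.0, vbat=7.4, i_in=2.0, i_chg=1.5)
steady = charger_step(0.8, vin=5.0, vbat=7.4, i_in=2.0, i_chg=1.5)
```

Because only positive errors pass the "diodes", whichever loop is furthest over its reference dominates, mirroring how the input-voltage loop can over-power the other loops when the source droops.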
The additional loop will effectively over-power all other loops, and the charger will operate as a DC/DC converter and will draw just enough power to keep the power source output voltage at a pre-determined voltage level. Note that the input voltage is already monitored by the charger, so there is very little additional cost to employ such an addition. Thus, the battery charger has a new variable, "minimum input voltage", that is used to control the charger duty cycle and the output and input power of the charger, such that when the input voltage droops below this level, the charger limits the duty cycle. In one embodiment, the battery charger includes a circuit to generate, when the power source voltage of undetermined output power is less than a predetermined level (e.g., a system minimum reference level), an output that controls whether the battery is to provide power to the system to supplement the power provided by the power source or the charger is to lower the charging current of the battery until the input voltage is returned to the predetermined level. A similar protection can be used on the Hybrid Power Boost (HPB) and traditional chargers, with very few changes. For some systems where the load is connected directly to the power source, if the power source voltage droops to the minimum value, the charger may supplement the power from the power source by delivering power from the battery to the system (i.e., operating in the opposite direction). By supplementing the power source output power to the system, the charger will maintain the power source output voltage at a predetermined level. If the voltage continues to droop below the pre-set limit, the charger will detect that the power source is removed, and the pass FETs will be turned off.
In the case of traditional or HPB chargers, the charger will stop supplementing the input power source if it is determined that the power source is not delivering any power (or is below some threshold), or if the delivered power from the source is below the losses in the charger operating in the reverse mode. If the power source re-acquires higher output power capability, then its output voltage will naturally go up (the power source will provide more power than is taken from it by the load), and the charger input voltage loop will automatically come out of regulation and another loop (battery current or voltage) will control the charger operation. In one embodiment, the minimum voltage level of the adapter for this regulation is pre-set by the customers, and is the same for the charger and the adapter. The level can also be negotiated between the USB charger and the adapter through the USB PD and be flexible if needed. Figure 4 is a flow diagram of one embodiment of a process for controlling a power supply system. The power supply system includes a rechargeable battery, a system load, and a power source of undetermined output power that provides power to the system load.
In one embodiment, the process is performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), firmware, or a combination of the three. The process begins by processing logic receiving at least one reference voltage value from a reference source, where the at least one reference voltage value represents a lower voltage limit below which the battery charger causes the battery to provide power to the system to supplement the power provided by the power source (processing block 401). Next, processing logic generates a first output that controls whether the battery is to provide power to the system to supplement the power provided by the power source when an input voltage from the power source of undetermined output power is less than a predetermined level, including generating the first output by amplifying, with an operational amplifier, a difference between a first reference voltage value and the input voltage from the power source (processing block 402). Also, processing logic generates a second output, using a second operational amplifier, by amplifying a difference between a second reference voltage value and the output voltage of the battery, where the second reference voltage represents an upper voltage limit above which the battery charger prevents the battery from being charged by power from the power source (processing block 403). Similarly, processing logic generates a third output, using a third operational amplifier, by amplifying a difference between a third reference voltage value and a value representing input current from the power source, where the third reference voltage represents an upper current limit above which the battery charger prevents the input current from the power source from reaching the system to avoid crashing the system (processing block 404). Likewise, processing logic generates a fourth output, using a fourth operational
amplifier, by amplifying a difference between a fourth reference voltage value and a value representing charger current output from the charger, wherein the fourth reference voltage represents an upper current limit above which the battery charger prevents the battery charger from charging the battery (processing block 405). Thereafter, processing logic combines the first, second, third and fourth outputs to produce a combined voltage from the first, second, third and fourth operational amplifiers (processing block 406). In one embodiment, combining the first, second, third and fourth outputs to produce the combined control voltage is performed, at least in part, by the compensator, which integrates the first, second, third, and fourth outputs into the combined voltage and determines the charger duty cycle based on it. Determining the charger duty cycle using a compensator is performed in a manner well-known in the art. In response to the combined voltage, processing logic controls the charger duty cycle in response to any one of the first, second, third or fourth outputs indicating that corresponding inputs of the first, second, third and fourth operational amplifiers exceed the first, second, third or fourth reference values, respectively (processing block 407). Figure 5 is one embodiment of a system level diagram 500 that may incorporate the techniques described above. For example, the techniques described above may be used in conjunction with a processor in system 500 or other part of system 500. Referring to Figure 5, system 500 includes, but is not limited to, a desktop computer, a laptop computer, a netbook, a tablet, a notebook computer, a personal digital assistant (PDA), a server, a workstation, a cellular telephone, a mobile computing device, a smart phone, an Internet appliance or any other type of computing device.
In another embodiment, system 500 implements the methods disclosed herein and may be a system on a chip (SOC) system. In one embodiment, processor 510 has one or more processor cores 512 to 512N, where 512N represents the Nth processor core inside the processor 510 where N is a positive integer. In one embodiment, system 500 includes multiple processors including processors 510 and 505, where processor 505 has logic similar or identical to logic of processor 510. In one embodiment, system 500 includes multiple processors including processors 510 and 505 such that processor 505 has logic that is completely independent from the logic of processor 510. In such an embodiment, a multi-package system 500 is a heterogeneous multi-package system because the processors 505 and 510 have different logic units. In one embodiment, processing core 512 includes, but is not limited to, pre-fetch logic to fetch instructions, decode logic to decode the instructions, execution logic to execute instructions and the like. In one embodiment, processor 510 has a cache memory 516 to cache instructions and/or data of the system 500. In another embodiment of the invention, cache memory 516 includes level one, level two, and level three cache memory, or any other configuration of the cache memory within processor 510. In one embodiment, processor 510 includes a memory control hub (MCH) 514, which is operable to perform functions that enable processor 510 to access and communicate with a memory 530 that includes a volatile memory 532 and/or a non-volatile memory 534. In one embodiment, memory control hub (MCH) 514 is positioned outside of processor 510 as an independent integrated circuit. In one embodiment, processor 510 is operable to communicate with memory 530 and a chipset 520.
In such an embodiment, SSD 580 executes the computer-executable instructions when SSD 580 is powered up. In one embodiment, processor 510 is also coupled to a wireless antenna 578 to communicate with any device configured to transmit and/or receive wireless signals. In one embodiment, wireless antenna interface 578 operates in accordance with, but is not limited to, the IEEE 802.11 standard and its related family, HomePlug AV (HPAV), Ultra Wide Band (UWB), Bluetooth, WiMAX, or any form of wireless communication protocol. In one embodiment, the volatile memory 532 includes, but is not limited to, Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), and/or any other type of random access memory device. Non-volatile memory 534 includes, but is not limited to, flash memory (e.g., NAND, NOR), phase change memory (PCM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), or any other type of non-volatile memory device. Memory 530 stores information and instructions to be executed by processor 510. In one embodiment, chipset 520 connects with processor 510 via Point-to-Point (PtP or P-P) interfaces 517 and 522. In one embodiment, chipset 520 enables processor 510 to connect to other modules in the system 500. In one embodiment, interfaces 517 and 522 operate in accordance with a PtP communication protocol such as the Intel QuickPath Interconnect (QPI) or the like. In one embodiment, chipset 520 is operable to communicate with processor 510, 505, display device 540, and other devices 572, 576, 574, 560, 562, 564, 566, 577, etc. In one embodiment, chipset 520 is also coupled to a wireless antenna 578 to communicate with any device configured to transmit and/or receive wireless signals. In one embodiment, chipset 520 connects to a display device 540 via an interface 526.
In one embodiment, display device 540 includes, but is not limited to, liquid crystal display (LCD), plasma, cathode ray tube (CRT) display, or any other form of visual display device. In addition, chipset 520 connects to one or more buses 550 and 555 that interconnect various modules 574, 560, 562, 564, and 566. In one embodiment, buses 550 and 555 may be interconnected via a bus bridge 572 if there is a mismatch in bus speed or communication protocol. In one embodiment, chipset 520 couples with, but is not limited to, a non-volatile memory 560, a mass storage device(s) 562, a keyboard/mouse 564, and a network interface 566 via interface 524, smart TV 576, consumer electronics 577, etc. In one embodiment, mass storage device 562 includes, but is not limited to, a solid state drive, a hard disk drive, a universal serial bus flash memory drive, or any other form of computer data storage medium. In one embodiment, network interface 566 is implemented by any type of well-known network interface standard including, but not limited to, an Ethernet interface, a universal serial bus (USB) interface, a Peripheral Component Interconnect (PCI) Express interface, a wireless interface and/or any other suitable type of interface. While the modules shown in Figure 5 are depicted as separate blocks within the system 500, the functions performed by some of these blocks may be integrated within a single semiconductor circuit or may be implemented using two or more separate integrated circuits. In a first example embodiment, a power supply apparatus for use with a power source of undetermined output power that provides power to a system load comprises a rechargeable battery and a battery charger coupled to provide a charging current to the battery, where the battery charger comprises a first circuit to generate, when the power source voltage of the undetermined output power is less than a predetermined level, an output that controls whether the battery is to provide power to
the system to supplement the power provided by the power source or the charger is to lower the charging current of the battery unit until the power source voltage returns to the predetermined level. In another example embodiment, the subject matter of the first example embodiment can optionally include that the first circuit comprises a first operational amplifier to amplify a difference between a first reference voltage value and the input voltage from the power source to produce a first output, wherein the first reference voltage represents a lower voltage limit below which the battery charger causes the battery to provide power to the system to supplement the power provided by the power source. In another example embodiment, the subject matter of the first example embodiment can optionally include that the battery charger further comprises: a second operational amplifier to amplify a difference between a second reference voltage value and output voltage of the battery to produce a second output, wherein the second reference voltage represents an upper voltage limit above which the battery charger prevents the battery from being charged by power from the power source; a third operational amplifier to amplify a difference between a third reference voltage value and a value representing input current from the power source to produce a third output, wherein the third reference voltage represents an upper current limit below which the battery charger holds the input current from the power source to prevent the input current from the power source from crashing the system; and a fourth operational amplifier to amplify a difference between a fourth reference voltage value and a value representing charger current output from the charger to produce a fourth output, wherein the fourth reference voltage represents an upper current limit above which the battery charger prevents the charging of the battery.
In another example embodiment, the subject matter of this example embodiment can optionally include a second circuit coupled to the first, second, third and fourth operational amplifiers to combine the first, second, third and fourth outputs to produce a combined voltage. In another example embodiment, the subject matter of this example embodiment can optionally include that the second circuit comprises a compensator to integrate the first, second, third, and fourth outputs into the combined voltage. In another example embodiment, the subject matter of the first example embodiment can optionally include that the battery charger is operable to control (e.g., throttle, limit, etc.) the charger duty cycle in response to any one of the first, second, third or fourth outputs indicating that corresponding inputs of the first, second, third and fourth operational amplifiers exceed the first, second, third or fourth reference values, respectively. In another example embodiment, the subject matter of the first example embodiment can optionally include a diode connected to each of the first, second, third and fourth outputs. In another example embodiment, the subject matter of the first example embodiment can optionally include a diode connected to the first output. In a second example embodiment, a battery charger for coupling to a rechargeable battery, a system load, and a power source of undetermined output power that provides power to the system load comprises a first circuit to generate an output that controls whether the battery is to provide power to the system to supplement the power provided by the power source when an input voltage from the power source of undetermined output power is less than a predetermined level, wherein the first circuit comprises a first operational amplifier to amplify a difference between a first reference voltage value and the input voltage from the power source to produce a first output, wherein the first reference voltage represents a lower
voltage limit below which the battery charger causes the battery to provide power to the system to supplement the power provided by the power source. In another example embodiment, the subject matter of the second example embodiment can optionally include that the battery charger further comprises: a second operational amplifier to amplify a difference between a second reference voltage value and output voltage of the battery to produce a second output, wherein the second reference voltage represents an upper voltage limit above which the battery charger prevents the battery from being charged by power from the power source; a third operational amplifier to amplify a difference between a third reference voltage value and a value representing input current from the power source to produce a third output, wherein the third reference voltage represents an upper current limit below which the battery charger holds the input current from the power source to prevent the input current from the power source from crashing the system; and a fourth operational amplifier to amplify a difference between a fourth reference voltage value and a value representing charger current output from the charger to produce a fourth output, wherein the fourth reference voltage represents an upper current limit above which the battery charger prevents the charging of the battery. In another example embodiment, the subject matter of the second example embodiment can optionally include a second circuit to combine the first, second, third and fourth outputs to produce a combined voltage from the first, second, third and fourth operational amplifiers.
In another example embodiment, the subject matter of this example embodiment can optionally include that the second circuit comprises a compensator to integrate the first, second, third, and fourth outputs into the combined voltage. In another example embodiment, the subject matter of the second example embodiment can optionally include that the battery charger is operable to control (e.g., limit, throttle, etc.) the charger duty cycle in response to any one of the first, second, third or fourth outputs indicating that corresponding inputs of the first, second, third and fourth operational amplifiers exceed the first, second, third or fourth reference values, respectively. In another example embodiment, the subject matter of the second example embodiment can optionally include a diode connected to each of the first, second, third and fourth outputs. In another example embodiment, the subject matter of the second example embodiment can optionally include a diode connected to the first output. In a third example embodiment, a method for controlling a battery charger that is coupled to a rechargeable battery, a system load, and a power source of undetermined output power that provides power to the system load, comprises: receiving at least one reference voltage value from a reference source, the at least one reference voltage value representing a lower voltage limit below which the battery charger causes the battery to provide power to the system to supplement the power provided by the power source or the charger to lower the charging current of the battery until the power source voltage is returned to a predetermined level; and generating a first output that controls whether the battery is to provide power to the system to supplement the power provided by the power source when the power source voltage from the power source of undetermined output power is less than the predetermined level, including generating the first output by amplifying, with an operational amplifier,
a difference between a first reference voltage value and the input voltage from the power source. In another example embodiment, the subject matter of the third example embodiment can optionally include that the method further comprises: generating a second output, using a second operational amplifier, by amplifying a difference between a second reference voltage value and output voltage of the battery, wherein the second reference voltage represents an upper voltage limit above which the battery charger prevents the battery from being charged by power from the power source; generating a third output, using a third operational amplifier, by amplifying a difference between a third reference voltage value and a value representing input current from the power source, wherein the third reference voltage represents an upper current limit below which the battery charger holds the input current from the power source to prevent the input current from the power source from crashing the system; and generating a fourth output, using a fourth operational amplifier, by amplifying a difference between a fourth reference voltage value and a value representing charger current output from the charger to produce a fourth output, wherein the fourth reference voltage represents an upper current limit above which the battery charger prevents the charging of the battery. In another example embodiment, the subject matter of this example embodiment can optionally include combining the first, second, third and fourth outputs to produce a combined voltage from the first, second, third and fourth operational amplifiers.
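The four-error-amplifier scheme described above lends itself to a short illustrative model. The following Python sketch is an assumption-laden approximation, not the embodiment's actual (analog) implementation: all function names, gains, and reference values are hypothetical, and the diode-OR plus compensator stage is reduced to a simple "back off the duty cycle when any error output goes positive" rule.

```python
# Illustrative sketch (not the patent's actual implementation) of the
# four-error-amplifier control scheme. All names, gains, and reference
# values are hypothetical; a real charger realizes this in analog
# hardware, with the four outputs combined through diodes into a
# compensator that sets the charger duty cycle.

def amplifier_outputs(v_in, v_batt, i_in, i_chg,
                      v_in_ref=4.5, v_batt_ref=4.2,
                      i_in_ref=3.0, i_chg_ref=2.0, gain=10.0):
    """Return the four amplified error terms.

    Each term goes positive when its measured value crosses its
    reference limit in the direction the charger must react to:
    input undervoltage, battery overvoltage, input overcurrent,
    and charge overcurrent, respectively.
    """
    return (
        gain * (v_in_ref - v_in),      # 1st: input voltage below lower limit
        gain * (v_batt - v_batt_ref),  # 2nd: battery voltage above upper limit
        gain * (i_in - i_in_ref),      # 3rd: input current above upper limit
        gain * (i_chg - i_chg_ref),    # 4th: charge current above upper limit
    )

def throttle_duty_cycle(duty, outputs, step=0.05):
    """Lower the charger duty cycle if any error output is positive,
    mimicking the diode-OR of the four outputs into the compensator."""
    if max(outputs) > 0:
        return max(0.0, duty - step)
    return duty
```

Feeding the model a sagging input voltage drives only the first output positive, which throttles the duty cycle (and hence the charging current) until the input recovers, mirroring the behavior described for the first circuit.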
In another example embodiment, the subject matter of this example embodiment can optionally include that combining the first, second, third and fourth outputs to produce the combined voltage is performed, at least in part, by a compensator that integrates the first, second, third, and fourth outputs into the combined voltage. In another example embodiment, the subject matter of the third example embodiment can optionally include that the battery charger is operable to throttle the charger duty cycle in response to any one of the first, second, third or fourth outputs indicating that corresponding inputs of the first, second, third and fourth operational amplifiers exceed the first, second, third or fourth reference values, respectively.

Some portions of the detailed descriptions above are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language.
It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.

A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read only memory ("ROM"); random access memory ("RAM"); magnetic disk storage media; optical storage media; flash memory devices; etc.

Whereas many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that any particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. Therefore, references to details of various embodiments are not intended to limit the scope of the claims which in themselves recite only those features regarded as essential to the invention.
The present disclosure includes apparatuses, methods, and systems for data relocation in memory having two portions of data. An embodiment includes a memory having a plurality of physical blocks of memory cells, and a first and second portion of data having a first and second, respectively, number of logical block addresses associated therewith. Two of the plurality of physical blocks of cells do not have data stored therein. Circuitry is configured to relocate the data of the first portion that is associated with one of the first number of logical block addresses to one of the two physical blocks of cells that don't have data stored therein, and relocate the data of the second portion that is associated with one of the second number of logical block addresses to the other one of the two physical blocks of cells that don't have data stored therein. |
What is Claimed is:

1. An apparatus, comprising: a memory having a plurality of physical blocks of memory cells, wherein: the memory includes a first portion of data having a first number of logical block addresses associated therewith, and a second portion of data having a second number of logical block addresses associated therewith; and two of the plurality of physical blocks of memory cells do not have data stored therein; circuitry configured to: relocate the data of the first portion that is associated with one of the first number of logical block addresses to one of the two of the plurality of physical blocks of memory cells that do not have data stored therein; and relocate the data of the second portion that is associated with one of the second number of logical block addresses to the other one of the two of the plurality of physical blocks of memory cells that do not have data stored therein.

2. The apparatus of claim 1, wherein the two of the plurality of physical blocks of memory cells that do not have data stored therein separate the first portion of data and the second portion of data in the memory.

3. The apparatus of claim 1, wherein: the one of the first number of logical block addresses is a last one of the first number of logical block addresses; and the one of the second number of logical block addresses is a last one of the second number of logical block addresses.

4. The apparatus of claim 1, wherein: the first portion of data comprises user data; and the second portion of data comprises system data.

5. The apparatus of claim 1, wherein: the first portion of data comprises data that has been accessed at or above a particular frequency during program or sense operations performed on the memory; and the second portion of data comprises data that has been accessed below the particular frequency during program or sense operations performed on the memory.

6.
The apparatus of claim 1, wherein: the first portion of data comprises operating system data; and the second portion of data comprises multimedia data.

7. A method of operating memory, comprising: relocating data included in a first portion of the memory and associated with one of a number of logical block addresses of the first portion of the memory to a physical block of the memory that has no data stored therein; and relocating data included in a second portion of the memory and associated with one of a number of logical block addresses of the second portion of the memory to another physical block of the memory that has no data stored therein.

8. The method of claim 7, wherein the method includes: using algebraic mapping to identify a location in the memory of the physical block to which the data in the first portion of the memory has been relocated; and using algebraic mapping to identify a location in the memory of the other physical block to which the data in the second portion of the memory has been relocated.

9. The method of claim 8, wherein the method includes: using the algebraic mapping to identify the location in the memory of the physical block to which the data in the first portion of the memory has been relocated during an operation to sense that relocated data; and using the algebraic mapping to identify the location in the memory of the other physical block to which the data in the second portion of the memory has been relocated during an operation to sense that relocated data.

10. The method of claim 7, wherein the method includes relocating the data included in the first portion of the memory and relocating the data included in the second portion of the memory responsive to a triggering event.

11. The method of claim 10, wherein the triggering event is a particular number of program operations being performed on the memory.

12. The method of claim 10, wherein the triggering event is a power state transition occurring in the memory.

13.
The method of claim 7, wherein: relocating the data included in the first portion of the memory to the physical block of the memory results in a third physical block of the memory having no data stored therein; and relocating the data included in the second portion of the memory to the other physical block of the memory results in a fourth physical block of the memory having no data stored therein; and the method further includes: relocating data included in the first portion of the memory and associated with a different one of the number of logical block addresses of the first portion of the memory to the third physical block of the memory; and relocating data included in the second portion of the memory and associated with a different one of the number of logical block addresses of the second portion of the memory to the fourth physical block of the memory.

14. An apparatus, comprising: a memory having a plurality of physical blocks of memory cells, wherein: the memory includes a first portion of data having a first number of logical block addresses associated therewith, and a second portion of data having a second number of logical block addresses associated therewith; and two of the plurality of physical blocks of memory cells do not have data stored therein and separate the first portion of data and the second portion of data in the memory; circuitry configured to: relocate the data of the first portion that is associated with a last one of the first number of logical block addresses to one of the two of the plurality of physical blocks of memory cells that do not have data stored therein; and relocate the data of the second portion that is associated with a last one of the second number of logical block addresses to the other one of the two of the plurality of physical blocks of memory cells that do not have data stored therein.

15.
The apparatus of claim 14, wherein a size of each respective one of the first number of logical block addresses is different than a size of each respective one of the second number of logical block addresses.

16. The apparatus of claim 14, wherein the circuitry is included in the memory.

17. The apparatus of claim 14, wherein the circuitry is included in a controller of the apparatus.

18. The apparatus of claim 14, wherein the circuitry includes: a first register configured to store a value indicating a physical block address for the data of the first portion that is associated with a first one of the first number of logical block addresses; a second register configured to store a value indicating a physical block address for one of the two of the plurality of physical blocks of memory cells that do not have data stored therein; a third register configured to store a value indicating a physical block address for the other one of the two of the plurality of physical blocks of memory cells that do not have data stored therein; and a fourth register configured to store a value indicating a relative position of a first one of the second number of logical block addresses in the second portion of data.

19. A method of operating memory, comprising: relocating data included in a first portion of the memory and associated with a last one of a number of logical block addresses of the first portion of the memory to a physical block of the memory that has no data stored therein and is before the first portion of the memory in the memory; and relocating data included in a second portion of the memory and associated with a last one of a number of logical block addresses of the second portion of the memory to a physical block of the memory that has no data stored therein and is after the second portion of the memory in the memory.

20.
The method of claim 19, wherein the method includes: using algebraic mapping to identify a physical block address for the physical block to which the data associated with the last one of the number of logical block addresses of the first portion of the memory has been relocated; and using algebraic mapping to identify a physical block address for the physical block to which the data associated with the last one of the number of logical block addresses of the second portion of the memory has been relocated.

21. The method of claim 19, wherein the number of logical block addresses for the first portion of the memory and the number of logical block addresses for the second portion of the memory are randomized.

22. The method of claim 21, wherein the method includes relocating the data included in the second portion of the memory immediately upon relocating the data included in the first portion of the memory.

23. The method of claim 19, wherein the method includes: performing an operation on the memory upon relocating the data included in the first portion of the memory; and relocating the data included in the second portion of the memory upon performing the operation on the memory.
DATA RELOCATION IN MEMORY HAVING TWO PORTIONS OF DATA

Technical Field

[0001] The present disclosure relates generally to semiconductor memory and methods, and more particularly, to data relocation in memory having two portions of data.

Background

[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits and/or external removable devices in computers or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data and can include random-access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, read only memory (ROM), and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetic random access memory (MRAM), among others.

[0003] Memory devices can be combined together to form a solid state drive (SSD), an embedded MultiMediaCard (e.MMC), and/or a universal flash storage (UFS) device. An SSD, e.MMC, and/or UFS device can include non-volatile memory (e.g., NAND flash memory and/or NOR flash memory), and/or can include volatile memory (e.g., DRAM and/or SDRAM), among various other types of non-volatile and volatile memory. Non-volatile memory may be used in a wide range of electronic applications such as personal computers, portable memory sticks, digital cameras, cellular telephones, portable music players such as MP3 players, movie players, among others.

[0004] Flash memory devices can include memory cells storing data in a charge storage structure such as a floating gate, for instance. Flash memory devices typically use a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption.
Resistance variable memory devices can include resistive memory cells that can store data based on the resistance state of a storage element (e.g., a resistive memory element having a variable resistance).

[0005] Memory cells can be arranged into arrays, and memory cells in an array architecture can be programmed to a target (e.g., desired) state. For instance, electric charge can be placed on or removed from the charge storage structure (e.g., floating gate) of a flash memory cell to program the cell to a particular data state. The stored charge on the charge storage structure of the cell can indicate a threshold voltage (Vt) of the cell. A state of a flash memory cell can be determined by sensing the stored charge on the charge storage structure (e.g., the Vt) of the cell.

[0006] As an additional example, resistive memory cells can be programmed to store data corresponding to a target data state by varying the resistance level of the resistive memory element. Resistive memory cells can be programmed to a target data state (e.g., corresponding to a particular resistance state) by applying sources of an electrical field or energy, such as positive or negative electrical pulses (e.g., positive or negative voltage or current pulses) to the cells (e.g., to the resistive memory element of the cells) for a particular duration. A state of a resistive memory cell can be determined by sensing current through the cell responsive to an applied interrogation voltage. The sensed current, which varies based on the resistance level of the cell, can indicate the state of the cell.

[0007] A single level memory cell (SLC) can be programmed to a targeted one of two different data states, which can be represented by the binary units 1 or 0. Some flash and resistive memory cells can be programmed to a targeted one of more than two data states (e.g., 1111, 0111, 0011, 1011, 1001, 0001, 0101, 1101, 1100, 0100, 0000, 1000, 1010, 0010, 0110, and 1110).
Such cells may be referred to as multi state memory cells, multiunit cells, or multilevel cells (MLCs). MLCs can provide higher density memories without increasing the number of memory cells since each cell can represent more than one digit (e.g., more than one bit).

Brief Description of the Drawings

[0008] Figure 1 illustrates a diagram of a portion of a memory array having a number of physical blocks in accordance with an embodiment of the present disclosure.

[0009] Figure 2 is a block diagram of a computing system including a host and an apparatus in the form of a memory device in accordance with an embodiment of the present disclosure.

[0010] Figure 3 is a block diagram of circuitry for performing data relocation operations in memory in accordance with an embodiment of the present disclosure.

[0011] Figure 4 illustrates a conceptual example of a data relocation operation performed in memory in accordance with an embodiment of the present disclosure.

[0012] Figure 5A illustrates a conceptual example of a sequence of data relocation operations performed in memory in accordance with an embodiment of the present disclosure.

[0013] Figure 5B is a table of values associated with a sequence of data relocation operations performed in memory in accordance with an embodiment of the present disclosure.

Detailed Description

[0014] The present disclosure includes apparatuses, methods, and systems for data relocation in memory having two portions of data. An embodiment includes a memory having a plurality of physical blocks of memory cells, and a first and second portion of data having a first and second, respectively, number of logical block addresses associated therewith. Two of the plurality of physical blocks of cells do not have data stored therein.
Circuitry is configured to relocate the data of the first portion that is associated with one of the first number of logical block addresses to one of the two physical blocks of cells that do not have data stored therein, and relocate the data of the second portion that is associated with one of the second number of logical block addresses to the other one of the two physical blocks of cells that do not have data stored therein.

[0015] A wear-leveling operation can include and/or refer to an operation to relocate data currently being stored in one physical location of a memory to another physical location of the memory. Performing such wear-leveling operations can increase the performance (e.g., increase the speed, increase the reliability, and/or decrease the power consumption) of the memory, and/or can increase the endurance (e.g., lifetime) of the memory.

[0016] Previous wear-leveling operations may use tables to relocate the data in the memory. However, such tables may be large (e.g., may use a large amount of space in the memory), and may cause the wear-leveling operations to be slow. In contrast, operations (e.g., wear-leveling operations) to relocate data in accordance with the present disclosure may maintain an algebraic mapping (e.g., an algebraic mapping between logical and physical addresses) for use in identifying the physical location (e.g., physical block) to which the data has been relocated. Accordingly, operations to relocate data in accordance with the present disclosure may use less space in the memory, and may be faster, than previous wear-leveling operations.

[0017] Further, the memory may include (e.g., be separated and/or divided into) two different portions (e.g., logical regions) of data, as will be further described herein.
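The table-free, algebraic relocation described above can be sketched in a few lines of Python. This is a conceptual model only: the circular physical layout, the order in which the two portions slide, and the class and field names are illustrative assumptions (the claims describe the state as hardware registers and a table of values in Figure 5B), not the patent's exact register-level scheme.

```python
# Conceptual sketch (illustrative assumptions, not the patent's exact
# scheme) of relocating two portions of data, separated by two spare
# physical blocks, with a purely algebraic logical-to-physical mapping.

class TwoPortionMemory:
    def __init__(self, a_data, b_data):
        self.n = len(a_data) + len(b_data) + 2
        # Physical layout: [A ...][spare][B ...][spare], treated circularly.
        self.blocks = list(a_data) + [None] + list(b_data) + [None]
        # Per-portion state: region start, length, relocation count
        # (roughly what the claimed registers would track in hardware).
        self.state = {'A': [0, len(a_data), 0],
                      'B': [len(a_data) + 1, len(b_data), 0]}

    def phys(self, portion, lba):
        s, length, k = self.state[portion]
        # Algebraic logical-to-physical mapping -- no translation table.
        return (s + (lba + k) % length) % self.n

    def read(self, portion, lba):
        return self.blocks[self.phys(portion, lba)]

    def relocate(self):
        # One wear-leveling step: each portion's trailing physical block
        # slides into the spare block just before that portion, so both
        # portions creep (circularly) across every physical block.
        for portion in ('A', 'B'):
            s, length, k = self.state[portion]
            src = (s + length - 1) % self.n
            dst = (s - 1) % self.n
            assert self.blocks[dst] is None   # dst is a spare block
            self.blocks[dst] = self.blocks[src]
            self.blocks[src] = None           # src becomes the new spare
            self.state[portion] = [dst, length, (k + 1) % length]
```

Because the mapping is pure modular arithmetic over a region start and a relocation counter, a lookup never consults a translation table, and repeated `relocate()` calls slide both portions across all physical blocks while reads of every logical block address stay consistent.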
In such instances, previous wear-leveling operations may have to be independently applied to each respective portion of the memory (e.g., separate operations may need to be used for each respective portion), and the data of each respective portion may only be relocated across a fraction of the memory (e.g., the data of each respective portion may remain in separate physical regions of the memory). However, such an approach may be ineffective at increasing the performance and/or endurance of the memory. For instance, since the size of, and/or workload on, the two different logical regions can be different, one of the physical regions may be stressed more than the other one in such an approach.

[0018] In contrast, operations (e.g., wear-leveling operations) to relocate data in accordance with the present disclosure may work (e.g., increase performance and/or endurance) more effectively on memory that includes two different portions than previous wear-leveling operations. For example, an operation to relocate data in accordance with the present disclosure may be concurrently applied to each respective portion of the memory (e.g., the same operation can be used on both portions). Further, the data of each respective portion may be relocated across the entire memory (e.g., the data of each respective portion may slide across all the different physical locations of the memory). Accordingly, operations to relocate data in accordance with the present disclosure may be able to account (e.g., compensate) for a difference in size and/or workload of the two portions.

[0019] Further, previous wear-leveling operations may not be implementable in hardware. In contrast, operations (e.g., wear-leveling operations) to relocate data in accordance with the present disclosure may be implementable (e.g., completely implementable) in hardware.
For instance, operations to relocate data in accordance with the present disclosure may be implementable in the controller of the memory, or within the memory itself. Accordingly, operations to relocate data in accordance with the present disclosure may not impact the latency of the memory, and may not add additional overhead to the memory.
[0020] Although embodiments are not limited to a particular type of memory or memory device, operations (e.g., wear-leveling operations) to relocate data in accordance with the present disclosure can be performed (e.g., executed) on a hybrid memory device that includes a first memory array that can be a storage class memory and a number of second memory arrays that can be NAND flash memory. For example, the operations can be performed on the first memory array and/or the second number of memory arrays to increase the performance and/or endurance of the hybrid memory.
[0021] As used herein, “a”, “an”, or “a number of” can refer to one or more of something, and “a plurality of” can refer to one or more such things. For example, a memory device can refer to one or more memory devices, and a plurality of memory devices can refer to two or more memory devices. Additionally, the designators “R”, “B”, “S”, and “N”, as used herein, particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included with a number of embodiments of the present disclosure.
[0022] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits.
For example, 214 may reference element “14” in Figure 2, and a similar element may be referenced as 314 in Figure 3.
[0023] Figure 1 illustrates a diagram of a portion of a memory array 101 having a number of physical blocks in accordance with an embodiment of the present disclosure. Memory array 101 can be, for example, a NAND flash memory array. As an additional example, memory array 101 can be a storage class memory (SCM) array, such as, for instance, a 3D XPoint memory array, a ferroelectric RAM (FRAM) array, or a resistance variable memory array such as a PCRAM, RRAM, or spin torque transfer (STT) array, among others. Memory array 101 can be part of a hybrid memory, as will be further described herein (e.g., in connection with Figure 2). Further, although not shown in Figure 1, memory array 101 can be located on a particular semiconductor die along with various peripheral circuitry associated with the operation thereof.
[0024] As shown in Figure 1, memory array 101 has a number of physical blocks 107-0 (BLOCK 0), 107-1 (BLOCK 1), . . ., 107-B (BLOCK B) of memory cells. The memory cells can be single level cells and/or multilevel cells such as, for instance, two level cells, triple level cells (TLCs) or quadruple level cells (QLCs). As an example, the number of physical blocks in memory array 101 may be 128 blocks, 512 blocks, or 1,024 blocks, but embodiments are not limited to a particular power of two or to any particular number of physical blocks in memory array 101.
[0025] A number of physical blocks of memory cells (e.g., blocks 107-0, 107-1, . . ., 107-B) can be included in a plane of memory cells, and a number of planes of memory cells can be included on a die. For instance, in the example shown in Figure 1, each physical block 107-0, 107-1, . . ., 107-B can be part of a single die. That is, the portion of memory array 101 illustrated in Figure 1 can be a die of memory cells.
[0026] As shown in Figure 1, each physical block 107-0, 107-1, . .
., 107-B includes a number of physical rows (e.g., 103-0, 103-1, . . ., 103-R) of memory cells coupled to access lines (e.g., word lines). The number of rows (e.g., word lines) in each physical block can be 32, but embodiments are not limited to a particular number of rows 103-0, 103-1, . . ., 103-R per physical block. Further, although not shown in Figure 1, the memory cells can be coupled to sense lines (e.g., data lines and/or digit lines).
[0027] As one of ordinary skill in the art will appreciate, each row 103-0, 103-1, . . ., 103-R can include a number of pages of memory cells (e.g., physical pages). A physical page refers to a unit of programming and/or sensing (e.g., a number of memory cells that are programmed and/or sensed together as a functional group). In the embodiment shown in Figure 1, each row 103-0, 103-1, . . ., 103-R comprises one physical page of memory cells. However, embodiments of the present disclosure are not so limited. For instance, in an embodiment, each row can comprise multiple physical pages of memory cells (e.g., one or more even pages of memory cells coupled to even-numbered bit lines, and one or more odd pages of memory cells coupled to odd-numbered bit lines). Additionally, for embodiments including multilevel cells, a physical page of memory cells can store multiple pages (e.g., logical pages) of data (e.g., an upper page of data and a lower page of data, with each cell in a physical page storing one or more bits towards an upper page of data and one or more bits towards a lower page of data).
[0028] In an embodiment of the present disclosure, and as shown in Figure 1, a page of memory cells can comprise a number of physical sectors 105-0, 105-1, . . ., 105-S (e.g., subsets of memory cells). Each physical sector 105-0, 105-1, . . ., 105-S of cells can store a number of logical sectors of data. Additionally, each logical sector of data can correspond to a portion of a particular page of data.
As an example, a first logical sector of data stored in a particular physical sector can correspond to a logical sector corresponding to a first page of data, and a second logical sector of data stored in the particular physical sector can correspond to a second page of data. Each physical sector 105-0, 105-1, . . ., 105-S can store system and/or user data, and/or can include overhead data, such as error correction code (ECC) data, logical block address (LBA) data, and metadata.
[0029] Logical block addressing is a scheme that can be used by a host for identifying a logical sector of data. For example, each logical sector can correspond to a unique logical block address (LBA). Additionally, an LBA may also correspond (e.g., dynamically map) to a physical address, such as a physical block address (PBA), that may indicate the physical location of that logical sector of data in the memory. A logical sector of data can be a number of bytes of data (e.g., 256 bytes, 512 bytes, 1,024 bytes, or 4,096 bytes). However, embodiments are not limited to these examples. Further, in an embodiment of the present disclosure, memory array 101 can be separated and/or divided into a first logical region of data having a first number of LBAs associated therewith, and a second logical region of data having a second number of LBAs associated therewith, as will be further described herein (e.g., in connection with Figure 2).
[0030] It is noted that other configurations for the physical blocks 107-0, 107-1, . . ., 107-B, rows 103-0, 103-1, . . ., 103-R, sectors 105-0, 105-1, . . ., 105-S, and pages are possible. For example, rows 103-0, 103-1, . . ., 103-R of physical blocks 107-0, 107-1, . . .,
107-B can each store data corresponding to a single logical sector which can include, for example, more or less than 512 bytes of data.
[0031] Figure 2 is a block diagram of a computing system 200 including a host 202 and an apparatus in the form of a memory device 206 in accordance with an embodiment of the present disclosure. As used herein, an “apparatus” can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example. Further, in an embodiment, computing system 200 can include a number of memory devices analogous to memory device 206.
[0032] In the embodiment illustrated in Figure 2, memory device 206 can include a first type of memory (e.g., a first memory array 210) and a second type of memory (e.g., a number of second memory arrays 212-1, . . ., 212-N). The memory device 206 can be a hybrid memory device, where memory device 206 includes the first memory array 210 that is a different type of memory than the number of second memory arrays 212-1, . . ., 212-N. The first memory array 210 can be storage class memory (SCM), which can be a non-volatile memory that acts as main memory for memory device 206 because it has faster access time than the second number of memory arrays 212-1, . . ., 212-N. For example, the first memory array 210 can be 3D XPoint memory, FRAM, or resistance variable memory such as PCRAM, RRAM, or STT, among others. The second number of memory arrays 212-1, . . ., 212-N can act as a data store (e.g., storage memory) for memory device 206, and can be NAND flash memory, among other types of memory.
[0033] Although the embodiment illustrated in Figure 2 includes one memory array of the first type of memory, embodiments of the present disclosure are not so limited. For example, in an embodiment, memory device 206 can include a number of SCM arrays.
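The logical block addressing scheme described in paragraph [0029] can be sketched briefly. The sector sizes come from the text; the map structure and function name are hypothetical illustrations of the "dynamic map" idea, under which relocating data rebinds an LBA to a new PBA without the host-visible address changing:

```python
# Minimal sketch of logical block addressing (sector sizes from the text;
# structure and names hypothetical).

SECTOR_SIZE = 512  # bytes per logical sector; 256, 1024, and 4096 also named

def sector_byte_range(lba, sector_size=SECTOR_SIZE):
    """Byte span that logical sector `lba` covers in the logical space."""
    start = lba * sector_size
    return (start, start + sector_size)

# Dynamic LBA -> PBA association: relocating data only rebinds the LBA,
# so the host keeps using the same logical address.
lba_to_pba = {0: 7, 1: 3}
lba_to_pba[0] = 9  # data for LBA 0 relocated to PBA 9; LBA 0 unchanged
```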
However, memory device 206 may include less of the first type of memory than the second type of memory. For example, memory array 210 may store less data than is stored in memory arrays 212-1, . . ., 212-N.
[0034] Memory array 210 and memory arrays 212-1, . . ., 212-N can each have a plurality of physical blocks of memory cells, in a manner analogous to memory array 101 previously described in connection with Figure 1. Further, the memory (e.g., memory array 210, and/or memory arrays 212-1, . . ., 212-N) can include (e.g., be separated and/or divided into) two different portions (e.g., logical regions) of data. For instance, the memory may include a first portion of data having a first number (e.g., first quantity) of logical block addresses (LBAs) associated therewith, and a second portion of data having a second number (e.g., second quantity) of LBAs associated therewith. The first number of LBAs can include, for instance, a first sequence of LBAs, and the second number of LBAs can include, for instance, a second sequence of LBAs.
[0035] As an example, the first portion of data may comprise user data, and the second portion of data may comprise system data. As an additional example, the first portion of data may comprise data that has been accessed (e.g., data whose associated LBAs have been accessed) at or above a particular frequency during program and/or sense operations performed on the memory, and the second portion of data may comprise data that has been accessed (e.g., data whose associated LBAs have been accessed) below the particular frequency during program and/or sense operations performed on the memory. In such an example, the first portion of data may comprise data that is classified as “hot” data, and the second portion of data may comprise data that is classified as “cold” data.
As an additional example, the first portion of data may comprise operating system data (e.g., operating system files), and the second portion of data may comprise multimedia data (e.g., multimedia files). In such an example, the first portion of data may comprise data that is classified as “critical” data, and the second portion of data may comprise data that is classified as “non-critical” data.
[0036] The first and second number of LBAs may be the same (e.g., the first and second portions of data may be the same size), or the first number of LBAs may be different than the second number of LBAs (e.g., the sizes of the first portion of data and the second portion of data may be different). For instance, the first number of LBAs may be greater than the second number of LBAs (e.g., the size of the first portion of data may be larger than the size of the second portion of data). Further, the size of each respective one of the first number of LBAs may be the same as the size of each respective one of the second number of LBAs, or the size of each respective one of the first number of LBAs may be different than the size of each respective one of the second number of LBAs. For instance, the size of each respective one of the first number of LBAs may be a multiple of the size of each respective one of the second number of LBAs. Further, the LBAs associated with each respective portion of the memory can be randomized. For instance, the LBAs can be processed by a static randomizer.
[0037] In an embodiment, at least two of the plurality of physical blocks of the memory may not have valid data stored therein. For instance, two of the physical blocks of the memory may be blanks. These physical blocks may separate (e.g., be between) the first portion of data and the second portion of data in the memory.
For instance, a first one of these two physical blocks may be after the first portion of data and before the second portion of data, and a second one of the two physical blocks may be after the second portion and before the first portion. These physical blocks may be referred to herein as separation blocks. An example illustrating a memory having two different portions of data separated by two such separation blocks will be further described herein (e.g., in connection with Figure 4).
[0038] As illustrated in Figure 2, host 202 can be coupled to the memory device 206 via interface 204. Host 202 and memory device 206 can communicate (e.g., send commands and/or data) on interface 204. Host 202 can be a laptop computer, personal computer, digital camera, digital recording and playback device, mobile telephone, PDA, memory card reader, or interface hub, among other host systems, and can include a memory access device (e.g., a processor). One of ordinary skill in the art will appreciate that “a processor” can intend one or more processors, such as a parallel processing system, a number of coprocessors, etc.
[0039] Interface 204 can be in the form of a standardized physical interface. For example, when memory device 206 is used for information storage in computing system 200, interface 204 can be a serial advanced technology attachment (SATA) physical interface, a peripheral component interconnect express (PCIe) physical interface, a universal serial bus (USB) physical interface, or a small computer system interface (SCSI), among other physical connectors and/or interfaces. In general, however, interface 204 can provide an interface for passing control, address, information (e.g., data), and other signals between memory device 206 and a host (e.g., host 202) having compatible receptors for interface 204.
[0040] Memory device 206 includes controller 208 to communicate with host 202 and with the first memory array 210 and the number of second memory arrays 212-1, . . ., 212-N.
Controller 208 can send commands to perform operations on the first memory array 210 and the number of second memory arrays 212-1, . . ., 212-N. Controller 208 can communicate with the first memory array 210 and the number of second memory arrays 212-1, . . ., 212-N to sense (e.g., read), program (e.g., write), move, and/or erase data, among other operations.
[0041] Controller 208 can be included on the same physical device (e.g., the same die) as memories 210 and 212-1, . . ., 212-N. Alternatively, controller 208 can be included on a separate physical device that is communicatively coupled to the physical device that includes memories 210 and 212-1, . . ., 212-N. In an embodiment, components of controller 208 can be spread across multiple physical devices (e.g., some components on the same die as the memory, and some components on a different die, module, or board) as a distributed controller.
[0042] Host 202 can include a host controller to communicate with memory device 206. The host controller can send commands to memory device 206 via interface 204. The host controller can communicate with memory device 206 and/or the controller 208 on the memory device 206 to read, write, and/or erase data, among other operations.
[0043] Controller 208 on memory device 206 and/or the host controller on host 202 can include control circuitry and/or logic (e.g., hardware and firmware). In an embodiment, controller 208 on memory device 206 and/or the host controller on host 202 can be an application specific integrated circuit (ASIC) coupled to a printed circuit board including a physical interface. Also, memory device 206 and/or host 202 can include a buffer of volatile and/or nonvolatile memory and a number of registers.
[0044] For example, as shown in Figure 2, memory device 206 can include circuitry 214. In the embodiment illustrated in Figure 2, circuitry 214 is included in controller 208. However, embodiments of the present disclosure are not so limited.
For instance, in an embodiment, circuitry 214 may be included in (e.g., on the same die as) memory 210 and/or memories 212-1, . . ., 212-N (e.g., instead of in controller 208).
[0045] Circuitry 214 can comprise, for instance, hardware, and can perform wear-leveling operations to relocate data stored in memory array 210 and/or memory arrays 212-1, . . ., 212-N in accordance with the present disclosure. For example, circuitry 214 can relocate the data of the first portion of data that is associated with a particular one of the first number of LBAs to one of the two separation blocks, and can relocate the data of the second portion of data that is associated with a particular one of the second number of LBAs to the other one of the two separation blocks.
[0046] For instance, circuitry 214 can relocate the data of the first portion that is associated with the last one of the first number of LBAs (e.g., the last LBA in the first sequence of LBAs) to the second separation block (e.g., the separation block that is after the second portion and before the first portion), and circuitry 214 can relocate the data of the second portion that is associated with the last one of the second number of LBAs (e.g., the last LBA in the second sequence of LBAs) to the first separation block (e.g., the separation block that is after the first portion and before the second portion). Such a data relocation may result in two different physical blocks of the memory having no valid data stored therein (e.g., may result in two different physical blocks of the memory becoming the separation blocks).
For instance, relocating the data of the first portion associated with the last one of the first number of LBAs may result in a different physical block becoming the separation block that is after the second portion and before the first portion, and relocating the data of the second portion associated with the last one of the second number of LBAs may result in a different physical block becoming the separation block that is after the first portion and before the second portion. Further, relocating the data of the first portion associated with the last one of the first number of LBAs may result in a different one of the first number of LBAs (e.g., the next-to-last LBA in the first sequence of LBAs) becoming the last one of the first number of LBAs, and relocating the data of the second portion associated with the last one of the second number of LBAs may result in a different one of the second number of LBAs (e.g., the next-to-last LBA in the second sequence of LBAs) becoming the last one of the second number of LBAs. An example illustrating such a data relocation operation will be further described herein (e.g., in connection with Figures 3 and 4).
[0047] In an embodiment, circuitry 214 may perform an operation to relocate the data responsive to a triggering event. The triggering event may be, for example, a particular number of program operations, such as, for instance, 100 program operations, being performed (e.g., executed) on the memory. For instance, a counter (not shown in Figure 2) can be configured to send an initiation signal in response to the particular number of program operations being performed, and circuitry 214 may perform the operation to relocate the data in response to receiving the initiation signal from the counter.
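The counter-driven trigger of paragraph [0047] can be sketched as follows. The class and method names are hypothetical; the threshold of 100 program operations is the example given in the text:

```python
class ProgramOpCounter:
    """Hypothetical sketch: emit an initiation signal every `threshold`
    program operations (the text gives 100 as an example)."""

    def __init__(self, threshold=100, on_initiate=None):
        self.threshold = threshold
        self.count = 0
        self.on_initiate = on_initiate  # e.g., the data relocation routine

    def record_program_op(self):
        self.count += 1
        if self.count >= self.threshold:
            self.count = 0
            if self.on_initiate is not None:
                self.on_initiate()  # circuitry performs the relocation

triggers = []
counter = ProgramOpCounter(threshold=100, on_initiate=lambda: triggers.append(1))
for _ in range(250):
    counter.record_program_op()
# 250 program operations at a threshold of 100 initiate relocation twice,
# with 50 operations counted toward the next trigger.
```

A power-state transition trigger (active to stand-by, idle, or power-down) would simply call the same initiation routine from the power-management path instead of from a counter.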
As an additional example, the triggering event may be a power state transition occurring in the memory, such as, for instance, memory device 206 going from active mode to stand-by mode, idle mode, or power-down mode.
[0048] In an embodiment, the data of the second portion may be relocated immediately upon the data of the first portion being relocated. However, in some instances, the operation to relocate the data may need to be suspended in order to perform an operation, such as a program or sense operation, requested by host 202. In such an instance, the operation requested by the host can be performed upon the data of the first portion being relocated (e.g., upon the relocation of the data being completed), and the data of the second portion may be relocated upon the requested operation being performed (e.g., upon the operation being completed).
[0049] Once the data has been relocated, circuitry 214 can use algebraic mapping to identify the physical location in the memory to which the data has been relocated. For example, circuitry 214 can use algebraic mapping (e.g., algebraic logical to physical mapping) to identify (e.g., compute) the location in the memory (e.g., the PBA) of the physical block to which the data of the first portion has been relocated (e.g., the location of the second separation block), and the location in the memory (e.g., the PBA) of the physical block to which the data in the second portion of the memory has been relocated (e.g., the location of the first separation block).
For instance, circuitry 214 can use the algebraic mapping to identify the location in the memory of the physical block to which the data of the first portion has been relocated during an operation to sense that relocated data (e.g., upon receiving a request from host 202 to read one of the first number of LBAs), and to identify the location in the memory of the physical block to which the data of the second portion has been relocated during an operation to sense that relocated data (e.g., upon receiving a request from host 202 to read one of the second number of LBAs). Such an algebraic mapping will be further described herein (e.g., in connection with Figure 3).
[0050] Circuitry 214 can perform additional (e.g., subsequent) wear-leveling operations to further relocate the data stored in memory array 210 and/or memory arrays 212-1, . . ., 212-N throughout the lifetime of the memory. For instance, circuitry 214 can perform an additional (e.g., subsequent) operation to relocate the data responsive to an additional (e.g., subsequent) triggering event.
[0051] For example, in an operation to relocate data in the memory that is performed subsequent to the example operation previously described herein, circuitry 214 can relocate the data of the first portion that is associated with the different one of the first number of LBAs that has now become the last one (e.g., the one that was previously the next-to-last LBA in the first sequence of LBAs) to the different physical block that has now become the separation block that is after the second portion and before the first portion, and circuitry 214 can relocate the data of the second portion that is associated with the different one of the second number of LBAs that has now become the last one (e.g., the one that was previously the next-to-last LBA in the second sequence of LBAs) to the different physical block that has now become the separation block that is after the first portion and before the second portion.
Such a data relocation may once again result in two different physical blocks of the memory becoming the separation blocks, and different ones of the first and second number of LBAs becoming the last one of the first and second number of LBAs, respectively, and subsequent data relocation operations can continue to be performed in an analogous manner. An example illustrating a sequence of such subsequent data relocation operations will be further described herein (e.g., in connection with Figures 5A-5B).
[0052] The embodiment illustrated in Figure 2 can include additional circuitry, logic, and/or components not illustrated so as not to obscure embodiments of the present disclosure. For example, memory device 206 can include address circuitry to latch address signals provided over I/O connectors through I/O circuitry. Address signals can be received and decoded by a row decoder and a column decoder, to access memory arrays 210 and 212-1, . . ., 212-N. Further, memory device 206 can include a main memory, such as, for instance, a DRAM or SDRAM, that is separate from and/or in addition to memory arrays 210 and 212-1, . . ., 212-N.
[0053] Figure 3 is a block diagram of circuitry 314 for performing data relocation operations in memory in accordance with an embodiment of the present disclosure. Circuitry 314 can be, for example, circuitry 214 previously described in connection with Figure 2.
[0054] As shown in Figure 3, circuitry 314 can include four registers (e.g., a first register 320-1, a second register 320-2, a third register 320-3, and a fourth register 320-4). Register 320-1 can store a value indicating (e.g., pointing to) the PBA for the data of the first portion of the memory that is associated with the first one of the first number of LBAs (e.g., the first LBA in the first sequence of LBAs).
This value can be referred to herein as “SA”.
[0055] Register 320-2 can store a value indicating the PBA for one of the two physical blocks of the memory that do not have data stored therein, and register 320-3 can store a value indicating the PBA for the other one of the two physical blocks that do not have data stored therein. For example, register 320-2 can store a value indicating the PBA for the physical block that is after the first portion of data and before the second portion of data (e.g., the first separation block), and register 320-3 can store a value indicating the PBA for the physical block that is after the second portion and before the first portion (e.g., the second separation block). The value stored by register 320-2 can be referred to herein as “gA”, and the value stored by register 320-3 can be referred to herein as “gB”.
[0056] Register 320-4 can store a value indicating the relative position of the first one of the second number of LBAs (e.g., the first LBA in the second sequence of LBAs) in the second portion of the data of the memory. This value can be referred to herein as “tB”.
[0057] Circuitry 314 can use the values stored in registers 320-1, 320-2, 320-3, and 320-4 (e.g., SA, gA, gB, and tB) to perform a data relocation operation in memory in accordance with the present disclosure. For instance, circuitry 314 can use the below example of code, representing executable instructions, to perform a data relocation operation in accordance with the present disclosure.
In the below example, “PRBL” represents the total number (e.g., quantity) of PBAs in the memory, “PL” represents the total number of LBAs associated with the first portion of data (e.g., the first number of LBAs), “nu” represents the total number of LBAs associated with the second portion of data (e.g., the second number of LBAs), and “sep” represents the number of PBAs for each respective separation block.
6: end if
7: end procedure
[0058] Further, circuitry 314 can use the values stored in registers 320-1, 320-2, 320-3, and 320-4 (e.g., SA, gA, gB, and tB) to perform an algebraic mapping (e.g., an algebraic logical to physical mapping) to identify the physical location in the memory (e.g., the PBA) to which the data has been relocated (e.g., during a sense operation, as previously described herein). For instance, circuitry 314 can compute the PBA, represented below as “p”, for any LBA, represented below as “l”, associated with the first portion of data using the below “First Portion Mapping”, and circuitry 314 can compute the PBA for any LBA associated with the second portion of data using the below “Second Portion Mapping”:
[0059] Figure 4 illustrates a conceptual example 430 of a data relocation operation performed in memory in accordance with an embodiment of the present disclosure. The memory may be, for example, memory array 210 and/or memory arrays 212-1, . . ., 212-N previously described in connection with Figure 2, and the data relocation operation may be performed by, for example, circuitry 214 and/or 314 previously described in connection with Figures 2 and 3, respectively.
[0060] In the example illustrated in Figure 4, the memory includes ten physical blocks of memory cells, represented by PBAs 0 through 9 (e.g., PBA 0 corresponds to the first physical block, PBA 1 corresponds to the second physical block, PBA 2 corresponds to the third physical block, etc.).
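The First Portion Mapping and Second Portion Mapping formulas referenced in paragraph [0058] did not survive in this text. The following is a hedged reconstruction inferred from the register definitions (SA, gA, gB, tB) and the Figure 4 walkthrough, not the disclosure's own code, and it assumes each LBA and each separation block occupies exactly one PBA (sep = 1):

```python
def map_first_portion(l, SA, gA, nB, sep, nPBA):
    # Reconstructed, not from the disclosure. First-portion blocks sit
    # consecutively starting at SA; LBAs falling past the first separation
    # block additionally skip the second portion and both separation blocks.
    before_gap = (gA - SA) % nPBA  # first-portion blocks before the gap
    if l < before_gap:
        return (SA + l) % nPBA
    return (SA + l + nB + 2 * sep) % nPBA

def map_second_portion(l, gA, tB, nB, sep, nPBA):
    # Reconstructed: the second portion starts right after the first
    # separation block, rotated by the relative position tB.
    return (gA + sep + (tB + l) % nB) % nPBA

# Check against the Figure 4 example (10 PBAs, 5 A-LBAs, 3 B-LBAs, sep = 1).
# Step 0: SA=0, gA=5, gB=9, tB=0 -> A0..A4 at PBAs 0..4, B0..B2 at 6..8.
assert [map_first_portion(l, 0, 5, 3, 1, 10) for l in range(5)] == [0, 1, 2, 3, 4]
assert [map_second_portion(l, 5, 0, 3, 1, 10) for l in range(3)] == [6, 7, 8]
# After one relocation: SA=0, gA=4, tB=1 -> A4 at PBA 9, B2 at PBA 5.
assert map_first_portion(4, 0, 4, 3, 1, 10) == 9
assert map_second_portion(2, 4, 1, 3, 1, 10) == 5
```

The point of the mapping is that only the four register values change during a relocation; no per-LBA table is stored or updated.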
Further, in the example illustrated in Figure 4, the memory includes (e.g., is separated and/or divided into) two different portions (e.g., logical regions) of data, which can be referred to as portion A and portion B. As shown in Figure 4, the data of portion A has five LBAs associated therewith (e.g., LBAs A0 through A4), and the data of portion B has three LBAs associated therewith (e.g., LBAs B0 through B2). In the example illustrated in Figure 4, the LBA size is the same for portion A and portion B. Further, “SA” shown in Figure 4 indicates the PBA for the data of portion A associated with LBA A0, and “tB” shown in Figure 4 indicates the relative position of LBA B0 in portion B (e.g., the relative position of LBA B0 in the sequence of LBAs associated with portion B).
[0061] Further, in the example illustrated in Figure 4, two of the physical blocks of the memory do not have valid data stored therein, and separate (e.g., are between) portion A and portion B. These blocks may be referred to as separation blocks (as previously described herein), and are represented in Figure 4 by a symbol. In the example illustrated in Figure 4, “gA” indicates the PBA for the separation block that is after portion A and before portion B, and “gB” indicates the PBA for the separation block that is after portion B and before portion A.
[0062] In the example illustrated in Figure 4, each respective separation block comprises one PBA, which is the same size as the LBA sizes for portions A and B. However, in an example in which the LBA sizes for portions A and B are different, such as, for instance, in which the size of one portion is a multiple of the other, the number of PBAs in each separation block may be equal to that multiple.
For instance, if the LBA size for portion B is equal to the PBA size, but the LBA size for portion A is four PBAs, the number of PBAs in each separation block would be four.
[0063] The column labelled “Step 0” in Figure 4 shows the data allocation within the memory before the data relocation operation is performed. For instance, as illustrated in Figure 4, before the data relocation operation is performed the data of portion A is located in PBAs 0 through 4, the data of portion B is located in PBAs 6 through 8, the separation block that is after portion A and before portion B is located at PBA 5 (e.g., as indicated by gA), and the separation block that is after portion B and before portion A is located at PBA 9 (e.g., as indicated by gB). Further, the relative position of LBA B0 in portion B before the data relocation operation is performed is first (e.g., LBA B0 is first in the sequence of LBAs associated with portion B, as indicated by the value for tB), as illustrated in Figure 4.
[0064] The data relocation operation can be divided into two substeps, which are represented by the “Step 1B” and “Step 1A” columns in Figure 4. As shown in Figure 4, during the first substep (e.g., Step 1B), the data of portion B associated with the last of the LBAs of portion B (e.g., the data associated with LBA B2) is relocated to the separation block that is after portion A and before portion B (e.g., to the separation block indicated by gA). That is, the first substep (e.g., Step 1B) moves the data associated with the last of the LBAs of portion B up, to be before the first of the LBAs of portion B (e.g., the data associated with LBA B2 is moved up to be before the data associated with LBAs B0 and B1).
During the second substep (e.g., Step 1A), the data of portion A associated with the last of the LBAs of portion A (e.g., the data associated with LBA A4) is relocated to the separation block that is after portion B and before portion A (e.g., to the separation block indicated by gB), as illustrated in Figure 4. That is, the second substep moves the data associated with the last of the LBAs of portion A down, to be after the LBAs of portion B (e.g., the data associated with LBA A4 is moved down to be after the data associated with LBAs B2, B0, and B1).[0065] As a result of the data relocation operation (e.g., after Steps 1B and 1A are performed), PBA 4 becomes the separation block that is after portion A and before portion B, and PBA 8 becomes the separation block that is after portion B and before portion A, as illustrated in Figure 4 (e.g., as indicated by gA and gB, respectively, after Step 1B). Also as a result of the data relocation operation, the next-to-last of the LBAs of portion A (e.g., LBA A3) becomes the last of the LBAs of portion A before portion B, and the next-to-last of the LBAs of portion B (e.g., LBA B1) becomes the last of the LBAs of portion B, as illustrated in Figure 4. Also as a result of the data relocation operation, the data of portion A becomes located in PBAs 0 through 3 and 9, and the data of portion B becomes located in PBAs 5 through 7, as illustrated in Figure 4. Also as a result of the data relocation operation, the relative position of LBA B0 in portion B becomes second (e.g., LBA B0 becomes second in the sequence of LBAs associated with portion B, as indicated by the value for rB), as illustrated in Figure 4.[0066] In the example illustrated in Figure 4, each substep is run once during the data relocation operation. However, in an example in which the LBA sizes for portions A and B are different, such as, for instance, in which the size of one portion is a multiple of the other, Step 1A and/or Step 1B may need to be run more than once.
For instance, in an example in which the LBA size for portion B is equal to the PBA size, but the LBA size for portion A is four PBAs, Step 1B would be run four times, and Step 1A would be run once.[0067] Figure 5A illustrates a conceptual example 540 of a sequence of data relocation operations performed in memory in accordance with an embodiment of the present disclosure. The memory is analogous to the memory previously described in connection with Figure 4, and the data relocation operations may be performed by, for example, circuitry 214 and/or 314 previously described in connection with Figures 2 and 3, respectively. Figure 5B is a table 545 of values for SA, gA, gB, and rB associated with the sequence of data relocation operations. SA, gA, gB, and rB are analogous to SA, gA, gB, and rB, respectively, previously described in connection with Figure 4. Each "Step" column in Figures 5A and 5B represents the effect of performing both substep B and substep A (e.g., "Step 1" is the combination of substep 1B and substep 1A described in connection with Figure 4).[0068] The column labelled "Step 0" in Figure 5A shows the data allocation within the memory before the first data relocation operation of the sequence is performed, and is analogous to the "Step 0" column described in connection with Figure 4. For example, as shown in Figure 5B, the values for SA, gA, gB, and rB in the Step 0 column (e.g., before the first data relocation operation is performed) are 0, 5, 9, and 0, respectively, as described in connection with Figure 4.[0069] The columns labelled "Step 1" through "Step 6" in Figure 5A show the data allocation within the memory after the performance of each respective data relocation operation in the sequence. For example, the Step 1 column shows the data allocation after the performance of the first data relocation operation, and is analogous to the combined effect of substep 1B and substep 1A described in connection with Figure 4.
The Step 2 column shows the data allocation after the performance of the second data relocation operation, the Step 3 column shows the data allocation after the performance of the third data relocation operation, etc. Further, the columns labelled "Step 1" through "Step 6" in Figure 5B show the values for SA, gA, gB, and rB after the performance of each respective data relocation operation in the sequence. For example, the data allocation and values shown in the Step 1 columns in Figures 5A and 5B, respectively, are analogous to the data allocation and SA, gA, gB, and rB values after the relocation operation described in connection with Figure 4.[0070] As shown in the Step 2 column of Figure 5A, after the performance of the second data relocation operation in the sequence, the data of portion B that was associated with the last of the LBAs of portion B after Step 1 (e.g., the data associated with LBA B1) is relocated to the separation block that was after portion A and before portion B after Step 1 (e.g., to PBA 4), and the data of portion A that was associated with the last of the LBAs of portion A after Step 1 (e.g., the data associated with LBA A3) is relocated to the separation block that was after portion B and before portion A after Step 1 (e.g., to PBA 8). Further, as shown in the Step 2 column of Figure 5A, PBA 3 has become the separation block that is after portion A and before portion B, and PBA 7 has become the separation block that is after portion B and before portion A, as indicated by the values for gA and gB, respectively, in the Step 2 column of Figure 5B. Further, as shown in the Step 2 column of Figure 5A, the LBA that was the next-to-last of the LBAs of portion A after Step 1 (e.g., LBA A2) has become the last of the LBAs of portion A before portion B, and the LBA that was the next-to-last of the LBAs of portion B after Step 1 (e.g., LBA B0) has become the last of the LBAs of portion B.
Further, as shown in the Step 2 column of Figure 5A, the data of portion A has become located in PBAs 0 through 2 and 8 through 9, and the data of portion B has become located in PBAs 4 through 6, with the PBA for the data of portion A associated with LBA A0 remaining at PBA 0, as indicated by the value for SA in the Step 2 column of Figure 5B. Further, as shown in the Step 2 column of Figure 5A, the relative position of LBA B0 in portion B has become third, as indicated by the value for rB in the Step 2 column of Figure 5B.[0071] As shown in the Step 3 column of Figure 5A, after the performance of the third data relocation operation in the sequence, the data of portion B that was associated with the last of the LBAs of portion B after Step 2 (e.g., the data associated with LBA B0) is relocated to the separation block that was after portion A and before portion B after Step 2 (e.g., to PBA 3), and the data of portion A that was associated with the last of the LBAs of portion A after Step 2 (e.g., the data associated with LBA A2) is relocated to the separation block that was after portion B and before portion A after Step 2 (e.g., to PBA 7). Further, as shown in the Step 3 column of Figure 5A, PBA 2 has become the separation block that is after portion A and before portion B, and PBA 6 has become the separation block that is after portion B and before portion A, as indicated by the values for gA and gB, respectively, in the Step 3 column of Figure 5B. Further, as shown in the Step 3 column of Figure 5A, the LBA that was the next-to-last of the LBAs of portion A after Step 2 (e.g., LBA A1) has become the last of the LBAs of portion A, and the LBA that was the next-to-last of the LBAs of portion B after Step 2 (e.g., LBA B2) has become the last of the LBAs of portion B.
Further, as shown in the Step 3 column of Figure 5A, the data of portion A has become located in PBAs 0 through 1 and 7 through 9, and the data of portion B has become located in PBAs 3 through 5, with the PBA for the data of portion A associated with LBA A0 remaining at PBA 0, as indicated by the value for SA in the Step 3 column of Figure 5B. Further, as shown in the Step 3 column of Figure 5A, the relative position of LBA B0 in portion B has become first, as indicated by the value for rB in the Step 3 column of Figure 5B.[0072] The fourth, fifth, and sixth data relocation operations of the sequence can continue in an analogous manner, as shown in the Step 4, Step 5, and Step 6 columns, respectively, in Figures 5A and 5B. As such, it can be seen that the effect of the data relocation operations is to sequentially move the data associated with the last of the LBAs of portion B up to be before the first of the LBAs of portion B, and to sequentially move the data associated with the last of the LBAs of portion A down to be after the LBAs of portion B, such that portion B moves up and portion A moves down throughout the operation of the memory. Accordingly, not only do the data relocation operations of the sequence move the data associated with each respective LBA to a different PBA throughout the operation of the memory, but the two different portions of data are not static (e.g., the PBAs at which the data of each respective portion are located continue to change throughout the operation of the memory). That is, the data of each respective portion may be relocated across the entire memory throughout the operation of the memory.[0073] Each respective data relocation operation in the sequence can be performed responsive to a separate triggering event, as previously described herein (e.g., in connection with Figure 2). Although the example sequence illustrated in Figures 5A-5B includes six data relocation operations, embodiments of the present disclosure are not so limited.
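The relocation scheme described above can be reproduced with a short simulation. The sketch below is illustrative only: the list-based memory model, the block labels such as "A0", and the helper function are not part of the disclosure, but the step function implements the two substeps (last LBA of portion B moves up into gA, last LBA of portion A moves down into gB) and reproduces the Step 1 through Step 3 allocations of Figure 5A.

```python
def relocation_step(mem, g_a, g_b, order_a, order_b):
    """One data relocation operation: substep B, then substep A.

    mem is a list of PBA contents (None marks a separation block);
    g_a / g_b are the PBAs of the separation blocks after portion A / B;
    order_a / order_b are the current logical LBA sequences of each portion.
    """
    last_b = order_b[-1]                 # substep B: last LBA of portion B
    src_b = mem.index(last_b)
    mem[g_a], mem[src_b] = last_b, None  # moves up, into the block at g_a
    last_a = order_a[-1]                 # substep A: last LBA of portion A
    src_a = mem.index(last_a)
    mem[g_b], mem[src_a] = last_a, None  # moves down, into the block at g_b
    # The vacated blocks become the new separation blocks, and each moved
    # LBA now logically precedes the rest of its portion.
    return src_a, src_b, [last_a] + order_a[:-1], [last_b] + order_b[:-1]

# Step 0 layout from Figure 4: A0-A4 in PBAs 0-4, gA at 5, B0-B2 in 6-8, gB at 9.
mem = ["A0", "A1", "A2", "A3", "A4", None, "B0", "B1", "B2", None]
g_a, g_b = 5, 9
order_a, order_b = ["A0", "A1", "A2", "A3", "A4"], ["B0", "B1", "B2"]
for _ in range(3):                       # Steps 1-3 of Figure 5A
    g_a, g_b, order_a, order_b = relocation_step(mem, g_a, g_b, order_a, order_b)
# After Step 3: portion A in PBAs 0-1 and 7-9, portion B in PBAs 3-5,
# gA = 2, gB = 6, and LBA B0 is again first in portion B (rB = 0).
```

Running the loop one step at a time reproduces the intermediate allocations as well, e.g. after Step 1 the memory holds A0-A3 in PBAs 0-3, B2/B0/B1 in PBAs 5-7, and A4 in PBA 9.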
For instance, additional data relocation operations can continue to be performed (e.g., responsive to subsequent triggering events) in an analogous manner throughout the lifetime of the memory.[0074] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of a number of embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of ordinary skill in the art upon reviewing the above description. The scope of a number of embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of a number of embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.[0075] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
Various examples are directed to systems and methods for programming memory. A programming appliance may receive a command file comprising a first pre-generated digital signature. The first pre-generated digital signature may be associated with a memory system, with a first command, and with a first memory system counter value. The programming appliance may send a first command message to the memory system. The first command message may comprise the first command and the first pre-generated digital signature.
1. A method for programming a memory system, the method comprising: receiving, by a programming device, a command file including a first pre-generated digital signature, the first pre-generated digital signature being associated with the memory system, with a first command, and with a first memory system counter value; and sending, by the programming device, a first command message to the memory system, the first command message including the first command and the first pre-generated digital signature. 2. The method of claim 1, further comprising: verifying, by the memory system, the first pre-generated digital signature using a current memory system counter value and a memory system encryption key; and executing, by the memory system, the first command. 3. The method according to claim 1, wherein the command file further comprises a second pre-generated digital signature, the second pre-generated digital signature being associated with the memory system, with a second command, and with a second memory system counter value after the first memory system counter value, the method further comprising: after sending the first command message to the memory system, sending a second command message to the memory system, the second command message including the second command and the second pre-generated digital signature. 4. The method according to claim 3, wherein the command file further includes a third pre-generated digital signature, the third pre-generated digital signature being associated with a third command and with a third memory system counter value after the second memory system counter value, the method further comprising: after sending the second command message, determining that command sequence data indicates the third command; and sending a third command message to the memory system, the third command message including the third command and the third pre-generated digital signature. 5. The method according to claim 1, wherein the command file
includes a second pre-generated digital signature, the second pre-generated digital signature being associated with the first command and with a second memory system counter value different from the first memory system counter value, the method further comprising: querying, by the programming device, the memory system to receive a current memory system counter value; and selecting, by the programming device, the first pre-generated digital signature based at least in part on the current memory system counter value and the first memory system counter value. 6. The method of claim 1, further comprising: querying the memory system to receive a first current memory system counter value; determining that the first current memory system counter value is less than the first memory system counter value; querying the memory system to receive a second current memory system counter value greater than the first current memory system counter value; and determining that the second current memory system counter value is equal to the first memory system counter value. 7. The method of claim 1, further comprising: querying the memory system to receive a first current memory system counter value; determining that the first current memory system counter value is less than the first memory system counter value; and sending, to the memory system, an instruction to increment a memory system counter. 8. The method according to claim 1, wherein the command file includes a first sequence of pre-generated digital signatures corresponding to a first command sequence and a second sequence of pre-generated digital signatures corresponding to a second command sequence, the first sequence of pre-generated digital signatures including the first pre-generated digital signature. 9.
The method of claim 8, wherein the second sequence of pre-generated digital signatures further comprises the first pre-generated digital signature. 10. A system for programming a memory, the system comprising: a programming device configured to perform operations comprising: receiving a command file including a first pre-generated digital signature, the first pre-generated digital signature being associated with a memory system, with a first command, and with a first memory system counter value; and sending a first command message to the memory system, the first command message including the first command and the first pre-generated digital signature. 11. The system according to claim 10, wherein the command file further comprises a second pre-generated digital signature, the second pre-generated digital signature being associated with the memory system, with a second command, and with a second memory system counter value after the first memory system counter value, and wherein the programming device is further configured to perform operations comprising: after sending the first command message to the memory system, sending a second command message to the memory system, the second command message including the second command and the second pre-generated digital signature. 12. The system according to claim 11, wherein the command file further includes a third pre-generated digital signature, the third pre-generated digital signature being associated with a third command and with a third memory system counter value after the second memory system counter value, and wherein the programming device is further configured to perform operations comprising: after sending the second command message, determining that command sequence data indicates the third command; and sending a third command message to the memory system, the third command message including the third command and the third pre-generated digital signature. 13. The system according to claim 10,
wherein the command file includes a second pre-generated digital signature, the second pre-generated digital signature being associated with the first command and with a second memory system counter value different from the first memory system counter value, and wherein the programming device is further configured to perform operations comprising: querying the memory system to receive a current memory system counter value; and selecting the first pre-generated digital signature based at least in part on the current memory system counter value and the first memory system counter value. 14. The system of claim 10, wherein the programming device is further configured to perform operations comprising: querying the memory system to receive a first current memory system counter value; determining that the first current memory system counter value is less than the first memory system counter value; querying the memory system to receive a second current memory system counter value greater than the first current memory system counter value; and determining that the second current memory system counter value is equal to the first memory system counter value. 15. The system of claim 10, wherein the programming device is further configured to perform operations comprising: querying the memory system to receive a first current memory system counter value; determining that the first current memory system counter value is less than the first memory system counter value; and sending, to the memory system, an instruction to increment a memory system counter. 16. The system according to claim 10, wherein the command file includes a first sequence of pre-generated digital signatures corresponding to a first command sequence and a second sequence of pre-generated digital signatures corresponding to a second command sequence, the first sequence of pre-generated digital signatures including the first pre-generated digital signature. 17. The system of claim 16, wherein
the second sequence of pre-generated digital signatures further includes the first pre-generated digital signature. 18. The system of claim 10, further comprising the memory system, wherein the memory system is programmed to perform operations comprising: verifying, by the memory system, the first pre-generated digital signature using a current memory system counter value and a memory system encryption key; and executing, by the memory system, the first command. 19. A non-transitory computer-readable medium comprising instructions thereon that, when executed by at least one processor, cause the at least one processor to perform operations comprising: receiving a command file including a first pre-generated digital signature, the first pre-generated digital signature being associated with a memory system, with a first command, and with a first memory system counter value; and sending a first command message to the memory system, the first command message including the first command and the first pre-generated digital signature. 20. The non-transitory computer-readable medium of claim 19, wherein the command file further includes a second pre-generated digital signature, the second pre-generated digital signature being associated with the memory system, with a second command, and with a second memory system counter value after the first memory system counter value, and wherein the medium further includes instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising: after sending the first command message to the memory system, sending a second command message to the memory system, the second command message including the second command and the second pre-generated digital signature.
Secure Memory System ProgrammingPriority ApplicationThis application claims the benefit of priority to U.S. Application Serial No. 16/052,215, filed on August 1, 2018, which is incorporated herein by reference in its entirety.BackgroundMemory systems are usually installed in a computer or other electronic device as internal semiconductor integrated circuits. There are many different types of memory, including volatile memory and non-volatile memory. Volatile memory requires power to maintain its data and includes random access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM). Non-volatile memory can retain stored data when power is off, and includes flash memory, read-only memory (ROM), electrically erasable programmable ROM (EEPROM), static RAM (SRAM), erasable programmable ROM (EPROM), resistance variable memory (such as phase change random access memory (PCRAM)), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM) or 3D XPoint™ memory, among others. Flash memory is used as non-volatile memory for a wide range of electronic applications. Flash memory systems generally include one or more groups of single-transistor, floating-gate, or charge-trap memory cells that allow high memory density, high reliability, and low power consumption. Two common types of flash memory array architectures are the NAND and NOR architectures, named after the logical form in which the basic memory cell configuration of each is arranged. The memory cells of a memory array are usually arranged in a matrix. In one example, the gate of each floating-gate memory cell in a row of the array is coupled to an access line (e.g., a word line). In the NOR architecture, the drain of each memory cell in a column of the array is coupled to a data line (e.g., a bit line).
In the NAND architecture, the drains of the memory cells in a string of the array are coupled together in series, source to drain, between a source line and a bit line. Both NOR and NAND architecture semiconductor memory arrays are accessed through decoders that activate specific memory cells by selecting the word line coupled to their gates. In a NOR architecture semiconductor memory array, once activated, the selected memory cell places its data value on the bit line, causing different currents to flow depending on the state to which the particular cell was programmed. In a NAND architecture semiconductor memory array, a high bias voltage is applied to a drain-side select gate (SGD) line. The word lines coupled to the gates of the unselected memory cells of each group are driven with a specified pass voltage (e.g., Vpass) to operate the unselected memory cells of each group as pass transistors (e.g., to pass current in a manner unrestricted by the data values they store). Current then flows from the source line to the bit line through each series-coupled group, restricted only by the selected memory cells of each group, thereby placing current-encoded data values of the selected memory cells on the bit lines.Each flash memory cell in a NOR or NAND architecture semiconductor memory array can be programmed individually or collectively to one or more programmed states. For example, a single-level cell (SLC) can represent one of two programmed states (e.g., 1 or 0), representing one bit of data. However, flash memory cells can also represent one of more than two programmed states, allowing higher-density memories to be manufactured without increasing the number of memory cells, because each cell can represent more than one binary digit (e.g., more than one bit). Such cells may be referred to as multi-state memory cells, multi-digit cells, or multi-level cells (MLC).
In some instances, MLC can refer to a memory cell that can store two data bits per cell (e.g., one of four programmed states), a triple-level cell (TLC) can refer to a memory cell that can store three data bits per cell (e.g., one of eight programmed states), and a quad-level cell (QLC) can store four data bits per cell. MLC is used herein in its broader context, and can refer to any memory cell that can store more than one data bit per cell (i.e., that can represent more than two programmed states).Some memory arrays are two-dimensional (2D) structures arranged on the surface of a semiconductor substrate. To increase the memory capacity for a given area, and to decrease cost, the size of individual memory cells has been reduced. However, there are technological limits to the reduction in size of individual memory cells, and thus to the memory density of 2D memory arrays. In response, three-dimensional (3D) memory structures, such as 3D NAND architecture semiconductor memory systems, are being developed to further increase memory density and lower memory cost.Memory arrays or systems can be combined to form the storage volume of a memory system, such as solid-state drives (SSDs), Universal Flash Storage (UFS™) devices, MultiMediaCard (MMC) solid-state storage devices, embedded MMC devices (eMMC™), and the like. An SSD can be used as, among other things, the main storage device of a computer, having advantages over traditional hard drives with moving parts with respect to, for example, performance, size, weight, ruggedness, operating temperature range, and power consumption. For example, SSDs can have reduced seek time, latency, or other delays associated with disk drives (e.g., electromechanical, etc.).
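The cell types above differ only in how many programmed states each cell must distinguish: a cell storing n bits must support 2**n distinct states. This small illustration (the dictionary names are chosen here for illustration, not taken from the disclosure) makes the relationship explicit:

```python
# Bits stored per cell for the cell types named above.
bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

# A cell storing n bits must be programmable to one of 2**n distinct states.
states_per_cell = {name: 2 ** bits for name, bits in bits_per_cell.items()}
# {'SLC': 2, 'MLC': 4, 'TLC': 8, 'QLC': 16}
```

This is why higher-density cells trade capacity for tighter state margins: each additional bit per cell doubles the number of states that must fit in the same physical cell.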
SSDs use non-volatile memory cells (such as flash memory cells) to obviate internal battery supply requirements, thus allowing the drive to be more versatile and compact.An SSD can include a number of memory devices, including a number of dies or logical units (e.g., logical unit numbers or LUNs), and can include one or more processors or other controllers performing the logic functions required to operate the memory devices or interface with external systems. Such SSDs can include one or more flash memory dies, including a number of memory arrays and peripheral circuitry thereon. The flash memory arrays can include a number of blocks of memory cells organized into a number of physical pages. In many examples, the SSD will also include DRAM or SRAM (or other forms of memory die or other memory structures). The SSD can receive commands from a host associated with memory operations, such as read or write operations to transfer data (e.g., user data and associated integrity data, such as error data, address data, etc.) between the memory devices and the host, or erase operations to erase data from the memory devices.Brief Description of the DrawingsIn the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.Figure 1 shows an example of an environment containing a host device, a memory system, and a programming device containing a command file.Figure 2 shows another example environment that includes a programming device configured to program memory systems at multiple host devices.FIG.
3 is a flowchart showing an example of a process flow that can be executed by a programming device to send a command to a memory system.Figure 4 is a flowchart showing an example of a process flow that can be executed by a programming device to send a sequence of commands to a memory system.Figure 5 is a flowchart showing an example of a process flow that can be executed by a programming device to send a command to a memory system.Figure 6 is a flowchart showing an example of a process flow for sending a command message with a pre-generated digital signature to a memory system.Figure 7 is a flowchart showing an example of a process flow for sending a command message with a pre-generated digital signature to a memory system.Figure 8 shows an example host as part of one or more devices with a memory device.Figure 9 is a block diagram showing an example of a machine on which one or more embodiments can be implemented.Detailed DescriptionAspects of the present disclosure relate to secure memory system programming. During the creation of a memory system and/or a host system that utilizes the memory system, it is often desirable to configure the memory system. A programming device can provide the memory system with commands that instruct the memory system to perform various operations and/or apply various configurations.Some memory systems include security features that prevent the memory system from executing a command unless the command is accompanied by a valid digital signature. The memory system verifies the command by checking the validity of the digital signature. Memory system commands verified with digital signatures are referred to herein as signed commands. In some memory systems, all commands are signed commands. In other memory systems, fewer than all commands are signed commands.
For example, commands that affect security features, device provisioning, and/or other sensitive areas of operation may be signed, while routine commands (such as read or write requests) may be unsigned.The digital signature that accompanies a signed command can be created (and verified) using multiple input data elements, including an encryption key and a memory system counter value. The digital signature can be created by a programming device or another suitable signing device, such as a hardware security module (HSM). The digital signature can be generated using a symmetric key arrangement or an asymmetric key arrangement. In a symmetric key arrangement, both the signing device that generates the digital signature and the memory system that verifies the digital signature use the same encryption key, which may be a server root key for the memory system. In an asymmetric key arrangement, the signing device utilizes a private key that may be unknown to the memory system. The memory system uses a public key corresponding to the private key of the signing device.The digital signature can also be based on the counter value of a memory system counter. The signing device that generates the digital signature can query the memory system to receive the current value of the memory system counter. The signing device generates the digital signature by executing a cryptographic function (such as a hash function) using the encryption key, the command, and the current memory system counter value. In a symmetric key arrangement, the signing device uses a secret encryption key known to the signing device and the memory device. In an asymmetric key arrangement, the signing device uses a private encryption key that is known to the signing device but may not be known to the memory system.
A command message containing the command and the digital signature is sent to the memory system. The memory system verifies the digital signature by computing an encrypted digest from the command in the command message, the current value of the memory system counter, and the memory system encryption key. The encrypted digest is the output of a hash function or other suitable encryption function executed at the memory system over the command, the current value of the memory system counter, and the memory system encryption key. In a symmetric key arrangement, the memory system encryption key is a copy of the encryption key used by the signing device. In an asymmetric key arrangement, the memory system encryption key is the public key of the signing device. If the encrypted digest is equal to the digital signature contained in the command message, the digital signature is verified and the memory system executes the command. If the encrypted digest is not equal to the digital signature contained in the command message, the digital signature is not verified and the memory system does not execute the command.

As described, if a device (such as a programming device) and the memory system have a corresponding set of encryption keys (for example, the device and the memory system have the same symmetric key, or the device has a private key and the memory system has the corresponding public key), the device can instruct the memory system to execute signed commands.

However, in some instances, providing a programming device with a symmetric key known to the memory system, or with a copy of the private key associated with a public key known to the memory system, may create challenges. For example, an unauthorized actor who steals an encryption key (e.g., a symmetric key or a private key) from a programming device can then compromise the memory system by generating signed commands with valid digital signatures.
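The verification step on the memory system side can be sketched as follows, under the same illustrative assumptions (symmetric key, HMAC-SHA256, 8-byte counter encoding); the function name and command strings are hypothetical.

```python
import hashlib
import hmac

def verify_command_message(system_key: bytes, command: bytes,
                           signature: bytes, current_counter: int) -> bool:
    """Recompute the encrypted digest and compare it to the received signature."""
    message = current_counter.to_bytes(8, "big") + command
    digest = hmac.new(system_key, message, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking where the digest first differs.
    return hmac.compare_digest(digest, signature)

key = bytes.fromhex("00112233445566778899aabbccddeeff")
good = hmac.new(key, (5).to_bytes(8, "big") + b"SET_CONFIG",
                hashlib.sha256).digest()
```

Note that verification fails both when the signature was made with a different key and when the memory system's current counter value no longer matches the one the signature was made for.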
In environments where, for example, a single programming device programs multiple memory systems at multiple host devices, this challenge is magnified: in that case, the programming device manages multiple encryption keys for multiple memory systems.

The programming device can be implemented with security features to limit unauthorized access to the encryption keys. For example, the programming device may be or may contain a hardware security module (HSM) that restricts physical and network access to the encryption keys it stores. However, increasing the security of programming devices still poses challenges. For example, the cost of purchasing, operating, and maintaining programming devices with HSMs or other suitable security features can be very high, which may limit the feasibility of deploying programming devices at distributed locations. Likewise, even when proper security is used, providing encryption keys to multiple programming devices still increases the number of personnel and facilities that must be trusted to avoid security breaches.

The various examples described herein address these and other challenges by, for example, using a command file containing one or more pre-generated digital signatures to provide secure memory system programming. The pre-generated digital signatures can be used by a programming device to program one or more memory systems. In this way, the programming device need not receive an encryption key in order to program a memory system. Instead, the programming device uses pre-generated digital signatures from the command file to send command messages to the memory system.

A pre-generated digital signature is generated by an HSM or other suitable generator device. The pre-generated digital signature corresponds to a specific memory system, a signed command, and a selected value of the memory system counter.
The selected value of the memory system counter may be the value that the memory system counter is expected to have when the pre-generated digital signature is used. For example, as described herein, the selected value of the memory system counter may be a known initial value of the memory system counter, a predetermined number of increments greater than the known initial value, and/or a value to which the programming device can increment the memory system counter. The signed command is the command that can be executed using the pre-generated digital signature. The generator device creates the pre-generated digital signature by performing an encryption operation using the signed command, the selected memory system counter value, and the encryption key (for example, a symmetric key or a private key) associated with the specific memory system.

The programming device receives the command file and uses a pre-generated digital signature to create a command message. The command message contains the signed command and the pre-generated digital signature. The memory system uses its memory system encryption key (e.g., a public key or a symmetric key) and the signed command from the command message to verify the pre-generated digital signature.

In some instances, the programming device determines that the current memory system counter value matches the selected memory system counter value of the pre-generated digital signature, for example, by querying the memory system or by incrementing the memory system counter as described herein.

In some instances, the command file contains more than one pre-generated digital signature. For example, the command file may contain multiple pre-generated digital signatures for multiple memory systems at the same host device or at different host devices. In some examples, the command file contains one or more sequences of pre-generated digital signatures for a particular memory system.
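The generator device's role can be sketched as producing one command-file row per (memory system, signed command, selected counter value) triple. The row layout and function name are hypothetical, and HMAC-SHA256 is assumed as the encryption operation purely for illustration.

```python
import hashlib
import hmac

def build_command_file(device_keys: dict, command: bytes, counters: range) -> list:
    """Pre-generate signatures for one signed command across several
    memory systems (keyed by UID) and several selected counter values."""
    rows = []
    for uid, key in device_keys.items():
        for ctr in counters:
            msg = ctr.to_bytes(8, "big") + command
            rows.append({
                "uid": uid,
                "command": command.decode(),
                "counter": ctr,
                "signature": hmac.new(key, msg, hashlib.sha256).hexdigest(),
            })
    return rows

# Two hypothetical memory systems with distinct symmetric keys.
keys = {"UID0": b"\x00" * 16, "UID1": b"\x01" * 16}
command_file = build_command_file(keys, b"CMD0", range(0, 3))
```

The resulting file can be handed to a programming device, which never needs the keys in `device_keys`.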
A sequence of pre-generated digital signatures corresponds to a sequence of commands to be executed at the memory system. Consecutive pre-generated digital signatures may correspond to consecutive commands in the command sequence. Likewise, consecutive pre-generated digital signatures can correspond to increasing memory system counter values. In this way, the programming device can use the consecutive pre-generated digital signatures to send command messages that execute the command sequence at the memory device.

In some examples, the command file contains multiple pre-generated signatures for the same memory system and signed command that are associated with different memory system counter values. The programming device can query the memory system to determine its current counter value and select the pre-generated digital signature associated with a memory system counter value equal to the current memory system counter value.

In some examples, the programming device is configured to increment the memory system counter until its current value is equal to the memory system counter value associated with a pre-generated digital signature. The programming device queries the memory system to receive the current value of the memory system counter, and then increments the memory system counter until its value matches the memory system counter value associated with the selected digital signature.

FIG. 1 shows an example of an environment 100 including a host device 105, memory systems 110A, 110B, and 110N, and a programming device 120 including a command file 126. The host device 105 communicates with one or more of the memory systems 110A, 110B, and 110N through a communication interface 162.
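The increment-until-match behavior can be sketched as below. `MemorySystemStub` and its method names are hypothetical stand-ins for the real query and increment commands.

```python
class MemorySystemStub:
    """Stand-in for a memory system exposing only its monotonic counter."""
    def __init__(self, counter: int = 0):
        self.counter = counter
    def read_counter(self) -> int:
        return self.counter
    def increment_counter(self) -> None:
        self.counter += 1

def sync_counter(memory: MemorySystemStub, target: int,
                 max_steps: int = 10_000) -> bool:
    """Increment the memory system counter until it equals the counter value
    associated with the selected pre-generated digital signature."""
    for _ in range(max_steps):
        current = memory.read_counter()
        if current == target:
            return True
        if current > target:
            return False  # monotonic: a smaller target is unreachable
        memory.increment_counter()
    return False

mem = MemorySystemStub(counter=3)
```

Because the counter is monotonic, a target value below the current value can never be reached, so the loop gives up rather than wrapping around.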
The host device 105 and/or the memory systems 110A, 110B, and 110N may be included in a variety of products, such as Internet of Things (IoT) devices (for example, refrigerators or other appliances, sensors, motors or actuators, mobile communication devices, automobiles, unmanned aerial vehicles (UAVs), etc.), network devices (for example, routers, switches, etc.), or any other product in which they support the processing, communication, or control of the product. In some examples, the host device 105 and the memory systems 110A, 110B, and 110N are contained in a common board or package.

In the example environment 100 of FIG. 1, the host device 105 includes a host controller 160. The host controller 160 may include a processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or one or more other suitable components that, among other functions, can manage the memory systems 110A, 110B, and 110N. One or more communication interfaces 162 may be used to transfer data between the memory systems 110A, 110B, 110N and one or more other components of the host device 105 (such as the host controller 160). Examples of such communication interfaces include a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, a Universal Flash Storage (UFS) interface, an eMMC™ interface, or one or more other connectors or interfaces. The host device 105 may include a host system, an electronic device, a processor, a memory card reader, or one or more other electronic devices external to the memory systems 110A, 110B, and 110N. Although three memory systems 110A, 110B, and 110N are shown as part of the host device 105, in other examples more or fewer memory systems may be included. In some examples, the host device 105 may be a machine having some or all of the components discussed with reference to the machine 900 of FIG. 9.
Likewise, another example of the host device 105 is discussed with reference to FIG. 8.

The example of FIG. 1 includes various additional features of the memory system 110A. The other memory systems 110B, 110N may contain the same features or different features. In FIG. 1, the memory system 110A includes a memory controller 115 and a memory array 121. The memory array 121 includes a plurality of individual memory dies (for example, stacked two-dimensional or three-dimensional (3D) NAND or NOR dies, etc.). In one example, the memory systems 110A, 110B, and 110N may be discrete memory or storage device components of the host device 105. In other examples, the memory systems 110A, 110B, and 110N may be part of an integrated circuit (for example, a system on a chip (SOC), etc.) stacked with or otherwise including one or more other components of the host device 105.

The memory controller 115 may receive instructions from the host device 105 and may communicate with the memory array 121, for example to transfer (e.g., write or erase) data to, or transfer (e.g., read) data from, one or more of the memory cells, planes, sub-blocks, blocks, or pages of the memory array 121. Among other things, the memory controller 115 may include circuitry or firmware that includes one or more components or integrated circuits. For example, the memory controller 115 may include one or more memory control units, circuits, or components configured to control access across the memory array 121 and to provide a translation layer between the host device 105 and the memory system 110A. The memory controller 115 may include one or more input/output (I/O) circuits, lines, or interfaces to transfer data to or from the memory array 121. Among other things, the memory controller 115 may include circuitry or firmware, such as multiple components or integrated circuits associated with various memory management functions.
Management functions for NAND memory cells may include wear leveling (e.g., garbage collection or reclamation), error detection or correction, block retirement, or one or more other memory management functions. The memory controller 115 may parse or format commands received from the host device 105 (for example, host commands) into device commands (for example, commands associated with operation of the memory array, etc.), or generate device commands for the memory controller 115 or one or more other components of the memory system 110A (for example, to accomplish various memory management functions).

For example, as described herein, when the host controller 160 receives a command message from the programming device 120, the host controller 160 sends the command message to the memory controller 115 of the appropriate memory system 110A. The memory controller 115 may verify the digital signature contained in the command message and execute the command if the digital signature is verified. For unsigned commands, the memory controller 115 can execute the commands without first verifying a digital signature.

The memory controller 115 may manage a set of management tables configured to maintain various information associated with one or more components of the memory system 110A (for example, various information associated with a memory array or one or more memory cells coupled to the memory controller 115). For example, for a NAND memory system, the management tables may include information about block ages, block erase counts, error history, or one or more error counts (e.g., a write operation error count, a read bit error count, a read operation error count, an erase error count, etc.). In some instances, if the number of detected errors for one or more of the error counts is above a threshold, the bit error may be referred to as an uncorrectable bit error.
Among other things, the memory controller 115 may maintain a count of correctable or uncorrectable bit errors in the management tables. The management tables may also include one or more logical-to-physical (L2P) tables that include L2P pointers correlating logical addresses with physical addresses at the memory array 121. The management tables may be stored in RAM of the memory controller 115. In some instances, some or all of the management tables are stored at the memory array 121. For example, the memory controller 115 may read the management tables from the memory array 121 and/or cache some or all of the management tables in RAM of the memory controller 115.

Among other things, the memory controller 115 may also include circuitry or components configured to control memory operations associated with writing data to, reading data from, or erasing one or more memory cells of the memory system 110A coupled to the memory controller 115. The memory operations may be based on, for example, host commands received from the host device 105 (e.g., from its host controller 160), or generated internally by the memory controller 115 (e.g., in association with wear leveling, error detection or correction, etc.).

The memory controller 115 may include an error correction code (ECC) component 140. Among other things, the ECC component may include an ECC engine or other circuitry configured to detect or correct errors associated with writing data to or reading data from one or more memory cells of the memory system 110A coupled to the memory controller 115. The memory controller 115 may be configured to actively detect and recover from error occurrences (for example, bit errors, operation errors, etc.) associated with various operations or data storage, while maintaining the integrity of the data transferred between the host device 105 and the memory system 110A, or the integrity of stored data (for example, using redundant RAID storage, etc.), and can remove (for example, retire) failing memory resources (for example, memory cells, memory arrays, pages, blocks, etc.) to prevent future errors.

In the example environment 100 of FIG. 1, the memory controller 115 also includes an encryption engine 142. For example, as described herein, the encryption engine 142 may be configured to perform encryption operations on data. The encryption engine 142 may include one or more key registers and one or more math engines. A key register can store an encryption key used to perform an encryption operation. For example, a key register may store the memory system encryption key (e.g., a public key of the signing device and/or a symmetric key that is also known to the signing device) used to evaluate signed commands. Although the key register is described as a component of the encryption engine 142, in some instances the key register may be located elsewhere (e.g., a secure location at the memory array 121). The math engine may be configured to perform encryption operations, for example using one or more encryption keys stored at a key register.

The encryption engine 142 may be configured to perform one or more encryption operations to generate digital signatures as described herein. The encryption engine 142 may be configured to use any suitable encryption algorithm (for example, a cryptographic hash function such as an SHA algorithm (for example, SHA-256), the MD5 algorithm, etc.) to generate a digital signature. A cryptographic hash function maps an input value to a hash value, often abbreviated as a hash. The hash function can be selected so that two different input values are unlikely to map to the same hash value.
The encryption engine 142 may be configured to generate a digital signature by performing a hash function on an input value related to the thing being digitally signed. For example, the encryption engine 142 may concatenate the signed command to be executed, the memory system counter value, and the encryption key to form an input value. The encryption engine 142 may then perform a hash function on the input value to generate the digital signature.

In some instances, the encryption engine 142 is configured to operate in conjunction with the communication interface between the host device 105 and the memory system 110A. For example, the encryption engine 142 may include a key register or other suitable storage location for storing an encryption key used, for example, to encrypt data and/or to generate digital signatures related to communications between the memory system 110A and the host device 105 according to PCIe or another suitable interface.

In some examples, the memory controller 115 also includes a memory device counter 146. The memory device counter 146 contains software or hardware for incrementing a counter value. The memory device counter 146 may be a monotonic counter, configured such that the counter value always moves in one direction along a counter sequence. For example, the memory device counter 146 starts at a known initial value (e.g., when the memory system 110A is manufactured). When an increment event occurs, the monotonic counter 146 is incremented from the known initial value to the next value along the counter sequence in the counter sequence direction. When a subsequent increment event occurs, the monotonic counter 146 increments to the next value along the counter sequence, and so on. The counter sequence may include, for example, a set of ascending integers, a set of descending integers, a set of prime numbers, a set of even numbers, or any other suitable sequence.
As used herein, a first counter value is said to be greater than a second counter value if the first counter value is encountered along the counter sequence after the counter is incremented one or more times from the second counter value in the counter sequence direction.

An increment event may include any suitable event at the memory system 110A. For example, an increment event may occur when the memory system 110A executes a command. Another example increment event may occur when the memory system 110A receives an instruction to increment the monotonic counter 146. Another example increment event may occur when the memory system 110A is reset or restarted.

The memory array 121 may include a number of memory cells arranged in, for example, one or more devices, one or more planes, one or more sub-blocks, one or more blocks, one or more pages, etc. As an example, a 48 GB TLC NAND memory device may contain 18,592 bytes (B) of data per page (16,384 + 2,208 bytes), 1,536 pages per block, 548 blocks per plane, and 4 or more planes per device. As another example, a 32 GB MLC memory device (storing two bits of data per cell, i.e., 4 programmable states) can contain 18,592 bytes (B) of data per page (16,384 + 2,208 bytes), 1,024 pages per block, 548 blocks per plane, and 4 planes per device, but with half the write time and twice the program/erase (P/E) cycles of a corresponding TLC memory device. Other examples may include other numbers or arrangements. In some examples, a memory device, or a portion thereof, may be selectively operated in SLC mode or in a desired MLC mode (such as TLC, QLC, etc.).

The memory array 121 includes physical address locations 150A, 150B, and 150N. The physical address locations 150A, 150B, and 150N are locations on the memory array 121 that are uniquely associated with physical addresses.
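A minimal software model of such a monotonic counter can make the "greater than" convention concrete. This sketch assumes an ascending-integer counter sequence; as noted above, the hardware may use any suitable sequence.

```python
class MonotonicCounter:
    """Counter that only ever moves forward along its counter sequence."""

    def __init__(self, initial: int = 0):
        self._value = initial  # known initial value, set at manufacture

    @property
    def value(self) -> int:
        return self._value

    def increment(self) -> int:
        """Advance one step in the counter sequence direction (an increment
        event: command execution, an explicit instruction, a reset, etc.)."""
        self._value += 1
        return self._value

    @staticmethod
    def is_greater(first: int, second: int) -> bool:
        """'Greater' means reachable from `second` by one or more increments;
        for an ascending-integer sequence this is ordinary comparison."""
        return first > second

counter = MonotonicCounter(initial=0)
counter.increment()  # e.g., a command was executed
counter.increment()  # e.g., the system was reset
```

For a non-numeric sequence (e.g., primes), `is_greater` would instead compare positions within the sequence.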
In operation, data is typically written to or read from the NAND memory array 121 in units of pages and erased in units of blocks. For example, the physical address locations 150A, 150B, 150N may correspond to pages. However, some memory operations (for example, reading, writing, erasing, etc.) can be performed on larger or smaller groups of memory cells as desired. Therefore, in some instances (e.g., for some operations), the physical address locations 150A, 150B, 150N contain more or less than one page. The data transfer size of the memory system 110A is typically referred to as a page, and the data transfer size of the host device 105 is typically referred to as a sector.

Although a page of data may contain multiple bytes of user data (for example, a data payload containing multiple sectors of data) and its corresponding metadata, the page size typically refers only to the number of bytes used to store the user data. As an example, a page of data with a page size of 4 KB may contain 4 KB of user data (for example, 8 sectors assuming a sector size of 512 B) as well as multiple bytes (for example, 32 B, 54 B, 224 B, etc.) of metadata corresponding to the user data, such as integrity data (for example, error-detecting or error-correcting code data), address data (for example, logical address data, etc.), or other metadata associated with the user data. Physical address locations 150A, 150B, 150N that store metadata and the like may be referred to as over-provisioned physical address locations.

Different types of memory cells or memory arrays 121 may provide different page sizes or may require different amounts of metadata associated therewith. For example, different memory device types may have different bit error rates, which may result in different amounts of metadata being necessary to ensure the integrity of a page of data (e.g., compared to a memory device with a lower bit error rate, a memory device with a higher bit error rate may require more bytes of error correction code data). As an example, multi-level cell (MLC) NAND flash devices may have a higher bit error rate than corresponding single-level cell (SLC) NAND flash devices. As such, MLC devices may require more metadata bytes for error data than corresponding SLC devices.

FIG. 1 also shows the programming device 120 in communication with the host device 105. The programming device 120 may be or may include any suitable computing device or component, such as one or more servers, one or more processors, one or more ASICs, one or more FPGAs, and so on. The programming device 120 includes a programming device data storage device 122, which may include any suitable volatile or non-volatile data storage device. The data storage device 122 stores the command file 126. As described herein, the command file 126 contains one or more pre-generated digital signatures.

The command file 126 is created by a generator device 124. The generator device 124 may include any suitable computing device or component, such as one or more servers, one or more HSMs, and so on. The generator device 124 accesses the encryption keys for the memory systems 110A, 110B, and 110N and creates the one or more pre-generated digital signatures contained in the command file 126. For example, in a symmetric arrangement, the generator device 124 accesses a symmetric encryption key shared with the corresponding memory system 110A, 110B, 110N. In an asymmetric arrangement, the generator device 124 accesses the private key corresponding to the public key stored at the respective memory system 110A, 110B, 110N.

The generator device 124 provides the command file 126 to the programming device 120 in any suitable manner, such as via a wired or wireless network connection, or on a physical medium that is physically transported to the location of the programming device 120 by mail or otherwise.
As described herein, the programming device 120 uses the command file 126 to program one or more of the memory systems 110A, 110B, and 110N. For example, the programming device 120 selects from the command file 126 a pre-generated digital signature, a signed command, and a selected value of the memory device counter 146 associated with one of the memory systems 110A, 110B, and 110N. The selected value of the memory device counter 146 may be a known initial value of the counter 146 or another value. The programming device 120 generates a command message 128 containing the pre-generated digital signature and the command associated with the pre-generated digital signature.

The command message 128 is provided to the host controller 160, which in turn provides the command message to the memory system 110A. The memory system 110A (e.g., its controller 115) uses the command from the command message 128, the current value of the memory device counter 146, and the encryption key of the memory system to generate an encrypted digest. For example, the encrypted digest can be generated by using the encryption engine 142 to perform an encryption operation on the command, the memory device counter value, and the encryption key of the memory system 110A. If the encrypted digest is equal to the pre-generated digital signature, the memory system 110A executes the indicated command.

FIG. 2 shows another example environment 200 that includes a programming device 220 configured to program memory systems through a plurality of host devices 205A, 205B, 205N. Like the host device 105 of FIG. 1, each host device 205A, 205B, 205N can communicate with one or more memory systems. Three host devices 205A, 205B, 205N are shown in FIG. 2; however, a single programming device can program memory systems at more or fewer host devices than are shown.
The programming device 220 sends command messages 228A, 228B, 228N to the corresponding host devices 205A, 205B, 205N. The command messages 228A, 228B, 228N contain commands and pre-generated signatures from a command file 226 stored at a data storage device 222 of the programming device 220. Each of the command messages 228A, 228B, 228N can be directed to a host device 205A, 205B, 205N, which directs the command message 228A, 228B, 228N to a specific memory system.

In some examples, a command file (such as the command file 126, 226) contains multiple pre-generated digital signatures that can be referenced by memory system, signed command, and/or memory system counter value. Table 1 below shows an arrangement of an example command file containing pre-generated digital signatures for each of the memory systems identified by the unique identifiers (UIDs) UID0, UID1, and UIDN:

Table 1:

  UID   Signed command   MS counter value   Pre-generated signature
  UID0  CMD0             MTC0               ---
        CMD0             MTC1               ---
        CMD0             MTC2               ---
        ...              ...                ...
        CMD0             MTCN               ---
  UID1  CMD0             MTC0               ---
        CMD0             MTC1               ---
        CMD0             MTC2               ---
        ...              ...                ...
        CMD0             MTCN               ---
  ...
  UIDN  CMD0             MTC0               ---
        CMD0             MTC1               ---
        CMD0             MTC2               ---
        ...              ...                ...

In Table 1, the pre-generated digital signatures themselves are not shown, but are represented by "---". In this example, for each memory system (UID0, UID1, ..., UIDN), the command file contains pre-generated digital signatures for a first signed command (CMD0) generated for multiple different memory device counter values (MTC0-MTCN). In use, the programming device queries the appropriate host device for the current memory system counter value of one or more memory systems in communication with that host device. For each memory system (UID0, UID1, ..., UIDN), the programming device selects the pre-generated digital signature associated with that memory system, the first signed command (CMD0), and the current memory system counter value for that memory system.
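The Table 1 lookup performed by the programming device amounts to filtering the command file on (UID, signed command, current counter value). A sketch with a hypothetical row format:

```python
def select_signature(command_file: list, uid: str,
                     command: str, current_counter: int):
    """Return the pre-generated signature matching the memory system's
    current counter value, or None if the file has no usable row."""
    for row in command_file:
        if (row["uid"] == uid and row["command"] == command
                and row["counter"] == current_counter):
            return row["signature"]
    return None

# Abbreviated Table-1-style file: one command, several counter values per UID.
command_file = [
    {"uid": "UID0", "command": "CMD0", "counter": 0, "signature": "sig-a"},
    {"uid": "UID0", "command": "CMD0", "counter": 1, "signature": "sig-b"},
    {"uid": "UID1", "command": "CMD0", "counter": 0, "signature": "sig-c"},
]
```

A `None` result means the memory system's counter has moved past every counter value the file was generated for, and a new command file is needed.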
The programming device then generates a command message for the corresponding memory system (UID0, UID1, ..., UIDN) containing the first command (CMD0) and the selected pre-generated digital signature.

Table 2 shows another example arrangement of a command file (such as the command file 126, 226) containing sequences of pre-generated digital signatures for the memory systems (UID0, UID1, ..., UIDN):

Table 2:

  UID   Signed command   MS counter value   Pre-generated signature
  UID0  CMD0             MTC0               ---
        CMD1             MTC1               ---
        ...              ...                ...
        CMDN             MTCN               ---
  UID1  CMD0             MTC0               ---
        CMD1             MTC1               ---
        ...              ...                ...
        CMDN             MTCN               ---
  ...
  UIDN  CMD0             MTC0               ---
        CMD1             MTC1               ---
        ...              ...                ...
        CMDN             MTCN               ---

The sequence of pre-generated digital signatures for each memory system corresponds to a sequence of signed commands (CMD0, CMD1, ..., CMDN). For example, the sequence of pre-generated digital signatures for the first memory system (UID0) includes: a first pre-generated digital signature associated with the first command (CMD0) and a first memory system counter value (MTC0); a second pre-generated digital signature associated with the second command (CMD1) and a second memory system counter value (MTC1) greater than the first memory system counter value; and so on. The programming device can execute the signed command sequence at the memory system (UID0) (via an appropriate host device) by sending to the memory system (UID0) a command message containing the pre-generated digital signature associated with (UID0, CMD0, MTC0). Executing the first command (CMD0) at the memory system increments the memory system counter from the memory system counter value (MTC0) to the memory system counter value (MTC1). The programming device then sends a second command message to the memory system (UID0), the second command message containing the pre-generated digital signature associated with (UID0, CMD1, MTC1), and so on.

In some instances, the command file may contain pre-generated digital signatures to support more than one command sequence per memory system.
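The Table 2 send/execute/increment cycle can be sketched end to end. Everything here is illustrative: the stub memory system, the use of HMAC-SHA256 as the encryption operation, and the rule that executing a command is the increment event.

```python
import hashlib
import hmac

KEY = b"\x02" * 16  # hypothetical symmetric key for memory system UID0

def sign(key: bytes, cmd: bytes, ctr: int) -> bytes:
    return hmac.new(key, ctr.to_bytes(8, "big") + cmd, hashlib.sha256).digest()

class MemorySystemStub:
    """Verifies each signed command against its current counter value,
    then increments the counter as a side effect of execution."""
    def __init__(self, key: bytes, counter: int = 0):
        self.key, self.counter, self.executed = key, counter, []
    def handle(self, cmd: bytes, sig: bytes) -> bool:
        if not hmac.compare_digest(sign(self.key, cmd, self.counter), sig):
            return False  # digest mismatch: command is not executed
        self.executed.append(cmd)
        self.counter += 1  # executing a command is an increment event
        return True

# Generator device: pre-generate one signature per command, counters 0, 1, 2.
sequence = [b"CMD0", b"CMD1", b"CMD2"]
pregenerated = [(cmd, sign(KEY, cmd, i)) for i, cmd in enumerate(sequence)]

# Programming device: replays the command messages in order; in practice it
# holds only the pre-generated signatures, never the key itself.
mem = MemorySystemStub(KEY)
results = [mem.handle(cmd, sig) for cmd, sig in pregenerated]
```

After the sequence completes, re-sending any earlier command message fails, because the counter has moved past the value each signature was generated for.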
For example, Table 2 shows sequences of pre-generated digital signatures starting at the memory system counter value (MTC0) for executing the command sequence (CMD0, CMD1, ..., CMDN). An example command file may also contain additional sequences of pre-generated digital signatures for one or more of the memory systems to execute additional command sequences. An additional sequence of pre-generated digital signatures may start at the same memory system counter value (for example, MTC0 in the example of Table 2) or at a different memory system counter value.

In some instances, sequences of pre-generated digital signatures for different command sequences may share a common pre-generated digital signature. Referring to the example of Table 2, consider an example command sequence (CMDX, CMD1, ..., CMDZ) starting at the memory system counter value (MTC0). Both this command sequence and the command sequence shown in Table 2 contain a pre-generated digital signature for the signed command (CMD1) at the memory system counter value MTC1. In some instances, the command file contains a single copy of this pre-generated signature, which, as such, can be part of multiple command sequences. A single copy of a pre-generated digital signature can be referenced by multiple command sequences. For example, the command file may contain command sequence data describing the command sequences supported by the command file and referencing the sequence of pre-generated digital signatures for each corresponding command sequence.

Table 3 shows yet another example arrangement of a command file (such as the command file 126, 226) containing sequences of pre-generated signatures starting at different memory system counter values. The sequences of pre-generated digital signatures in Table 3 correspond to the sequence of signed commands (CMD0, CMD1, ..., CMDN). In the example of Table 3, the command file contains multiple sequences of pre-generated digital signatures and command sequences for each memory system.
For example, as shown below, different pre-generated digital signature sequences and command sequences for a memory system can start with different memory system counter values. The programming device can query the current memory system counter value of the memory system and select a pre-generated digital signature sequence starting with the current memory system counter value.

Table 3:

  UID    Signed command   MS counter value   Pre-generated signature
  UID0   CMD0             MTC0               ---
         CMD1             MTC1               ---
         ...              ...                ...
         CMDN             MTCN               ---
         CMD0             MTC1               ---
         CMD1             MTC2               ---
         ...              ...                ...
         CMDN             MTCN+1             ---
  UID1   ...              ...                ...
  ...    ...              ...                ...
  UIDN   ...              ...                ...

FIG. 3 is a flowchart showing an example of a process flow 300 that can be executed by a programming device to send a command to a memory system. At operation 302, the programming device receives a command file containing at least one pre-generated digital signature. The command file can be received from a generator device, such as an HSM, in any suitable way. For example, the command file can be received via electronic media (such as e-mail). In some instances, the command file may also be received in physical form, such as on a storage device mailed or transported to the location of the programming device.

At operation 304, the programming device selects a pre-generated digital signature from the command file. The selected pre-generated digital signature corresponds to a memory system (e.g., at a host), a signed command, and a memory system counter value. The programming device can select the pre-generated digital signature based on the signed command to be sent, the memory system to which it is to be sent, and the expected value of the memory system counter. The expected value of the memory system counter is the value that the programming device expects the memory system counter to have.
For example, if the memory system is newly manufactured, the expected value of the memory system counter may be a known initial value or a predetermined memory system counter value greater than the known initial value. (For example, it may be known that the memory system experiences a known number of increment events during manufacturing.) Likewise, in some instances, the programming device queries the memory system for the current memory system counter value and selects a pre-generated digital signature based on the memory system's reply.

At operation 306, the programming device sends a command message to the memory system. The command message contains the selected pre-generated digital signature and the signed command associated with it. Sending a command message to the memory system may include sending the command message to a host device containing the memory system; the host controller can forward the command message to the memory system.

FIG. 4 is a flowchart showing an example of a process flow 400 that can be executed by a programming device to send a sequence of commands to a memory system. At operation 402, the programming device receives a command file containing at least one pre-generated digital signature. At operation 404, the programming device selects a pre-generated digital signature from the command file. For a first memory system, the first selected pre-generated digital signature corresponds to the first command of the command sequence, the first memory system, and a first memory system counter value. At operation 406, the programming device sends a command message containing the pre-generated digital signature selected at operation 404 and the signed command associated with that pre-generated digital signature.

At operation 408, the programming device determines whether there are more commands in the command sequence.
For example, the programming device can view command sequence data, which can be contained in the command file. The command sequence data indicates the commands in the command sequence and/or the pre-generated digital signatures in the pre-generated digital signature sequence corresponding to the command sequence. If all commands in the command sequence have been sent to the memory system, programming is complete at operation 412. If there are additional commands in the command sequence, the programming device moves to the next command at operation 410 and then returns to operation 404 to select the pre-generated digital signature associated with the next command.

FIG. 5 is a flowchart showing an example of a process flow 500 that can be executed by a programming device to send a command to a memory system. At operation 502, the programming device receives a command file containing at least one pre-generated digital signature. At operation 504, the programming device queries the memory system for its current memory system counter value. The query can be sent directly to the memory system or to a host device or host controller in communication with the memory system. The memory system responds by providing its current memory system counter value.

At operation 506, the programming device selects from the command file the pre-generated digital signature associated with the signed command and the current memory system counter value. At operation 508, the programming device sends a command message containing the selected pre-generated digital signature and signed command to the memory system.

FIG. 6 is a flowchart showing an example of a process flow 600 for sending a command message with a pre-generated digital signature to a memory system. The process flow 600 includes two columns 601 and 603. Column 601 contains operations that can be performed by the programming device. Column 603 contains operations that can be performed by the memory system.
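The programming-device side of process flows 400 and 500 can be combined into one sketch: query the current counter value, then walk the command sequence, selecting each pre-generated signature in turn. This is a hedged illustration; the `query_counter` and `send` callables are hypothetical, and the one-increment-per-executed-command assumption follows the description above.

```python
from typing import Callable, Dict, List, Tuple

def send_sequence(
    cmd_file: Dict[Tuple[str, str, int], bytes],
    uid: str,
    commands: List[str],
    query_counter: Callable[[str], int],
    send: Callable[[str, str, bytes], None],
) -> None:
    """Send each command of a sequence with its matching signature.

    Per the description, executing a signed command increments the
    memory system's monotonic counter, so the expected counter value
    advances by one per command sent.
    """
    mtc = query_counter(uid)            # operation 504: query current value
    for cmd in commands:                # operations 404-410: iterate sequence
        sig = cmd_file[(uid, cmd, mtc)] # operation 404/506: select signature
        send(uid, cmd, sig)             # operation 406/508: send command message
        mtc += 1                        # counter increments after execution
```

If the file holds several sequences starting at different counter values (as in Table 3), the same loop works once the sequence whose first counter value equals the queried value has been chosen.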
The programming device may have a pre-generated digital signature at the beginning of the process flow 600. In some instances, communication between the memory system and the programming device is facilitated by a host device that communicates with the memory system.

At operation 602, the programming device queries the memory system for its current memory system counter value, for example, by sending a query 605. At operation 604, the memory system receives the query 605. At operation 606, the memory system provides a counter value message 607 containing the current memory system counter value.

At operation 608, the programming device receives the counter value message 607 and determines whether the current memory system counter value is equal to the memory system counter value associated with the pre-generated digital signature. If the current memory system counter value does not match the memory system counter value associated with the pre-generated digital signature, the programming device enters error handling at operation 610. Error handling may include, for example, ending the process flow 600 and/or selecting a different pre-generated digital signature that is associated with a memory system counter value matching the current memory system counter value.

If the current memory system counter value matches the memory system counter value associated with the pre-generated digital signature, the programming device sends a command message 609 to the memory system at operation 612. The command message 609 contains the pre-generated digital signature and the signed command associated with it.

At operation 614, the memory system verifies the command message 609. Verifying the command message may include generating a check digital signature based on the command, the current value of the memory system counter, and a cryptographic key.
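The check at operation 614 can be sketched as recomputing a signature over the command and the current counter value with the device's key and comparing it to the one in the message. The description does not name the signature algorithm; HMAC-SHA256 below is an assumed stand-in, and both function names are illustrative.

```python
import hashlib
import hmac

def make_signature(key: bytes, command: bytes, counter: int) -> bytes:
    """What a generator device (e.g., an HSM) would compute ahead of time.
    HMAC-SHA256 is an assumption; the description only requires a digital
    signature bound to a command and a counter value."""
    return hmac.new(key, command + counter.to_bytes(8, "big"), hashlib.sha256).digest()

def verify_command(key: bytes, command: bytes, current_counter: int, provided: bytes) -> bool:
    """Operation 614: generate a check signature from the command, the
    current counter value, and the key, then compare it with the
    pre-generated signature carried in the command message."""
    check = make_signature(key, command, current_counter)
    return hmac.compare_digest(check, provided)
```

Because the counter is an input to the check signature, a message replayed after the counter has advanced fails verification even though the signature itself is unchanged.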
If the check digital signature matches the pre-generated digital signature, the memory system executes the command at operation 616.

FIG. 7 is a flowchart showing an example of a process flow 700 for sending a command message with a pre-generated digital signature to a memory system. The process flow 700 includes two columns 701 and 703. Column 701 contains operations that can be performed by the programming device. Column 703 contains operations that can be performed by the memory system. In some instances, communication between the memory system and the programming device is facilitated by a host device that communicates with the memory system.

In the process flow 700, the programming device increments the memory system counter until the memory system counter value matches the memory system counter value associated with the pre-generated digital signature. The pre-generated digital signature can be associated with a single command or, in some instances, with a sequence of commands (e.g., it may be the first pre-generated digital signature in a sequence). As described with respect to the process flow 700, incrementing the memory system counter may enable the programming device to use command files with fewer pre-generated digital signatures. For example, the programming device may not need a command file containing more than one pre-generated digital signature for a given combination of a memory device and a signed command, as in the examples of Tables 1 and 3 above.

At the beginning of the process flow 700, the programming device has a pre-generated digital signature. The pre-generated digital signature may be associated with an independent signed command or, in some instances, with the first command of a command sequence. At operation 702, the programming device queries the memory system for its current memory system counter value, for example, by sending a query 705.
The query 705 may be directed to the host device, or its host controller, associated with the memory device. At operation 704, the memory system receives the query 705. At operation 706, the memory system provides a counter value message 707 containing the current memory system counter value.

At operation 708, the programming device determines whether the current memory system counter value matches the memory system counter value associated with the pre-generated digital signature. If there is no match, the programming device determines at operation 710 whether the current value of the memory system counter is greater than the memory system counter value associated with the pre-generated digital signature.

If the current memory system counter value is greater than the memory system counter value associated with the pre-generated digital signature (for example, further along the counter sequence), this indicates that the pre-generated digital signature may not be usable. Because the memory system counter is monotonic, once it has incremented beyond the memory system counter value associated with the pre-generated digital signature, that signature can no longer be used. The programming device enters error handling at operation 713. Error handling may include ending the process flow 700. In some instances, error handling includes selecting a different pre-generated digital signature from the command file and starting the process flow 700 again.

Consider an example using the command file arrangement of Table 1 above, where the pre-generated digital signature used in the process flow 700 is associated with a memory device (UID0), a signed command (CMD0), and a memory system counter value (MTC0). If the current memory system counter value is greater than the memory system counter value (MTC0), the programming device can select from the command file a different pre-generated digital signature associated with a higher memory system counter value.
For example, the programming device may select a different pre-generated digital signature associated with a memory system counter value that is greater than or equal to the current memory system counter value provided by the memory device at operation 706.

Consider another example using the command file arrangement of Table 3 above, where the pre-generated digital signature used in the process flow 700 is the first signature of a pre-generated digital signature sequence that starts at the memory system counter value (MTC0) and corresponds to the command sequence (CMD0, CMD1, ..., CMDN). If the current memory system counter value is greater than the memory system counter value (MTC0), the programming device can select a different pre-generated digital signature sequence corresponding to the command sequence (CMD0, CMD1, ..., CMDN). For example, the programming device may re-execute the process flow 700 using the first pre-generated digital signature from another pre-generated digital signature sequence that also corresponds to the command sequence (CMD0, CMD1, ..., CMDN).

Referring back to operation 710, if the current value of the memory system counter is not greater than the memory system counter value associated with the pre-generated digital signature, the programming device causes the memory system to increment its memory system counter at operation 714. The programming device can send an increment instruction 709. The increment instruction 709 may be any action that prompts an increment event at the memory system. For example, the increment instruction 709 may be an explicit instruction that causes the memory system to increment its memory system counter. In another example, the increment instruction may be an instruction that causes the host device or host controller to reset the memory system.
In response to the increment instruction 709, the memory system increments its memory system counter at operation 716.

After instructing the memory system to increment its memory system counter, the programming device returns to operation 702 and queries the current counter value of the memory system as described. In some instances, the programming device may predict the new current value of the memory system counter from the previously provided current value and the counter sequence. If the programming device predicts the new current value of the memory system counter, it may skip operation 702 and proceed directly to operation 708 (e.g., without re-querying the current counter value of the memory system).

The process flow 700 may be performed until, at operation 708, the current value of the memory system counter matches the memory system counter value associated with the pre-generated digital signature. When that happens, the programming device sends a command message 711 to the memory system at operation 718. The command message 711 contains the pre-generated digital signature and the signed command associated with it.

The memory system verifies the command message 711 at operation 720. Verifying the command message may include generating a check digital signature based on the command, the current value of the memory system counter, and a cryptographic key. If the check digital signature matches the pre-generated digital signature, the memory system executes the command.

In some example arrangements, the command file contains multiple pre-generated digital signatures for the same combination of memory system and signed command. Tables 1 and 3 above describe example command files with this arrangement. As described herein, this can increase the flexibility of the programming device, because the current memory system counter value can take any of a range of values.
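The increment-until-match loop of process flow 700 can be sketched as follows. The `MemorySystemStub` class and its method names are illustrative stand-ins; per the description, the increment could be an explicit command or a host-initiated reset, and the programming device may predict the new counter value instead of re-querying.

```python
class MemorySystemStub:
    """Minimal stand-in for a memory system with a monotonic counter."""
    def __init__(self, counter: int = 0):
        self.counter = counter
        self.executed = []

    def increment(self) -> None:
        # e.g., an explicit increment command or a reset-triggered event
        self.counter += 1

    def execute(self, cmd: str, sig: bytes) -> None:
        # Stands in for operation 720 verification followed by execution.
        self.executed.append((cmd, self.counter))

def sync_and_send(device: MemorySystemStub, target_mtc: int, cmd: str, sig: bytes) -> None:
    """Process flow 700: advance the counter to the signature's value,
    erroring out if the monotonic counter has already passed it."""
    current = device.counter            # operation 702: query current value
    if current > target_mtc:            # operation 710: signature unusable
        raise ValueError("counter already past target; select another signature")
    while current < target_mtc:        # operations 714/716: prompt increments
        device.increment()
        current += 1                    # predicted new value (skip re-query)
    device.execute(cmd, sig)            # operation 718: send command message
```

The early error path mirrors operation 713: because the counter never decreases, a signature bound to a lower counter value can never become valid again.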
However, it may also create opportunities for unauthorized actors who obtain the command file to use the pre-generated digital signatures therein to make unintended changes at the memory system.

Consider an example using the command file arrangement shown in Table 1, where the programming device causes the memory system (UID0) to execute the command (CMD0) using the pre-generated digital signature associated with the memory device counter value (MTC0). The command file also contains pre-generated digital signatures for the signed command (CMD0) corresponding to other, larger memory system counter values (MTC1, MTC2, ..., MTCN). Therefore, as long as the counter value at the memory system is still lower than (MTCN), an unauthorized actor who possesses the command file may be able to cause the memory system to execute the signed command (CMD0) again.

Consider another example using the command file arrangement of Table 3 above, where the programming device uses a pre-generated digital signature sequence starting at the memory system counter value (MTC0) to complete the command sequence (CMD0, CMD1, ..., CMDN) at the memory device (UID0). At the end of the command sequence, the current memory system counter value may still be lower than the memory system counter values associated with some of the pre-generated digital signatures. This means that an unauthorized actor in possession of the command file may be able to cause the memory system to execute another signed command. For example, after the command (CMDN) has executed at the memory system counter value (MTCN), an unauthorized actor may be able to cause the memory system to execute the signed command (CMDN) again using the pre-generated digital signature corresponding to the memory system counter value (MTCN+1).

FIG. 8 shows an example host device 810 (e.g., host 105) in which a memory system 820 (e.g., any of the memory devices described herein) is part of one or more devices 830-850.
The devices include any device that may include a host device, such as the host device 810. The host device 810 may be any device capable of executing instructions (sequentially or otherwise). Example devices include a vehicle 830 (e.g., as part of an infotainment system, a control system, etc.), a drone 850 (e.g., as part of a control system), and furniture or an appliance 840 (e.g., as part of a sensor system, an entertainment or infotainment system). In other examples, although not shown, the devices may include aerospace devices, marine devices, Internet of Things (IoT) devices, and other devices.

FIG. 9 shows a block diagram of an example machine 900 on which any one or more of the techniques (e.g., methods) discussed herein can be executed. In alternative embodiments, the machine 900 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 900 can operate in the capacity of a server machine, a client machine, or both in a server-client network environment. In one example, the machine 900 may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 900 can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile phone, a web appliance, an IoT device, an automotive system, or any machine capable of executing instructions (sequentially or otherwise) that specify actions to be taken by that machine.
Further, although only a single machine is shown, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.

Examples, as described herein, may include, or may operate by, logic, components, devices, packages, or mechanisms. Circuitry is a collection (e.g., set) of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuitry membership may be flexible over time and with underlying hardware variability. Circuitries include members that may, alone or in combination, perform specific tasks when operating. In one example, hardware of the circuitry may be immutably designed to carry out a specific operation (e.g., hardwired). In one example, the hardware of the circuitry may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a non-transitory computer-readable medium physically modified (e.g., magnetically, electrically, by movable placement of invariant-mass particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable the participating hardware (e.g., an execution unit or a loading mechanism) to create members of the circuitry in hardware via the variable connections to carry out portions of the specific tasks when in operation.
Accordingly, the computer-readable medium is communicatively coupled to the other components of the circuitry when the device is operating. In one example, any of the physical components may be used in more than one member of more than one circuitry. For example, under operation, an execution unit may be used in a first circuit of a first circuitry at one point in time and reused by a second circuit in the first circuitry, or by a third circuit in a second circuitry, at a different time.

The machine (e.g., computer system) 900 (e.g., the programming device 120, the generator device 124, the host device 105, the memory system 110A, etc.) may include a hardware processor 902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof, such as the memory controller 115, etc.), a main memory 904, and a static memory 906, some or all of which may communicate with each other via an interlink (e.g., bus) 908. The machine 900 may further include a display unit 910, an alphanumeric input device 912 (e.g., a keyboard), and a user interface (UI) navigation device 914 (e.g., a mouse). In one example, the display unit 910, the input device 912, and the UI navigation device 914 may be a touchscreen display. The machine 900 may additionally include a storage device (e.g., a drive unit) 916, a signal generation device 918 (e.g., a speaker), a network interface device 920, and one or more sensors 917, such as a global positioning system (GPS) sensor, a compass, an accelerometer, or another sensor. The machine 900 may include an output controller 928, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.)
connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).

The storage device 916 may include a machine-readable medium 922 on which is stored one or more sets of data structures or instructions 924 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 924 may also reside, completely or at least partially, within the main memory 904, within the static memory 906, or within the hardware processor 902 during execution thereof by the machine 900. In one example, one or any combination of the hardware processor 902, the main memory 904, the static memory 906, or the storage device 916 may constitute the machine-readable medium 922.

Although the machine-readable medium 922 is shown as a single medium, the term "machine-readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 924.

The term "machine-readable medium" may include any medium capable of storing, encoding, or carrying instructions for execution by the machine 900 and that cause the machine 900 to perform any one or more of the techniques of the present disclosure, or capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting examples of machine-readable media may include solid-state memories and optical and magnetic media. In one example, a massed machine-readable medium includes a machine-readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals.
Specific examples of massed machine-readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

The instructions 924 (e.g., software, programs, an operating system (OS), etc.) or other data stored on the storage device 921 can be accessed by the memory 904 for use by the processor 902. The memory 904 (e.g., DRAM) is typically fast but volatile, and is thus a different type of storage than the storage device 921 (e.g., an SSD), which is suitable for long-term storage, including while in an "off" condition. The instructions 924 or data in use by a user or the machine 900 are typically loaded into the memory 904 for use by the processor 902. When the memory 904 is full, virtual space from the storage device 921 can be allocated to supplement the memory 904; however, because the storage device 921 is typically slower than the memory 904, and write speeds are typically at least twice as slow as read speeds, use of virtual memory can greatly reduce the user experience due to storage device latency (in contrast to the memory 904, e.g., DRAM). Further, use of the storage device 921 for virtual memory can greatly reduce the usable lifespan of the storage device 921.

In contrast to virtual memory, virtual memory compression (e.g., the Linux kernel feature "ZRAM") uses part of the memory as compressed block storage to avoid paging to the storage device 921. Paging takes place in the compressed block until it is necessary to write such data to the storage device 921.
Virtual memory compression increases the usable size of the memory 904 while reducing wear on the storage device 921.

Storage devices optimized for mobile electronic devices, or mobile storage, traditionally include MMC solid-state storage devices (e.g., micro Secure Digital (microSD™) cards, etc.). MMC devices include a number of parallel interfaces (e.g., an 8-bit parallel interface) with a host device and are often removable and separate components from the host device. In contrast, eMMC™ devices are attached to a circuit board and considered a component of the host device, with read speeds that rival Serial ATA™ (Serial AT (Advanced Technology) Attachment, or SATA) based SSD devices. However, demand for mobile device performance continues to increase, such as to fully enable virtual or augmented reality devices, utilize increasing network speeds, and so on. In response to this demand, storage devices have shifted from parallel to serial communication interfaces. Universal Flash Storage (UFS) devices, including controllers and firmware, communicate with a host device using a low-voltage differential signaling (LVDS) serial interface with dedicated read/write paths, further advancing greater read/write speeds.

The instructions 924 may further be transmitted or received over a communications network 926 using a transmission medium via the network interface device 920 utilizing any one of a number of transfer protocols (e.g., Frame Relay, Internet Protocol (IP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), etc.).
Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), plain old telephone (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards (known as Wi-Fi®), the IEEE 802.16 family of standards (known as WiMax®)), the IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, and others. In one example, the network interface device 920 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communication network 926. In one example, the network interface device 920 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term "transmission medium" shall be taken to include any intangible medium capable of storing, encoding, or carrying instructions for execution by the machine 900, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.

The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as "examples." Such examples may include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of the elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof) or with respect to other examples (or one or more aspects thereof) shown or described herein.
In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, unless otherwise indicated, the term "or" is used to refer to a nonexclusive or, such that "A or B" may include "A but not B," "B but not A," and "A and B." In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," "third," and so forth are used merely as labels and are not intended to impose numerical requirements on their objects.

In various examples, the components, controllers, processors, units, engines, or tables described herein can include, among other things, physical circuitry or firmware stored on a physical device. As used herein, "processor" means any type of computational circuit such as, but not limited to, a microprocessor, a microcontroller, a graphics processor, a digital signal processor (DSP), or any other type of processor or processing circuit, including a group of processors or multi-core devices.

As used in this document, the term "horizontal" is defined as a plane parallel to the conventional plane or surface of a substrate, such as that underlying a wafer or die, regardless of the actual orientation of the substrate at any point in time. The term "vertical" refers to a direction perpendicular to the horizontal as defined above.
Prepositions such as "on," "over," and "under" are defined with respect to the conventional plane or surface being on the top or exposed surface of the substrate, regardless of the orientation of the substrate. While "on" is intended to suggest direct contact of one structure relative to another structure on which it lies (in the absence of an express indication to the contrary), the terms "over" and "under" are expressly intended to identify a relative placement of structures (or layers, features, etc.), which expressly includes, but is not limited to, direct contact between the identified structures unless specifically identified as such. Similarly, the terms "above" and "below" are expressly intended to identify a relative placement of structures (or layers, features, etc.), which expressly includes, but is not limited to, direct contact between the identified structures unless specifically identified as such.

As used herein, the terms "wafer" and "substrate" are used to refer generally to any structure on which integrated circuits are formed, and also to such structures during various stages of integrated circuit fabrication. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the various embodiments is defined only by the appended claims, along with the full scope of equivalents to which such claims are entitled.

Various embodiments according to the present disclosure and described herein include memory utilizing a vertical structure of memory cells (e.g., NAND strings of memory cells).
As used herein, directional adjectives will be taken relative to the surface of the substrate upon which the memory cells are formed (i.e., a vertical structure will be taken as extending away from the substrate surface, the bottom end of the vertical structure will be taken as the end nearest the substrate surface, and the top end of the vertical structure will be taken as the end farthest from the substrate surface). As used herein, unless otherwise stated, directional adjectives such as horizontal, vertical, normal, parallel, perpendicular, etc. can refer to relative orientations, and are not intended to require strict adherence to specific geometric properties. For example, as used herein, a vertical structure need not be strictly perpendicular to the substrate surface, but can instead be generally perpendicular to the substrate surface, and can form an acute angle with the substrate surface (e.g., between 60 and 120 degrees, etc.). In some embodiments described herein, different doping configurations can be applied to a source-side select gate (SGS), a control gate (CG), and a drain-side select gate (SGD), each of which, in this example, can be formed of or at least include polysilicon, with the result that these tiers (e.g., polysilicon, etc.) can have different etch rates when exposed to an etching solution. For example, in a process of forming a monolithic pillar in a 3D semiconductor device, the SGS and the CG can form recesses, while the SGD can remain less recessed or even not recessed. These doping configurations can thus enable selective etching into the distinct tiers (e.g., SGS, CG, and SGD) in the 3D semiconductor device by using an etching solution (e.g., tetramethylammonium hydroxide (TMAH)). As used herein, operating a memory cell includes reading from, writing to, or erasing the memory cell.
The operation of placing a memory cell in an intended state is referred to herein as "programming", and can include both writing to the memory cell and erasing from the memory cell (i.e., the memory cell can be programmed to an erased state). According to one or more embodiments of the present disclosure, a memory controller (e.g., a processor, controller, firmware, etc.) located internal or external to a memory device is capable of determining (e.g., selecting, setting, adjusting, computing, changing, clearing, communicating, adapting, deriving, defining, utilizing, modifying, applying, etc.) a quantity of wear cycles, or a wear state (e.g., recording wear cycles, counting operations of the memory device as they occur, tracking the operations of the memory device it initiates, evaluating the memory device characteristics corresponding to a wear state, etc.). According to one or more embodiments of the present disclosure, a memory access device can be configured to provide wear cycle information to the memory device with each memory operation. The memory device control circuitry (e.g., control logic) can be programmed to compensate for memory device performance changes corresponding to the wear cycle information. The memory device can receive the wear cycle information and determine one or more operating parameters (e.g., a value, characteristic) in response to the wear cycle information. It will be understood that when an element is referred to as being "on", "connected to", or "coupled with" another element, it can be directly on, connected to, or coupled with the other element, or intervening elements can be present. In contrast, when an element is referred to as being "directly on", "directly connected to", or "directly coupled with" another element, there are no intervening elements or layers present.
Unless otherwise indicated, if two elements are shown in the drawings connected by a line, the two elements can be either coupled or directly coupled. The method examples described herein can be machine- or computer-implemented, at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly-language code, higher-level-language code, or the like. Such code can include computer-readable instructions for performing various methods. The code may form portions of computer program products. Further, the code can be tangibly stored on one or more volatile or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact discs and digital video discs), magnetic cassettes, memory cards or sticks, random-access memories (RAMs), read-only memories (ROMs), solid-state drives (SSDs), Universal Flash Storage (UFS) devices, embedded MMC (eMMC) devices, and the like.

Examples:

Example 1 is a method for programming a memory system, the method comprising: receiving, by a programming device, a command file comprising a first pre-generated digital signature, the first pre-generated digital signature being associated with the memory system, a first command, and a first memory system counter value; and sending, by the programming device, a first command message to the memory system, the first command message comprising the first command and the first pre-generated digital signature.

In Example 2, the subject matter of Example 1 optionally includes: verifying, by the memory system, the first pre-generated digital signature using a current memory system counter value and a memory system cryptographic key; and executing, by the memory system, the first command.

In Example 3, the subject matter of any one or more of Examples 1-2 optionally includes wherein the command file further comprises a second pre-generated digital signature, the second pre-generated digital signature being associated with the memory system, a second command, and a second memory system counter value subsequent to the first memory system counter value, the method further comprising: after sending the first command message to the memory system, sending a second command message to the memory system, the second command message comprising the second command and the second pre-generated digital signature.

In Example 4, the subject matter of Example 3 optionally includes wherein the command file further comprises a third pre-generated digital signature, the third pre-generated digital signature being associated with a third command and a third memory system counter value subsequent to the second memory system counter value, the method further comprising: after sending the second command message, determining that command sequence data indicates the third command; and sending a third command message to the memory system, the third command message comprising the third command and the third pre-generated digital signature.

In Example 5, the subject matter of any one or more of Examples 1-4 optionally includes wherein the command file comprises a second pre-generated digital signature, the second pre-generated digital signature being associated with the first command and a second memory system counter value different from the first memory system counter value, the subject matter further comprising: querying the memory system, by the programming device, to receive the current memory system counter value; and selecting, by the programming device, the first pre-generated digital signature based at least in part on the current memory system counter value and the first memory system counter value.

In Example 6, the subject matter of any one or more of Examples 1-5 optionally includes: querying the memory system to receive a first current memory system counter value; determining that the first current memory system counter value is less than the first memory system counter value; querying the memory system to receive a second current memory system counter value greater than the first current memory system counter value; and determining that the second current memory system counter value is equal to the first memory system counter value.

In Example 7, the subject matter of any one or more of Examples 1-6 optionally includes: querying the memory system to receive a first current memory system counter value; determining that the first current memory system counter value is less than the first memory system counter value; and sending an instruction to the memory system to increment the memory system counter.

In Example 8, the subject matter of any one or more of Examples 1-7 optionally includes wherein the command file comprises a first sequence of pre-generated digital signatures corresponding to a first command sequence and a second sequence of pre-generated digital signatures corresponding to a second command sequence, the first sequence of pre-generated digital signatures comprising the first pre-generated digital signature.

In Example 9, the subject matter of Example 8 optionally includes wherein the second sequence of pre-generated digital signatures also comprises the first pre-generated digital signature.

Example 10 is a system for programming a memory system, the system comprising: a programming device configured to perform operations comprising: receiving a command file comprising a first pre-generated digital signature, the first pre-generated digital signature being associated with a memory system, a first command, and a first memory system counter value; and sending a first command message to the memory system, the first command message comprising the first command and the first pre-generated digital signature.

In Example 11, the subject matter of Example 10 optionally includes wherein the command file further comprises a second pre-generated digital signature, the second pre-generated digital signature being associated with the memory system, a second command, and a second memory system counter value subsequent to the first memory system counter value, and wherein the programming device is further configured to perform operations comprising: after sending the first command message to the memory system, sending a second command message to the memory system, the second command message comprising the second command and the second pre-generated digital signature.

In Example 12, the subject matter of Example 11 optionally includes wherein the command file further comprises a third pre-generated digital signature, the third pre-generated digital signature being associated with a third command and a third memory system counter value subsequent to the second memory system counter value, and wherein the programming device is further configured to perform operations comprising: after sending the second command message, determining that command sequence data indicates the third command; and sending a third command message to the memory system, the third command message comprising the third command and the third pre-generated digital signature.

In Example 13, the subject matter of any one or more of Examples 10-12 optionally includes wherein the command file comprises a second pre-generated digital signature, the second pre-generated digital signature being associated with the first command and a second memory system counter value different from the first memory system counter value, and wherein the programming device is further configured to perform operations comprising: querying the memory system to receive the current memory system counter value; and selecting the first pre-generated digital signature based at least in part on the current memory system counter value and the first memory system counter value.

In Example 14, the subject matter of any one or more of Examples 10-13 optionally includes wherein the programming device is further configured to perform operations comprising: querying the memory system to receive a first current memory system counter value; determining that the first current memory system counter value is less than the first memory system counter value; querying the memory system to receive a second current memory system counter value greater than the first current memory system counter value; and determining that the second current memory system counter value is equal to the first memory system counter value.

In Example 15, the subject matter of any one or more of Examples 10-14 optionally includes wherein the programming device is further configured to perform operations comprising: querying the memory system to receive a first current memory system counter value; determining that the first current memory system counter value is less than the first memory system counter value; and sending an instruction to the memory system to increment the memory system counter.

In Example 16, the subject matter of any one or more of Examples 10-15 optionally includes wherein the command file comprises a first sequence of pre-generated digital signatures corresponding to a first command sequence and a second sequence of pre-generated digital signatures corresponding to a second command sequence, the first sequence of pre-generated digital signatures comprising the first pre-generated digital signature.

In Example 17, the subject matter of Example 16 optionally includes wherein the second sequence of pre-generated digital signatures also comprises the first pre-generated digital signature.

In Example 18, the subject matter of any one or more of Examples 10-17 optionally includes the memory system, wherein the memory system is programmed to perform operations comprising: verifying, by the memory system, the first pre-generated digital signature using a current memory system counter value and a memory system cryptographic key; and executing, by the memory system, the first command.

Example 19 is a non-transitory computer-readable medium comprising instructions thereon that, when executed by at least one processor, cause the at least one processor to perform operations comprising: receiving a command file comprising a first pre-generated digital signature, the first pre-generated digital signature being associated with a memory system, a first command, and a first memory system counter value; and sending a first command message to the memory system, the first command message comprising the first command and the first pre-generated digital signature.

In Example 20, the subject matter of Example 19 optionally includes wherein the command file further comprises a second pre-generated digital signature, the second pre-generated digital signature being associated with the memory system, a second command, and a second memory system counter value subsequent to the first memory system counter value, the medium further comprising instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising: after sending the first command message to the memory system, sending a second command message to the memory system, the second command message comprising the second command and the second pre-generated digital signature.

The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) can be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. It should be understood that the Summary is not to be used to interpret or limit the scope or meaning of the claims. Also, in the above detailed description, various features may have been grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The invention discloses an apparatus and method for multiplication of complex numbers. An embodiment of the invention is a processor including execution circuitry to calculate, in response to a decoded instruction, a result of a complex multiplication of a first complex number and a second complex number. The calculation includes a first operation to calculate a first term of a real part of the result and a first term of an imaginary part of the result. The calculation also includes a second operation to calculate a second term of the real part of the result and a second term of the imaginary part of the result. The processor also includes a decoder, a first source register, and a second source register. The decoder is to decode an instruction to generate the decoded instruction. The first source register is to provide the first complex number and the second source register is to provide the second complex number.
1. A device, comprising:
a decoder for decoding a first instruction, the first instruction having a destination operand, a first source operand, and a second source operand;
wherein the first source operand is to specify a first source register, the first source register to store a first plurality of packed complex numbers, and the second source operand is to specify a second source register, the second source register to store a second plurality of packed complex numbers, each of the packed complex numbers consisting of a 16-bit half-precision floating-point element in an even position corresponding to the real part and a 16-bit half-precision floating-point element in an odd position corresponding to the imaginary part; and
an execution circuit, coupled to the decoder, for performing an operation corresponding to the first instruction, including performing the following operations on at least one packed complex number of the first plurality of packed complex numbers:
multiplying the element in the even position with the element in the even position of the corresponding second packed complex number to generate a first product,
multiplying the element in the odd position with the element in the odd position of the corresponding second packed complex number to generate a second product,
multiplying the element in the odd position with the element in the even position of the corresponding second packed complex number to generate a third product,
multiplying the element in the even position with the element in the odd position of the corresponding second packed complex number to generate a fourth product,
subtracting the second product from the first product to generate a first result,
adding the third product to the fourth product to generate a second result,
storing the first result in the even position of a corresponding resulting packed complex number in a destination register, and
storing the second result in the odd position of the corresponding resulting packed complex number in the destination register.

2. The device of claim 1, wherein the execution circuit is further for copying the element in the even position of the corresponding second packed complex number to the odd position of a transformed second packed complex number.

3. The device of claim 1, wherein the execution circuit is further for copying the element in the even position of the first packed complex number to the odd position of a transformed first packed complex number.

4. The device of claim 1, wherein the execution circuit is further for copying the element in the odd position of the first packed complex number to the even position of a transformed first packed complex number.

5. The device of claim 1, wherein the execution circuit is further for copying the element in the odd position of the corresponding second packed complex number to the even position of a transformed second packed complex number.

6. The device of claim 1, wherein the decoder is further for decoding a second instruction, and the execution circuit is for performing operations corresponding to the second instruction, including, for at least one packed complex number of the first plurality of packed complex numbers, adding the second product to the first product to generate the first result, instead of subtracting the second product from the first product to generate the first result.

7. A method, comprising:
decoding a first instruction having a destination operand, a first source operand, and a second source operand;
wherein the first source operand is to specify a first source register, the first source register to store a first plurality of packed complex numbers, and the second source operand is to specify a second source register, the second source register to store a second plurality of packed complex numbers, each of the packed complex numbers consisting of a 16-bit half-precision floating-point element in an even position corresponding to the real part and a 16-bit half-precision floating-point element in an odd position corresponding to the imaginary part; and
performing an operation corresponding to the first instruction, including performing the following operations on at least one packed complex number of the first plurality of packed complex numbers:
multiplying the element in the even position with the element in the even position of the corresponding second packed complex number to generate a first product,
multiplying the element in the odd position with the element in the odd position of the corresponding second packed complex number to generate a second product,
multiplying the element in the odd position with the element in the even position of the corresponding second packed complex number to generate a third product,
multiplying the element in the even position with the element in the odd position of the corresponding second packed complex number to generate a fourth product,
subtracting the second product from the first product to generate a first result,
adding the third product to the fourth product to generate a second result,
storing the first result in the even position of a corresponding resulting packed complex number in a destination register, and
storing the second result in the odd position of the corresponding resulting packed complex number in the destination register.

8. The method of claim 7, wherein the operations further comprise copying the element in the even position of the corresponding second packed complex number to the odd position of a transformed second packed complex number.

9. The method of claim 7, wherein the operations further comprise copying the element in the even position of the first packed complex number to the odd position of a transformed first packed complex number.

10. The method of claim 7, wherein the operations further comprise copying the element in the odd position of the first packed complex number to the even position of a transformed first packed complex number.

11. The method of claim 7, wherein the operations further comprise copying the element in the odd position of the corresponding second packed complex number to the even position of a transformed second packed complex number.

12. The method of claim 7, further comprising decoding a second instruction and performing operations corresponding to the second instruction, including performing the following operation on at least one packed complex number of the first plurality of packed complex numbers: adding the second product to the first product to generate the first result, instead of subtracting the second product from the first product to generate the first result.

13. A non-transitory machine-readable medium storing a first instruction having a destination operand, a first source operand, and a second source operand;
wherein the first source operand is to specify a first source register, the first source register to store a first plurality of packed complex numbers, and the second source operand is to specify a second source register, the second source register to store a second plurality of packed complex numbers, each of the packed complex numbers consisting of a 16-bit half-precision floating-point element in an even position corresponding to the real part and a 16-bit half-precision floating-point element in an odd position corresponding to the imaginary part, and wherein the first instruction, when executed by a machine, causes the machine to perform a method comprising performing the following operations on at least one packed complex number of the first plurality of packed complex numbers:
multiplying the element in the even position with the element in the even position of the corresponding second packed complex number to generate a first product,
multiplying the element in the odd position with the element in the odd position of the corresponding second packed complex number to generate a second product,
multiplying the element in the odd position with the element in the even position of the corresponding second packed complex number to generate a third product,
multiplying the element in the even position with the element in the odd position of the corresponding second packed complex number to generate a fourth product,
subtracting the second product from the first product to generate a first result,
adding the third product to the fourth product to generate a second result,
storing the first result in the even position of a corresponding resulting packed complex number in a destination register, and
storing the second result in the odd position of the corresponding resulting packed complex number in the destination register.

14. The non-transitory machine-readable medium of claim 13, wherein the method further comprises copying the element in the even position of the corresponding second packed complex number to the odd position of a transformed second packed complex number.

15. The non-transitory machine-readable medium of claim 13, wherein the method further comprises copying the element in the even position of the first packed complex number to the odd position of a transformed first packed complex number.

16. The non-transitory machine-readable medium of claim 13, wherein the method further comprises copying the element in the odd position of the first packed complex number to the even position of a transformed first packed complex number.

17. The non-transitory machine-readable medium of claim 13, wherein the method further comprises copying the element in the odd position of the corresponding second packed complex number to the even position of a transformed second packed complex number.

18. The non-transitory machine-readable medium of claim 13, further storing a second instruction, wherein the second instruction, when executed by the machine, causes the machine to perform a method comprising performing the following operation on at least one packed complex number of the first plurality of packed complex numbers: adding the second product to the first product to generate the first result, instead of subtracting the second product from the first product to generate the first result.

19. A processor, comprising:
a) a front-end unit comprising: an instruction fetch unit for fetching instructions from a memory, and a decode unit for decoding the instructions into micro-operations;
b) an execution engine unit comprising: a rename/allocator unit for allocating the machine buffers and resources needed by each micro-operation and renaming logical registers onto entries in a register file; a scheduler unit for arbitrating dispatch ports to schedule micro-operations for execution; and an execution cluster, including execution units and memory access units; and
c) a memory unit coupled to the front-end unit and the execution engine unit.

20. A processor core, comprising:
an instruction decoder;
a scalar unit coupled to the instruction decoder, the scalar unit using scalar registers;
a vector unit coupled to the instruction decoder, the vector unit using vector registers; and
an L1 cache that allows low-latency accesses to the scalar registers and the vector registers,
wherein the processor core uses a local subset of a global L2 cache and has a direct access path to the local subset.

21. A method, comprising:
compiling a program in a high-level programming language using an x86 compiler to generate x86 binary code natively executable by a first processor having at least one x86 instruction set core; and
using an instruction converter, converting the x86 binary code into alternative binary code natively executable by a second processor that does not have an x86 instruction set core.
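The packed complex multiplication recited in the claims above can be modeled in Python. This is a behavioral sketch, not the processor implementation: it uses ordinary double-precision floats rather than 16-bit half-precision elements, and the vector length is arbitrary, but the four products and the subtract/add combination follow the claimed operation exactly:

```python
def packed_complex_multiply(src1, src2):
    """Behavioral model of the packed complex multiply described above.

    src1 and src2 are flat lists in which even indices hold real parts
    and odd indices hold imaginary parts, so each adjacent (even, odd)
    pair is one packed complex number.
    """
    assert len(src1) == len(src2) and len(src1) % 2 == 0
    dst = [0.0] * len(src1)
    for i in range(0, len(src1), 2):
        a_re, a_im = src1[i], src1[i + 1]
        b_re, b_im = src2[i], src2[i + 1]
        # First result: first product (even*even) minus second product (odd*odd).
        dst[i] = a_re * b_re - a_im * b_im
        # Second result: third product (odd*even) plus fourth product (even*odd).
        dst[i + 1] = a_im * b_re + a_re * b_im
    return dst


# (1+2j)*(3+4j) = -5+10j and (0+1j)*(0+1j) = -1+0j
print(packed_complex_multiply([1.0, 2.0, 0.0, 1.0],
                              [3.0, 4.0, 0.0, 1.0]))  # -> [-5.0, 10.0, -1.0, 0.0]
```

The second-instruction variant of claims 6, 12, and 18 would differ only in the first result, adding the second product instead of subtracting it (i.e., `a_re * b_re + a_im * b_im`), as in a conjugate-style multiply.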
Apparatus and method for multiplication of complex numbers

The present patent application is a divisional application of the invention patent application with application number 201811258028.X, filed on October 26, 2018, and entitled "Apparatus and method for multiplication of complex numbers".

Technical Field

Embodiments of the invention relate generally to the field of computer processors. More specifically, the embodiments relate to apparatuses and methods for complex multiplication.

Background Art

An instruction set, or instruction set architecture (ISA), is the part of computer architecture related to programming, including the native data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O). It should be noted that the term "instruction" herein generally refers to a macroinstruction (that is, an instruction provided to the processor for execution), rather than to a microinstruction or micro-operation, which is the result of the processor's decoder decoding a macroinstruction. A microinstruction or micro-operation can be configured to instruct an execution unit on the processor to perform operations to implement the logic associated with the macroinstruction. The ISA is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, Pentium 4 processors, Core™ processors, and processors from Advanced Micro Devices, Inc. of Sunnyvale, Calif. implement nearly identical versions of the x86 instruction set (with some extensions that have been added with newer versions), but have different internal designs.
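The macroinstruction/micro-operation distinction described above can be pictured with a small sketch. The instruction names and micro-operation encodings below are invented purely for illustration and do not correspond to any real ISA; the point is only that one programmer-visible instruction may decode into several internal operations:

```python
# Hypothetical decode table: one macroinstruction -> a list of micro-operations.
DECODE_TABLE = {
    # A memory-to-register add splits into a load micro-op and an ALU micro-op.
    "ADD r1, [mem]": ["LOAD tmp0, [mem]", "ALU_ADD r1, r1, tmp0"],
    # A register-to-register move needs only a single micro-op.
    "MOV r2, r3": ["ALU_MOV r2, r3"],
}


def decode(macroinstruction):
    """Return the micro-operations implementing the macroinstruction's logic."""
    return DECODE_TABLE[macroinstruction]


print(decode("ADD r1, [mem]"))  # -> ['LOAD tmp0, [mem]', 'ALU_ADD r1, r1, tmp0']
```

Two microarchitectures could use different decode tables (and different internal micro-op formats) while exposing the same instruction set to software.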
For example, the same register architecture of an ISA can be implemented in different ways in different microarchitectures using well-known techniques, including dedicated physical registers, using register renaming mechanisms (eg, using a register alias table (RAT), reordering buffers ( One or more dynamically allocated physical registers of ROB) and retirement register file). Unless otherwise specified, the phrases "register architecture," "register file," and "register" are used herein to refer to the register architecture, register file, and registers that are visible to the software/programmer and to the way instructions specify registers. Where distinction is required, the adjectives "logical", "architectural", or "software-visible" will be used to refer to registers/register files within a register architecture, while different adjectives will be used to specify that in a given microarchitecture registers (eg, physical registers, reorder buffers, retirement registers, register pools).Description of drawingsThe present invention is illustrated by way of example and not by way of limitation in the accompanying drawings, in which like reference numerals refer to like elements, wherein:1A-1B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the present invention;1A is a block diagram illustrating a generic vector friendly instruction format and its Class A instruction template according to an embodiment of the present invention;1B is a block diagram illustrating a generic vector friendly instruction format and its Class B instruction template according to an embodiment of the present invention;2A is a block diagram illustrating an exemplary dedicated vector friendly instruction format according to an embodiment of the present invention;Figure 2B is a block diagram illustrating the fields with the dedicated vector friendly instruction format 200 that make up the complete 
opcode field 174 according to one embodiment of the present invention;

FIG. 2C is a block diagram illustrating the fields of the dedicated vector friendly instruction format 200 that make up the register index field 144 according to one embodiment of the present invention;

FIG. 2D is a block diagram illustrating the fields of the dedicated vector friendly instruction format 200 that make up the extended operation field 150 according to one embodiment of the present invention;

FIG. 3 is a block diagram of a register architecture 300 according to one embodiment of the present invention;

FIG. 4A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register-renaming, out-of-order issue/execution pipeline according to embodiments of the present invention;

FIG. 4B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register-renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the present invention;

FIGS. 5A-B illustrate block diagrams of a more specific exemplary core architecture, which core would be one of several logic blocks in a chip (including other cores of the same type and/or different types);

FIG. 5A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 502 and its local subset 504 of the level 2 (L2) cache, according to an embodiment of the present invention;

FIG. 5B is an expanded view of part of the processor core in FIG.
5A according to an embodiment of the present invention;

FIG. 6 is a block diagram of a processor 600 that may have more than one core, may have an integrated memory controller, and may have an integrated graphics device, according to embodiments of the present invention;

FIGS. 7-10 are block diagrams of exemplary computer architectures;

FIG. 7 shows a block diagram of a system according to one embodiment of the present invention;

FIG. 8 is a block diagram of a first more specific exemplary system according to an embodiment of the present invention;

FIG. 9 is a block diagram of a second more specific exemplary system according to an embodiment of the present invention;

FIG. 10 is a block diagram of an SoC according to an embodiment of the present invention;

FIG. 11 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set into binary instructions in a target instruction set according to embodiments of the present invention;

FIG. 12 is a block diagram of an apparatus for multiplication of complex numbers according to an embodiment of the present invention;

FIG. 13 is a flowchart of a method for multiplication of complex numbers according to an embodiment of the present invention.

Detailed Description

In the following description, numerous specific details are set forth. It should be understood, however, that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description.

References in the specification to "one embodiment," "an embodiment," "example embodiment," etc. indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment.
Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described.

As used in this specification and the claims, and unless otherwise specified, the use of the ordinal adjectives "first," "second," "third," etc. to describe an element merely indicates that a particular instance of an element, or different instances of like elements, is being referred to, and is not intended to imply that the elements so described must be in a particular sequence, either temporally, spatially, in ranking, or in any other manner.

Instructions to be executed by a processor core according to embodiments of the present invention may be embodied in the "generic vector friendly instruction format" detailed below. In other embodiments, such a format is not utilized and another instruction format is used; however, the description below of the writemask registers, the various data transformations (mix, broadcast, etc.), addressing, and so on is generally applicable to the description of the embodiments of the instruction(s) above. Additionally, exemplary systems, architectures, and pipelines are detailed below. Instructions may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Instruction Sets

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., the opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., masks). Some instruction formats are further broken down through the definition of instruction templates (or sub-formats).
For example, the instruction templates of a given instruction format may be defined to have different subsets of the fields of that instruction format (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD (addition) instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source 1/destination and source 2); an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as Advanced Vector Extensions (AVX, AVX2, and AVX-512) and using the Vector Extensions (VEX) encoding scheme has been introduced and/or released (see, e.g., the 64 and IA-32 Architectures Software Developer's Manual, September 2014; the Advanced Vector Extensions Programming Reference, October 2014; and the Architecture Instruction Set Extensions Programming Reference, October 2016).

Exemplary Instruction Formats

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations).
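The idea of an instruction format as a set of named bit fields, as in the ADD example above, can be made concrete with a toy model. This sketch is purely illustrative: the field names, widths, and positions of the hypothetical 16-bit format below are invented, and do not correspond to any real ISA:

```python
# Toy illustration: encode/decode named bit fields of a hypothetical 16-bit
# instruction word laid out as [opcode:6][dst:3][src1:3][src2:3][flag:1].
FIELDS = {"opcode": (10, 0x3F), "dst": (7, 0x7), "src1": (4, 0x7),
          "src2": (1, 0x7), "flag": (0, 0x1)}  # name -> (shift, mask)

def decode(word):
    # Extract each field by shifting it down and masking off its width.
    return {name: (word >> shift) & mask for name, (shift, mask) in FIELDS.items()}

def encode(**fields):
    # Place each field value at its bit position; values outside the mask wrap.
    word = 0
    for name, value in fields.items():
        shift, mask = FIELDS[name]
        word |= (value & mask) << shift
    return word

word = encode(opcode=0x21, dst=3, src1=1, src2=2, flag=1)
assert decode(word) == {"opcode": 0x21, "dst": 3, "src1": 1, "src2": 2, "flag": 1}
```

An instruction template in this analogy would be a variant layout that reuses some of these fields at the same positions while reinterpreting or omitting others.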
Although embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

FIGS. 1A-1B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the present invention. FIG. 1A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to an embodiment of the present invention, while FIG. 1B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to an embodiment of the present invention. Specifically, class A and class B instruction templates are defined for the generic vector friendly instruction format 100, both of which include no memory access 105 instruction templates and memory access 120 instruction templates. The term "generic" in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

Although embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64-byte vector operand length (or size) with 32-bit (4-byte) or 64-bit (8-byte) data element widths (or sizes) (and thus, a 64-byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64-byte vector operand length (or size) with 16-bit (2-byte) or 8-bit (1-byte) data element widths (or sizes); a 32-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); and a 16-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); alternative embodiments may support larger, smaller, and/or
different vector operand sizes (e.g., 256-byte vector operands) with larger, smaller, or different data element widths (e.g., 128-bit (16-byte) data element widths).

The class A instruction templates in FIG. 1A include: 1) within the no memory access 105 instruction templates, a no memory access, full round control type operation 110 instruction template and a no memory access, data transform type operation 115 instruction template are shown; and 2) within the memory access 120 instruction templates, a memory access, time-sensitive 125 instruction template and a memory access, non-time-sensitive 130 instruction template are shown. The class B instruction templates in FIG. 1B include: 1) within the no memory access 105 instruction templates, a no memory access, writemask control, partial round control type operation 112 instruction template and a no memory access, writemask control, VSIZE type operation 117 instruction template are shown; and 2) within the memory access 120 instruction templates, a memory access, writemask control 127 instruction template is shown.

The generic vector friendly instruction format 100 includes the following fields, listed below in the order illustrated in FIGS. 1A-1B.

Format field 140 - a specific value in this field (an instruction format identifier value) uniquely identifies the vector friendly instruction format, and thus identifies occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 142 - its content distinguishes different base operations.

Register index field 144 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These fields include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file.
While in one embodiment N may be up to three source registers and one destination register, alternative embodiments may support more or fewer source and destination registers (e.g., may support up to two sources, where one of these sources also acts as the destination; may support up to three sources, where one of these sources also acts as the destination; or may support up to two sources and one destination).

Modifier field 146 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, it distinguishes between no memory access 105 instruction templates and memory access 120 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destination are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Extended operation field 150 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 168, an alpha field 152, and a beta field 154.
The extended operation field 150 allows common groups of operations to be performed in a single instruction rather than in 2, 3, or 4 instructions.

Scale field 160 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses (2^scale * index + base)).

Displacement field 162A - its content is used as part of memory address generation (e.g., for address generation that uses (2^scale * index + base + displacement)).

Displacement factor field 162B (note that the juxtaposition of displacement field 162A directly over displacement factor field 162B indicates that one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size (N) of a memory access, where N is the number of bytes in the memory access (e.g., for address generation that uses (2^scale * index + base + scaled displacement)). Redundant low-order bits are ignored, and hence the content of the displacement factor field is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating the effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 174 (described later herein) and the data manipulation field 154C. The displacement field 162A and the displacement factor field 162B are not used for the no memory access 105 instruction templates, and/or different embodiments may implement only one or neither of the two; in this sense, the displacement field 162A and the displacement factor field 162B are optional.

Data element width field 164 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments, for all instructions; in other embodiments, for only some of the instructions).
This field is optional in the sense that it is not needed if only one data element width is supported and/or some aspect of the opcodes is used to support the data element widths.

Writemask field 170 - its content controls, on a per-data-element-position basis, whether that data element position in the destination vector operand reflects the result of the base operation and the augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging-writemasking and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in another embodiment, the old value of each element of the destination where the corresponding mask bit has a 0 is preserved. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last); however, it is not necessary that the elements being modified be consecutive. Thus, the writemask field 170 allows for partial vector operations, including loads, stores, arithmetic, logical, and so on. While embodiments of the invention are described in which the writemask field's 170 content selects one of a number of writemask registers that contains the writemask to be used (and thus the writemask field's 170 content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the writemask field's 170 content to directly specify the masking to be performed.

Immediate field 172 - its content allows for the specification of an immediate value.
This field is optional in the sense that it is not present in implementations of the generic vector friendly format that do not support immediates and it is not present in instructions that do not use an immediate.

Class field 168 - its content distinguishes between different classes of instructions. With reference to FIGS. 1A-1B, the content of this field selects between class A and class B instructions. In FIGS. 1A-1B, rounded corner squares are used to indicate that a specific value is present in a field (e.g., class A 168A and class B 168B for the class field 168, respectively, in FIGS. 1A-1B).

Instruction Templates of Class A

In the case of the non-memory access 105 instruction templates of class A, the alpha field 152 is interpreted as an RS field 152A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., rounding 152A.1 and data transform 152A.2 are respectively specified for the no memory access, round type operation 110 instruction template and the no memory access, data transform type operation 115 instruction template), while the beta field 154 distinguishes which of the operations of the specified type is to be performed. In the no memory access 105 instruction templates, the scale field 160, the displacement field 162A, and the displacement scale field 162B are not present.

No Memory Access Instruction Templates - Full Round Control Type Operation

In the no memory access, full round control type operation 110 instruction template, the beta field 154 is interpreted as a round control field 154A, whose content(s) provide static rounding.
While in the described embodiments of the invention the round control field 154A includes a suppress all floating-point exceptions (SAE) field 156 and a round operation control field 158, alternative embodiments may support these concepts, may encode both concepts as the same field, or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 158).

SAE field 156 - its content distinguishes whether or not to disable exception event reporting; when the SAE field's 156 content indicates that suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler.

Round operation control field 158 - its content distinguishes which one of a group of rounding operations is to be performed (e.g., round up, round down, round towards zero, and round to nearest). Thus, the round operation control field 158 allows for the changing of the rounding mode on a per-instruction basis. In one embodiment of the invention where the processor includes a control register for specifying rounding modes, the round operation control field's 150 content overrides that register value.

No Memory Access Instruction Templates - Data Transform Type Operation

In the no memory access, data transform type operation 115 instruction template, the beta field 154 is interpreted as a data transform field 154B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, mix, broadcast).

In the case of the memory access 120 instruction templates of class A, the alpha field 152 is interpreted as an eviction hint field 152B, whose content distinguishes which one of the eviction hints is to be used (in FIG.
1A, time-sensitive 152B.1 and non-time-sensitive 152B.2 are respectively specified for the memory access, time-sensitive 125 instruction template and the memory access, non-time-sensitive 130 instruction template), while the beta field 154 is interpreted as a data manipulation field 154C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation, broadcast, up-conversion of a source, and down-conversion of a destination). The memory access 120 instruction templates include the scale field 160 and, optionally, the displacement field 162A or the displacement scale field 162B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data-element-wise fashion, with the elements actually transferred being dictated by the contents of the vector mask selected as the writemask.

Memory Access Instruction Templates - Time-Sensitive

Time-sensitive data is data likely to be reused soon enough to benefit from caching. However, this is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory Access Instruction Templates - Non-Time-Sensitive

Non-time-sensitive data is data unlikely to be reused soon enough to benefit from caching in the first-level cache and should be given priority for eviction.
However, this is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

In the case of the instruction templates of class B, the alpha field 152 is interpreted as a writemask control (Z) field 152C, whose content distinguishes whether the writemasking controlled by the writemask field 170 should be merging or zeroing.

In the case of the non-memory access 105 instruction templates of class B, part of the beta field 154 is interpreted as an RL field 157A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., rounding 157A.1 and vector length (VSIZE) 157A.2 are respectively specified for the no memory access, writemask control, partial round control type operation 112 instruction template and the no memory access, writemask control, VSIZE type operation 117 instruction template), while the rest of the beta field 154 distinguishes which of the operations of the specified type is to be performed. In the no memory access 105 instruction templates, the scale field 160, the displacement field 162A, and the displacement scale field 162B are not present.

In the no memory access, writemask control, partial round control type operation 112 instruction template, the rest of the beta field 154 is interpreted as a round operation field 159A, and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler).

Round operation control field 159A - just as the round operation control field 158, its content distinguishes which one of a group of rounding operations is to be performed (e.g., round up, round down, round towards zero, and round to nearest). Thus, the round operation control field 159A allows for the changing of the rounding mode on a per-instruction basis.
In one embodiment of the invention where the processor includes a control register for specifying rounding modes, the round operation control field's 150 content overrides that register value.

In the no memory access, writemask control, VSIZE type operation 117 instruction template, the rest of the beta field 154 is interpreted as a vector length field 159B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128 bytes, 256 bytes, or 512 bytes).

In the case of the memory access 120 instruction templates of class B, part of the beta field 154 is interpreted as a broadcast field 157B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 154 is interpreted as the vector length field 159B. The memory access 120 instruction templates include the scale field 160 and, optionally, the displacement field 162A or the displacement scale field 162B.

With regard to the generic vector friendly instruction format 100, a full opcode field 174 is shown as including the format field 140, the base operation field 142, and the data element width field 164. While one embodiment is shown where the full opcode field 174 includes all of these fields, in embodiments that do not support all of them, the full opcode field 174 includes less than all of them. The full opcode field 174 provides the operation code (opcode).

The extended operation field 150, the data element width field 164, and the writemask field 170 allow these features to be specified on a per-instruction basis in the generic vector friendly instruction format.

The combination of the writemask field and the data element width field creates typed instructions, in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations.
In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high-performance general-purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both general-purpose computing and graphics and/or scientific (throughput) computing may support both class A and class B (of course, a core that has some mix of templates and instructions from both classes, but not all templates and instructions from both classes, is within the purview of the invention). Also, a single processor may include multiple cores that all support the same class, or in which different cores support different classes. For instance, in a processor with separate graphics and general-purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general-purpose cores may be high-performance general-purpose cores with out-of-order execution and register renaming, intended for general-purpose computing, that support only class B. Another processor that does not have a separate graphics core may include one or more general-purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the invention.
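Software targeting processors in which different cores support different instruction classes typically ships several versions of a routine and selects one at run time. The following is a hedged Python sketch of that dispatch pattern; the class names, the feature probe, and the routine variants are all invented stand-ins (a real implementation would use a CPUID-style query and machine-specific code paths):

```python
# Illustrative dispatch sketch: pick a routine variant by supported class.
# detect_supported_classes() is a hypothetical stand-in for a real
# CPUID-style feature probe; "class_a"/"class_b" are placeholder names.
def detect_supported_classes():
    return {"class_b"}  # pretend probe result: a general-purpose core

def saxpy_class_a(a, xs, ys):   # e.g., a throughput-oriented variant
    return [a * x + y for x, y in zip(xs, ys)]

def saxpy_class_b(a, xs, ys):   # e.g., an out-of-order-core variant
    return [a * x + y for x, y in zip(xs, ys)]

VARIANTS = {"class_a": saxpy_class_a, "class_b": saxpy_class_b}

def saxpy(a, xs, ys):
    # Control flow code: try variants in preference order, run the first
    # one whose instruction class the current processor supports.
    supported = detect_supported_classes()
    for name in ("class_a", "class_b"):
        if name in supported:
            return VARIANTS[name](a, xs, ys)
    raise RuntimeError("no supported variant")

assert saxpy(2, [1, 2], [10, 20]) == [12, 24]
```

Both variants compute the same result here; in practice they would differ in the instruction templates and classes they are compiled against.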
Programs written in a high-level language would be put (e.g., just-in-time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor that is currently executing the code.

Exemplary Dedicated Vector Friendly Instruction Format

FIG. 2A is a block diagram illustrating an exemplary dedicated vector friendly instruction format according to an embodiment of the present invention. FIG. 2A shows a dedicated vector friendly instruction format 200 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The dedicated vector friendly instruction format 200 may be used to extend the x86 instruction set, and thus some of the fields are similar or identical to those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate field of the existing x86 instruction set with extensions. The fields from FIG. 1B into which the fields from FIG. 2A map are illustrated.

It should be understood that, although embodiments of the present invention are described with reference to the dedicated vector friendly instruction format 200 in the context of the generic vector friendly instruction format 100 for illustrative purposes, the present invention is not limited to the dedicated vector friendly instruction format 200 except where stated otherwise.
For example, the generic vector friendly instruction format 100 contemplates a variety of possible sizes for the various fields, while the dedicated vector friendly instruction format 200 is shown as having fields of specific sizes. By way of specific example, while the data element width field 164 is illustrated as a one-bit field in the dedicated vector friendly instruction format 200, the invention is not so limited (that is, the generic vector friendly instruction format 100 contemplates other sizes for the data element width field 164).

The dedicated vector friendly instruction format 200 includes the following fields, listed below in the order illustrated in FIG. 2A.

EVEX prefix (bytes 0-3) 202 - is encoded in a four-byte form.

Format field 140 (EVEX byte 0, bits [7:0]) - the first byte (EVEX byte 0) is the format field 140, and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention).

The second through fourth bytes (EVEX bytes 1-3) include a number of bit fields providing specific capability.

REX field 205 (EVEX byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX byte 1, bit [7] - R), an EVEX.X bit field (EVEX byte 1, bit [6] - X), and an EVEX.B bit field (EVEX byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1's complement form, i.e., ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 110 - this is the first part of the REX' field 110 and is the EVEX.R' bit field used to encode either the upper 16 or the lower 16 of the extended 32-register set (EVEX byte 1, bit [4] - R').
In one embodiment of the invention, this bit, along with others indicated below, is stored in bit-inverted format to distinguish it (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept the value 11 in the MOD field in the MOD R/M field (described below); alternative embodiments of the invention do not store this bit and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 215 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 164 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the data type (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 220 (EVEX byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1's complement) form, and is valid for instructions with two or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1's complement form, for certain vector shifts; or 3) EVEX.vvvv does not encode any operand; the field is reserved and should contain 1111b. Thus, the EVEX.vvvv field 220 encodes the 4 low-order bits of the first source register specifier, stored in inverted (1's complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 168 class field (EVEX byte 2, bit [2] - U) - if EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

Prefix encoding field 225 (EVEX byte 2, bits [1:0] - pp) - provides additional bits for the base operation field.
In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field, and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX formats of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 152 (EVEX Byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

Beta field 154 (EVEX Byte 3, bits [6:4] - SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

REX' field 110 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 170 (EVEX Byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described.
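The 2-bit SIMD prefix compression described above can be illustrated with a small sketch. The concrete 00/01/10/11 mapping below follows the publicly documented VEX/EVEX convention and is an assumption here, since the text does not spell it out:

```python
# Assumed pp-to-legacy-prefix mapping (the VEX/EVEX convention):
PP_TO_LEGACY = {
    0b00: None,  # no SIMD prefix
    0b01: 0x66,
    0b10: 0xF3,
    0b11: 0xF2,
}

def expand_simd_prefix(pp):
    """Expand the compressed 2-bit pp field back into the legacy SIMD
    prefix byte, as the text describes happening at runtime before the
    instruction is provided to the decoder's PLA."""
    return PP_TO_LEGACY[pp & 0b11]

assert expand_simd_prefix(0b01) == 0x66
assert expand_simd_prefix(0b00) is None
```

A full prefix byte thus costs nothing beyond two bits of the EVEX prefix, which is the compaction benefit the text refers to.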
In one embodiment of the invention, the specific value EVEX.kkk=000 has a special behavior implying that no write mask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

Real opcode field 230 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M field 240 (Byte 5) includes MOD field 242, Reg field 244, and R/M field 246. As previously described, the MOD field's 242 content distinguishes between memory access and non-memory access operations. The role of Reg field 244 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not being used to encode any instruction operand. The role of R/M field 246 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) Byte (Byte 6) - as previously described, the scale field's 150 content is used for memory address generation. SIB.xxx 254 and SIB.bbb 256 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 162A (Bytes 7-10) - when MOD field 242 contains 10, Bytes 7-10 are the displacement field 162A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement factor field 162B (Byte 7) - when MOD field 242 contains 01, Byte 7 is the displacement factor field 162B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity.
Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 162B is a reinterpretation of disp8; when using displacement factor field 162B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement, but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 162B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 162B is encoded the same way as the x86 instruction set 8-bit displacement (so there are no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). The immediate field 172 operates as previously described.

Full Opcode Field

FIG. 2B is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the full opcode field 174 according to one embodiment of the invention. Specifically, the full opcode field 174 includes the format field 140, the base operation field 142, and the data element width (W) field 164.
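The disp8*N reinterpretation described above can be sketched numerically (a minimal illustration; N is the memory-operand access size implied by the instruction, for example 64 bytes for a full 512-bit vector access):

```python
def disp8_times_n(raw_byte, n):
    """Compute the effective byte offset for a compressed disp8*N
    displacement: sign-extend the raw displacement byte, then scale
    it by the memory operand access size N."""
    signed = raw_byte - 256 if raw_byte >= 128 else raw_byte
    return signed * n

# With N = 64, the single displacement byte now spans -8192..8128 bytes
# in 64-byte steps instead of -128..127, at no extra encoding cost.
assert disp8_times_n(0x01, 64) == 64
assert disp8_times_n(0xFF, 64) == -64    # 0xFF sign-extends to -1
assert disp8_times_n(0x80, 64) == -8192
```

The low-order bits the scheme declines to encode are exactly those guaranteed to be zero by the assumption that the displacement is a multiple of the access granularity.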
The base operation field 142 includes the prefix encoding field 225, the opcode map field 215, and the real opcode field 230.

Register Index Field

FIG. 2C is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the register index field 144 according to one embodiment of the invention. Specifically, the register index field 144 includes the REX field 205, the REX' field 210, the MODR/M.reg field 244, the MODR/M.r/m field 246, the VVVV field 220, the xxx field 254, and the bbb field 256.

Extended Operation Field

FIG. 2D is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the extended operation field 150 according to one embodiment of the invention. When the class (U) field 168 contains 0, it signifies EVEX.U0 (class A 168A); when it contains 1, it signifies EVEX.U1 (class B 168B). When U=0 and the MOD field 242 contains 11 (signifying a no memory access operation), the alpha field 152 (EVEX Byte 3, bit [7] - EH) is interpreted as the rs field 152A. When the rs field 152A contains a 1 (round 152A.1), the beta field 154 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as the round control field 154A. The round control field 154A includes a one-bit SAE field 156 and a two-bit round operation field 158. When the rs field 152A contains a 0 (data transform 152A.2), the beta field 154 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as a three-bit data transform field 154B. When U=0 and the MOD field 242 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 152 (EVEX Byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 152B, and the beta field 154 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as a three-bit data manipulation field 154C.

When U=1, the alpha field 152 (EVEX Byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 152C.
When U=1 and the MOD field 242 contains 11 (signifying a no memory access operation), part of the beta field 154 (EVEX Byte 3, bit [4] - S0) is interpreted as the RL field 157A; when it contains a 1 (round 157A.1), the rest of the beta field 154 (EVEX Byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 159A, while when the RL field 157A contains a 0 (VSIZE 157A.2), the rest of the beta field 154 (EVEX Byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 159B (EVEX Byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 242 contains 00, 01, or 10 (signifying a memory access operation), the beta field 154 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as the vector length field 159B (EVEX Byte 3, bits [6-5] - L1-0) and the broadcast field 157B (EVEX Byte 3, bit [4] - B).

Exemplary Register Architecture

FIG. 3 is a block diagram of a register architecture 300 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 310 that are 512 bits wide; these registers are referenced as zmm0 through zmm31 (the zmm register set). Rather than the zmm register set, other embodiments may include a set of sixteen vector registers that are 256 bits wide; these registers are referenced as ymm0 through ymm15 (the ymm register set). Rather than the zmm register set or the ymm register set, other embodiments may include a set of sixteen vector registers that are 128 bits wide; these registers are referenced as xmm0 through xmm15 (the xmm register set).
In FIG. 3, the lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15, and the lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15.

The specific vector friendly instruction format 200 operates on these overlaid register files as illustrated in the table below.

In other words, the vector length field 159B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length, and instruction templates without the vector length field 159B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 200 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Write mask registers 315 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 315 are 16 bits in size. In one embodiment, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 325 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands.
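The register aliasing described above (ymm0-15 overlaying the low 256 bits of zmm0-15, and xmm0-15 overlaying the low 128 bits) can be modeled with a toy register file. This is a sketch of the architectural view only, not the hardware layout:

```python
class OverlaidVectorRegs:
    """Toy model of the overlaid vector register file: reads of a ymm
    or xmm register return the low 256 or 128 bits of the zmm register
    with the same index."""
    def __init__(self):
        self.zmm = [0] * 32  # 512-bit values modeled as Python ints

    def write_zmm(self, i, value):
        self.zmm[i] = value & ((1 << 512) - 1)

    def read_ymm(self, i):
        assert i < 16  # only the lower 16 zmm registers are overlaid
        return self.zmm[i] & ((1 << 256) - 1)

    def read_xmm(self, i):
        assert i < 16
        return self.zmm[i] & ((1 << 128) - 1)

regs = OverlaidVectorRegs()
regs.write_zmm(0, (1 << 300) | 0xABCD)
assert regs.read_ymm(0) == 0xABCD  # bit 300 lies above ymm's 256 bits
assert regs.read_xmm(0) == 0xABCD
```

This also illustrates why the vector length field can halve the operating length twice (512 to 256 to 128 bits) without changing which physical register is named.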
These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 345, on which is aliased the MMX packed integer flat register file 350 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.

Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores that may implement the invention may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing.
Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip (SoC) that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality.

Exemplary Core Architectures

In-Order and Out-of-Order Core Diagram

FIG. 4A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. FIG. 4B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in FIGS. 4A-4B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG. 4A, a processor pipeline 400 includes a fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a renaming stage 410, a scheduling (also known as a dispatch or issue) stage 412, a register read/memory read stage 414, an execute stage 416, a write back/memory write stage 418, an exception handling stage 422, and a commit stage 424.

FIG.
4B shows processor core 490 including a front end unit 430 coupled to an execution engine unit 450, and both are coupled to a memory unit 470. The core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 430 includes a branch prediction unit 432 coupled to a micro-op cache 433 and an instruction cache unit 434, which is coupled to an instruction translation lookaside buffer (TLB) 436, which is coupled to an instruction fetch unit 438, which is coupled to a decode unit 440. The decode unit 440 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 440 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 490 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 440 or otherwise within the front end unit 430). The micro-op cache 433 and the decode unit 440 are coupled to a rename/allocator unit 452 in the execution engine unit 450.
In various embodiments, a micro-op cache such as 433 may also or instead be referred to as an op cache.

The execution engine unit 450 includes the rename/allocator unit 452 coupled to a retirement unit 454 and a set of one or more scheduler unit(s) 456. The scheduler unit(s) 456 represents any number of different schedulers, including reservations stations, central instruction window, etc. The scheduler unit(s) 456 is coupled to the physical register file(s) unit(s) 458. Each of the physical register file(s) units 458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, and status (e.g., an instruction pointer that is the address of the next instruction to be executed). In one embodiment, the physical register file(s) unit 458 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 458 is overlapped by the retirement unit 454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 454 and the physical register file(s) unit(s) 458 are coupled to the execution cluster(s) 460. The execution cluster(s) 460 includes a set of one or more execution units 462 and a set of one or more memory access units 464. The execution units 462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).
While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 456, physical register file(s) unit(s) 458, and execution cluster(s) 460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 464 is coupled to the memory unit 470, which includes a data TLB unit 472 coupled to a data cache unit 474 coupled to a level 2 (L2) cache unit 476. In one exemplary embodiment, the memory access units 464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 472 in the memory unit 470. The instruction cache unit 434 is further coupled to the level 2 (L2) cache unit 476 in the memory unit 470.
The L2 cache unit 476 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 400 as follows: 1) the instruction fetch unit 438 performs the fetch stage 402 and the length decode stage 404; 2) the decode unit 440 performs the decode stage 406; 3) the rename/allocator unit 452 performs the allocation stage 408 and the renaming stage 410; 4) the scheduler unit(s) 456 performs the schedule stage 412; 5) the physical register file(s) unit(s) 458 and the memory unit 470 perform the register read/memory read stage 414, and the execution cluster 460 performs the execute stage 416; 6) the memory unit 470 and the physical register file(s) unit(s) 458 perform the write back/memory write stage 418; 7) various units may be involved in the exception handling stage 422; and 8) the retirement unit 454 and the physical register file(s) unit(s) 458 perform the commit stage 424.

The core 490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif.; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, Calif.), including the instruction(s) described herein.
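The stage-to-unit correspondence enumerated above can be summarized as a simple table (a toy illustration only; the stage and unit names follow the reference numerals used in the text):

```python
# Hypothetical summary of which units (FIG. 4B) perform which pipeline
# stages (FIG. 4A), as enumerated in the text.
PIPELINE_STAGE_TO_UNITS = [
    ("fetch 402", ["instruction fetch unit 438"]),
    ("length decode 404", ["instruction fetch unit 438"]),
    ("decode 406", ["decode unit 440"]),
    ("allocation 408", ["rename/allocator unit 452"]),
    ("renaming 410", ["rename/allocator unit 452"]),
    ("schedule 412", ["scheduler unit(s) 456"]),
    ("register read/memory read 414",
     ["physical register file(s) unit(s) 458", "memory unit 470"]),
    ("execute 416", ["execution cluster 460"]),
    ("write back/memory write 418",
     ["memory unit 470", "physical register file(s) unit(s) 458"]),
    ("exception handling 422", ["various units"]),
    ("commit 424",
     ["retirement unit 454", "physical register file(s) unit(s) 458"]),
]

# The in-order front end precedes renaming and scheduling:
assert [s for s, _ in PIPELINE_STAGE_TO_UNITS[:3]] == [
    "fetch 402", "length decode 404", "decode 406"]
```

The table makes visible that the physical register file(s) unit(s) 458 participates at three different points: read, write back, and commit.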
In one embodiment, the core 490 includes logic to support a packed data instruction set extension (e.g., AVX, AVX2, AVX-512), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time sliced multithreading, simultaneous multithreading (SMT) (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding, and SMT thereafter, such as in hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 434/474 and a shared L2 cache unit 476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary Core Architecture

FIGS. 5A-5B illustrate a block diagram of a more specific exemplary core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

FIG. 5A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 502 and with its local subset of the level 2 (L2) cache 504, according to embodiments of the invention. In one embodiment, an instruction decoder 500 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 506 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 508 and a vector unit 510 use separate register sets (respectively, scalar registers 512 and vector registers 514) and data transferred between them is written to memory and then read back in from the level 1 (L1) cache 506, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 504 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 504. Data read by a processor core is stored in its L2 cache subset 504 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 504 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip.
Each ring data-path is 1012 bits wide per direction.

FIG. 5B is an expanded view of part of the processor core in FIG. 5A according to embodiments of the invention. FIG. 5B includes an L1 data cache 506A, part of the L1 cache 504, as well as more detail regarding the vector unit 510 and the vector registers 514. Specifically, the vector unit 510 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 528), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 520, numeric conversion with numeric convert units 522A-B, and replication of the memory input with replication unit 524. Write mask registers 526 allow predicating resulting vector writes.

Specific Processor Architecture

FIG. 6 is a block diagram of a processor 600 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in FIG. 6 illustrate a processor 600 with a single core 602A, a system agent 610, and a set of one or more bus controller units 616, while the optional addition of the dashed lined boxes illustrates an alternative processor 600 with multiple cores 602A-N, a set of one or more integrated memory controller unit(s) 614 in the system agent unit 610, and special purpose logic 608.

Thus, different implementations of the processor 600 may include: 1) a CPU with the special purpose logic 608 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 602A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 602A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 602A-N being a large number of general purpose in-order cores.
Thus, the processor 600 may be a general-purpose processor, a coprocessor, or a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 600 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 606, and external memory (not shown) coupled to the set of integrated memory controller units 614. The set of shared cache units 606 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 612 interconnects the integrated graphics logic 608 (the integrated graphics logic 608 is an example of and is also referred to herein as special purpose logic), the set of shared cache units 606, and the system agent unit 610/integrated memory controller unit(s) 614, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 606 and cores 602A-N.

In some embodiments, one or more of the cores 602A-N are capable of multithreading. The system agent 610 includes those components coordinating and operating the cores 602A-N. The system agent unit 610 may include, for example, a power control unit (PCU) and a display unit.
The PCU may be, or may include, the logic and components needed for regulating the power state of the cores 602A-N and the integrated graphics logic 608. The display unit is for driving one or more externally connected displays.

The cores 602A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 602A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

FIGS. 7-10 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 7, shown is a block diagram of a system 700 in accordance with one embodiment of the present invention. The system 700 may include one or more processors 710, 715, which are coupled to a controller hub 720. In one embodiment, the controller hub 720 includes a graphics memory controller hub (GMCH) 790 and an Input/Output Hub (IOH) 750 (which may be on separate chips); the GMCH 790 includes memory and graphics controllers to which are coupled memory 740 and a coprocessor 745; the IOH 750 couples input/output (I/O) devices 760 to the GMCH 790.
Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 740 and the coprocessor 745 are coupled directly to the processor 710, and the controller hub 720 is in a single chip with the IOH 750.

The optional nature of the additional processors 715 is denoted in FIG. 7 with broken lines. Each processor 710, 715 may include one or more of the processing cores described herein and may be some version of the processor 600.

The memory 740 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 720 communicates with the processor(s) 710, 715 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 795.

In one embodiment, the coprocessor 745 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, the controller hub 720 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 710, 715 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, and power consumption characteristics, and the like.

In one embodiment, the processor 710 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 710 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 745. Accordingly, the processor 710 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 745.
Coprocessor(s) 745 accepts and executes the received coprocessor instructions.Referring now to FIG. 8, shown is a block diagram of a first more specific exemplary system 800 in accordance with an embodiment of the present invention. As shown in FIG. 8 , the multiprocessor system 800 is a point-to-point interconnect system and includes a first processor 870 and a second processor 880 coupled via a point-to-point interconnect 850 . Each of processors 870 and 880 may be some version of processor 600 . In one embodiment of the invention, processors 870 and 880 are processors 710 and 715, respectively, and coprocessor 838 is coprocessor 745. In another embodiment, processors 870 and 880 are processor 710 and coprocessor 745, respectively.Processors 870 and 880 are shown including integrated memory controller (IMC) units 872 and 882, respectively. Processor 870 also includes point-to-point (P-P) interfaces 876 and 878 as part of its bus controller unit; similarly, second processor 880 includes P-P interfaces 886 and 888 . The processors 870 , 880 may exchange information via a P-P interface 850 using point-to-point (P-P) interface circuits 878 , 888 . As shown in Figure 8, IMCs 872 and 882 couple the processors to respective memories, namely memory 832 and memory 834, which may be portions of main memory locally attached to the respective processors.The processors 870, 880 may each exchange information with the chipset 890 via respective P-P interfaces 852, 854 using point-to-point interface circuits 876, 894, 886, 898. Chipset 890 may optionally exchange information with coprocessor 838 via high performance interface 892 . 
In one embodiment, the coprocessor 838 is a special-purpose processor such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that local cache information of either or both processors may be stored in the shared cache if a processor is placed into a low power mode. The chipset 890 may be coupled to a first bus 816 via an interface 896. In one embodiment, the first bus 816 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present invention is not so limited. As shown in FIG. 8, various I/O devices 814 may be coupled to the first bus 816, along with a bus bridge 818 that couples the first bus 816 to a second bus 820. In one embodiment, one or more additional processors 815 are coupled to the first bus 816. In one embodiment, the second bus 820 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 820 including, for example, a keyboard and/or mouse 822, communication devices 827, and a storage unit 828, such as a disk drive or other mass storage device which may include instructions/code and data 830, in one embodiment. Further, an audio I/O 824 may be coupled to the second bus 820. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 8, a system may implement a multi-drop bus or other such architecture. Referring now to FIG. 9, shown is a block diagram of a second more specific exemplary system 900 in accordance with an embodiment of the present invention. Like elements in FIGS. 8 and 9 bear like reference numerals, and certain aspects of FIG. 8 have been omitted from FIG.
9 to avoid obscuring other aspects of FIG. 9. FIG. 9 illustrates that the processors 870, 880 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively. Thus, the CL 872, 882 include integrated memory controller units and include I/O control logic. FIG. 9 illustrates that not only are the memories 832, 834 coupled to the CL 872, 882, but also that I/O devices 914 are coupled to the control logic 872, 882. Legacy I/O devices 915 are coupled to the chipset 890. Referring now to FIG. 10, shown is a block diagram of a SoC 1000 in accordance with an embodiment of the present invention. Similar elements in FIG. 6 bear like reference numerals. Also, dashed-line boxes are optional features on more advanced SoCs. In FIG. 10, interconnect unit(s) 1002 are coupled to: an application processor 1010 which includes a set of one or more cores 602A-N, which include cache units 604A-N, and shared cache unit(s) 606; a system agent unit 610; bus controller unit(s) 616; integrated memory controller unit(s) 614; a set of one or more coprocessors 1020 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1030; a direct memory access (DMA) unit 1032; and a display unit 1040 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1020 include a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like. Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches.
Embodiments of the present invention may be implemented as computer programs or program code executed on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code, such as the code 830 illustrated in FIG. 8, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application-specific integrated circuit (ASIC), or a microprocessor. The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language. One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represent various logic within the processor, which when read by a machine cause the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), and phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions. Accordingly, embodiments of the present invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (Including Binary Translation, Code Morphing, Etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core.
The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor. FIG. 11 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 11 shows that a program in a high-level language 1102 may be compiled using an x86 compiler 1104 to generate x86 binary code 1106 that may be natively executed by a processor with at least one x86 instruction set core 1116. The processor with at least one x86 instruction set core 1116 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1104 represents a compiler that is operable to generate x86 binary code 1106 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1116. Similarly, FIG.
11 shows that the program in the high-level language 1102 may be compiled using an alternative instruction set compiler 1108 to generate alternative instruction set binary code 1110 that may be natively executed by a processor without at least one x86 instruction set core 1114 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1112 is used to convert the x86 binary code 1106 into code that may be natively executed by the processor without an x86 instruction set core 1114. This converted code is not likely to be the same as the alternative instruction set binary code 1110, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1112 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1106.

Complex Multiplication

Embodiments of the present invention may use the apparatus shown in FIG. 12 to perform complex multiplication using packed real and imaginary data elements. The apparatus of FIG. 12 may be included in the processors and/or systems of FIGS. 4 to 10, respectively, as described above; FIGS. 4 to 10 illustrate processors and systems including embodiments of the present invention, wherein the processors 490, 600, 710, 715, 870, 880, and 1010 and the systems 700, 800, 900, and 1000 may include any or all of the blocks and/or elements shown in FIG.
12, and may operate with the techniques and/or methods described in the description of FIG. 13. The described embodiments operate on 16-bit half-precision floating-point values in 128-bit, 256-bit, and 512-bit packed data registers and memory locations. For example, one embodiment multiplies packed data values in xmm2 and xmm3/m128, where xmm2 and xmm3/m128 store the real parts of complex numbers in the even elements and the imaginary parts of the complex numbers in the odd elements. However, other embodiments may operate on other sizes and/or data types. In an embodiment, processing hardware including a multiplier performs a first computation to calculate the real part of the result and a second computation to calculate the imaginary part of the result. Using the notation X=Xr+i*Xi and Y=Yr+i*Yi to denote a first complex number X with real and imaginary parts Xr and Xi and a second complex number Y with real and imaginary parts Yr and Yi, respectively, the first computation may be represented as Xr*Yr-Xi*Yi, and the second computation may be represented as Xr*Yi+Yr*Xi, since (Xr+i*Xi)(Yr+i*Yi)=[Xr*Yr+i²(Xi*Yi)]+i[Xr*Yi+Yr*Xi]. Embodiments use processing hardware to perform both computations in response to the decoding of a single instruction, identified herein with the mnemonic VCFMULPH.
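As an illustrative aside (not part of the original disclosure), the two computations can be spot-checked against Python's built-in complex type:

```python
# Verify that the first computation (Xr*Yr - Xi*Yi) and the second
# computation (Xr*Yi + Yr*Xi) match native complex multiplication.
Xr, Xi = 3.0, -2.0   # real and imaginary parts of X
Yr, Yi = 1.5, 4.0    # real and imaginary parts of Y

real_part = Xr * Yr - Xi * Yi
imag_part = Xr * Yi + Yr * Xi

product = complex(Xr, Xi) * complex(Yr, Yi)
assert real_part == product.real and imag_part == product.imag
```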
In contrast, other approaches to performing complex multiplication may use more than one instruction, e.g., a combination of instructions including one or more shuffle instructions and one or more multiply instructions. The following pseudocode specifies the computation performed in one embodiment, where SRC1 and SRC2 are source registers or memory locations, TEMP is a register used to store intermediate values, DEST is a destination register, the real parts are stored in the even elements (e.g., the lower 16 bits of each 32-bit word) of the source and destination registers or memory locations, and the imaginary parts are stored in the odd elements (e.g., the upper 16 bits of each 32-bit word) of the source and destination registers or memory locations.

Example pseudocode for calculating the even elements:

TEMP[15:0] ← SRC1[15:0]*SRC2[15:0]
DEST[15:0] ← TEMP[15:0]-SRC1[31:16]*SRC2[31:16]

Example pseudocode for calculating the odd elements:

TEMP[31:16] ← SRC1[31:16]*SRC2[15:0]
DEST[31:16] ← TEMP[31:16]+SRC1[15:0]*SRC2[31:16]

Therefore, the real part of the result is stored in the even elements of DEST, and the imaginary part of the result is stored in the odd elements of DEST. In addition, the execution of a single VCFMULPH instruction may also perform both of the operations used to compute the real and imaginary parts of the other words of the packed result, e.g., three more words of a 128-bit packed result, seven more words of a 256-bit packed result, or fifteen more words of a 512-bit packed result. In an embodiment, the ISA of a processor may include a first single instruction (e.g., VCFMULPH) for performing complex multiplication as described above, as well as a second single instruction, identified herein with the mnemonic VCFCMULPH, for performing a conjugate version of the complex multiplication performed by VCFMULPH.
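As a sketch only, the element layout and arithmetic of the pseudocode above, and of the conjugate variant, can be modeled in Python over interleaved real/imaginary lists; the function name, the `conj` flag, and the use of ordinary floats rather than 16-bit half-precision values are illustrative assumptions:

```python
def packed_complex_mul(src1, src2, conj=False):
    """Model the packed complex multiply described by the pseudocode.

    src1 and src2 are lists of interleaved values: even indices hold
    real parts, odd indices hold imaginary parts.  Returns the packed
    result in the same layout.  With conj=True, the even elements are
    computed by adding, rather than subtracting, the product of the
    odd input elements, as described in the text for VCFCMULPH.
    """
    assert len(src1) == len(src2) and len(src1) % 2 == 0
    dest = [0.0] * len(src1)
    for i in range(0, len(src1), 2):
        odd_prod = src1[i + 1] * src2[i + 1]
        # Even element: Xr*Yr -/+ Xi*Yi (real part of the result)
        dest[i] = src1[i] * src2[i] + (odd_prod if conj else -odd_prod)
        # Odd element: Xi*Yr + Xr*Yi (imaginary part of the result)
        dest[i + 1] = src1[i + 1] * src2[i] + src1[i] * src2[i + 1]
    return dest

# Two packed complex numbers: (1+2i)*(3+4i) = -5+10i and i*i = -1
print(packed_complex_mul([1.0, 2.0, 0.0, 1.0], [3.0, 4.0, 0.0, 1.0]))
# → [-5.0, 10.0, -1.0, 0.0]
```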
For example, in an embodiment in which VCFMULPH calculates each even element by subtracting the product of the two corresponding odd input elements from the product of the two corresponding even input elements, VCFCMULPH calculates each even element by adding the product of the two corresponding odd input elements to the product of the two corresponding even input elements. In various embodiments, either or both of the VCFMULPH and VCFCMULPH instructions may provide optional write-masking, broadcasting, and/or zeroing. Returning to FIG. 12, the register file 1210 may store a first vector X in a first source register and a second vector Y in a second source register, where each of the vectors X and Y may represent a set of n complex numbers. Each pair of even and odd elements of X (e.g., X[0] and X[1], X[2] and X[3], ... X[2n-2] and X[2n-1]) may store the real part of a complex number in the even element and the imaginary part of the complex number in the odd element. Similarly, each pair of even and odd elements of Y (e.g., Y[0] and Y[1], Y[2] and Y[3], ... Y[2n-2] and Y[2n-1]) may store the real part of a complex number in the even element and the imaginary part of the complex number in the odd element. A duplication multiplexer (dup mux) 1220 may copy values from odd elements into even element positions (e.g., transform {a,b,c,d} to {b,b,d,d}). In an embodiment, the dup mux 1220 may be implemented in hardware with a multiplexer circuit having two input vectors and one output vector. A swap multiplexer (swap mux) 1230 may, based on the value of one or more control signals, copy values from odd elements into even element positions (e.g., transform {a,b,c,d} to {b,b,d,d}), copy values from even elements into odd element positions (e.g., transform {a,b,c,d} to {a,a,c,c}), or swap the odd and even elements (e.g., transform {a,b,c,d} to {b,a,d,c}).
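The three element rearrangements just described can be written as small pure functions; the helper names are illustrative, not from the original text:

```python
def dup_odd_to_even(v):
    # {a,b,c,d} -> {b,b,d,d}: copy each odd element into its even slot
    return [v[i | 1] for i in range(len(v))]

def dup_even_to_odd(v):
    # {a,b,c,d} -> {a,a,c,c}: copy each even element into its odd slot
    return [v[i & ~1] for i in range(len(v))]

def swap_pairs(v):
    # {a,b,c,d} -> {b,a,d,c}: exchange each even/odd pair
    return [v[i ^ 1] for i in range(len(v))]

print(dup_odd_to_even(['a', 'b', 'c', 'd']))  # → ['b', 'b', 'd', 'd']
print(dup_even_to_odd(['a', 'b', 'c', 'd']))  # → ['a', 'a', 'c', 'c']
print(swap_pairs(['a', 'b', 'c', 'd']))       # → ['b', 'a', 'd', 'c']
```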
In an embodiment, the swap mux 1230 may be implemented in hardware with two multiplexer circuits, each having two input vectors and one output vector. A fused multiply-adder (FMA) 1240 may be any type of multiplier and adder circuit. In an embodiment, the FMA 1240 may be implemented in hardware with a floating-point vector FMA circuit. The FMA 1240 may multiply each element of any size (e.g., sixteen bits) of a first input vector by the corresponding same-size element of a second input vector, and add each product to the corresponding same-size element of a third input vector. In an embodiment, the VCFMULPH instruction may be decoded into two micro-operations, which may cause processing hardware, such as the processing hardware of FIG. 12, to compute both the even and the odd elements of a vector of complex numbers. For example, a first micro-operation may use control signals to cause the hardware to: use the first operand (e.g., X) from a first source register in register file 1210 as the input to dup mux 1220; use the second operand (e.g., Y) from a second source register as the input to swap mux 1230; use dup mux 1220 to pass the unchanged first operand to the first input 1241 of FMA 1240; use swap mux 1230 to copy the even elements of the second operand into the odd elements and pass the transformed second operand to the second input 1242 of FMA 1240; use a zero-valued vector as the third input 1243 of FMA 1240; perform the FMA operation; and store the result of the FMA operation in a temporary register. Thus, for example, the first input 1241 of the FMA 1240 would be {X[0], X[1], X[2], X[3], ... X[2n-2], X[2n-1]}; the second input 1242 of the FMA 1240 would be {Y[0], Y[0], Y[2], Y[2], ...
Y[2n-2], Y[2n-2]}; the FMA 1240 will multiply the first input by the second input and add zero to the products; and the FMA result stored in the temporary register will be {X[0]*Y[0], X[1]*Y[0], X[2]*Y[2], X[3]*Y[2], ... X[2n-2]*Y[2n-2], X[2n-1]*Y[2n-2]}. Continuing the preceding example, a corresponding second micro-operation may use control signals to cause the same hardware to: use the second operand (e.g., Y) from the second source register in register file 1210 as the input to dup mux 1220; use the first operand (e.g., X) from the first source register as the input to swap mux 1230; use dup mux 1220 to copy the odd elements of the second operand into the even elements and pass the transformed second operand to the first input 1241 of FMA 1240; use swap mux 1230 to swap the even and odd elements of the first operand and pass the transformed first operand to the second input 1242 of FMA 1240; use the result of the first micro-operation from the temporary register as the third input 1243 of FMA 1240; perform the multiplication portion of the FMA operation; negate the even elements of the multiplication result using negation circuitry, such as FMA control logic; perform the addition portion of the FMA operation; and store the result of the FMA operation in a destination register in register file 1210. Thus, for example, the first input 1241 of the FMA 1240 would be {Y[1], Y[1], Y[3], Y[3], ...
Y[2n-1], Y[2n-1]}; the second input 1242 of the FMA 1240 would be {X[1], X[0], X[3], X[2], ... X[2n-1], X[2n-2]}; the result of the multiplication would be {X[1]*Y[1], X[0]*Y[1], X[3]*Y[3], X[2]*Y[3], ... X[2n-1]*Y[2n-1], X[2n-2]*Y[2n-1]}; and the FMA result stored in the destination register would be {X[0]*Y[0]-X[1]*Y[1], X[1]*Y[0]+X[0]*Y[1], X[2]*Y[2]-X[3]*Y[3], X[3]*Y[2]+X[2]*Y[3], ... X[2n-2]*Y[2n-2]-X[2n-1]*Y[2n-1], X[2n-1]*Y[2n-2]+X[2n-2]*Y[2n-1]}. Therefore, the real parts of the result are stored in the even elements of the destination register, and the imaginary parts of the result are stored in the odd elements of the destination register. A method according to an embodiment of the invention is shown in FIG. 13. The method may be implemented within the context of the processor architectures described herein, but is not limited to any particular processor architecture. At 1302, a first instruction (e.g., VCFMULPH) is fetched, having fields to specify an opcode, first and second source operands, and a destination operand. In an embodiment, the first and second source operand fields are to specify 128-bit, 256-bit, or 512-bit packed data registers storing sets of complex numbers as 16-bit packed data elements, where each even data element represents the real part of a complex number and each corresponding odd data element represents the corresponding imaginary part of the corresponding complex number. At 1304, the first instruction is decoded. In an embodiment, the instruction is decoded into a first micro-operation and a second micro-operation. At 1310, execution of the first micro-operation begins. Execution of the first micro-operation includes 1312, 1314, 1316, and 1318. At 1312, the first operand from the first source register is used as the input to the dup mux, and the second operand from the second source register is used as the input to the swap mux.
At 1314, the dup mux passes the first operand unchanged to the first input of the FMA; the swap mux copies the even elements of the second operand into the odd elements and passes the transformed second operand to the second input of the FMA; and a zero-valued vector is used for the third input of the FMA. At 1316, an FMA operation is performed by multiplying the vectors provided to the first and second inputs and adding the vector provided to the third input to the products. At 1318, the result of the FMA operation is stored in a temporary register. At 1320, execution of the second micro-operation begins. Execution of the second micro-operation includes 1322, 1324, 1326, and 1328. At 1322, the second operand is used as the input to the dup mux, and the first operand is used as the input to the swap mux. At 1324, the dup mux copies the odd elements of the second operand into the even elements and passes the transformed second operand to the first input of the FMA; the swap mux swaps the even and odd elements of the first operand and passes the transformed first operand to the second input of the FMA; and the result of the first micro-operation from the temporary register is used for the third input of the FMA. At 1326, an FMA operation is performed by multiplying the vectors provided to the first and second inputs, negating the even elements of the multiplication result, and adding the vector provided to the third input to the products. At 1328, the result of the FMA operation is stored in the destination register. Although the real and imaginary values described above are 16 bits in length, the underlying principles of the present invention may be implemented using data elements of any size. For example, the real and imaginary parts may each be 8 bits, 32 bits, or 64 bits and still comply with the underlying principles of the present invention.
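The flow of 1310 through 1328 can be sketched end to end in Python; as before, the plain floats and the helper name are illustrative assumptions, and the two FMA passes reproduce the direct complex product:

```python
def fma_micro_ops(x, y):
    """Model the two micro-operations on interleaved real/imag vectors."""
    n = len(x)
    # First micro-op (1312-1318): first FMA input is x unchanged; second
    # input copies the even elements of y into the odd positions; third
    # input is zero.  TEMP = {Xr*Yr, Xi*Yr, ...}
    y_even = [y[i & ~1] for i in range(n)]
    temp = [x[i] * y_even[i] for i in range(n)]
    # Second micro-op (1322-1328): first input copies the odd elements of
    # y into the even positions; second input swaps the even/odd pairs of
    # x; the even products are negated before being added to TEMP.
    y_odd = [y[i | 1] for i in range(n)]
    x_swap = [x[i ^ 1] for i in range(n)]
    prod = [y_odd[i] * x_swap[i] for i in range(n)]
    return [temp[i] - prod[i] if i % 2 == 0 else temp[i] + prod[i]
            for i in range(n)]

# (1+2i)*(3+4i) = -5+10i
print(fma_micro_ops([1.0, 2.0], [3.0, 4.0]))  # → [-5.0, 10.0]
```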
Various other method embodiments and variations on the method embodiment of FIG. 13 are possible within the scope of the present invention. As one example, a second instruction (e.g., VCFCMULPH) may be fetched at 1302, decoded at 1304, and executed with the negation of the even elements of the multiplication result omitted at 1326. As another example, the first and/or second source operand fields may specify 128-bit, 256-bit, or 512-bit memory locations storing sets of complex numbers as 16-bit packed data elements, where each even data element represents the real part of a complex number and each corresponding odd data element represents the corresponding imaginary part of the corresponding complex number. The operations in the flowchart may have been described with reference to exemplary embodiments of other figures. However, it should be understood that the operations in the flowchart may be performed by embodiments of the invention other than those discussed with reference to the other figures, and that the embodiments of the invention discussed with reference to the other figures may perform operations different from those discussed with reference to the flowchart. Furthermore, while the flowchart in the figures shows a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.). Accordingly, the present invention may be embodied in machine-executable instructions that may be used to cause a general-purpose or special-purpose processor to perform the operations.
Alternatively, the operations may be performed by specific hardware components that contain hardwired logic for performing the operations, or by any combination of programmed computer components and custom hardware components. As such, one or more parts of embodiments of the invention may be implemented using different combinations of software, firmware, and/or hardware. Embodiments may be implemented using an electronic device that stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical, or other form of propagated signals, such as carrier waves and infrared signals). Thus, an electronic device (e.g., a computer) may include hardware and software, such as a set of one or more processors coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data. For instance, an electronic device may include non-volatile memory containing the code, since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed); and while the electronic device is turned on, that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into the volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device.
A typical electronic device also includes a set of one or more physical network interfaces to establish network connections (to transmit and/or receive code and/or data using propagated signals) with other electronic devices. An embodiment of the invention is a processor including execution circuitry to compute, in response to a decoded instruction, a result of a complex multiplication of a first complex number and a second complex number. The computation includes a first operation to compute a first term of the real part of the result and a first term of the imaginary part of the result, and a second operation to compute a second term of the real part of the result and a second term of the imaginary part of the result. The processor also includes a decoder, a first source register, and a second source register. The decoder is to decode an instruction to generate the decoded instruction. The first source register is to provide the first complex number, and the second source register is to provide the second complex number. The processor may include a destination register in which to store the result. The first complex number may be one of a first set of complex numbers to be represented by a first vector to be stored in the first source register, the second complex number may be one of a second set of complex numbers to be represented by a second vector to be stored in the second source register, and the result may be a third vector to represent a third set of complex numbers. The first vector may include a first set of elements to represent the real parts of the first set of complex numbers and a second set of elements to represent the imaginary parts of the first set of complex numbers; the second vector may include a third set of elements to represent the real parts of the second set of complex numbers and a fourth set of elements to represent the imaginary parts of the second set of complex numbers; and the third vector may include a fifth set of elements to represent the real parts of the third set of complex numbers and a sixth set of elements to represent the imaginary parts of the third set of complex numbers. The first, third, and fifth sets of elements may be even elements, and the second, fourth, and sixth sets of elements may be odd elements. A first real part may be represented by a first even element of a first operand, a first imaginary part may be represented by a first odd element of the first operand, a second real part may be represented by a second even element of a second operand, a second imaginary part may be represented by a second odd element of the second operand, a third real part may be represented by a third even element of the result, and a third imaginary part may be represented by a third odd element of the result. The execution circuitry may include a first multiplexer to copy the second real part from the second even element of the second operand to the second odd element of a transformed second operand of the first operation. The execution circuitry may include a second multiplexer to copy the first real part from the first even element of the first operand to the first odd element of a transformed first operand of the second operation, and to copy the first imaginary part from the first odd element of the first operand to the first even element of the transformed first operand of the second operation; and the first multiplexer may copy the second imaginary part from the second odd element of the second operand to the second even element of a transformed second operand of the second operation. The execution circuitry may include a multiplier to, as part of the first operation, multiply the first even element of the first operand by the second even element of the transformed second operand of the first operation to compute the first term of the third real part, and multiply the first odd element of the first operand by the second even element of the transformed second operand of the first operation to compute the first term of the third imaginary part. The processor may include a temporary register in which to store the first term of the third real part and the first term of the third imaginary part. The multiplier may, as part of the second operation, multiply the first odd element of the transformed first operand of the second operation by the second odd element of the transformed second operand of the second operation to compute the second term of the third real part, and multiply the first even element of the transformed first operand of the second operation by the second odd element of the transformed second operand of the second operation to compute the second term of the third imaginary part. The execution circuitry may include a negation circuit to negate the second term of the third real part to generate a negated second term of the third real part.
The execution circuit may include an addition circuit for adding the first term of the third real part to the negated second term of the third real part to compute the third real part, and for adding the first term of the third imaginary part to the second term of the third imaginary part to compute the third imaginary part. The execution circuit may include a fused multiply-adder comprising the multiplication circuit and the addition circuit. The decoder may also decode a second instruction to generate a second decoded instruction, and the execution circuit may execute the second decoded instruction, where executing the second decoded instruction includes bypassing the negation circuit and adding the first term of the third real part to the un-negated second term of the third real part to compute the third real part.

An embodiment of the present invention is a system including a processor and a system memory. The system memory may provide the second complex number.

In an embodiment, a method may include decoding a first instruction to generate a first micro-operation and a second micro-operation, the first instruction specifying a first operand having a first real part and a first imaginary part and a second operand having a second real part and a second imaginary part; performing the first micro-operation to compute the first term of a third real part and the first term of a third imaginary part; performing the second micro-operation to compute the second term of the third real part and the second term of the third imaginary part, negate the second term of the third real part to generate a negated second term of the third real part, add the first term of the third real part to the negated second term of the third real part to compute the third real part, and add the first term of the third imaginary part to the second term of the third imaginary part to compute the third imaginary part; and storing the third real part and the third imaginary part in the destination register.

Performing the first micro-operation may include multiplying the first real part by the second real part to compute the first term of the third real part, and multiplying the first imaginary part by the second real part to compute the first term of the third imaginary part. Performing the second micro-operation may include multiplying the first imaginary part by the second imaginary part to compute the second term of the third real part, and multiplying the first real part by the second imaginary part to compute the second term of the third imaginary part.

In an embodiment, an apparatus may comprise means for performing any of the above methods. In an embodiment, a tangible machine-readable medium may store instructions that, when executed by a machine, cause the machine to perform any of the methods described above.

Although the invention has been described in terms of several embodiments, the invention is not limited to the described embodiments and may be practiced with various changes without departing from the spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. |
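The decomposition just described — two micro-operations whose partial products are combined through a negation circuit and an adder — can be sketched in Python. The function names (`uop1`, `uop2`, `complex_mul`) are illustrative, not from the source:

```python
# Sketch of the two-micro-operation complex multiply described above.
# uop1 multiplies both elements of operand 1 by the real part of operand 2;
# uop2 multiplies them by the imaginary part, negates the real-part term,
# and accumulates both results.

def uop1(ar, ai, br):
    t_real_1 = ar * br        # first term of the third real part
    t_imag_1 = ai * br        # first term of the third imaginary part
    return t_real_1, t_imag_1  # held in the temporary register

def uop2(ar, ai, bi, t_real_1, t_imag_1):
    t_real_2 = ai * bi        # second term of the third real part
    t_imag_2 = ar * bi        # second term of the third imaginary part
    real = t_real_1 + (-t_real_2)  # negation circuit, then addition circuit
    imag = t_imag_1 + t_imag_2
    return real, imag

def complex_mul(ar, ai, br, bi):
    t1r, t1i = uop1(ar, ai, br)
    return uop2(ar, ai, bi, t1r, t1i)
```

For example, `complex_mul(1.0, 2.0, 3.0, 4.0)` reproduces (1+2j)(3+4j) = -5+10j, matching the ordinary complex-product formula (ar·br - ai·bi, ai·br + ar·bi).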
A processor surrogate (320/520) is adapted for use in a processing node (S1) of a multiprocessor data processing system (300/500) having a plurality of processing nodes (P0, S1) coupled together and to a plurality of input/output devices (330, 340, 350/530, 540, 550, 560) using corresponding communication links. The processor surrogate (320/520) includes a first port (372, 374/620, 622) comprising a first set of integrated circuit terminals adapted to be coupled to a first external communication link (370/590) for coupling to one (P0) of the plurality of processing nodes (310, 320/510, 520), a second port (382, 384/630, 632) comprising a second set of integrated circuit terminals adapted to be coupled to a second external communication link (380/592) for coupling to one (350/550) of the plurality of input/output devices (330, 340, 350/530, 540, 550, 560), and an interconnection circuit (390, 392/608, 612, 614) coupled between the first port (372, 374/620, 622) and the second port (382, 384/630, 632). |
1. A processor surrogate (320/520) for a processing node (S1) of a multiprocessor data processing system (300/500), the multiprocessor data processing system (300/500) having a plurality of processing nodes (P0, S1) coupled together and to a plurality of input/output devices (330, 340, 350/530, 540, 550, 560) using corresponding communication links, the processor surrogate (320/520) comprising: a first port (372, 374/620, 622) comprising a first set of integrated circuit terminals adapted to be coupled to a first external communication link (370/590) for coupling to one (P0) of the plurality of processing nodes (P0, S1); a second port (382, 384/630, 632) comprising a second set of integrated circuit terminals adapted to be coupled to a second external communication link (380/592) for coupling to one (350/550) of the plurality of input/output devices (330, 340, 350/530, 540, 550, 560); and an interconnection circuit (390, 392/606, 608, 612, 614) coupled between the first port (372, 374/620, 622) and the second port (382, 384/630, 632).

2. The processor surrogate (320) of claim 1, wherein the interconnection circuit (390, 392) comprises a passive interconnection between the first port (372, 374) and the second port (382, 384).

3. The processor surrogate (520) of claim 1, wherein the interconnection circuit (606, 608, 612, 614) comprises an active interconnection between the first port (620, 622) and the second port (630, 632).

4. The processor surrogate (520) of claim 3, wherein the interconnection circuit (606, 608, 612, 614) further comprises: a first communication link controller (612) coupled to the first port (620, 622); a second communication link controller (614) coupled to the second port (630, 632); and a crossbar switch (608) having a first terminal coupled to the first communication link controller (612) and a second terminal coupled to the second communication link controller (614).

5. A processor surrogate (320) for a multiprocessor data processing system (300) having a first processing node (P0) including an actual processor (310), and a second processing node (S1) coupled to the first processing node (P0) and including the processor surrogate (320), the processor surrogate (320) comprising: an integrated circuit package having a first plurality of terminals forming a first port (372, 374) of the processor surrogate (320) and disposed at positions corresponding to the positions of a first link controller (212) of the actual processor (310), and a second plurality of terminals forming a second port (382, 384) of the processor surrogate (320) and disposed at positions corresponding to the positions of a second link controller (214) of the actual processor (310); and a plurality of electrical connections (390, 392) between the first plurality of terminals of the first port (372, 374) and corresponding ones of the second plurality of terminals of the second port (382, 384).

6. The processor surrogate (320) of claim 5, wherein the plurality of electrical connections (390, 392) comprise: a first set of internal connections (390) between input terminals (372) of the first port (372, 374) and corresponding output terminals (382) of the second port (382, 384); and a second set of internal connections (392) between input terminals (384) of the second port (382, 384) and corresponding output terminals (374) of the first port (372, 374).

7. The processor surrogate (320) of claim 5, wherein the first (212) and second (214) link controllers of the actual processor (310) are substantially compatible with the HyperTransport™ I/O Link Specification, Revision 1.05.

8. A multiprocessor data processing system (300/500), comprising: a first processing node (P0) including an actual processor (110); a second processing node (S1) including a processor surrogate (320/520) having a first port (372, 374/620, 622) coupled to the first processing node (P0), a second port (382, 384/630, 632), and an interconnection circuit (390, 392/606, 608, 612, 614) coupled between the first port (372, 374/620, 622) and the second port (382, 384/630, 632); and an input/output device (350/550) coupled to the second port (382, 384/630, 632) of the second processing node (S1) and accessible to the actual processor (110) via the processor surrogate (320/520).

9. The multiprocessor data processing system (300) of claim 8, wherein the interconnection circuit (390, 392) comprises a passive interconnection between the first port (372, 374) and the second port (382, 384).

10. The multiprocessor data processing system (500) of claim 8, wherein the interconnection circuit (606, 608, 612, 614) comprises an active interconnection between the first port (620, 622) and the second port (630, 632). |
Processor surrogate for a multiprocessor system, and multiprocessor system using the processor surrogate

Technical Field

The present invention relates to data processing systems, and more particularly to multiprocessor systems.

Background Art

The development of digital computers has continually trended toward higher performance. Recent advances in integrated circuit (IC) manufacturing technology have produced smaller and faster ICs, so that microprocessor-based computer systems now offer higher performance than previous-generation supercomputers. Microprocessor performance is determined by many factors, including clock speed and data bus width.

IC manufacturers have typically been able to offer higher-speed revisions of certain microprocessors during their production lifetimes. Continuing improvements in microprocessor speed have thus enabled users to upgrade their computer systems with newer, higher-speed microprocessors: the older, slower microprocessor is removed from its socket and the new, higher-speed microprocessor is inserted in its place. An example of such upgradability is a microprocessor that communicates with memory devices at a fixed speed but whose internal clock speed can be increased to a higher frequency, as disclosed in U.S. Patent No. 5,828,869 to Johnson et al.

This type of upgrade has allowed single-processor systems to improve their performance significantly. However, recent computer architectures have become more complex than single-processor systems. For example, some computer architectures now use multiple processors and non-uniform memory access (NUMA). In such a NUMA system, two or more microprocessors are connected in a ring or a network, and each microprocessor has associated memory and possibly one or more associated input/output devices.
For users, it would be desirable to purchase a low-cost NUMA system initially and to upgrade the system later to improve performance.

Accordingly, it is desirable to provide new means for improving performance in multiprocessor computer systems. This and other desired features and characteristics of the present invention will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.

Summary of the Invention

A processor surrogate is adapted for use in a processing node of a multiprocessor data processing system having a plurality of processing nodes coupled together, and to a plurality of input/output devices, using corresponding communication links. The processor surrogate includes a first port, a second port, and an interconnection circuit. The first port includes a first set of integrated circuit terminals adapted to be coupled to a first external communication link for coupling to one of the plurality of processing nodes. The second port includes a second set of integrated circuit terminals adapted to be coupled to a second external communication link for coupling to one of the plurality of input/output devices. The interconnection circuit is coupled between the first port and the second port.

In another form, a multiprocessor data processing system includes first and second processing nodes and an input/output device. The first processing node includes an actual processor. The second processing node includes a processor surrogate. The processor surrogate has a first port coupled to the first processing node, a second port, and an interconnection circuit coupled between the first port and the second port.
The input/output device is coupled to the second port of the second processing node and is accessible to the actual processor via the processor surrogate.

Brief Description of the Drawings

The present invention is hereinafter described in conjunction with the following drawings, in which like reference numbers denote like elements:

FIG. 1 shows a block diagram of a multiprocessor computer system useful in understanding the present invention;

FIG. 2 shows a block diagram of a portion of the multiprocessor computer system of FIG. 1, including one of the processors and its associated memory;

FIG. 3 shows a block diagram of a multiprocessor computer system using a processor surrogate according to the present invention;

FIG. 4 shows a block diagram of the processor surrogate of FIG. 3;

FIG. 5 shows a block diagram of a multiprocessor computer system using a processor surrogate according to another aspect of the present invention;

FIG. 6 shows a block diagram of the processor surrogate of FIG. 5;

FIG. 7 shows a block diagram of a multiprocessor computer system using the processor surrogate of FIG. 6 according to yet another aspect of the present invention;

FIG. 8 shows a top view of an integrated circuit package that can be used for the actual processor of FIG. 2 and the processor surrogates of FIGS. 4 and 6;

FIG. 9 shows a side view of the integrated circuit package of FIG. 8; and

FIG. 10 shows a bottom view of the integrated circuit package of FIG. 8.

Detailed Description

The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary, or the following detailed description.

FIG. 1 shows a block diagram of a multiprocessor computer system 100 useful in understanding the present invention.
The computer system 100 includes two processor nodes represented by circles: a first processor node labeled "P0" and a second processor node labeled "P1," connected together via a communication link 116. Nodes P0 and P1 are implemented with microprocessors 110 and 120, respectively. The system 100 also includes a first input/output (I/O) device 130 labeled "I/O A," a second I/O device 140 labeled "I/O B," a third I/O device 150 labeled "I/O C," a fourth I/O device 160 labeled "I/O D," a first dynamic random access memory (DRAM) 170 labeled "DRAM 0," and a second DRAM 180 labeled "DRAM 1." The processor 110 is a single-chip microprocessor that communicates with the I/O devices 130 and 140 via communication links 112 and 114, respectively, and with the processor 120 via link 116. The processor 110 also has a dedicated bus for performing memory accesses with local DRAM 170. Similarly, the processor 120 communicates with the I/O devices 150 and 160 via corresponding links, and has a dedicated bus for connection to local DRAM 180. The I/O devices 130, 140, 150, and 160 may include devices such as a graphics processor, an Ethernet controller, a bridge to another bus (for example, a bus conforming to the Peripheral Component Interconnect (PCI) specification of the PCI Special Interest Group), and any of a variety of other I/O devices.

The processors 110 and 120 use link controllers to communicate with their respective I/O devices. The link controllers comply with the HyperTransport™ I/O Link Specification, Revision 1.05, published in 2003 by the HyperTransport Technology Consortium, and can achieve a throughput of 3.2 GB/s when using a 1600 MHz data rate. HyperTransport technology is a packet-based link implemented on two independent sets of unidirectional wires. Thus, for example, links 112, 114, and 116 each include output connections and input connections.
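The 3.2 GB/s figure quoted above follows directly from the link geometry described later in this document: 16 differential command/address/data pairs form a 16-bit-wide path, so at a 1600 MHz data rate each transfer moves two bytes. A back-of-envelope check, assuming that 16-bit width:

```python
# Back-of-envelope check of the HyperTransport throughput figure.
# Assumes a 16-bit-wide link (16 differential CAD pairs, i.e. 32 signal
# pins) transferring at a 1600 MHz data rate.
link_width_bits = 16
data_rate_mt_s = 1600                      # mega-transfers per second
bytes_per_transfer = link_width_bits // 8  # 2 bytes per transfer
throughput_gb_s = data_rate_mt_s * bytes_per_transfer / 1000
```

With these numbers, `throughput_gb_s` comes out to 3.2, matching the specification's quoted rate.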
Each HyperTransport link is nominally point-to-point and connects two devices. Chains of HyperTransport links can also be used as I/O channels, connecting I/O devices and bridges to a host system. The HyperTransport link is designed to deliver a high-performance, scalable interconnect between the central processing unit (CPU), memory, and I/O devices. It uses low-swing, on-die differential signaling to achieve very high data rates, and uses scalable frequency and data width to achieve scalable bandwidth.

The system 100 includes memory associated with each processor node and distributed among the nodes. The system 100 adopts a cache-coherent non-uniform memory access (CC-NUMA) architecture. The CC-NUMA architecture is non-uniform in that all the memory in the system is visible to each processor, but access time depends on the physical distance between the processor and the memory. Thus, the processor 110 can access DRAM 170 quickly, whereas an access by the processor 110 to DRAM 180 must wait for the memory access request to travel over link 116. The link 116 between the processors 110 and 120 uses a special form of HyperTransport known as coherent HyperTransport.

FIG. 2 shows a block diagram of a portion 200 of the multiprocessor computer system 100 of FIG. 1, including the processor 110 and DRAM 170. The processor 110 is a single-chip microprocessor and generally includes a central processing unit (CPU) 202, a memory controller 206, a crossbar switch 208 labeled "XBAR," and three link controllers 212, 214, and 216, each labeled "HT" for HyperTransport. The CPU 202 is a processor adapted to execute instructions of the so-called x86 instruction set.
The x86 instruction set is based on the instruction set of the 8086 microprocessor first manufactured by Intel Corporation of Santa Clara, California. However, the CPU 202 includes many sophisticated functions for high-performance execution of x86 programs, including pipelining and superscalar design. The CPU 202 includes at least one cache 204 for storage of frequently used data. For example, the CPU 202 may include two first-level (L1) caches, one for instructions and the other for data, and a second-level (L2) cache shared by the instruction and data streams.

The memory controller 206 is the mechanism for data transfer between the processor 110 and the DRAM 170. The memory controller 206 offloads the task of initiating and terminating memory accesses from the CPU 202, and includes internal queues to allow efficient use of the external bus to the DRAM 170. In other embodiments, the DRAM 170 could be replaced by a lower-level memory system including one or more additional caches and main memory, by static RAM, by nonvolatile memory, and so on.

XBAR 208 is a switching/multiplexing circuit designed to couple together the buses internal to the processor 110.

Link controllers 212, 214, and 216 are coupled to external links 112, 114, and 116, respectively. Links 112, 114, and 116 include output channels 220, 230, and 240 and input channels 222, 232, and 242, respectively. Each of the link controllers 212, 214, and 216 complies with the HyperTransport™ I/O Link Specification, Revision 1.05, but additionally supports the special coherent form of HyperTransport able to link two processor nodes.

Considering FIG. 1 and FIG. 2 together, it can be seen how the processor 120 accesses the DRAM 170. The processor 120's own memory controller, corresponding to memory controller 206, receives a memory access request from its CPU.
After recognizing that the access is to memory present on another node, one of its link controllers sends the memory access request to the processor 110 over coherent link 116. The link controller 216 receives the request packet and routes it to the memory controller 206 via XBAR 208. The memory controller 206 then checks its internal directory to determine whether the requested memory element is present in the cache 204. If it is not, the memory controller 206 reads the DRAM 170 and provides the requested data element back to the processor 120 through XBAR 208 and link controller 216 over coherent link 116.

Although a socket-compatible but faster processor may be used to upgrade the system 100, a more flexible upgrade capability would be desirable. Such a capability is shown in FIG. 3, which is a block diagram of a multiprocessor computer system 300 using a processor surrogate 320 according to the present invention. As used herein, "multiprocessor" means having more than one processing node, even if only one processing node includes an actual CPU. The system 300 is similar to the system 100, except that node P1 has been replaced by a node labeled "S1," which has a processor surrogate 320 without a CPU of its own. A "processor surrogate," as used herein, is a device that is inserted into the socket of node S1 in place of an actual processor. By substituting the processor surrogate 320 for an actual processor, an additional I/O device 350 can be used in the system 300 without the expense of another actual microprocessor with its own CPU. The system 300 is thus essentially a single-processor system that can easily be upgraded to a dual-processor system. The system 300 is therefore a low-cost system with an upgrade path: an actual processor such as that shown in FIG. 2 can later be inserted into the socket occupied by the processor surrogate 320 to significantly upgrade the performance of the computer system 300.

As will be described further below, there are generally two types of processor surrogates: active and passive. Both types are socket-compatible with the actual microprocessor and substitute for the operation of the actual microprocessor, but they differ in the type of interconnection circuit they use. FIG. 4 shows a block diagram of the processor surrogate 320 of FIG. 3. The processor surrogate 320 is a passive surrogate and includes a first set of wires 390 connecting the input signals of HyperTransport link 370 to the output signals of HyperTransport link 380, and a second set of wires 392 connecting the input signals of HyperTransport link 380 to the output signals of HyperTransport link 370. The processor surrogate 320 includes integrated circuit terminals corresponding to the terminals of two of the link controllers of an actual processor such as the microprocessor of FIG. 2.

Upon power-up, the processor 310 detects whether each HyperTransport link is coherent or non-coherent, and negotiates the transfer speed on the link. Thus, the link controller in the processor 310 connected to link 370 detects, from the communication with the I/O controller 350 through the processor surrogate 320, that the link is non-coherent. If, however, the processor surrogate is later replaced by an actual processor, the link controller in the processor 310 detects the presence of an active node and configures the link as a coherent link.

The processor surrogate 320 has the same "footprint" as the processor 110 of FIGS. 1 and 2; that is, it can be inserted into a socket that would accommodate an actual processor such as the processor 110. The processor surrogate 320 therefore has the same integrated circuit package dimensions as the actual processor.
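The remote-access path described above (requesting CPU → coherent link → link controller → XBAR → memory controller → local DRAM) can be modeled as a toy two-node CC-NUMA system. All class and method names here are illustrative, not from the source:

```python
# Toy model of the CC-NUMA remote read described in the text: a request
# that misses the local node's address range is forwarded over the
# coherent link to the peer node, whose crossbar routes it to that
# node's memory controller and local DRAM.

class Node:
    def __init__(self, name, mem_base, mem_size):
        self.name = name
        self.mem = {}                                  # sparse model of local DRAM
        self.mem_range = range(mem_base, mem_base + mem_size)
        self.peer = None                               # coherent-link neighbour

    def read(self, addr):
        if addr in self.mem_range:
            # Local access: handled by this node's memory controller.
            return self.mem.get(addr, 0)
        # Remote access: the link controller forwards the request; the
        # peer's XBAR routes it to the peer's memory controller.
        return self.peer.read(addr)

# Two nodes, each owning a 4 KiB slice of the address space.
p0 = Node("P0", 0x0000, 0x1000)
p1 = Node("P1", 0x1000, 0x1000)
p0.peer, p1.peer = p1, p0
p1.mem[0x1234] = 42                                    # data resident in DRAM 1
```

Here `p0.read(0x1234)` returns 42 after one link hop, while `p0.read(0x0010)` is satisfied locally — illustrating why access latency in a NUMA system depends on where the memory physically resides.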
However, the integrated circuit package contains only the wires 390 and 392. In particular, one package type suitable for the processor 110 is known as a ceramic micro pin grid array package. To fit in a socket intended for a micro pin grid array processor, the processor surrogate 320 also uses a similar micro pin grid array package. However, the pins used for most signals are not connected and are therefore "dummy" pins; pins 372, 374, 382, and 384 are used to provide the appropriate interconnections. The ceramic package type offers the opportunity to use multiple signal planes to form the interconnections within the ceramic material, reducing the parasitic losses that would otherwise occur through the use of bond wires if a leadframe-type package were used. Power and ground pins can optionally be connected to provide an appropriate ground plane for shielding against radio-frequency (RF) radiation and interference. Note that the processor surrogate 320 can be redesigned to match the footprint of any other package type used for the actual processor. Moreover, if the electrical and mechanical properties are adequate, the ceramic package can be replaced with a cheaper organic package.

More particularly, the processor surrogate 320 can be housed in a ceramic micro pin grid array package formed of an array of 31 columns by 31 rows, having 961 possible pin sites. Using the HyperTransport link, input ports 372 and 384 each include 38 pins: 4 clock input pins, 2 control input pins, and 32 multiplexed command/address/data input pins, in which each signal is conducted differentially over a pair of signal pins. The output ports 374 and 382 each include 38 corresponding pins: 4 clock output pins, 2 control output pins, and 32 multiplexed command/address/data output pins.
To manufacture the processor surrogate 320, the manufacturer connects the control input pins of the first link (link 0) to the control output pins of the second link (link 1), the clock input pins of link 0 to the corresponding clock output pins of link 1, and the multiplexed command/address/data input pins of link 0 to the corresponding multiplexed command/address/data output pins of link 1, forming internal interconnections 390 within the package. Similar connections 392 are made to connect the inputs of link 1 to the outputs of link 0. Note that a feature of HyperTransport is the ability to scale the number of command/address/data pins from 2 to 32 pairs; other embodiments may support numbers of command/address/data pins different from the 16 differential pairs described above.

FIG. 5 shows a block diagram of a multiprocessor computer system 500 using a processor surrogate according to another aspect of the present invention. The system 500 includes an actual processor 510 at node P0 and a processor surrogate 520 of the active form at node S1. Nodes P0 and S1 are connected together using coherent HyperTransport link 590. The system 500 includes four I/O devices: I/O device 530 labeled "I/O A," I/O device 540 labeled "I/O B," I/O device 550 labeled "I/O C," and I/O device 560 labeled "I/O D." Separate non-coherent HyperTransport links connect I/O devices 530 and 540 to the processor 510, and separate non-coherent HyperTransport links 592 and 594 connect I/O devices 550 and 560 to the processor surrogate 520. The system 500 also includes a first DRAM 570 labeled "DRAM 0" and a second DRAM 580 labeled "DRAM 1," connected to nodes P0 and S1, respectively.

The system 500 uses the active processor surrogate 520 to make more resources available to node P0 without requiring a second actual processor with its own CPU and cache.
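The passive surrogate's internal wiring amounts to a fixed pin-name permutation: every link-0 input is tied to the same-numbered link-1 output, and vice versa. A sketch that generates that crosswire table, using signal names modeled on (but not copied from) the pin tables in the description:

```python
# Generate the passive-surrogate crosswire table described above: each
# input pin of one link is wired to the corresponding output pin of the
# other link. One entry per differential pair; pin names (CLKIN/CTLIN/
# CADIN and their OUT counterparts) are illustrative.

def crosswire(cad_pairs=16, clk_pairs=2, ctl_pairs=1):
    signals = ([f"CLKIN_{i}" for i in range(clk_pairs)] +
               [f"CTLIN_{i}" for i in range(ctl_pairs)] +
               [f"CADIN_{i}" for i in range(cad_pairs)])
    wires = {}
    for link_in, link_out in ((0, 1), (1, 0)):
        for sig in signals:
            src = f"L{link_in}_{sig}"                       # input pin
            dst = f"L{link_out}_{sig.replace('IN', 'OUT')}"  # crossed output pin
            wires[src] = dst
    return wires

table = crosswire()
```

For the 16-pair CAD configuration this yields 38 connections (19 pairs per direction), e.g. `table["L0_CADIN_0"] == "L1_CADOUT_0"` — the same crossing the description later illustrates with pins L0_CADIN_L[0] and L1_CADOUT_L[0].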
As will be explained further below, the processor surrogate 520 provides interconnection by replacing simple wiring with active circuitry, allowing the processor 510 to access the two I/O devices 550 and 560 and the additional DRAM 580 without an additional CPU. Because the processor surrogate 520 lacks a CPU and cache, it is cheaper than an actual processor, yet it provides an upgrade path for improving performance in the future.

The construction of the active processor surrogate may be better understood with reference to FIG. 6, which shows a block diagram of a portion of the system 500 of FIG. 5, including the processor surrogate 520 and DRAM 580. As shown in FIG. 6, the processor surrogate 520 includes a memory controller 606, a crossbar switch 608, and HyperTransport link controllers 612, 614, and 616 connected to links 590, 592, and 594, respectively. As with the processor 110 of FIG. 2, HyperTransport link controllers 612, 614, and 616 are connected to corresponding ports, including output connection groups 620, 630, and 640 and input connection groups 622, 632, and 642, respectively. HyperTransport link controllers 612, 614, and 616 are also connected to the crossbar switch 608. The memory controller 606 is connected to the crossbar switch 608 and to the external DRAM 580.

The memory controller, crossbar switch, and HyperTransport link controllers of the actual processor 110 of FIG. 2 and of the processor surrogate 520 are functionally identical. The crossbar switches 208 and 608 each include a feature that automatically detects whether a CPU is present. The design is therefore modular, and the integrated circuit for the processor surrogate 520 can be implemented simply by deleting the CPU from the netlist and inputting the amended netlist to automatic place-and-route CAD software.
Because the CPU consumes the predominant share of the integrated circuit area of the processor 110, the integrated circuit for the processor surrogate 520 incurs relatively little cost. Alternatively, an actual processor with a defective CPU may be used to form the active processor surrogate.

Note that in order to use the active processor surrogate, link 590 uses the coherent form of HyperTransport. Like the memory controller, the link controllers in the processor surrogate 520 are modular and identical to those used in actual processors. However, upon boot-up, the link controller in the processor surrogate 520 connected to the processor 510 via link 590 detects an active device at the other end and configures the link as the coherent form of HyperTransport; the protocol thus accommodates a surrogate that has attached memory and a memory controller.

FIG. 7 shows a block diagram of a multiprocessor computer system 700 using the processor surrogate of FIG. 6 according to yet another aspect of the present invention. The system 700 illustrates the flexibility of active-form processor surrogates in constructing complex system topologies with considerable upgrade capability. The system 700 includes four processor nodes labeled "P0," "S1," "S2," and "S3," implemented with an actual processor 710 and processor surrogates 720, 730, and 740, respectively. The system 700 uses an actual processor such as the processor 110 of FIG. 2 for node P0, and active-form processor surrogates such as the processor surrogate 520 shown in FIG. 6 for nodes S1, S2, and S3.
Coherent HyperTransport links connect the processor nodes in a ring, such that node P0 is connected to adjacent nodes S1 and S3, node S1 is connected to adjacent nodes P0 and S2, node S2 is connected to adjacent nodes S1 and S3, and node S3 is connected to adjacent nodes S2 and P0.

The system 700 provides access to the DRAM and I/O devices connected to the three active-form processor surrogates without requiring additional CPUs; only node P0 need be an actual processor. The system 700 also has a significant upgrade path, since the system can be expanded to four processors.

In other systems, other processor node topologies may be used, and all such multiprocessor topologies may have at least one actual processor and one or more processor surrogates to provide a flexible upgrade path. It should also be noted that although the processor 110 of FIG. 2 supports communication over three HyperTransport links using three corresponding link controllers, in other embodiments the actual processor may include a different number of link controllers, and the possible uses of processor surrogates vary accordingly. For example, if the actual processor 110 included four link controllers, then in a two-node multiprocessor system a passive-form processor surrogate could allow the processor 110 to be connected to two additional I/O devices attached to the processor surrogate. Four link controllers also allow more complex network topologies, as is known in the art.

Note that the processor surrogates and multiprocessor systems have been described here in the context of the HyperTransport NUMA architecture. In other embodiments, other inter-processor communication protocols can be used. Note also that the inter-processor communication protocol need not use coherent links; for example, coherency can be managed in software over non-coherent links between processors.
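The four-node ring of FIG. 7 (one actual processor, three surrogates) can be captured as an adjacency map, which also makes the reachability claim above concrete: every surrogate, and thus every device behind it, is at most two link hops from P0. A minimal sketch:

```python
# Ring topology of system 700: P0-S1-S2-S3-P0, with a coherent link
# between each pair of adjacent nodes.

nodes = ["P0", "S1", "S2", "S3"]

# Each node's two ring neighbours.
ring = {n: [nodes[(i - 1) % len(nodes)], nodes[(i + 1) % len(nodes)]]
        for i, n in enumerate(nodes)}

def hops(src, dst):
    # Shortest link distance around the four-node ring.
    d = abs(nodes.index(src) - nodes.index(dst))
    return min(d, len(nodes) - d)
```

With this model, `ring["P0"]` is `["S3", "S1"]`, matching the adjacency stated in the text, and the farthest node from P0 (node S2) is two hops away.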
Moreover, the disclosed microprocessor can execute instruction sets other than the x86 instruction set. FIG. 8 shows a top view 800 of an integrated circuit package that can be used for the actual processor 110 of FIG. 2, the processor substitute 320 of FIG. 4, and the processor substitute 520 of FIG. 6. The illustrated integrated circuit package is a micro pin grid array package. The pin grid array package is a package type that is particularly suitable for substitution, because it mates with a corresponding socket, and the processor substitute can easily be removed from the socket and replaced with an actual processor. From the top view 800, it can be seen that the micro pin grid array package has a base 802 and a cover 804 in the central portion of the area defined by the base 802. The base 802 has a beveled corner 806 in the upper right, marked "A1", which will be described in more detail below. FIG. 9 shows a side view 900 of the integrated circuit package of FIG. 8. From the side view 900, the base 902 and the cover 904 can be seen. Below the base 902 are a plurality of integrated circuit terminals formed as a pin array 906 extending downward from the bottom surface of the base 902. FIG. 10 shows a bottom view 1000 of the integrated circuit package of FIG. 8. From the bottom view 1000, the A1 corner 1002 and the pin array, extending toward the viewer and represented by solid circles, can be seen. The package has a possible pin array formed by columns 1004 and rows 1006. Columns 1004 include 31 columns and rows 1006 include 31 rows, allowing a possible 961-pin array. However, there are no pins in the corners and in several areas of the array, making the total number of pins equal to 940.
Columns are designated from top to bottom in the order A, B, C ... H, J ... M, N, P, R, S ... V, W, Y, AA, AB ... AH, AJ, AK, AL, and rows are designated from 1 to 31 from right to left. Therefore, in a particular example, the standard processor has a pin assignment that includes appropriate pins for the link controllers, as shown in Table I (link controller 0) and Table II (link controller 1):

Table I

Table II

Pin L0_CADIN_H[0] represents the high-potential (more positive) pin of the differential pair for control/address/data input pin 0 of link controller 0, and pin L1_CLKOUT_L[1] represents the low-potential (more negative) pin of the differential pair for clock output pin 1 of link controller 1, and so on. In order to manufacture the passive-form processor substitute shown in FIG. 4, the manufacturer internally connects the input ends of link controller 0 to the output ends of the corresponding link controller 1, and connects the input ends of link controller 1 to the output ends of the corresponding link controller 0. Therefore, using the example of the micro pin grid array of FIGS. 8 to 10 and Tables I and II, pin L0_CADIN_L[0] (assigned to pin position G2) would be connected to pin L1_CADOUT_L[0] (assigned to pin position E14), pin L1_CADIN_H[15] (assigned to pin position E14) would be connected to pin L0_CADOUT_H[15] (assigned to pin position V4), and so on. Although at least one exemplary embodiment has been presented in the foregoing detailed description, it should be understood that a vast number of variations exist. It should also be understood that the exemplary embodiment or embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the invention in any way.
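The loopback rule described above (each input of one link controller tied internally to the corresponding output of the other) can be sketched at the pin-name level. The `loopback_partner` helper below is purely illustrative and assumes the `L<n>_<signal>` naming convention of Tables I and II; it is not a netlist of the disclosed package:

```python
# Sketch of the passive substitute's internal loopback wiring rule:
# link controller 0 inputs connect to link controller 1 outputs, and
# vice versa, with the signal direction flipped (IN <-> OUT).
def loopback_partner(pin):
    ctrl = int(pin[1])        # link controller number from "L0_"/"L1_" prefix
    other = 1 - ctrl
    if "IN" in pin:
        # e.g. L0_CADIN_L[0] -> L1_CADOUT_L[0]
        return pin.replace(f"L{ctrl}", f"L{other}", 1).replace("IN", "OUT")
    # e.g. L1_CLKOUT_L[1] -> L0_CLKIN_L[1]
    return pin.replace(f"L{ctrl}", f"L{other}", 1).replace("OUT", "IN")
```

Applied to the examples in the text, the rule maps L0_CADIN_L[0] to L1_CADOUT_L[0] and L1_CADIN_H[15] to L0_CADOUT_H[15].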
On the contrary, the foregoing detailed description provides those skilled in the art with a convenient road map for implementing the exemplary embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the invention as set forth in the appended claims and the legal equivalents thereof.
Methods and apparatus for performing machine learning tasks, and in particular, a neural-network-processing architecture and circuits for improved handling of partial accumulation results in weight-stationary operations, such as operations occurring in compute-in-memory (CIM) processing elements (PEs). One example PE circuit for machine learning generally includes a first accumulator circuit, a flip-flop array having an input coupled to an output of the first accumulator circuit, a write register, and a first multiplexer having a first input coupled to an output of the write register, having a second input coupled to an output of the flip-flop array, and having an output coupled to a first input of the first accumulator circuit.
CLAIMS

What is claimed is:

1. A processing element (PE) circuit comprising: a first accumulator circuit; a flip-flop array having an input coupled to an output of the first accumulator circuit; a write register; and a first multiplexer having a first input coupled to an output of the write register, having a second input coupled to an output of the flip-flop array, and having an output coupled to a first input of the first accumulator circuit.

2. The PE circuit of claim 1, further comprising a read register having an input coupled to the output of the flip-flop array.

3. The PE circuit of claim 2, further comprising a write bus coupled to an output of the read register.

4. The PE circuit of claim 3, further comprising a read bus coupled to an input of the write register.

5. A neural network circuit comprising a plurality of PE circuits, wherein at least one of the plurality of PE circuits comprises the PE circuit of claim 4, the neural network circuit further comprising: a tightly coupled memory coupled to the write bus and to the read bus; and a global memory coupled to the read bus, wherein another one of the plurality of PE circuits has an output coupled to a second input of the first accumulator circuit.

6. The neural network circuit of claim 5, wherein the other one of the plurality of PE circuits does not include a write register.

7. The PE circuit of claim 1, further comprising a read bus coupled to an input of the write register, wherein the read bus is configured to couple to at least one of a tightly coupled memory or a global memory, external to the PE circuit.

8. The PE circuit of claim 1, further comprising: an adder circuit; and an accumulator-and-shifter circuit having an input coupled to an output of the adder circuit and having an output coupled to a second input of the first accumulator circuit.

9.
The PE circuit of claim 8, further comprising: a second accumulator circuit; and a second multiplexer having a first input coupled to an output of the second accumulator circuit and having an output coupled to the first input of the first accumulator circuit.

10. The PE circuit of claim 1, wherein the PE circuit is a digital compute-in-memory (DCIM) PE circuit and wherein the PE circuit further comprises: a DCIM array; a bit-column adder tree circuit coupled to the DCIM array; and a weight-shift adder tree circuit coupled to the bit-column adder tree circuit.

11. The PE circuit of claim 10, wherein the DCIM array comprises a plurality of compute-in-memory cells and wherein at least one of the compute-in-memory cells comprises an eight-transistor (8T) static random-access memory (SRAM) cell.

12. A neural network circuit comprising: a first set of cascaded processing element (PE) circuits, wherein an output of a first PE circuit in the first set is coupled to an input of a second PE circuit in the first set and wherein each PE circuit in the first set of cascaded PE circuits comprises: a multiply-and-accumulate (MAC) circuit; a local accumulator circuit having an input coupled to an output of the MAC circuit; and
a set of flip-flops having an input coupled to an output of the local accumulator circuit; and a first global accumulator circuit having an input coupled to an output of the first set of cascaded PE circuits.

13. The neural network circuit of claim 12, wherein each PE circuit in the first set of cascaded PE circuits is configured to concurrently perform a MAC operation with the MAC circuit and a shift operation with the set of flip-flops to shift a value from the PE circuit to a next PE circuit in the first set of cascaded PE circuits or to the first global accumulator circuit.

14. The neural network circuit of claim 12, further comprising a memory, wherein: the first global accumulator circuit is configured to write partial sums to, and read the partial sums from, the memory; and the first set of cascaded PE circuits is not configured to write the partial sums to, or read the partial sums from, the memory.

15. The neural network circuit of claim 12, wherein the first global accumulator circuit comprises: a first accumulator; a flip-flop array having an input coupled to an output of the first accumulator; a write register; and a first multiplexer having a first input coupled to an output of the write register, having a second input coupled to an output of the flip-flop array, and having an output coupled to a first input of the first accumulator.

16. The neural network circuit of claim 15, wherein the first global accumulator circuit further comprises a read register having an input coupled to the output of the flip-flop array.

17. The neural network circuit of claim 16, further comprising a tightly coupled memory, wherein the first global accumulator circuit further comprises:
a write bus coupled between an output of the read register and the tightly coupled memory; and a read bus coupled between the tightly coupled memory and an input of the write register.

18. The neural network circuit of claim 17, further comprising a global memory coupled to the read bus of the first global accumulator circuit.

19. The neural network circuit of claim 12, wherein the first set of cascaded PE circuits is configured such that weights are loaded in parallel into the first set of cascaded PE circuits.

20. The neural network circuit of claim 12, wherein the first set of cascaded PE circuits comprises a number of cascaded PE circuits, such that the first global accumulator circuit is configured to receive a partial sum from the first PE circuit through all the PE circuits in the first set after a number of activation-input-bit cycles has occurred that matches the number of cascaded PE circuits.

21. The neural network circuit of claim 12, wherein: the first global accumulator circuit is configured to receive a partial sum from the first PE circuit through all the PE circuits in the first set after a number of activation-input-bit cycles has occurred; and a number of cascaded PE circuits in the first set is greater than or equal to the number of activation-input-bit cycles.

22. The neural network circuit of claim 12, wherein each PE circuit in the first set of cascaded PE circuits is a digital compute-in-memory (DCIM) PE circuit, wherein the MAC circuit in each PE circuit comprises a DCIM array, wherein the DCIM array comprises a plurality of compute-in-memory cells, and wherein at least one of the compute-in-memory cells comprises an eight-transistor (8T) static random-access memory (SRAM) cell.

23. The neural network circuit of claim 12, further comprising:
a second set of cascaded PE circuits, wherein an output of a first PE circuit in the second set is coupled to an input of a second PE circuit in the second set and wherein each PE circuit in the second set of cascaded PE circuits comprises: a multiply-and-accumulate (MAC) circuit; a local accumulator circuit having an input coupled to an output of the MAC circuit; and a set of flip-flops having an input coupled to an output of the local accumulator circuit; a second global accumulator circuit having an input coupled to an output of the second set of cascaded PE circuits; a first copy-flop having an input coupled to an output of the first global accumulator circuit; a second copy-flop having a first input coupled to an output of the second global accumulator circuit and having a second input coupled to an output of the first copy-flop; and a super global accumulator circuit having an input coupled to an output of the second copy-flop.

24. A method of neural network processing, comprising: receiving, at a first input of a multiplexer, first data from a write register; receiving, at a second input of the multiplexer, second data from a flip-flop array; receiving, at an accumulator circuit, third data from a processing element (PE) circuit; selecting, with the multiplexer, data to output to the accumulator circuit between the first data and the second data; and accumulating, with the accumulator circuit, the selected output data from the multiplexer and the third data received from the PE circuit to generate accumulated data.

25. The method of claim 24, further comprising: outputting the accumulated data to the flip-flop array; shifting, with the flip-flop array, the accumulated data to a read register; and
writing the accumulated data from the read register to a tightly coupled memory (TCM) via a write bus.

26. The method of claim 24, further comprising: outputting the accumulated data to the flip-flop array; shifting, with the flip-flop array, the accumulated data to a read register; processing the accumulated data from the read register with digital post-processing logic; and writing the processed, accumulated data to a tightly coupled memory (TCM) via a write bus coupled between the digital post-processing logic and the TCM.

27. A method of neural network processing, comprising: performing a multiply-and-accumulate (MAC) operation in each processing element (PE) circuit in a set of cascaded PE circuits, wherein an output of a first PE circuit in the set of cascaded PE circuits is coupled to an input of a second PE circuit in the set of cascaded PE circuits and wherein each PE circuit in the set of cascaded PE circuits comprises: a MAC circuit; a local accumulator circuit having an input coupled to an output of the MAC circuit; and a set of flip-flops having an input coupled to an output of the local accumulator circuit; performing a shifting operation with the set of flip-flops in each PE circuit to shift a value from the PE circuit to a next PE circuit in the set of cascaded PE circuits or to a global accumulator circuit, wherein in each PE circuit, the shifting operation is performed concurrently with the performance of the MAC operation; and accumulating, with the global accumulator circuit, the shifted values from a last PE circuit in the set of cascaded PE circuits to generate accumulated data.

28. The method of claim 27, further comprising loading weights in parallel into the set of cascaded PE circuits before performing the MAC operation in each PE circuit with the weights.

29.
The method of claim 27, wherein the accumulating comprises: writing, with the global accumulator circuit, partial sums to a memory; and reading, with the global accumulator circuit, the partial sums from the memory, wherein the set of cascaded PE circuits does not write the partial sums to, or read the partial sums from, the memory.

30. The method of claim 27, wherein the accumulating comprises: receiving, at a first input of a multiplexer in the global accumulator circuit, first data from a write register in the global accumulator circuit; receiving, at a second input of the multiplexer, second data from a flip-flop array in the global accumulator circuit; receiving, at another accumulator circuit in the global accumulator circuit, third data from a last PE circuit in the set of cascaded PE circuits; selecting, with the multiplexer, data to output to the other accumulator circuit between the first data and the second data; and accumulating, with the other accumulator circuit, the selected output data from the multiplexer and the third data to generate the accumulated data.
PARTIAL SUM MANAGEMENT AND RECONFIGURABLE SYSTOLIC FLOW ARCHITECTURES FOR IN-MEMORY COMPUTATION

CROSS-REFERENCE TO RELATED APPLICATION(S)

[0001] This application claims priority to U.S. Application No. 17/398,791, filed August 10, 2021, which is assigned to the assignee hereof and incorporated by reference herein in its entirety.

TECHNICAL FIELD

[0002] Aspects of the present disclosure relate to machine learning, and in particular, to circuits, neural-network-processing architectures, and techniques for handling partial sums in weight-stationary schemes, such as in compute-in-memory (CIM) technology.

BACKGROUND

[0003] Machine learning is generally the process of producing a trained model (e.g., an artificial neural network, a tree, or other structures), which represents a generalized fit to a set of training data that is known a priori. Applying the trained model to new data produces inferences, which may be used to gain insights into the new data. In some cases, applying the model to the new data is described as "running an inference" on the new data.

[0004] As the use of machine learning has proliferated for enabling various machine learning (or artificial intelligence) tasks, the desire for more efficient processing of machine learning model data has grown. In some cases, dedicated hardware, such as machine learning accelerators, may be used to enhance a processing system's capacity to process machine learning model data. However, such hardware demands space and power, which is not always available on the processing device. For example, "edge processing" devices, such as mobile devices, always-on devices, Internet of Things (IoT) devices, and the like, typically have to balance processing capabilities with power and packaging constraints.
Further, accelerators may move data across common data busses, which can cause significant power usage and introduce latency into other processes sharing the data bus.

[0005] Consequently, other aspects of a processing system are being considered for processing machine learning model data. Memory devices are one example of another
aspect of a processing system that may be leveraged for performing processing of machine learning model data through so-called compute-in-memory (CIM) processes, also referred to as in-memory computation.

SUMMARY

[0006] The systems, methods, and devices of the disclosure each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this disclosure as expressed by the claims that follow, some features are discussed briefly below. After considering this discussion, and particularly after reading the section entitled "Detailed Description," one will understand how the features of this disclosure provide the advantages described herein.

[0007] Certain aspects of the present disclosure are directed to a processing element (PE) circuit for machine learning. The PE circuit generally includes a first accumulator circuit; a flip-flop array having an input coupled to an output of the first accumulator circuit; a write register; and a first multiplexer having a first input coupled to an output of the write register, having a second input coupled to an output of the flip-flop array, and having an output coupled to a first input of the first accumulator circuit.

[0008] Certain aspects of the present disclosure are directed to a neural network circuit comprising a plurality of PE circuits, wherein at least one of the plurality of PE circuits comprises the PE circuit as described herein. The neural network circuit further includes a tightly coupled memory coupled to the write bus and to the read bus and a global memory coupled to the read bus, wherein another one of the plurality of PE circuits has an output coupled to a second input of the first accumulator circuit.

[0009] Certain aspects of the present disclosure are directed to a neural network circuit.
The neural network circuit generally includes a first set of cascaded PE circuits, wherein an output of a first PE circuit in the first set is coupled to an input of a second PE circuit in the first set and a first global accumulator circuit having an input coupled to an output of the first set of cascaded PE circuits. Each PE circuit in the first set of cascaded PE circuits includes a multiply-and-accumulate (MAC) circuit, a local accumulator circuit having an input coupled to an output of the MAC circuit, and a set of flip-flops having an input coupled to an output of the local accumulator circuit.
[0010] Certain aspects of the present disclosure are directed to a method of neural network processing. The method generally includes receiving, at a first input of a multiplexer, first data from a write register; receiving, at a second input of the multiplexer, second data from a flip-flop array; receiving, at an accumulator circuit, third data from a PE circuit; selecting, with the multiplexer, data to output to the accumulator circuit between the first data and the second data; and accumulating, with the accumulator circuit, the selected output data from the multiplexer and the third data received from the PE circuit to generate accumulated data.

[0011] Certain aspects of the present disclosure are directed to a method of neural network processing. The method generally includes performing a MAC operation in each PE circuit in a set of cascaded PE circuits, wherein an output of a first PE circuit in the set of cascaded PE circuits is coupled to an input of a second PE circuit in the set of cascaded PE circuits and wherein each PE circuit in the set of cascaded PE circuits comprises: a MAC circuit, a local accumulator circuit having an input coupled to an output of the MAC circuit, and a set of flip-flops having an input coupled to an output of the local accumulator circuit; performing a shifting operation with the set of flip-flops in each PE circuit to shift a value from the PE circuit to a next PE circuit in the set of cascaded PE circuits or to a global accumulator circuit, wherein in each PE circuit, the shifting operation is performed concurrently with the performance of the MAC operation; and accumulating, with the global accumulator circuit, the shifted values from a last PE circuit in the set of cascaded PE circuits to generate accumulated data.

[0012] Other aspects provide processing systems configured to perform the aforementioned methods as well as those described herein; non-transitory, computer-readable media comprising instructions that, when
executed by one or more processors of a processing system, cause the processing system to perform the aforementioned methods as well as those described herein; a computer program product embodied on a computer-readable storage medium comprising code for performing the aforementioned methods as well as those further described herein; and a processing system comprising means for performing the aforementioned methods as well as those further described herein.
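The systolic flow summarized in paragraph [0011], in which each PE performs a MAC operation concurrently with shifting its previously held value toward the next PE, can be modeled behaviorally. The sketch below is an illustrative integer model, not the disclosed circuit; the function name and data layout are assumptions:

```python
# Behavioral sketch (not RTL) of a cascaded-PE systolic flow: each cycle,
# every PE forms (incoming shifted value + its own MAC result) while the
# last PE's previous value falls into the global accumulator.
def systolic_run(mac_outputs_per_cycle, num_pes):
    """mac_outputs_per_cycle[c][p] = MAC result of PE p in cycle c."""
    regs = [0] * num_pes      # per-PE shift flip-flops
    global_acc = 0
    for cycle_vals in mac_outputs_per_cycle:
        global_acc += regs[-1]            # last PE shifts out to the global accumulator
        for p in range(num_pes - 1, 0, -1):
            regs[p] = regs[p - 1] + cycle_vals[p]   # shift in + local accumulate
        regs[0] = cycle_vals[0]
    for _ in range(num_pes):              # drain the chain after the last MAC cycle
        global_acc += regs[-1]
        regs = [0] + regs[:-1]
    return global_acc
```

Because every MAC result eventually flows off the end of the chain exactly once, the final global accumulation equals the sum of all per-PE MAC outputs, with no partial-sum traffic to memory from the individual PEs.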
[0013] To the accomplishment of the foregoing and related ends, the one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the appended drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] So that the manner in which the above-recited features of the present disclosure can be understood in detail, a more particular description, briefly summarized above, may be had by reference to aspects, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only certain typical aspects of this disclosure and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects.

[0015] FIGs. 1A-1D depict examples of various types of neural networks, which may be implemented by aspects of the present disclosure.

[0016] FIG. 2 depicts an example of a traditional convolution operation, which may be implemented by aspects of the present disclosure.

[0017] FIGs. 3A and 3B depict examples of depthwise separable convolution operations, which may be implemented by aspects of the present disclosure.

[0018] FIG. 4 is a block diagram of an example digital compute-in-memory (DCIM) architecture, in accordance with certain aspects of the present disclosure.

[0019] FIG. 5 illustrates an example compute-in-memory (CIM) cell for the DCIM architecture of FIG. 4, implemented as an eight-transistor (8T) static random-access memory (SRAM) cell.

[0020] FIG.
6 is a block diagram of an example neural-network-processing architecture with tightly coupled memory (TCM) and processing elements (PEs), illustrating an example dataflow sequence, in which certain aspects of the present disclosure may be implemented.
[0021] FIG. 7 is a block diagram of a systolic flow architecture for connecting different PEs for concurrent shift and multiply-and-accumulate (MAC) operations, in accordance with certain aspects of the present disclosure.

[0022] FIGs. 8A-8C are block diagrams of different example implementations of a global accumulator circuit and connections with a global memory, an output TCM, and a PE, in accordance with certain aspects of the present disclosure.

[0023] FIG. 9A illustrates cycle-by-cycle systolic operation for the example systolic flow architecture of FIG. 7, in accordance with certain aspects of the present disclosure.

[0024] FIG. 9B illustrates cycle-by-cycle systolic operation with dummy cycles for an example systolic flow architecture having more PEs than activation-input-bit cycles, in accordance with certain aspects of the present disclosure.

[0025] FIG. 10 is a block diagram of an example systolic architecture with more than one row, in accordance with certain aspects of the present disclosure.

[0026] FIG. 11 is a flow diagram illustrating example operations for neural network processing, in accordance with certain aspects of the present disclosure.

[0027] FIG. 12 is a flow diagram illustrating example operations for neural network processing, in accordance with certain aspects of the present disclosure.

[0028] FIG. 13 is a block diagram illustrating an example electronic device having a neural-network-processing circuit implementing a systolic flow architecture and configured to perform machine learning tasks, in accordance with certain aspects of the present disclosure.

[0029] To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the drawings.
It is contemplated that elements and features of one aspect may be beneficially incorporated in other aspects without further recitation.

DETAILED DESCRIPTION

[0030] Aspects of the present disclosure provide apparatuses, methods, processing systems, and computer-readable media for performing data-intensive processing, such as implementing machine learning models. Some aspects provide a neural-network-processing architecture and circuits for improved handling of partial accumulation results
in weight-stationary operations, such as operations occurring in compute-in-memory (CIM) processing elements (PEs).

Brief Introduction to Neural Networks, Deep Neural Networks, and Deep Learning

[0031] Neural networks are organized into layers of interconnected nodes. Generally, a node (or neuron) is where computation happens. For example, a node may combine input data with a set of weights (or coefficients) that either amplifies or dampens the input data. The amplification or dampening of the input signals may thus be considered an assignment of relative significances to various inputs with regard to a task the network is trying to learn. Generally, input-weight products are summed (or accumulated), and then the sum is passed through a node's activation function to determine whether and to what extent that signal should progress further through the network.

[0032] In a most basic implementation, a neural network may have an input layer, a hidden layer, and an output layer. "Deep" neural networks generally have more than one hidden layer.

[0033] Deep learning is a method of training deep neural networks. Generally, deep learning maps inputs to the network to outputs from the network and is thus sometimes referred to as a "universal approximator" because deep learning can learn to approximate an unknown function f(x) = y between any input x and any output y. In other words, deep learning finds the right f to transform x into y.

[0034] More particularly, deep learning trains each layer of nodes based on a distinct set of features, which is the output from the previous layer. Thus, with each successive layer of a deep neural network, features become more complex.
Deep learning is thus powerful because it can progressively extract higher-level features from input data and perform complex tasks, such as object recognition, by learning to represent inputs at successively higher levels of abstraction in each layer, thereby building up a useful feature representation of the input data.

[0035] For example, if presented with visual data, a first layer of a deep neural network may learn to recognize relatively simple features, such as edges, in the input data. In another example, if presented with auditory data, the first layer of a deep neural network may learn to recognize spectral power in specific frequencies in the input data. The second layer of the deep neural network may then learn to recognize combinations
of features, such as simple shapes for visual data or combinations of sounds for auditory data, based on the output of the first layer. Higher layers may then learn to recognize complex shapes in visual data or words in auditory data. Still higher layers may learn to recognize common visual objects or spoken phrases. Thus, deep learning architectures may perform especially well when applied to problems that have a natural hierarchical structure.

Layer Connectivity in Neural Networks

[0036] Neural networks, such as deep neural networks (DNNs), may be designed with a variety of connectivity patterns between layers.

[0037] FIG. 1A illustrates an example of a fully connected neural network 102. In a fully connected neural network 102, each node in a first layer communicates its output to every node in a second layer, so that each node in the second layer will receive input from every node in the first layer.

[0038] FIG. 1B illustrates an example of a locally connected neural network 104. In a locally connected neural network 104, a node in a first layer may be connected to a limited number of nodes in the second layer. More generally, a locally connected layer of the locally connected neural network 104 may be configured so that each node in a layer will have the same or a similar connectivity pattern, but with connection strengths (or weights) that may have different values (e.g., values associated with local areas 110, 112, 114, and 116 of the first layer nodes). The locally connected connectivity pattern may give rise to spatially distinct receptive fields in a higher layer, because the higher layer nodes in a given region may receive inputs that are tuned through training to the properties of a restricted portion of the total input to the network.

[0039] One type of locally connected neural network is a convolutional neural network (CNN). FIG. 1C illustrates an example of a convolutional neural network 106.
The convolutional neural network 106 may be configured such that the connection strengths associated with the inputs for each node in the second layer are shared (e.g., for local area 108 overlapping another local area of the first layer nodes). Convolutional neural networks are well suited to problems in which the spatial locations of inputs are meaningful.
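The weight sharing described above can be illustrated with a minimal valid-mode 2D convolution, in which one small kernel (a single set of connection strengths) is reused at every spatial position of the input. This is an illustrative sketch only; the helper name and shapes are assumptions, not part of the disclosure:

```python
# Minimal "valid" 2D convolution (really cross-correlation, as is common
# in CNN usage): the same kernel weights are applied at every position.
def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]
```

Note that the kernel contributes only kh x kw weights regardless of the image size, which is exactly the sharing of connection strengths that distinguishes a convolutional layer from a fully connected one.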
[0040] One type of convolutional neural network is a deep convolutional network (DCN). Deep convolutional networks are networks of multiple convolutional layers, which may further be configured with, for example, pooling and normalization layers.

[0041] FIG. 1D illustrates an example of a DCN 100 designed to recognize visual features in an image 126 generated by an image-capturing device 130. For example, if the image-capturing device 130 is a camera mounted in or on (or otherwise moving along with) a vehicle, then the DCN 100 may be trained with various supervised learning techniques to identify a traffic sign and even a number on the traffic sign. The DCN 100 may likewise be trained for other tasks, such as identifying lane markings or identifying traffic lights. These are just some example tasks, and many others are possible.

[0042] In the example of FIG. 1D, the DCN 100 includes a feature-extraction section and a classification section. Upon receiving the image 126, a convolutional layer 132 applies convolutional kernels (for example, as depicted and described in FIG. 2) to the image 126 to generate a first set of feature maps (or intermediate activations) 118. Generally, a "kernel" or "filter" comprises a multidimensional array of weights designed to emphasize different aspects of an input data channel. In various examples, "kernel" and "filter" may be used interchangeably to refer to sets of weights applied in a convolutional neural network.

[0043] The first set of feature maps 118 may then be subsampled by a pooling layer (e.g., a max pooling layer, not shown) to generate a second set of feature maps 120. The pooling layer may reduce the size of the first set of feature maps 118 while maintaining much of the information in order to improve model performance. For example, the second set of feature maps 120 may be downsampled to a 14x14 matrix from a 28x28 matrix by the pooling layer.

[0044] This process may be repeated through many layers.
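The 2x2 max-pooling downsampling mentioned above (e.g., 28x28 to 14x14) can be sketched as follows. This is illustrative only; the helper name is an assumption:

```python
# 2x2 max pooling with stride 2: each non-overlapping 2x2 window of the
# feature map is reduced to its maximum, halving each spatial dimension
# while keeping the strongest activations.
def max_pool_2x2(feature_map):
    h, w = len(feature_map), len(feature_map[0])
    return [[max(feature_map[i][j], feature_map[i][j + 1],
                 feature_map[i + 1][j], feature_map[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]
```

Applying this to a 28x28 feature map yields a 14x14 map, matching the downsampling described for the second set of feature maps 120.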
In other words, the second set of feature maps 120 may be further convolved via one or more subsequent convolutional layers (not shown) to generate one or more subsequent sets of feature maps (not shown).[0045] In the example of FIG. 1D, the second set of feature maps 120 is provided to a fully connected layer 124, which in turn generates an output feature vector 128. Each feature of the output feature vector 128 may include a number that corresponds to a
possible feature of the image 126, such as “sign,” “60,” and “100.” In some cases, a softmax function (not shown) may convert the numbers in the output feature vector 128 to a probability. In such cases, an output 122 of the DCN 100 is a probability of the image 126 including one or more features.[0046] A softmax function (not shown) may convert the individual elements of the output feature vector 128 into a probability in order that an output 122 of DCN 100 is one or more probabilities of the image 126 including one or more features, such as a sign with the number “60” thereon, as in image 126. Thus, in the present example, the probabilities in the output 122 for “sign” and “60” should be higher than the probabilities of the other elements of the output 122, such as “30,” “40,” “50,” “70,” “80,” “90,” and “100.”[0047] Before training the DCN 100, the output 122 produced by the DCN 100 may be incorrect. Thus, an error may be calculated between the output 122 and a target output known a priori. For example, here the target output is an indication that the image 126 includes a “sign” and the number “60.” Utilizing the known target output, the weights of the DCN 100 may then be adjusted through training so that a subsequent output 122 of the DCN 100 achieves the target output (with high probabilities).[0048] To adjust the weights of the DCN 100, a learning algorithm may compute a gradient vector for the weights. The gradient vector may indicate an amount that an error would increase or decrease if a weight were adjusted in a particular way. The weights may then be adjusted to reduce the error. This manner of adjusting the weights may be referred to as “backpropagation” because this adjustment process involves a “backward pass” through the layers of the DCN 100.[0049] In practice, the error gradient of weights may be calculated over a small number of examples, so that the calculated gradient approximates the true error gradient. 
This approximation method may be referred to as stochastic gradient descent. Stochastic gradient descent may be repeated until the achievable error rate of the entire system has stopped decreasing or until the error rate has reached a target level.[0050] After training, the DCN 100 may be presented with new images, and the DCN 100 may generate inferences, such as classifications, or probabilities of various features being in the new image.
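The training procedure of paragraphs [0047]-[0049] (computing an error against a known target output, adjusting weights along the negative error gradient, and estimating that gradient over a small number of examples) can be sketched minimally as follows. The single-weight model, learning rate, and data are illustrative assumptions, not from the disclosure:

```python
import random

# Minimal sketch of the training loop described above, under simplifying
# assumptions: one weight, a squared-error loss, and a gradient estimated
# from a small random batch of examples (stochastic gradient descent).

examples = [(x, 3.0 * x) for x in range(1, 9)]  # target relationship: y = 3x
weight = 0.0
learning_rate = 0.01

random.seed(0)
for step in range(200):
    batch = random.sample(examples, 2)          # small number of examples
    # Gradient of the squared error 0.5*(w*x - y)^2 with respect to w,
    # averaged over the batch (approximates the true error gradient).
    grad = sum((weight * x - y) * x for x, y in batch) / len(batch)
    weight -= learning_rate * grad              # adjust weight to reduce error

print(round(weight, 2))  # approaches the target weight 3.0
```

Each step moves the weight opposite the estimated gradient, so the error stops decreasing only once the weight is near the value that reproduces the target outputs.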
Convolution Techniques for Convolutional Neural Networks[0051] Convolution is generally used to extract useful features from an input data set. For example, in convolutional neural networks, such as described above, convolution enables the extraction of different features using kernels and/or filters whose weights are automatically learned during training. The extracted features are then combined to make inferences.[0052] An activation function may be applied before and/or after each layer of a convolutional neural network. Activation functions are generally mathematical functions that determine the output of a node of a neural network. Thus, the activation function determines whether a node should pass information or not, based on whether the node’s input is relevant to the model’s prediction. In one example, where y = conv(x) (i.e., y is the convolution of x), both x and y may be generally considered as “activations.” However, in terms of a particular convolution operation, x may also be referred to as “preactivations” or “input activations” as x exists before the particular convolution, and y may be referred to as output activations or a feature map.[0053] FIG. 2 depicts an example of a traditional convolution in which a 12-pixel x 12-pixel x 3-channel input image 202 is convolved using a 5 x 5 x 3 convolution kernel 204 and a stride (or step size) of 1. The resulting feature map 206 is 8 pixels x 8 pixels x 1 channel. As seen in this example, the traditional convolution may change the dimensionality of the input data as compared to the output data (here, from 12 x 12 to 8 x 8 pixels), including the channel dimensionality (here, from 3 channels to 1 channel). The convolution kernel 204 is shown as corresponding to a portion of the input image 202 with which the kernel is convolved to generate a single element of the feature map 206.
Generally, as in this example, the depth (d = 3) of the kernel 204 matches the number of channels of the input image 202.[0054] One way to reduce the computational burden (e.g., measured in floating-point operations (FLOPs)) and the number of parameters associated with a neural network comprising convolutional layers is to factorize the convolutional layers. For example, a spatial separable convolution, such as depicted in FIG. 2, may be factorized into two components: (1) a depthwise convolution, where each spatial channel is convolved independently by a depthwise convolution (e.g., a spatial fusion); and (2) a pointwise convolution, where all the spatial channels are linearly combined (e.g., a
channel fusion). An example of a depthwise separable convolution is depicted in FIGs. 3A and 3B. Generally, during spatial fusion, a network learns features from the spatial planes, and during channel fusion, the network learns relations between these features across channels.[0055] In one example, a depthwise separable convolution may be implemented using 5x5 kernels for spatial fusion, and 1x1 kernels for channel fusion. In particular, the channel fusion may use a 1 x 1 x d kernel that iterates through every single point in an input image of depth d, where the depth d of the kernel generally matches the number of channels of the input image. Channel fusion via pointwise convolution is useful for dimensionality reduction for efficient computations. Applying 1 x 1 x d kernels and adding an activation layer after the kernel may give a network added depth, which may increase the network’s performance.[0056] In particular, in FIG. 3A, the 12-pixel x 12-pixel x 3-channel input image 302 is convolved with a filter comprising three separate kernels 304A-C, each having a 5 x 5 x 1 dimensionality, to generate a feature map 306 of 8 pixels x 8 pixels x 3 channels, where each channel is generated by an individual kernel among the kernels 304A-C with the corresponding shading in FIG. 3A. Each convolution kernel 304A-C is shown as corresponding to a portion of the input image 302 with which the kernel is convolved to generate a single element of the feature map 306. The combined depth (d = 3) of the kernels 304A-C here matches the number of channels of the input image 302.[0057] Then, feature map 306 is further convolved using a pointwise convolution operation with a kernel 308 having dimensionality 1 x 1 x 3 to generate a feature map 310 of 8 pixels x 8 pixels x 1 channel.
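Counting multiplications for the shapes used in these figures (a 12x12x3 input, 5x5 spatial kernels, an 8x8 output) shows where the factorization saves computation once multiple output channels are produced. The helper functions below are illustrative, with m generalizing the number of output channels:

```python
# Multiplication counts for the figure shapes: 12x12x3 input, 5x5 spatial
# kernels, 8x8 output, generalized to m output channels (illustrative sketch).

def traditional_muls(out_hw=8, in_ch=3, k=5, m=1):
    # Traditional convolution (FIG. 2): one k x k x in_ch kernel per output
    # channel, applied at every output position.
    return out_hw * out_hw * (k * k * in_ch) * m

def separable_muls(out_hw=8, in_ch=3, k=5, m=1):
    # Depthwise separable (FIGs. 3A/3B): per-channel k x k kernels (spatial
    # fusion), then 1 x 1 x in_ch pointwise kernels (channel fusion).
    depthwise = out_hw * out_hw * in_ch * (k * k)
    pointwise = out_hw * out_hw * (in_ch * m)
    return depthwise + pointwise

print(traditional_muls(m=256))  # 1228800
print(separable_muls(m=256))    # 53952
```

With a single output channel the two approaches cost about the same, but for the 256-channel case the depthwise cost is paid once and only the cheap pointwise term scales with m, which is the efficiency gain the text describes.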
As is depicted in this example, feature map 310 has reduced dimensionality (1 channel versus 3 channels), which allows for more efficient computations therewith.[0058] Though the result of the depthwise separable convolution in FIGs. 3A and 3B is substantially similar to the traditional convolution in FIG. 2, the number of computations is significantly reduced, and thus depthwise separable convolution offers a significant efficiency gain where a network design allows it.[0059] Though not depicted in FIG. 3B, multiple (e.g., m) pointwise convolution kernels 308 (e.g., individual components of a filter) can be used to increase the channel
dimensionality of the convolution output. So, for example, m = 256 1x1x3 kernels 308 can be generated, in which each output is an 8-pixel x 8-pixel x 1-channel feature map (e.g., feature map 310), and these feature maps can be stacked to get a resulting feature map of 8 pixels x 8 pixels x 256 channels. The resulting increase in channel dimensionality provides more parameters for training, which may improve a convolutional neural network’s ability to identify features (e.g., in input image 302).Example Compute-in-Memory (CIM) Architecture[0060] CIM-based machine learning (ML)/artificial intelligence (AI) may be used for a wide variety of tasks, including image and audio processing and making wireless communication decisions (e.g., to optimize, or at least increase, throughput and signal quality). Further, CIM may be based on various types of memory architectures, such as dynamic random-access memory (DRAM), static random-access memory (SRAM) (e.g., based on an SRAM cell as in FIG. 5), magnetoresistive random-access memory (MRAM), and resistive random-access memory (ReRAM or RRAM), and may be attached to various types of processing units, including central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), AI accelerators, and others. Generally, CIM may beneficially reduce the “memory wall” problem, which is where the movement of data in and out of memory consumes more power than the computation of the data. Thus, by performing the computation in memory, significant power savings may be realized. This is particularly useful for various types of electronic devices, such as lower power edge processing devices, mobile devices, and the like.[0061] For example, a mobile device may include a memory device configured for storing data and performing CIM operations.
The mobile device may be configured to perform an ML/AI operation based on data generated by the mobile device, such as image data generated by a camera sensor of the mobile device. A memory controller unit (MCU) of the mobile device may thus load weights from another on-board memory (e.g., flash or RAM) into a CIM array of the memory device and allocate input feature buffers and output (e.g., output activation) buffers. The processing device may then commence processing of the image data by loading, for example, a layer in the input buffer and processing the layer with weights loaded into the CIM array. This processing may be repeated for each layer of the image data, and the outputs (e.g., output activations) may
be stored in the output buffers and then used by the mobile device for an ML/AI task, such as facial recognition.[0062] As described above, conventional CIM processes may perform computation using analog signals, which may result in inaccuracies in the computation results, adversely impacting neural network computations. One emerging solution for analog CIM schemes is digital compute-in-memory (DCIM) schemes, in which computations are performed using digital signals. As used herein, the term “CIM” may refer to either or both analog CIM and digital CIM, unless it is clear from context that only analog CIM or only digital CIM is meant.[0063] FIG. 4 is a block diagram of an example DCIM circuit 400, in accordance with certain aspects of the present disclosure. In a neural network architecture comprising multiple processing elements, the DCIM circuit 400 may function as a single DCIM processing element (PE).[0064] In the example of FIG. 4, the DCIM circuit 400 includes a CIM array 401 (e.g., a DCIM array) having thirty-two word-lines 404₀ to 404₃₁ (also referred to as rows) and eight columns 406₀ to 406₇ (e.g., each column may be composed of multiple bit-lines, such as thirty-two bit-lines). Word-lines 404₀ to 404₃₁ are collectively referred to as “word-lines (WLs) 404,” and columns 406₀ to 406₇ are collectively referred to as “columns 406.” While the CIM array 401 is implemented with 32 word-lines and 8 columns to facilitate understanding, the CIM array may be implemented with any number of word-lines and with any number of columns. As shown, CIM cells 402₀₋₀ to 402₃₁₋₇ (collectively referred to as “CIM cells 402”) are implemented at the intersections of the WLs 404 and columns 406.[0065] Each of the CIM cells 402 may be implemented using the CIM cell architecture described below with respect to FIG. 5, for example.[0066] The CIM cells 402 may be loaded with the weight bits of a neural network.
The activation inputs may be provided as an input matrix (e.g., a 32-row by 8-column matrix) to the CIM array 401, one vector at a time. As shown in FIG. 4, activation input bits a(0,0) to a(31,0) (e.g., a first vector) may be provided to respective word-lines 404, and the CIM cells 402 may store weights w(0,0) to w(31,7) of the neural network, for example. In this case, CIM cells 402₀₋₀ to 402₀₋₇ may store weight bits w(0,0) to w(0,7),
CIM cells 402₁₋₀ to 402₁₋₇ may store weight bits w(1,0) to w(1,7), and so on. Each word-line may store a multi-bit weight. For example, weight bits w(0,0) to w(0,7) may represent eight bits of a weight of a neural network (e.g., an 8-bit weight). Each CIM cell may perform bit-wise multiplication of a received activation input bit with the weight bit stored in the CIM cell and pass the result to the output of the CIM cell (e.g., the read bit-line (RBL), as explained with respect to FIG. 5).[0067] As shown, the DCIM circuit 400 may include a bit-column adder tree 409, which may include eight adder trees 410₀ to 410₇ (collectively referred to as “adder trees 410”), each adder tree being implemented for a respective one of the columns 406. Each of the adder trees 410 adds the output signals from the CIM cells 402 on the respective one of the columns 406, and the adder trees 410 may operate in parallel (e.g., concurrently). The outputs of the adder trees 410 may be coupled to a weight-shift adder tree circuit 412, as shown. The weight-shift adder tree circuit 412 includes multiple weight-shift adders 414, each including a bit-shift-and-add circuit to facilitate the performance of a bit-shifting-and-addition operation. In other words, the CIM cells on column 406₀ may store the most-significant bits (MSBs) for respective weights on each word-line 404, and the CIM cells on column 406₇ may store the least-significant bits (LSBs) for respective weights on each word-line. Therefore, when performing addition across the columns 406, a bit-shift operation is performed to shift the bits to account for the significance of the bits on the associated column.[0068] The output of the weight-shift adder tree circuit 412 is provided to an activation-shift accumulator circuit 416. The activation-shift accumulator circuit 416 includes a bit-shift circuit 418, a serial accumulator 420, and a flip-flop (FF) array 422.
For example, the FF array 422 may be used to implement a register.[0069] For certain aspects, the various elements of the DCIM circuit 400 of FIG. 4 may be operated with a common clock frequency (as indicated by the label “System Frequency x 1”).[0070] During operation of the DCIM circuit 400, activation circuitry 490 provides a first set of activation input bits a(0,0) to a(31,0) (e.g., a first vector in a batch of thirty-two activation input features) to the CIM cells 402 for computation during a first activation cycle. The first set of activation input bits a(0,0) to a(31,0) may represent the most-significant bits of the activation inputs. The outputs of computations on each
column are added using a respective one of the adder trees 410. The outputs of the adder trees 410 are added using the weight-shift adder tree circuit 412, the results of which are provided to the activation-shift accumulator circuit 416. The same operation is performed for other sets of activation input bits (other input vectors in the batch) during subsequent activation cycles, such as activation input bits a(0,1) to a(31,1) (e.g., a second vector) that may represent the second most-significant bits of the activation inputs, and so on until activation input bits representing the least-significant bits of the activation inputs are processed. The bit-shift circuit 418 performs a bit-shift operation based on the activation cycle. For example, for an 8-bit activation input processed using eight activation cycles, the bit-shift circuit 418 may perform an 8-bit shift for the first activation cycle, a 7-bit shift for the second activation cycle, and so on. After the activation cycles, the outputs of the bit-shift circuit 418 are accumulated using the serial accumulator 420 and stored in the FF array 422, which may be used as a register to transfer the final accumulation result to another component (e.g., an output TCM or another DCIM circuit, such as in a systolic flow architecture as described below).[0071] The DCIM circuit 400 of FIG. 4 provides bit-wise storage and bit-wise multiplication. The adder trees 410 perform a population count addition for the columns 406. That is, each of the adder trees 410 adds the output signals of the CIM cells for a column (e.g., adding all 32 rows per column). The weight-shift adder tree circuit 412 (e.g., having three stages as shown for eight columns) combines the weighted sum generated for the eight columns (e.g., providing the accumulation result for a given activation input bit position during an activation cycle).
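The bit-serial dataflow described above (per-column population counts, weight-shift addition across columns, and activation-shift accumulation across cycles) can be modeled behaviorally. The sketch below assumes unsigned 8-bit operands and mirrors the component names of FIG. 4 without representing actual hardware:

```python
import random

# Behavioral sketch of the bit-serial DCIM flow (unsigned 8-bit weights and
# activations for simplicity; 32 rows, 8 weight-bit columns). Names mirror
# the text but the code is an illustration, not RTL.

ROWS, W_BITS, A_BITS = 32, 8, 8

random.seed(1)
weights = [random.randrange(256) for _ in range(ROWS)]
activations = [random.randrange(256) for _ in range(ROWS)]

# CIM array: column 0 holds each weight's MSB, column 7 its LSB.
cim = [[(w >> (W_BITS - 1 - col)) & 1 for col in range(W_BITS)] for w in weights]

accumulator = 0                             # serial accumulator
for cycle in range(A_BITS):                 # activation cycles, MSB first
    a_bits = [(a >> (A_BITS - 1 - cycle)) & 1 for a in activations]
    # Bit-column adder trees: population count per column of AND products.
    col_sums = [sum(a_bits[r] & cim[r][col] for r in range(ROWS))
                for col in range(W_BITS)]
    # Weight-shift adder tree: weight each column by its bit significance.
    weighted = sum(s << (W_BITS - 1 - col) for col, s in enumerate(col_sums))
    # Activation-shift: weight by this cycle's activation-bit significance.
    accumulator += weighted << (A_BITS - 1 - cycle)

expected = sum(w * a for w, a in zip(weights, activations))
print(accumulator == expected)  # True: equals the integer dot product
```

Because each term is just a weight bit AND an activation bit scaled by the two bit significances, summing over columns, rows, and cycles reproduces the full multi-bit multiply-and-accumulate exactly.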
The activation-shift accumulator circuit 416 combines the results from multiple (e.g., eight) activation cycles and outputs the final accumulation result. For example, the bit-shift circuit 418 shifts the bits at the output of the weight-shift adder tree circuit 412 based on the associated activation cycle. The serial accumulator 420 accumulates the shifted adder output generated by the bit-shift circuit 418. The transfer register implemented using the FF array 422 copies the output of the serial accumulator 420 after the computation for the last activation cycle has been completed.[0072] The DCIM circuit 400 provides linear energy scaling across computations using different bit-sizes of activation inputs and/or weights. In other words, using the adder trees 410 and weight-shift adder tree circuit 412 provides bit-size configurability,
allowing for an n-bit activation input with an m-bit weight accumulation, n and m being positive integers. The energy consumption associated with the DCIM circuit 400 may scale linearly based on the configured bit-size for the activation inputs and weights.[0073] The example DCIM circuit 400 of FIG. 4 may be comparatively compact (in terms of area occupied) and may consume relatively low energy. However, the DCIM circuit 400 and the “pseudo-weight-stationary mapping” used therein may have some challenges with partial sum accumulation, which are discussed below. As used herein, a “pseudo-weight-stationary mapping” generally refers to a weight-stationary re-use scheme that processes a batch of input features for each of multiple depth-cycles, in an effort to generate the final outputs as quickly as possible. For example, the DCIM circuit 400 enables a pseudo-weight-stationary scheme, where a batch of 32 activation input features may be concurrently processed. A smaller batch size (e.g., 32 versus 256 features) allows the final output result to be generated more quickly, since the total number of cycles to finish running through the depth-cycles becomes much less compared to a case in which all inputs are processed for each of the depth-cycles, which would significantly delay the output generation. As shown, weights are re-used for the different sets of activation input bits in the input batch. At the last cycle, the final outputs may be transferred to the memory (e.g., the output TCM), as described below.[0074] FIG. 5 illustrates an example CIM cell 500 of a static random-access memory (SRAM), which may be implemented in a CIM array, such as the CIM array 401 in the DCIM circuit 400 of FIG. 4. The CIM cell 500 may be referred to as an eight-transistor (8T) SRAM cell because the CIM cell is implemented with eight transistors.[0075] As shown, the CIM cell 500 may include a cross-coupled inverter pair 524 having an output 514 and an output 516.
As shown, the cross-coupled inverter pair output 514 is selectively coupled to a write bit-line (WBL) 506 via a pass-gate transistor 502, and the cross-coupled inverter pair output 516 is selectively coupled to a complementary write bit-line (WBLB) 520 via a pass-gate transistor 518. The WBL 506 and WBLB 520 are configured to provide complementary digital signals to be written (e.g., stored) in the cross-coupled inverter pair 524. The WBL and WBLB may be used to store a bit for a neural network weight in the CIM cell 500. The gates of pass-gate transistors 502, 518 may be coupled to a write word-line (WWL) 504, as shown. For example, a digital signal to be written may be provided to the WBL (and a complement of the digital signal is
provided to the WBLB). The pass-gate transistors 502, 518 — which are implemented here as n-type field-effect transistors (NFETs) — are then turned on by providing a logic high signal to WWL 504, resulting in the digital signal being stored in the cross-coupled inverter pair 524.[0076] As shown, the cross-coupled inverter pair output 514 may be coupled to a gate of a transistor 510. The source of the transistor 510 may be coupled to a reference potential node (Vss or electrical ground), and the drain of the transistor 510 may be coupled to a source of a transistor 512. The drain of the transistor 512 may be coupled to a read bit-line (RBL) 522, as shown. The gate of transistor 512 may be controlled via a read word-line (RWL) 508. The RWL 508 may be controlled via an activation input signal.[0077] During a read cycle, the RBL 522 may be precharged to logic high. If both the activation input bit and the weight bit stored at the cross-coupled inverter pair output 514 are logic high, then transistors 510, 512 are both turned on, electrically coupling the RBL 522 to the reference potential node at the source of transistor 510 and discharging the RBL 522 to logic low. If either the activation input bit or the weight bit stored at the cross-coupled inverter pair output 514 is logic low, then at least one of the transistors 510, 512 will be turned off, such that the RBL 522 remains logic high. Thus, the output of the CIM cell 500 at the RBL 522 is logic low only when both the weight bit and the activation input bit are logic high, and is logic high otherwise, effectively implementing a NAND-gate operation.Example Neural-Network-Processing Architectures and Dataflow[0078] FIG. 6 is a block diagram of an example neural-network-processing architecture 600, illustrating an example dataflow sequence, in which certain aspects of the present disclosure may be implemented.
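The read behavior of the CIM cell 500 of FIG. 5 reduces to a small truth table, sketched below. This is a behavioral illustration only; the function name is hypothetical:

```python
# Truth-table sketch of the 8T cell's read behavior: the precharged read
# bit-line (RBL) discharges to logic low only when both the stored weight bit
# and the activation input bit are high (a NAND operation).

def cim_cell_read(weight_bit, activation_bit):
    rbl = 1                             # RBL precharged to logic high
    if weight_bit and activation_bit:   # both series transistors turn on
        rbl = 0                         # RBL discharges toward Vss
    return rbl

for w in (0, 1):
    for a in (0, 1):
        print(w, a, cim_cell_read(w, a))
# Prints: 0 0 1 / 0 1 1 / 1 0 1 / 1 1 0
```

Downstream logic can treat the inverted RBL value as the AND product of the weight bit and activation bit, which is the bit-wise multiplication summed by the adder trees of FIG. 4.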
The neural-network-processing architecture 600 may include a plurality of processing elements (PEs) 602 for performing data computation (e.g., multiply-and-accumulate (MAC) operations) and other operations. The PEs 602 may be implemented with any of various suitable circuits, such as the DCIM circuit 400 of FIG. 4. The architecture 600 may also include a global memory 604 (labeled “Global Buffer”), a weight tightly coupled memory (TCM) 606, an activation TCM 608, an output TCM 610, PE-mapper logic 612 (which may also include bus
arbitration logic (not shown) and/or digital post-processing logic (not shown)), a memory bus 614, and a PE bus 616. As used herein, a TCM generally refers to a memory accessed by a dedicated connection from the processor(s), such as the PEs 602. Although shown as separate TCMs, the weight TCM 606, the activation TCM 608, and/or the output TCM 610 may be combined. The memory bus 614 may couple the global memory 604 to the weight TCM 606, the activation TCM 608, and the output TCM 610. The PE bus 616 may couple the PEs 602 and the PE-mapper logic 612 together. In this manner, the PEs 602 may access the memory resources (e.g., the weight TCM, the activation TCM, and the output TCM).[0079] In the dataflow sequence shown, weights may be loaded from the global memory to the weight TCM 606. Then, the weights may be loaded from the weight TCM 606 to the PE weight arrays (e.g., in the CIM cells of the PEs). Activation inputs may be loaded from the global memory 604 to the activation TCM 608. Then, the activation inputs may be loaded from the activation TCM 608 to the PE bus 616 (or at least a portion of the PE bus operating as an activation bus). After the weights have been loaded in the PEs 602 and the activations are ready on the activation bus, the PEs 602 may perform computations (e.g., MAC operations) over multiple computation cycles to generate final accumulation results. The final accumulation results may be processed (e.g., by the PE-mapper logic 612, or more specifically for certain cases, the digital post-processing logic), and the processed results may be written to the output TCM 610.
From the output TCM 610, the processed accumulation results may be loaded in the global memory 604 via the memory bus 614.Example Reconfigurable Systolic Flow Architecture and Partial Sum Management[0080] As described above, compute-in-memory (CIM) technology addresses the energy and speed bottlenecks arising from moving data between memory and the processing system (e.g., the central processing unit (CPU)). CIM offers energy efficiency and significantly fewer memory accesses in weight-stationary use cases. As used herein, the term “weight-stationary” generally refers to a re-use architecture where the neural network weights remain stationary during operation (e.g., after being initially loaded) and the inputs are streamed in. Weight-stationary mapping may be used in CIM to reduce the overhead of the weight update time during operation.
[0081] Despite these benefits, CIM and other weight-stationary mapping schemes may have some challenges in certain applications. For example, the weight-stationary operation of some neural-network-processing circuits (e.g., DCIM PEs) may force these circuits to offload and reload (e.g., write and read) partial accumulation results to a memory (e.g., the output TCM) for the final accumulation. Also referred to as “partial sums,” partial accumulation results are not final data, or in other words, are not yet ready to become (or to be transferred to digital post-processing logic before the results become) an activation input for the next layer nor data to be stored in the output TCM as the final result of a layer. Rather, partial sums may be temporarily stored in the output TCM and read back to the DCIM PEs for further processing in one or more cycles until the final accumulation output is ready. These partial sums may then be discarded when the final outputs are ready to be processed (e.g., by the digital post-processing logic).[0082] In some cases, weight-stationary mapping may force the partial accumulation results to be written to a buffer memory and read back from the buffer memory for a subsequent input feature multiply-and-accumulate (MAC) operation, which may create overhead in terms of energy and a performance penalty (e.g., in terms of lower teraoperations per second (TOPS)) if this read/write cannot be handled in the same MAC cycle. In other words, having to store and reload these partial accumulation results leads to storage area, bandwidth, and throughput (e.g., TOPS) penalties in the neural-network-processing architecture. In some cases, the circuit overhead to handle the partial sums can reduce the area advantage of DCIM solutions compared to other neural-network-processing solutions (e.g., neural processing units (NPUs)).
This offloading and reloading can also introduce a significant latency penalty in some instances.[0083] Certain aspects of the present disclosure provide a neural-network-processing architecture and circuits to handle the partial sums with no throughput penalty, thereby reducing the bottleneck of writing and reading back and forth from the memory. The circuits may be referred to as concurrent multiply-and-accumulate (MAC) and partial sum store and reload circuits. The architecture may be referred to as a “reconfigurable systolic flow architecture.” Both the architecture and the circuits are described below.[0084] FIG. 7 is a block diagram of an example systolic flow architecture 700, in accordance with certain aspects of the present disclosure. The systolic flow architecture 700 may include a cascaded series 701 of PE circuits 702₁ to 702₈ (collectively referred
to as “PE circuits 702”) and a global accumulator circuit 710 (also referred to as a “fat accumulator circuit”). Although eight PE circuits 702 are represented in the example systolic flow architecture 700, the reader is to understand that the series 701 may include any number of cascaded PE circuits.[0085] The PE circuits 702 may be implemented by any of various suitable PE circuits, such as the DCIM circuit 400 of FIG. 4 or other weight-stationary mapping PE circuits. The PE circuits 702 may replace at least some of the PE circuits in a neural network architecture, such as the PEs 602 in the architecture 600 of FIG. 6. As illustrated in FIG. 7, each of the PE circuits 702 includes a multiply-and-accumulate (MAC) adder tree 704 and a local accumulator 706 (also referred to as a “light accumulator”). The MAC adder tree 704 may represent or be implemented by any of various suitable circuits for performing MAC operations, such as the CIM array 401 (e.g., with thirty -two rows and eight columns), bit-column adder tree 409, and weight-shift adder tree circuit 412 of FIG. 4. The local accumulator 706 in each PE circuit 702 may represent or be implemented by the activation-shift accumulator circuit of 416 of FIG. 4. The global accumulator circuit 710 may include a large accumulator 711 (also referred to as the “fat accumulator”), which may have a higher number of bits (e.g., 32 bits) compared to the bit-size of the local accumulators 706 (e.g., 21 bits) and which is therefore represented in FIG. 7 with shading. 
By designing the PE circuits with smaller bit-size local accumulators 706, the cascaded series 701 may occupy a smaller area than if each of the PE circuits had a higher bit-size large accumulator 711.[0086] The PE circuits 702 may be systolically connected such that the output of a local accumulator 706 from one PE circuit (e.g., PE circuit 702₁) is input as a partial accumulation result to the MAC adder tree 704 of a subsequent PE circuit (e.g., PE circuit 702₂). In this manner, the partial accumulation results from each PE circuit 702 need not be stored and then reloaded. Instead of the individual PE circuits, the global accumulator circuit 710 may write the accumulation results to an output TCM (e.g., the output TCM 610). Furthermore, each PE circuit 702 may perform concurrent shift and MAC operations during a MAC cycle. In other words, concurrently while the PE circuit 702 is shifting data out (e.g., to the next PE circuit or to the global accumulator circuit 710), the MAC adder tree 704 may be computing with input data, and the local accumulator 706
may be running. This concurrent shift and MAC operation is possible due to flip-flops (e.g., FF array 422) in the local accumulator 706 operating as a shift register.[0087] The depth-wise spatial tiling of the systolic flow architecture 700 reduces the overall number of MAC cycles to achieve final results and decreases the number of partial sum writes and reads in depth-heavy workloads. Moreover, this systolic implementation has less timing overhead compared to other solutions, such as a neural processing unit (NPU) solution. For example, it may take a single MAC cycle to generate the sum of the results of 8 PE circuits 702, where eight bit-serial clock cycles equal one MAC cycle. An equivalent NPU solution may take 8 MAC cycles for the same computation.[0088] With an example scheme of eight 32-row PE circuits 702, the systolic flow architecture 700 is basically emulating a memory array with 256 rows (instead of 32 rows for a single PE circuit). However, a single, direct 256-row memory array may not be mapped efficiently to some workloads. Each PE circuit 702 can load weights in parallel, which decreases the weight-loading time compared to loading weights row-by-row, especially for a 256-row memory array. Each PE circuit 702 can also accumulate independently for workloads that are not depth-heavy. This enables flexibility and, thus, a better utilization efficiency for the PE assignment for computation.[0089] Within a neural network circuit, the systolic flow architecture 700 may be reconfigurable such that aspects of the architecture may be changed, such as the number of PE circuits 702 cascaded in series. A compiler for the neural network may be used to select the initial components and make any reconfigurations.[0090] FIGs. 8A-8C are block diagrams of different example implementations of the global accumulator circuit 710, showing other components for context, in accordance with certain aspects of the present disclosure.
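The cascaded partial-sum flow of FIG. 7 can be sketched behaviorally as follows, assuming integer MAC results. Eight 32-row PEs produce the same result as a single 256-row dot product, with each partial sum shifted to the next PE instead of being written to and reloaded from memory:

```python
import random

# Behavioral sketch (assumed, illustrative): each PE adds its local MAC
# result to the partial sum shifted in from the previous PE in the cascade.

NUM_PES, ROWS_PER_PE = 8, 32

random.seed(2)
# Eight PEs x 32 rows together emulate one 256-row weight column.
weights = [[random.randrange(16) for _ in range(ROWS_PER_PE)]
           for _ in range(NUM_PES)]
activations = [[random.randrange(16) for _ in range(ROWS_PER_PE)]
               for _ in range(NUM_PES)]

partial_sum = 0
for pe in range(NUM_PES):
    # MAC adder tree: dot product over this PE's 32 rows.
    mac = sum(w * a for w, a in zip(weights[pe], activations[pe]))
    # Local accumulator output is shifted systolically to the next PE.
    partial_sum = partial_sum + mac

# The global ("fat") accumulator receives the final result of the chain.
expected = sum(w * a
               for w_row, a_row in zip(weights, activations)
               for w, a in zip(w_row, a_row))
print(partial_sum == expected)  # True: matches the 256-row dot product
```

Because only the running partial sum travels down the chain, no intermediate result ever needs a round trip through the output TCM.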
These other components may include, for example, a global memory, an output TCM, and/or a PE circuit.[0091] FIG. 8A includes a block diagram of an example global accumulator circuit 800 (also referred to as a “fat accumulator module”) and illustrates connections with a global memory 604 (labeled “system memory”) of FIG. 6, an output TCM 610 of FIG. 6, digital post-processing logic 801, and a PE circuit 702 of FIG. 7. The global accumulator circuit 800 includes the large accumulator 711, a flip-flop array 802 (labeled “flop array”), a write register 804, and a multiplexer 806. The write register 804 may be sized for 24
bits, for example. The global accumulator circuit 800 may also include a read register 808, an output TCM write bus 812, and an output TCM read bus 810. The read register 808 may be sized similar to the write register 804 (e.g., 24 bits).[0092] The output TCM read bus 810 may be coupled between the write register 804 and the output TCM 610, for example, for reading stored data (e.g., partial sums) from the output TCM and loading this read data into the write register. The output TCM read bus 810 may also be coupled between the output TCM 610 and the global memory 604, for example, for reading stored data (e.g., final results) from the output TCM and writing this read data into the global memory 604. The output TCM write bus 812 may be coupled between the read register 808 and the output TCM 610, for example, for loading data (e.g., partial sums) from the read register into the output TCM. The digital post-processing logic 801 (labeled “DPP”) may be coupled between the read register 808 and the output TCM write bus 812, for example, for processing data (e.g., a final accumulation result) from the read register 808 before this data is written to the output TCM 610 via the output TCM write bus 812.[0093] The multiplexer 806 has a first data input coupled to an output of the write register 804 and a second data input coupled to an output of the flip-flop array 802. The output of the multiplexer 806 is coupled to a first input of the large accumulator 711. A control input of the multiplexer 806 may receive a control signal (labeled “Reload/Accumulate”) configured to select whether the multiplexer selects to output the reloaded data from the write register 804 or the previous value of the large accumulator 711 from the flip-flop array 802. 
An output of the PE circuit 702 is coupled to a second input of the large accumulator 711, and an output of the large accumulator is coupled to an input of the flip-flop array 802, which may have a bit-size similar to the write register 804 (and the read register 808). The output of the flip-flop array may be coupled to an input of the read register 808.[0094] Operating as the partial sum reload circuitry for the systolic flow architecture 700, the write register 804 may be loaded during any activation-input-bit (Act-Bit) cycle. The read register 808 operates as the partial sum store circuitry and may write its value to the output TCM 610 via the output TCM write bus 812 at the end of the current MAC cycle (e.g., after the first cycle following the last Act-Bit cycle). The write register 804
and the read register 808 may be used to maximize (or at least increase) the utilization of the output TCM write and read busses without having to wait for Act-Bit cycles.[0095] During operation, a previously stored partial sum value may be read from the output TCM 610 and loaded into the write register 804. The multiplexer 806 may select either (A1) the reloaded data from the write register 804 or (A2) the previous value of the large accumulator 711 from the flip-flop array 802, according to the selection control signal. The large accumulator 711 may accumulate the selection (A1 or A2) with (B) the accumulation result from the previous PE circuit 702 (e.g., the contents of the shift register in the local accumulator 706). The accumulation result from the last Act-Bit cycle may be loaded into the read register 808. The value in the read register 808 may be transferred to the output TCM 610 in any one of the Act-Bit cycles within a MAC cycle (e.g., the first one of the next 8 Act-Bit cycles), whenever the output TCM write bus 812 is available.[0096] Since the added delay of the 2:1 multiplexer 806 is quite small (e.g., one logic gate delay) and not in a speed-critical path for the systolic flow architecture 700, there should be no penalty on the operating frequency of the architecture. Furthermore, this solution has a limited energy penalty of one flop cycle out of the Act-Bit cycles within a MAC cycle (e.g., out of 8 Act-Bit cycles).[0097] When the global accumulator circuit 800 with the partial sum store and reload circuitry (the write register 804, the read register 808, and the multiplexer 806) is coupled to an output of the cascaded series 701 of PE circuits 702, the PE circuits may not include partial sum store and reload circuitry and may not have connections to the output TCM read bus 810 or the output TCM write bus 812. 
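The select-then-accumulate step described above can be sketched in a few lines of Python (a toy model of the behavior, not the disclosed hardware; the function name and arguments are illustrative):

```python
def global_accumulate(reloaded, prev_value, pe_result, select_reload):
    """One step of the global accumulator: a 2:1 multiplexer picks either
    (A1) the partial sum reloaded into the write register or (A2) the
    accumulator's previous value from the flip-flop array, and the large
    accumulator adds (B) the result shifted in from the last PE circuit."""
    a = reloaded if select_reload else prev_value
    return a + pe_result

# First Act-Bit cycle of a new tile: reload the stored partial sum (A1).
acc = global_accumulate(reloaded=100, prev_value=0, pe_result=7, select_reload=True)
# Subsequent cycles: keep accumulating onto the previous value (A2).
acc = global_accumulate(reloaded=0, prev_value=acc, pe_result=5, select_reload=False)
print(acc)  # 112
```

The conditional models the multiplexer; in hardware the selection is driven by the Reload/Accumulate control signal rather than a function argument.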
For example, the PE circuits 702 may not include a write register, a read register, or a multiplexer, or at least these circuits need not be coupled to the output TCM write and read busses. This configuration limits the area overhead of partial sum store and reload circuitry to the overall area of a PE array (e.g., an array of the PEs 602 in FIG. 6 or the cascaded series 701 of PE circuits 702 in FIG. 7).[0098] FIG. 8B includes a block diagram of an example PE circuit 820 with partial accumulation store and reload circuitry, in accordance with certain aspects of the present disclosure. The PE circuit 820 may be used to implement the PE circuits 702 in the cascaded series 701 and/or the global accumulator circuit 710. In this manner, a single PE circuit 820 could be replicated and used to implement all the blocks in the systolic
flow architecture 700, if desired. In such a case, the partial sum store and reload circuitry may be disabled for PE circuits 820 that are implementing the cascaded series 701 of PE circuits 702, but may be enabled for the PE circuit 820 implementing the global accumulator circuit 710. Unlike the global accumulator circuit 800 in FIG. 8A, the output TCM write bus 812 and the output TCM read bus 810 are external to the PE circuit 820 in FIG. 8B. The PE circuit 820 adds a CIM circuit (e.g., the DCIM circuit 400) to the other components (e.g., the non-bus components) of the global accumulator circuit 800 in FIG. 8A. For example, the PE circuit 820 adds a MAC adder tree 822 (e.g., a DCIM adder tree or other adder circuit) and an accumulator-and-shifter circuit 824 (e.g., the activation-shift accumulator circuit 416). The MAC adder tree 822 may be implemented by the MAC adder tree 704, and the accumulator-and-shifter circuit 824 may be implemented by the local accumulator 706 of FIG. 7.[0099] FIG. 8C includes a block diagram of an example global accumulator circuit 830 with partial accumulation store and reload circuitry and an additional multiplexer 828, in accordance with certain aspects of the present disclosure. Furthermore, the global accumulator circuit 830 may include an accumulator 826 having a first input coupled to an output of the multiplexer 806 and having an output coupled to a first input of the additional multiplexer 828. The output of a PE circuit 702 may be coupled to a second input of the additional multiplexer 828. For certain aspects, the global accumulator circuit 830 includes an optional MAC adder tree 822 and an optional accumulator-and-shifter circuit 824, as described above with respect to FIG. 8B. 
In this case, the global accumulator circuit 830 may function as both a PE circuit and a global accumulator circuit and, thus, may replace both the last PE circuit (e.g., PE circuit 702₈) in the cascaded series 701 and the global accumulator circuit 710 in a systolic flow architecture. The additional multiplexer 828 has a control input receiving a selection signal (labeled “Shift/Accumulate”) configured to select between the accumulated data from the accumulator 826 and the output from the previous PE circuit (e.g., PE circuit 702₇) in the cascaded series.[0100] FIG. 9A is a timing diagram 900 illustrating an example cycle-by-cycle systolic operation for the systolic flow architecture 700 of FIG. 7, in accordance with certain aspects of the present disclosure. In this example, the cascaded series 701 has eight PE circuits 702 (labeled “PE1” to “PE8”), and each depth cycle (e.g., each MAC
cycle) includes eight Act-Bit cycles to complete the final accumulation. Each PE circuit 702₁ to 702₈ includes a flop array 902₁ to 902₈ (collectively referred to as “flop arrays 902”), respectively, which may represent a plurality of flip-flops implementing a shift register (e.g., similar to the FF array 422 in FIG. 4). As described above, the flop arrays 902 in each PE circuit 702 copy the bits (representing the partial accumulation results) from the local accumulator 706 and transfer the copied bits to the next PE circuit 702 in the series (and more specifically to the flop array in the next PE circuit), instead of to the output TCM (as done in other DCIM solutions where the partial sums were transferred in parallel from the DCIM PEs). Thus, the flop arrays 902 may be referred to as “copy registers.” The flop arrays 902 may run independently from the local accumulators 706 and may transfer their contents at each Act-Bit cycle. Also as described above, the MAC adder tree 704 and the local accumulator 706 may run in parallel with the shifting operation of the flop arrays 902.[0101] Starting from the left at the end of the last bit-serial cycle of the first depth cycle (labeled “Depth Cycle-1” and “Act-Bit8 Cycle”), the final accumulation result may be generated by the global accumulator circuit 710 and, for certain aspects, stored in the read register 808 as described above. At some time during the next depth cycle (labeled “Depth Cycle-2”), the global accumulator circuit 710 may write the final accumulation result to the output TCM 610 (e.g., via the output TCM write bus 812). 
At the first bit-serial cycle of the next depth cycle (labeled “Depth 2, Act-Bit1 Cycle”), the MAC operations may be performed in the MAC adder tree 704 of each PE circuit, and concurrently with the MAC operations, the contents of flop array 902₁ may be shifted to PE circuit 702₂, the contents of flop array 902₂ may be shifted to PE circuit 702₃, and so on, where the contents of flop array 902₈ are shifted to the global accumulator circuit 710. Similar operations are performed at each bit-serial cycle in Depth Cycle-2, until the final accumulation result for Depth Cycle-2 is generated by the global accumulator circuit 710 at the last bit-serial cycle (labeled “Depth 2, Act-Bit8 Cycle”). The systolic operation repeats starting with the first bit-serial cycle of Depth Cycle-3, and so on, until all depth cycles have been completed.[0102] In the example of FIG. 9A, the number of PE circuits 702 matched the number of activation-input-bit cycles (e.g., eight PE circuits). In some cases, it may be possible to use a cascaded series with a greater number of PE circuits than the number of
activation-input-bit cycles. This may occur, for example, when a neural network workload calls for a number of PE circuits, but this number does not fit a standard systolic flow configuration, or when the compiler fits the neural network design to a systolic flow configuration that comprises a greater number of cascaded PE circuits than needed. For example, if a workload called for ten PE circuits, but the systolic mapping was for eight PE circuits, then one solution would be to use one set of PE circuits (e.g., five PE circuits) in a first MAC cycle and another set of PE circuits (e.g., five PE circuits, which may be the same five PE circuits) in a second MAC cycle. However, this solution takes two MAC cycles, thereby negatively impacting the throughput (e.g., half of the TOPS for a single MAC cycle). Instead, the MAC cycle length could be increased by using a dummy cycle for each extra PE circuit. In dummy cycles, all activation inputs are 0, but the contents of the flop arrays 902 may still be transferred to the global accumulator circuit and to the next PE circuits in the series during each dummy cycle. With all activation inputs equal to 0, no new MAC computations are performed, and no energy is consumed by at least the MAC circuits in the systolic flow architecture. Continuing the example above, two dummy cycles may be used for the extra two PE circuits, such that a single MAC cycle comprising eight activation-input-bit cycles and two dummy cycles could be used. Therefore, the impact to the throughput is only a 20% penalty (e.g., TOPS for a single MAC cycle * 8/10), rather than the 50% penalty in the two-MAC-cycle solution.[0103] For example, FIG. 9B is a timing diagram 950 illustrating cycle-by-cycle systolic operation with dummy cycles for an example systolic flow architecture having ten PE circuits 702 and eight activation-input-bit cycles, in accordance with certain aspects of the present disclosure. 
Thus, the systolic operation includes two dummy cycles (labeled “Dummy Cycle-1” and “Dummy Cycle-2”) after Act-Bit1 through Act-Bit8 Cycles in each depth cycle. In Dummy Cycle-1 and Dummy Cycle-2, all activation inputs are 0, but the contents of the flop arrays 902 may still be transferred to the global accumulator circuit 710 and to the next PE circuits in the series during each dummy cycle.[0104] Although shown at the end as consecutive cycles in the timing diagram 950 of FIG. 9B, the dummy cycles may occur at the beginning, the middle, and/or at the end of a depth cycle. Furthermore, in the case of multiple dummy cycles, at least some of the dummy cycles may be consecutive activation-input-bit cycles or may be separated in time (e.g., non-consecutive activation-input-bit cycles).
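The throughput comparison above can be made concrete with a small calculation (an illustrative sketch; the function name is hypothetical, only the figures come from the text):

```python
def relative_throughput(num_pes, act_bit_cycles=8):
    """Fraction of single-MAC-cycle throughput retained when extra PE
    circuits beyond the activation-input-bit count are absorbed by
    extending the MAC cycle with dummy cycles."""
    dummy_cycles = max(0, num_pes - act_bit_cycles)
    return act_bit_cycles / (act_bit_cycles + dummy_cycles)

# Ten PE circuits mapped onto eight Act-Bit cycles: two dummy cycles,
# so throughput is 8/10 of the single-MAC-cycle figure (a 20% penalty),
# versus 1/2 (a 50% penalty) for the two-MAC-cycle alternative.
print(relative_throughput(10))  # 0.8
```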
[0105] FIG. 10 is a block diagram of an example extended systolic flow architecture 1000 with more than one row (e.g., more than one cascaded series of PE circuits and a global accumulator circuit), in accordance with certain aspects of the present disclosure. In this manner, the systolic flow architecture may be extended to any number of rows (also referred to as “channels”), allowing for any number of cascaded series per accumulation (in addition to the flexibility in the number of PE circuits in each cascaded series).[0106] For example, the extended systolic flow architecture 1000 may include eight rows with a cascaded series 1001₁ to 1001₈ (collectively referred to as “cascaded series 1001”) of eight PE circuits 702₁ to 702₈ (labeled “PE1” to “PE8” and as described with respect to FIG. 7) in each row. To extend this example, if each PE circuit 702 includes 32 inputs, then the extended systolic flow architecture 1000 effectively operates as a CIM circuit with 2048 inputs (= 32 inputs × 8 PE circuits × 8 rows) per accumulation. It is to be understood that the extended systolic flow architecture 1000 may include more or fewer than eight rows and that each cascaded series 1001 may include more or fewer than eight PE circuits 702. Each row may also include a global accumulator circuit 1010₁ to 1010₈ (collectively referred to as “global accumulator circuits 1010”) coupled to a last PE circuit in a respective cascaded series 1001₁ to 1001₈. The global accumulator circuits 1010 may each include a large accumulator 711 and a copy-flop 1012 coupled to an output of the large accumulator. The copy-flop 1012 may represent or be implemented as a shift register and may be used to transfer the accumulated data from one row to the next subsequent row (and more specifically, to the global accumulator circuit 1010 in the next subsequent row).[0107] The extended systolic flow architecture 1000 may also have a super global accumulator circuit 1020. 
The super global accumulator circuit 1020 may have an input coupled to the global accumulator circuit 1010₈ in the last row and an output coupled to the output TCM 610 of FIG. 6 (e.g., via an output TCM write bus that may be internal to the super global accumulator circuit). The super global accumulator circuit 1020 may have any suitable bit-size (e.g., 48 bits when there are eight rows, each with a large accumulator 711 having a bit-size of 32 bits) to generate and handle a final global accumulation result for the extended systolic flow architecture 1000.
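The nested accumulation of the extended architecture can be sketched as follows (a toy Python model under the eight-row, eight-PE, 32-input example; names are illustrative, not the disclosed circuitry):

```python
ROWS = 8             # rows (channels) in the extended architecture
PES_PER_ROW = 8      # PE circuits per cascaded series
INPUTS_PER_PE = 32   # inputs per PE circuit

# Effective accumulation width of the extended architecture:
total_inputs = INPUTS_PER_PE * PES_PER_ROW * ROWS   # 32 * 8 * 8 = 2048

def extended_accumulate(row_results):
    """Outer loop of the nested accumulation: each row's copy-flop hands
    its final (row-level) accumulation result down the rows until the
    super global accumulator holds the final global accumulation result."""
    super_global = 0
    for row_result in row_results:   # one inner-loop result per row
        super_global += row_result
    return super_global

print(total_inputs)                      # 2048
print(extended_accumulate([10] * ROWS))  # 80
```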
[0108] The extended systolic flow architecture 1000 may operate as two nested accumulations, where the inner loop generates a final accumulation result at the output of each global accumulator circuit 1010 (similar to the systolic flow architecture 700) and where the outer loop generates the final global accumulation result at the output of the super global accumulator circuit 1020. As with the example of FIG. 9A, the final accumulation result in each row may be ready after eight activation-input-bit cycles (with the eight PE circuits 702 in each cascaded series 1001). However, rather than transferring the final accumulation result — which is still a partial sum for the workload — from each row to the output TCM in the next MAC cycle, the copy-flop 1012 in each row may transfer the final accumulation result to the global accumulator circuit 1010 in the next subsequent row at any time during the next MAC cycle. In fact, with the extended systolic flow architecture 1000, there may be no need for partial sum reads and writes when the number of rows is sufficiently increased for a given workload. At the end of N MAC cycles, where N is the number of rows (here, N = 8), the final global accumulation result may be generated in the super global accumulator circuit 1020 and may be transferred to the output TCM (e.g., via the digital post-processing logic) at any time during the next N MAC cycles.Example Operations[0109] FIG. 11 is a flow diagram illustrating example operations 1100 for neural network processing, in accordance with certain aspects of the present disclosure. The operations 1100 may be performed, for example, by a processing element (PE) circuit, such as the global accumulator circuit 800 or 830 of FIGs. 8A and 8C or the PE circuit 820 of FIG. 8B.[0110] The operations 1100 may begin at block 1105 with a first input of a multiplexer (e.g., the multiplexer 806) receiving first data from a write register (e.g., the write register 804). 
At block 1110, a second input of the multiplexer receives second data from a flip-flop array (e.g., the flip-flop array 802). At block 1115, an accumulator circuit (e.g., the large accumulator 711) receives third data from a processing element (PE) circuit (e.g., a PE circuit 702, and more particularly in some cases, a last PE circuit in a cascaded series, such as the PE circuit 702₈). The multiplexer selects data, between the first data and the second data, to output to the accumulator circuit at block 1120. At block 1125, the accumulator circuit accumulates the selected output data from the multiplexer
and the third data received from the PE circuit to generate accumulated data (e.g., a partial sum or a final accumulation result).[0111] According to certain aspects, the operations 1100 further include outputting the accumulated data to the flip-flop array; shifting, with the flip-flop array, the accumulated data to a read register (e.g., the read register 808); and writing the accumulated data from the read register to a tightly coupled memory (TCM) (e.g., the output TCM 610) via a write bus (e.g., the output TCM write bus 812). In this case, for example, the accumulated data may be a partial accumulation result.[0112] According to certain aspects, the operations 1100 further involve outputting the accumulated data to the flip-flop array; shifting, with the flip-flop array, the accumulated data to a read register; processing the accumulated data from the read register with digital post-processing logic (e.g., the digital post-processing logic 801); and writing the processed, accumulated data to a TCM via a write bus coupled between the digital post-processing logic and the TCM. In this case, for example, the accumulated data may be a final accumulation result.[0113] FIG. 12 is a flow diagram illustrating example operations 1200 for neural network processing, in accordance with certain aspects of the present disclosure. The operations 1200 may be performed by a neural network circuit with a (reconfigurable) systolic flow architecture (e.g., the systolic flow architecture 700 of FIG. 7 or the extended systolic flow architecture 1000 of FIG. 10).[0114] The operations 1200 may begin at block 1205 with each processing element (PE) circuit (e.g., each PE circuit 702) in a set of cascaded PE circuits (e.g., the cascaded series 701 or 1001) performing a multiply-and-accumulate (MAC) operation. 
An output of a first PE circuit (e.g., the PE circuit 702₁) in the set of cascaded PE circuits is coupled to an input of a second PE circuit (e.g., the PE circuit 702₂) in the set of cascaded PE circuits. Each PE circuit in the set of cascaded PE circuits may include a MAC circuit (e.g., the MAC adder tree 704), a local accumulator circuit (e.g., the local accumulator 706 or the serial accumulator 420) having an input coupled to an output of the MAC circuit, and a set of flip-flops (e.g., the flop array 902 or the FF array 422) having an input coupled to an output of the local accumulator circuit.
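As a toy model (illustrative only, not the disclosed circuitry), the cascaded shift-and-accumulate behavior of these operations can be sketched as:

```python
def systolic_accumulate(pe_partial_sums):
    """Toy model of blocks 1205-1215: each cycle, every flop array hands
    its value to the next stage while the global accumulator adds whatever
    the last PE circuit shifts out. After as many cycles as there are PE
    circuits, all partial sums have reached the global accumulator."""
    chain = list(pe_partial_sums)     # one value per cascaded PE circuit
    accumulated = 0
    for _ in range(len(chain)):
        accumulated += chain[-1]      # last PE shifts into the global accumulator
        chain = [0] + chain[:-1]      # every flop array shifts one stage down
    return accumulated

print(systolic_accumulate([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
```

In the actual architecture the shifted value feeds the next PE circuit's MAC adder tree so the partial sums merge along the chain; the simplified model above yields the same total after one pass.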
[0115] At block 1210, the set of flip-flops in each PE circuit may perform a shifting operation to shift a value (e.g., a partial sum) from the PE circuit to a next PE circuit in the set of cascaded PE circuits or to a global accumulator circuit (e.g., the global accumulator circuit 710). In each PE circuit, the shifting operation may be performed concurrently with the performance of the MAC operation in block 1205.[0116] At block 1215, the global accumulator circuit may accumulate the shifted values from a last PE circuit (e.g., the PE circuit 702₈) in the set of cascaded PE circuits to generate accumulated data (e.g., the final accumulation result or a partial accumulation result).[0117] According to certain aspects, the operations 1200 further involve loading weights in parallel into the set of cascaded PE circuits before performing the MAC operation in each PE circuit with the weights.[0118] According to certain aspects, the accumulating at block 1215 includes writing, with the global accumulator circuit, partial sums to a memory (e.g., the output TCM 610). For certain aspects, the accumulating at block 1215 also includes reading, with the global accumulator circuit, the partial sums from the memory. 
The set of cascaded PE circuits may not write the partial sums to, or read the partial sums from, the memory.[0119] According to certain aspects, the accumulating involves receiving, at a first input of a multiplexer (e.g., the multiplexer 806) in the global accumulator circuit, first data from a write register (e.g., the write register 804) in the global accumulator circuit; receiving, at a second input of the multiplexer, second data from a flip-flop array (e.g., the flip-flop array 802) in the global accumulator circuit; receiving, at another accumulator circuit (e.g., the large accumulator 711) in the global accumulator circuit, third data from a last PE circuit (e.g., the PE circuit 702₈) in the set of cascaded PE circuits; selecting, with the multiplexer, data to output to the other accumulator circuit between the first data and the second data; and accumulating, with the other accumulator circuit, the selected output data from the multiplexer and the third data to generate the accumulated data.Example Device with Systolic Flow Architecture and/or Partial Sum Management[0120] FIG. 13 illustrates an example electronic device 1300. The electronic device 1300 may be configured to perform the methods described herein, including the operations 1100 and/or 1200 described with respect to FIGs. 11 and 12.
[0121] The electronic device 1300 includes a central processing unit (CPU) 1302, which in some aspects may be a multi-core CPU. Instructions executed at the CPU 1302 may be loaded, for example, from a program memory associated with the CPU 1302 or may be loaded from a memory 1324.[0122] The electronic device 1300 also includes additional processing blocks tailored to specific functions, such as a graphics processing unit (GPU) 1304, a digital signal processor (DSP) 1306, a neural network circuit 1307 with a set of cascaded PEs 1309 to implement a (reconfigurable) systolic flow architecture, a multimedia processing block 1310, and a wireless connectivity processing block 1312. In one implementation, the neural network circuit 1307 is implemented in one or more of the CPU 1302, GPU 1304, and/or DSP 1306.[0123] In some aspects, the wireless connectivity processing block 1312 may include components, for example, for Third-Generation (3G) connectivity, Fourth-Generation (4G) connectivity (e.g., 4G LTE), Fifth-Generation connectivity (e.g., 5G or NR), Wi-Fi connectivity, Bluetooth connectivity, and/or wireless data transmission standards. The wireless connectivity processing block 1312 is further connected to one or more antennas 1314 to facilitate wireless communication.[0124] The electronic device 1300 may also include one or more sensor processors 1316 associated with any manner of sensor, one or more image signal processors (ISPs) 1318 associated with any manner of image sensor, and/or a navigation processor 1320, which may include satellite-based positioning system components (e.g., Global Positioning System (GPS) or Global Navigation Satellite System (GLONASS)), as well as inertial positioning system components.[0125] The electronic device 1300 may also include one or more input and/or output devices 1322, such as screens, touch-sensitive surfaces (including touch-sensitive displays), physical buttons, speakers, microphones, and the like. 
In some aspects, one or more of the processors of the electronic device 1300 may be based on an Advanced RISC Machines (ARM) instruction set, where RISC stands for “reduced instruction set computing.”[0126] The electronic device 1300 also includes memory 1324, which is representative of one or more static and/or dynamic memories, such as a dynamic random
access memory (DRAM), a flash-based static memory, and the like. In this example, memory 1324 includes computer-executable components, which may be executed by one or more of the aforementioned processors of the electronic device 1300, including the neural network circuit 1307. The depicted components, and others not depicted, may be configured to perform various aspects of the methods described herein.[0127] In some aspects, such as where the electronic device 1300 is a server device, various aspects may be omitted from the example depicted in FIG. 13, such as one or more of the multimedia processing block 1310, wireless connectivity processing block 1312, antenna(s) 1314, sensor processors 1316, ISPs 1318, or navigation processor 1320.Example Clauses[0128] In addition to the various aspects described above, specific combinations of aspects are within the scope of the disclosure, some of which are detailed in the clauses below:[0129] Clause 1: A processing element (PE) circuit for machine learning, the PE circuit comprising: a first accumulator circuit, a flip-flop array having an input coupled to an output of the first accumulator circuit, a write register, and a first multiplexer having a first input coupled to an output of the write register, having a second input coupled to an output of the flip-flop array, and having an output coupled to a first input of the first accumulator circuit.[0130] Clause 2: The PE circuit of Clause 1, further comprising a read register having an input coupled to the output of the flip-flop array. For certain aspects, the read register is configured to store data received from the flip-flop array.[0131] Clause 3: The PE circuit of Clause 2, further comprising a write bus coupled to an output of the read register. For certain aspects, the read register is configured to write the stored data to the write bus. 
In some cases, the write bus may be configured to transfer the data to a memory.[0132] Clause 4: The PE circuit of Clause 2 or 3, further comprising a read bus coupled to an input of the write register. For certain aspects, the read bus is configured to deliver data to the write register, and the write register may be configured to store the data.
[0133] Clause 5: A neural network circuit comprising a plurality of PE circuits, wherein at least one of the plurality of PE circuits comprises the PE circuit of Clause 4, the neural network circuit further comprising: a tightly coupled memory coupled to the write bus and to the read bus; and a global memory coupled to the read bus, wherein another one of the plurality of PE circuits has an output coupled to a second input of the first accumulator circuit. For certain aspects, the tightly coupled memory is configured to store first data from the read register delivered via the write bus and/or to write second data to the write register via the read bus. For certain aspects, the global memory is configured to store data received from the tightly coupled memory via the read bus. For certain aspects, the first accumulator circuit is configured to accumulate data received from the other one of the plurality of PE circuits and/or the first multiplexer.[0134] Clause 6: The neural network circuit of Clause 5, wherein the other one of the plurality of PE circuits does not include a write register.[0135] Clause 7: The PE circuit of any of Clauses 1-3, further comprising a read bus coupled to an input of the write register, wherein the read bus is configured to couple to at least one of a tightly coupled memory or a global memory, external to the PE circuit. 
For certain aspects, the read bus is configured to deliver data to the write register, and the write register may be configured to store the data.[0136] Clause 8: The PE circuit of any of Clauses 1-3 and 7, further comprising: an adder circuit; and an accumulator-and-shifter circuit having an input coupled to an output of the adder circuit and having an output coupled to a second input of the first accumulator circuit.[0137] Clause 9: The PE circuit of any of Clauses 1-3 and 7-8, further comprising: a second accumulator circuit; and a second multiplexer having a first input coupled to an output of the second accumulator circuit and having an output coupled to the first input of the first accumulator circuit.[0138] Clause 10: The PE circuit of any of Clauses 1-3 and 7-9, wherein the PE circuit is a digital compute-in-memory (DCIM) PE circuit and wherein the PE circuit further comprises: a DCIM array; a bit-column adder tree circuit coupled to the DCIM array; and a weight-shift adder tree circuit coupled to the bit-column adder tree circuit.
[0139] Clause 11: The PE circuit of Clause 10, wherein the DCIM array comprises a plurality of compute-in-memory cells and wherein at least one of the compute-in-memory cells comprises an eight-transistor (8T) static random-access memory (SRAM) cell.[0140] Clause 12: A neural network circuit comprising: a first set of cascaded processing element (PE) circuits, wherein an output of a first PE circuit in the first set is coupled to an input of a second PE circuit in the first set and wherein each PE circuit in the first set of cascaded PE circuits comprises: a multiply-and-accumulate (MAC) circuit, a local accumulator circuit having an input coupled to an output of the MAC circuit, and a set of flip-flops having an input coupled to an output of the local accumulator circuit; and a first global accumulator circuit having an input coupled to an output of the first set of cascaded PE circuits.[0141] Clause 13: The neural network circuit of Clause 12, wherein each PE circuit in the first set of cascaded PE circuits is configured to concurrently perform a MAC operation with the MAC circuit and a shift operation with the set of flip-flops to shift a value from the PE circuit to a next PE circuit in the first set of cascaded PE circuits or to the first global accumulator circuit.[0142] Clause 14: The neural network circuit of Clause 12 or 13, further comprising a memory, wherein: the first global accumulator circuit is configured to write partial sums to, and read the partial sums from, the memory; and the first set of cascaded PE circuits is not configured to write the partial sums to, or read the partial sums from, the memory.[0143] Clause 15: The neural network circuit of any of Clauses 12-14, wherein the first global accumulator circuit comprises: a first accumulator, a flip-flop array having an input coupled to an output of the first accumulator, a write register, and a first multiplexer having a first input coupled to an output of the write register, having a second 
input coupled to an output of the flip-flop array, and having an output coupled to a first input of the first accumulator.[0144] Clause 16: The neural network circuit of Clause 15, wherein the first global accumulator circuit further comprises a read register having an input coupled to the output of the flip-flop array.
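As an illustration only (not part of the clauses themselves), the global accumulator datapath of Clauses 15 and 16 can be sketched behaviorally in Python. All names here are hypothetical; the sketch only models the multiplexer selecting between the write register (restored partial sums) and the flip-flop array (the running total) as the accumulator's second operand:

```python
# Hypothetical behavioral sketch of the global accumulator of Clauses 15-16.
# Not an implementation of the patented circuit; names are illustrative.

class GlobalAccumulator:
    def __init__(self):
        self.flip_flop_array = 0   # holds the running accumulated value
        self.write_register = 0    # partial sums restored from memory
        self.read_register = 0     # staging register for write-back

    def accumulate(self, pe_output, select_restore=False):
        # First multiplexer input: write register; second: flip-flop array.
        operand = self.write_register if select_restore else self.flip_flop_array
        self.flip_flop_array = operand + pe_output  # the "first accumulator"
        return self.flip_flop_array

    def read_out(self):
        # Clause 16: the read register captures the flip-flop array output.
        self.read_register = self.flip_flop_array
        return self.read_register

acc = GlobalAccumulator()
for pe_out in [3, 5, 7]:      # outputs arriving from the cascaded PE chain
    acc.accumulate(pe_out)
print(acc.read_out())  # 15
```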
[0145] Clause 17: The neural network circuit of Clause 16, further comprising a tightly coupled memory, wherein the first global accumulator circuit further comprises: a write bus coupled between an output of the read register and the tightly coupled memory; and a read bus coupled between the tightly coupled memory and an input of the write register.[0146] Clause 18: The neural network circuit of Clause 17, further comprising a global memory coupled to the read bus of the first global accumulator circuit.[0147] Clause 19: The neural network circuit of any of Clauses 12-18, wherein the first set of cascaded PE circuits is configured such that weights are loaded in parallel into the first set of cascaded PE circuits.[0148] Clause 20: The neural network circuit of any of Clauses 12-19, wherein the first set of cascaded PE circuits comprises a number of cascaded PE circuits, such that the first global accumulator circuit is configured to receive a partial sum from the first PE circuit through all the PE circuits in the first set after a number of activation-input-bit cycles has occurred that matches the number of cascaded PE circuits.[0149] Clause 21: The neural network circuit of any of Clauses 12-19, wherein: the first global accumulator circuit is configured to receive a partial sum from the first PE circuit through all the PE circuits in the first set after a number of activation-input-bit cycles has occurred; and a number of cascaded PE circuits in the first set is greater than or equal to the number of activation-input-bit cycles.[0150] Clause 22: The neural network circuit of any of Clauses 12-21, wherein each PE circuit in the first set of cascaded PE circuits is a digital compute-in-memory (DCIM) PE circuit, wherein the MAC circuit in each PE circuit comprises a DCIM array, wherein the DCIM array comprises a plurality of compute-in-memory cells, and wherein at least one of the compute-in-memory cells comprises an eight-transistor (8T) static random-access
memory (SRAM) cell.[0151] Clause 23: The neural network circuit of any of Clauses 12-22, further comprising: a second set of cascaded PE circuits, wherein an output of a first PE circuit in the second set is coupled to an input of a second PE circuit in the second set and wherein each PE circuit in the second set of cascaded PE circuits comprises: a multiply-and-accumulate (MAC) circuit, a local accumulator circuit having an input coupled to an
output of the MAC circuit, and a set of flip-flops having an input coupled to an output of the local accumulator circuit; a second global accumulator circuit having an input coupled to an output of the second set of cascaded PE circuits; a first copy-flop having an input coupled to an output of the first global accumulator circuit; a second copy-flop having a first input coupled to an output of the second global accumulator circuit and having a second input coupled to an output of the first copy-flop; and a super global accumulator circuit having an input coupled to an output of the second copy-flop.[0152] Clause 24: A method of neural network processing, comprising: receiving, at a first input of a multiplexer, first data from a write register; receiving, at a second input of the multiplexer, second data from a flip-flop array; receiving, at an accumulator circuit, third data from a processing element (PE) circuit; selecting, with the multiplexer, data to output to the accumulator circuit between the first data and the second data; and accumulating, with the accumulator circuit, the selected output data from the multiplexer and the third data received from the PE circuit to generate accumulated data.[0153] Clause 25: The method of Clause 24, further comprising: outputting the accumulated data to the flip-flop array; shifting, with the flip-flop array, the accumulated data to a read register; and writing the accumulated data from the read register to a tightly coupled memory (TCM) via a write bus.[0154] Clause 26: The method of Clause 24, further comprising: outputting the accumulated data to the flip-flop array; shifting, with the flip-flop array, the accumulated data to a read register; processing the accumulated data from the read register with digital post-processing logic; and writing the processed, accumulated data to a tightly coupled memory (TCM) via a write bus coupled between the digital post-processing logic and the TCM.[0155] Clause 27: A method of 
neural network processing, comprising: performing a multiply-and-accumulate (MAC) operation in each processing element (PE) circuit in a set of cascaded PE circuits, wherein an output of a first PE circuit in the set of cascaded PE circuits is coupled to an input of a second PE circuit in the set of cascaded PE circuits and wherein each PE circuit in the set of cascaded PE circuits comprises: a MAC circuit, a local accumulator circuit having an input coupled to an output of the MAC circuit, and a set of flip-flops having an input coupled to an output of the local accumulator circuit; performing a shifting operation with the set of flip-flops in each PE circuit to shift a value
from the PE circuit to a next PE circuit in the set of cascaded PE circuits or to a global accumulator circuit, wherein in each PE circuit, the shifting operation is performed concurrently with the performance of the MAC operation; and accumulating, with the global accumulator circuit, the shifted values from a last PE circuit in the set of cascaded PE circuits to generate accumulated data.[0156] Clause 28: The method of Clause 27, further comprising loading weights in parallel into the set of cascaded PE circuits before performing the MAC operation in each PE circuit with the weights.[0157] Clause 29: The method of Clause 27 or 28, wherein the accumulating comprises: writing, with the global accumulator circuit, partial sums to a memory; and reading, with the global accumulator circuit, the partial sums from the memory, wherein the set of cascaded PE circuits does not write the partial sums to, or read the partial sums from, the memory.[0158] Clause 30: The method of any of Clauses 27-29, wherein the accumulating comprises: receiving, at a first input of a multiplexer in the global accumulator circuit, first data from a write register in the global accumulator circuit; receiving, at a second input of the multiplexer, second data from a flip-flop array in the global accumulator circuit; receiving, at another accumulator circuit in the global accumulator circuit, third data from a last PE circuit in the set of cascaded PE circuits; selecting, with the multiplexer, data to output to the other accumulator circuit between the first data and the second data; and accumulating, with the other accumulator circuit, the selected output data from the multiplexer and the third data to generate the accumulated data.

Additional Considerations

[0159] The preceding description is provided to enable any person skilled in the art to practice the various aspects described herein.
The examples discussed herein are not limiting of the scope, applicability, or aspects set forth in the claims. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. For example, changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be
performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method that is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim.[0160] As used herein, the word "exemplary" means "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.[0161] As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c).[0162] As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining, and the like. Also, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Also, "determining" may include resolving, selecting, choosing, establishing, and the like.[0163] The methods disclosed herein comprise one or more steps or actions for achieving the methods.
The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. Further, the various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are
operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering.[0164] The following claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims. Within a claim, reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. |
Methods and structures for efficiently implementing an accumulator-based load-store CPU architecture in a programmable logic device (PLD). The PLD includes programmable logic blocks, each logic block including function generators that can be optionally programmed to function as lookup tables or as RAM blocks. Each element of the CPU is implemented using these logic blocks, including an instruction register, an accumulator pointer, a register file, and an operation block. The register file is implemented using function generators configured as RAM blocks. This implementation eliminates the need for time-consuming accesses to an off-chip register file or to a dedicated RAM block. |
1. A circuit implementation of a central processing unit (CPU) in a programmable logic device (PLD) comprising an array of similar programmable logic blocks and programmable routing resources interconnecting the logic blocks, the circuit implementation comprising:at least a first one of the logic blocks configured to implement an instruction register;at least a second one of the logic blocks configured to implement an accumulator pointer;at least a third one of the logic blocks configured to implement an operation block;at least a fourth one of the logic blocks comprising one or more programmable function generators configured as RAM blocks, the at least a fourth logic block being configured to implement a register file within the one or more function generators;a first set of routing resources configured to couple the first logic block to the second, third, and fourth logic blocks; anda second set of routing resources configured to couple the fourth logic block to the second and third logic blocks.2. The circuit implementation of claim 1, wherein:the first set of routing resources provides signals from the instruction register to the accumulator pointer, from the instruction register to the operation block, and from the instruction register to the register file; andthe second set of routing resources provides signals from the accumulator pointer to the register file, from the operation block to the register file, and from the register file to the operation block.3. The circuit implementation of claim 1, wherein the PLD is a field programmable gate array (FPGA).4. The circuit implementation of claim 3, wherein the first, second, third, and fourth logic blocks are configurable logic blocks (CLBs).5. The circuit implementation of claim 1, wherein the first, second, third, and fourth logic blocks are all distinct from each other.6. 
The circuit implementation of claim 1, wherein the function generators are paired, each pair of function generators is configured as a 16*1 dual-port RAM block, each pair of function generators provides one bit of the register file, and the register file includes no more than 16 registers.7. The circuit implementation of claim 1, wherein the fourth logic block comprises eight paired function generators, each pair of function generators is configured as a 16*1 dual-port RAM block, and the fourth logic block implements a 16*4 register file.8. The circuit implementation of claim 1, wherein the at least a fourth logic block includes at least a fifth logic block coupled to the fourth logic block, the function generators are paired, each pair of function generators is configured as a 16*1 dual-port RAM block, and the register file includes more than 16 registers.9. A method of implementing a central processing unit (CPU) in a programmable logic device (PLD) comprising an array of similar programmable logic blocks and programmable routing resources interconnecting the logic blocks, the method comprising:configuring at least a first one of the logic blocks to implement an instruction register;configuring at least a second one of the logic blocks to implement an accumulator pointer;configuring at least a third one of the logic blocks to implement an operation block;configuring at least a fourth one of the logic blocks as a register file, the at least a fourth logic block comprising one or more programmable function generators, comprising configuring the one or more function generators as RAM blocks implementing the register file;configuring a first set of routing resources to couple the first logic block to the second, third, and fourth logic blocks; andconfiguring a second set of routing resources to couple the fourth logic block to the second and third logic blocks.10. 
The method of claim 9, wherein:configuring the first set of routing resources comprises configuring the first set of routing resources to provide signals from the instruction register to the accumulator pointer, from the instruction register to the operation block, and from the instruction register to the register file; andconfiguring the second set of routing resources comprises configuring the second set of routing resources to provide signals from the accumulator pointer to the register file, from the operation block to the register file, and from the register file to the operation block.11. The method of claim 9, wherein the PLD is a field programmable gate array (FPGA).12. The method of claim 11, wherein the first, second, third, and fourth logic blocks are configurable logic blocks (CLBs).13. The method of claim 9, wherein the first, second, third, and fourth logic blocks are all distinct from each other.14. The method of claim 9, wherein configuring the at least a fourth logic block as a register file comprises configuring pairs of the function generators as 16*1 dual-port RAM blocks, each pair of function generators provides one bit of the register file, and the register file includes no more than 16 registers.15. The method of claim 9, wherein the fourth logic block comprises eight paired function generators, configuring the at least a fourth logic block as a register file comprises configuring each pair of function generators in the fourth logic block as a 16*1 dual-port RAM block, and the fourth logic block implements a 16*4 register file.16. The method of claim 9, wherein configuring the at least a fourth logic block as a register file comprises configuring pairs of function generators in the at least fourth and fifth logic blocks as 16*1 dual-port RAM blocks, and the register file includes more than 16 registers.17. 
A central processing unit (CPU) implemented in a programmable logic device (PLD) comprising an array of similar programmable logic blocks, the CPU comprising:an accumulator pointer having an input terminal and an output terminal;an operation block having first, second, and third input terminals and an output terminal;an instruction register having a first output terminal coupled to the first input terminal of the operation block, a second output terminal coupled to the input terminal of the accumulator pointer, and a third output terminal; anda register file having a first input terminal coupled to the output terminal of the accumulator pointer, a second input terminal coupled to the output terminal of the operation block, a third input terminal coupled to the third output terminal of the instruction register, and first and second output terminals coupled to the second and third input terminals of the operation block,wherein the register file comprises one or more of the programmable logic blocks of the PLD each comprising function generators optionally configurable as RAM blocks, the function generators being configured as RAM blocks in which the register file data is stored during operation of the CPU,and wherein each of the accumulator pointer, the operation block, and the instruction register comprises one or more of the programmable logic blocks of the PLD.18. The CPU of claim 17, wherein the PLD is a field programmable gate array (FPGA).19. The CPU of claim 18, wherein the logic blocks are configurable logic blocks (CLBs).20. The CPU of claim 17, wherein the function generators are paired, each pair of function generators is configured as a 16*1 dual-port RAM block, each pair of function generators provides one bit of the register file, and the register file includes no more than 16 registers.21. 
The CPU of claim 17, wherein each logic block comprises eight paired function generators, each pair of function generators is configured as a 16*1 dual-port RAM block, and the register file is a 16*4 register file.22. The CPU of claim 17, wherein the register file comprises at least two logic blocks including paired function generators configured as 16*1 dual-port RAM blocks, and the register file includes more than 16 registers.23. A programmable logic device (PLD), comprising:a programmable routing structure; anda plurality of similar programmable logic blocks interconnected by the programmable routing resources, each logic block including a plurality of one-bit registers and a plurality of function generators configurable as lookup tables and as RAM blocks, wherein:at least a first one of the logic blocks is configured to implement an instruction register,at least a second one of the logic blocks is configured to implement an accumulator pointer,at least a third one of the logic blocks is configured to implement an operation block,at least a fourth one of the logic blocks is configured to implement a register file by configuring pairs of the function generators of the fourth logic block as dual-port RAM blocks wherein register file data is stored,a first set of routing resources is configured to couple the first logic block to the second, third, and fourth logic blocks, anda second set of routing resources is configured to couple the fourth logic block to the second and third logic blocks.24. The PLD of claim 23, wherein:the instruction register is implemented using the one-bit registers in the first logic block;the accumulator pointer is implemented using the one-bit registers in the second logic block; andthe operation block is implemented by configuring the function generators in the third logic block as lookup tables.25. The PLD of claim 23, wherein the PLD is a field programmable gate array (FPGA).26. 
The PLD of claim 25, wherein the first, second, third, and fourth logic blocks are configurable logic blocks (CLBs).27. The PLD of claim 23, wherein the first, second, third, and fourth logic blocks are all distinct from each other.28. The PLD of claim 24, wherein the fourth logic block is the same logic block as one of the first and second logic blocks. |
FIELD OF THE INVENTION

The invention relates to a central processing unit (CPU) for a computer system implemented in a programmable logic device (PLD). More particularly, the invention relates to an efficient PLD implementation of an accumulator-based load-store CPU architecture.

BACKGROUND OF THE INVENTION

A computer system typically contains a CPU, a main memory, and one or more input/output (I/O) devices. FIG. 1 is a simplified diagram of a computer system 100. The CPU 101 fetches instructions from the main memory 102, and then executes these instructions. Main memory 102 is a memory storage device that stores blocks of instructions and data copied from an external disk memory 111 that is accessed via the I/O devices 103. I/O devices 103 are used to access external devices such as disk memory 111, user input devices 112 (e.g., keyboards), and display devices 113 (e.g., monitors).Memory access times play an important role in determining the operating speed of a computer system. Accesses to disk memory are much slower than accesses to main memory, because the instructions and data must be provided through an I/O device. Therefore, the main memory is provided to reduce the frequency of accesses to disk memory. However, instructions that require accessing main memory are still significantly slower than instructions that can be carried out entirely within the CPU.FIG. 2 shows a first type of CPU having an "accumulator-based" CPU architecture. Accumulator-based CPU 200 includes an instruction register 201, an accumulator 202, and an operation block 203. Instruction register 201 is a register in which the currently-executing instruction is stored. Accumulator 202 is a special register that provides one of the values on which the current instruction operates, and for some instructions (e.g., when the instruction provides a numerical result) is also used to store the result of the instruction.
Operation block 203 is a control and execution circuit that can include, for example, an Arithmetic Logic Unit (ALU), a program counter register containing an address pointer to the main memory location in which the next instruction is stored, a parallel port providing access to the main memory, and so forth.Accumulator-based CPUs were among the earliest-developed CPUs. They are best used in architectures having a relatively small instruction size, e.g., 8-16 bits. To reduce the instruction size, only one source address is included in the instruction, and no destination address is included. Instead, the value in the accumulator is always used as one of the operands, and the destination address is always the accumulator. Thus, at most one memory address is included in the instruction, that of the second operand.Because only one operand is specified in each instruction, accumulator-based CPUs allow efficient instruction encoding and decoding, which decreases the cycle time of the CPU.As an example of accumulator-based operation, the following sequence of pseudo-code instructions performs the function "a=b+c+d" in an accumulator-based CPU. The letters "a", "b", "c", and "d" are addresses in main memory. The term "Acc" refers to the accumulator. Note that four memory accesses are required; three to fetch the operands, and one to store the result. Each of these memory accesses has an associated latency, which is added to the latency of the arithmetic (e.g., addition) operation.

(1)  load   b   // Acc <- b
(2)  add    c   // Acc <- Acc + c
(3)  add    d   // Acc <- Acc + d
(4)  store  a   // a <- Acc

In step (1), the value at memory location "b" is loaded into the accumulator. In step (2), the value at memory location "c" is added to the value in the accumulator. In step (3), the value at memory location "d" is added to the value in the accumulator. In step (4), the value in the accumulator is stored in memory location "a".FIG.
3 shows another CPU architecture called a "load-store" architecture. A load-store architecture does not include an accumulator; instead, a register file 304 is used. (Other portions of CPU 300 are similar to those of FIG. 2; therefore, they are not further described here.) Register file 304 includes several registers that can be used as source registers and destination registers for instructions executed by the operation block.For example, the following sequence of pseudo-code instructions performs the function "a=b+c+d" in a load-store CPU. In this CPU, the register file includes at least five registers, R1-R5.

(5)   load   R1,b       // R1 <- b
(6)   load   R2,c       // R2 <- c
(7)   load   R3,d       // R3 <- d
(8)   add    R4,R1,R2   // R4 <- R1 + R2
(9)   add    R5,R4,R3   // R5 <- R4 + R3
(10)  store  a,R5       // a <- R5

In step (5), the value at address "b" is stored in register R1. In step (6), the value at address "c" is stored in register R2. In step (7), the value at address "d" is stored in register R3. In step (8), the values stored in registers R1 and R2 are added, and the result is stored in register R4. In step (9), the values stored in registers R4 and R3 are added, and the result is stored in register R5. In step (10), the value stored in register R5 is stored in address "a" of the main memory.In comparing the two instruction sequences, it can be seen that the same number of memory accesses are required, i.e., three memory reads to load the values stored at locations "b", "c", and "d", and one memory write to store the result at location "a". However, in the load-store sequence (steps (5)-(10)), the memory accesses (i.e., the load and store commands) have been separated from the add instructions.
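For illustration only (this is not part of the original description), the two pseudo-code sequences above can be mimicked in Python, with a dictionary standing in for main memory; both compute a = b + c + d:

```python
# Illustrative sketch: executing the accumulator-based and load-store
# pseudo-code sequences against a dictionary standing in for main memory.
mem = {"b": 1, "c": 2, "d": 4}

# Accumulator-based sequence, steps (1)-(4)
acc = mem["b"]          # (1) load  b    // Acc <- b
acc += mem["c"]         # (2) add   c    // Acc <- Acc + c
acc += mem["d"]         # (3) add   d    // Acc <- Acc + d
mem["a"] = acc          # (4) store a    // a <- Acc

# Load-store sequence, steps (5)-(10), using registers R1-R5
R = {}
R[1] = mem["b"]         # (5)  load R1,b
R[2] = mem["c"]         # (6)  load R2,c
R[3] = mem["d"]         # (7)  load R3,d
R[4] = R[1] + R[2]      # (8)  add R4,R1,R2
R[5] = R[4] + R[3]      # (9)  add R5,R4,R3
result_load_store = R[5]  # (10) store a,R5

print(mem["a"], result_load_store)  # both equal b + c + d = 7
```

Note how the load-store version leaves the intermediate values in R1-R5, which is what lets a compiler reuse them without further memory traffic.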
This separation allows for simpler instructions (e.g., a simpler operation block) and a consequent faster CPU cycle time.Additionally, separating memory accesses from execution instructions such as the add instruction allows compilers to produce highly optimized code. For example, the values of "b", "c", "d", "b+c", and "b+c+d" remain in the register file, and can be reused by the program at a later time without fetching the values from memory or recalculating the addition results. Thus, the total number of memory accesses is typically reduced. Because memory accesses often make a significant contribution to the overall execution time of a program, a load-store CPU can execute some types of code significantly faster than an accumulator-based CPU. However, load-store architectures typically require a larger instruction size, in order to specify two operands and a destination address.Another type of CPU architecture combines the architectural features of the accumulator-based and load-store CPUs. FIG. 4 shows a first such architecture, a load-store CPU with a fixed accumulator. CPU 400 includes both an accumulator 402 and a register file 404. Values are loaded from main memory to the accumulator, stored into main memory from the accumulator, and moved back and forth between the accumulator and the register file. The accumulator also provides one operand and serves as the destination address for instructions. Thus, the register file essentially provides a "local memory" for the accumulator.Following is an exemplary sequence of instructions that execute the function "a=b+c+d" in the accumulator-based load-store architecture of FIG. 
4.

(11)  load   b    // Acc <- b
(12)  movea  R1   // R1 <- Acc
(13)  load   c    // Acc <- c
(14)  movea  R2   // R2 <- Acc
(15)  load   d    // Acc <- d
(16)  add    R2   // Acc <- Acc + R2
(17)  add    R1   // Acc <- Acc + R1
(18)  store  a    // a <- Acc

In step (11), the value at address "b" is stored in the accumulator. In step (12), the value in the accumulator is stored in register R1. In step (13), the value at address "c" is stored in the accumulator. In step (14), the value in the accumulator is stored in register R2. In step (15), the value at address "d" is stored in the accumulator. In step (16), the value in register R2 is added to the accumulator. In step (17), the value in register R1 is added to the accumulator. In step (18), the value in the accumulator is stored in address "a" of the main memory.The accumulator-based load-store CPU of FIG. 4 has the advantage that small instruction sizes can be used, because only one operand is required, as in the accumulator-based CPU of FIG. 2. However, any operation performed changes the value in the accumulator. This makes it difficult for a compiler to optimize the code.FIG. 5 shows another CPU architecture that more successfully combines the virtues of the accumulator-based and load-store architectures, a load-store CPU with a moveable accumulator. CPU 500 includes a register file 504 in which any one of the registers can act as an accumulator. An accumulator pointer 505 selects one of the registers in register file 504 and designates that register as the accumulator. The value of the accumulator pointer can be changed using a "set" instruction.
By setting the location of the accumulator prior to executing another instruction, operations can be performed in any register in the register file, and the results can be left in the register file for later use, minimizing accesses to main memory.For example, the following pseudo-code implements the function "a=b+c+d" in the accumulator-based load-store architecture of FIG. 5.

(19)  set    1    // Acc = R1
(20)  load   b    // R1 <- b
(21)  set    2    // Acc = R2
(22)  load   c    // R2 <- c
(23)  set    3    // Acc = R3
(24)  load   d    // R3 <- d
(25)  add    R2   // R3 <- R3 + R2
(26)  add    R1   // R3 <- R3 + R1
(27)  store  a    // a <- R3

In step (19), register R1 of the register file is selected to act as the accumulator. In step (20), the value at address "b" is stored in register R1. In step (21), register R2 of the register file is selected to act as the accumulator. In step (22), the value at address "c" is stored in register R2. In step (23), register R3 of the register file is selected to act as the accumulator. In step (24), the value at address "d" is stored in register R3. In step (25), the value in register R2 is added to the value stored in register R3. In step (26), the value in register R1 is added to the value stored in register R3. In step (27), the value in register R3 is stored in address "a" of the main memory.As described above, the accumulator-based load-store CPU architecture shown in FIG. 5 successfully combines the advantages of accumulator-based and load-store architectures. Only a single operand is included in each instruction, so the instruction size can be small. However, the moveable accumulator permits a compiler to retain the operands of previous instructions in the register file, which can significantly reduce the number of memory accesses.The use of programmable logic devices (PLDs) to implement CPUs is increasing rapidly.
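The moveable-accumulator scheme of FIG. 5 can be sketched in Python for illustration (this sketch is not from the patent; the helper names are invented). An accumulator pointer selects which register-file entry acts as the accumulator for subsequent instructions:

```python
# Illustrative sketch of a load-store CPU with a moveable accumulator:
# an accumulator pointer designates one register file entry as "Acc".
mem = {"b": 1, "c": 2, "d": 4}
reg = [0] * 16      # register file
acc_ptr = 0         # accumulator pointer

def set_acc(n):
    global acc_ptr
    acc_ptr = n                      # "set" instruction: move the accumulator

def load(addr):
    reg[acc_ptr] = mem[addr]         # load memory into the current accumulator

def add(src):
    reg[acc_ptr] += reg[src]         # Acc <- Acc + Rsrc

def store(addr):
    mem[addr] = reg[acc_ptr]         # store the current accumulator to memory

# Steps (19)-(27): a = b + c + d
set_acc(1); load("b")
set_acc(2); load("c")
set_acc(3); load("d")
add(2); add(1)
store("a")
print(mem["a"])  # 7
```

Note that after the sequence runs, "b", "c", and "d" remain in R1-R3 for reuse, which is the property that lets a compiler minimize main-memory accesses.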
PLDs are now available that include dedicated on-board CPUs, such as the Virtex(R)-II Pro family of field programmable gate arrays (FPGAs) from Xilinx, Inc. However, some PLD users prefer to implement "soft processors" in their PLDs, i.e., microprocessors built from the fabric of programmable logic blocks traditionally included in PLDs, and configured using a configuration bitstream. Because a "soft" PLD implementation generally uses more silicon area than a processor designed using dedicated transistors (a "hard" processor), these soft processors preferably have a small instruction size.

Therefore, it is desirable to provide a PLD implementation of an accumulator-based load-store CPU architecture that promotes the efficient use of PLD resources and the rapid execution of CPU instructions.

SUMMARY OF THE INVENTION

The invention provides methods and structures for efficiently implementing an accumulator-based load-store CPU architecture in a programmable logic device (PLD). The PLD includes programmable logic blocks, each logic block including function generators that can be optionally programmed to function as lookup tables or as RAM blocks. Each element of the CPU is implemented using these logic blocks, including an instruction register, an accumulator pointer, a register file, and an operation block. The register file is implemented using function generators configured as RAM blocks. This implementation eliminates the need for time-consuming accesses to an off-chip register file or to a dedicated RAM block.

In some embodiments, the PLD is an FPGA, and the logic blocks are CLBs (configurable logic blocks).

A first aspect of the invention provides a circuit implementation of a CPU in a PLD that includes a plurality of programmable logic blocks and programmable routing resources interconnecting the logic blocks.
The circuit implementation includes at least a first logic block configured to implement an instruction register, at least a second logic block configured to implement an accumulator pointer, at least a third logic block configured to implement an operation block, and at least a fourth logic block configured to implement a register file. The circuit implementation also includes routing resources that are configured to couple the first logic block to the second, third, and fourth logic blocks, and the fourth logic block to the second and third logic blocks. The logic block or blocks implementing the register file do so by configuring the function generators within the logic blocks as RAM blocks. Thus, for example, a register file can be implemented in the function generators of a single logic block.

In one embodiment, the routing resources provide signals from the instruction register to the accumulator pointer, from the instruction register to the operation block, from the instruction register to the register file, from the accumulator pointer to the register file, from the operation block to the register file, and from the register file to the operation block.

In some embodiments, the logic blocks used to implement the various elements of the CPU are all distinct from each other. In other embodiments, a single logic block is used to implement two different elements. For example, the function generators of a logic block can be used to implement at least a portion of the operation block, while the one-bit registers in the logic block are used to implement the instruction register or the accumulator pointer.

Another aspect of the invention provides a method of implementing a CPU in a PLD.
The method includes configuring at least a first logic block to implement an instruction register, configuring at least a second logic block to implement an accumulator pointer, configuring at least a third logic block to implement an operation block, and configuring at least a fourth logic block to implement a register file. The register file is implemented by configuring the function generators within the logic blocks as RAM blocks. The method also includes configuring routing resources to couple the first logic block to the second, third, and fourth logic blocks, and the fourth logic block to the second and third logic blocks.

According to another aspect of the invention, a CPU implemented in a PLD includes an accumulator pointer, an operation block, an instruction register, and a register file. The instruction register has a first output terminal coupled to a first input terminal of the operation block, and a second output terminal coupled to an input terminal of the accumulator pointer. The register file has a first input terminal coupled to an output terminal of the accumulator pointer, a second input terminal coupled to an output terminal of the operation block, a third input terminal coupled to a third output terminal of the instruction register, and first and second output terminals coupled to second and third input terminals of the operation block. The register file implementation includes one or more programmable logic blocks of the PLD, the logic blocks comprising function generators optionally configurable as RAM blocks, the function generators of the register file being configured as RAM blocks in which the register file data is stored during operation of the CPU.

Another aspect of the invention provides a PLD that includes programmable logic blocks and programmable routing resources interconnecting the logic blocks. Each logic block includes one-bit registers and function generators that are optionally configurable as lookup tables and as RAM blocks.
The PLD includes logic blocks configured to implement an instruction register, an accumulator pointer, an operation block, a register file, and routing resources interconnecting these elements. The register file is implemented by configuring pairs of function generators of the respective logic block as dual-port RAM blocks.

In one embodiment, the instruction register is implemented using the one-bit registers in a first logic block, the accumulator pointer is implemented using the one-bit registers in a second logic block, and the operation block is implemented by configuring the function generators in a third logic block as lookup tables. In some embodiments, these elements are implemented in distinct logic blocks. In other embodiments, elements implemented in function generators are combined with elements implemented in one-bit registers in a single logic block.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the following figures.

FIG. 1 is a block diagram of a typical computer system.
FIG. 2 is a block diagram of an accumulator-based CPU.
FIG. 3 is a block diagram of a load-store CPU.
FIG. 4 is a block diagram of a load-store CPU with a fixed accumulator.
FIG. 5 is a block diagram of a load-store CPU with a moveable accumulator.
FIG. 6 is a block diagram of an exemplary FPGA.
FIG. 7 is a block diagram of an exemplary configurable logic block (CLB) in an FPGA.
FIG. 8 shows a first implementation of a load-store CPU with a moveable accumulator in an FPGA that includes dedicated RAM blocks.
FIG. 9 shows a more efficient implementation of a load-store CPU with a moveable accumulator in an exemplary FPGA.
FIG. 10 shows a series of steps that can be used to implement the CPU of FIG. 9 in an FPGA having function generators implemented as lookup tables.

DETAILED DESCRIPTION OF THE DRAWINGS

The present invention is believed to be applicable to a variety of PLD and PLD implementation systems.
The present invention has been found to be particularly applicable and beneficial for FPGAs including arrays of programmable logic blocks known as CLBs. While the present invention is not so limited, an appreciation of the present invention is presented by way of specific examples directed to these FPGAs.

Programmable logic devices (PLDs) are a well-known type of digital integrated circuit that can be programmed to perform specified logic functions. One type of PLD, the field programmable gate array (FPGA), typically includes an array of configurable logic blocks (CLBs) that connect to off-chip components via programmable input/output blocks (IOBs). The CLBs and IOBs are interconnected by a programmable interconnect structure. Some FPGAs also include additional logic blocks with special purposes (e.g., DLLs, block RAM, and so forth).

The interconnect structure, CLBs, IOBs, and other logic blocks are typically programmed by loading a stream of configuration data into internal configuration memory cells that define how the interconnect structure and the various logic blocks are configured. The configuration data can be read from memory (e.g., an external PROM) or written into the FPGA by an external device. The collective states of the individual memory cells then determine the function of the FPGA.

A user's design is typically "implemented" in a PLD by implementation software provided by the PLD manufacturer. The implementation software accepts a design description in netlist format, assigns the logic elements of the design to the various available logic blocks, and designates the interconnect paths that will be used to couple the logic blocks together. The end result provided by the implementation software is a stream of configuration data targeted to a specific PLD. Thus, the number of device resources used and the speed of the resulting circuit are heavily dependent upon the implementation software.
For example, the choice of which logic blocks to use to implement the various sub-circuits in the design can be critical.

FIG. 6 is a block diagram of a Virtex(R)-II FPGA, one type of FPGA that includes several different types of logic blocks. In addition to the standard CLBs and IOBs, the Xilinx Virtex-II FPGA includes blocks of Random Access Memory (BRAM) and blocks implementing Global Clock Managers (GCM) and Digital Clock Managers (DCM). The interconnect structure is not shown in FIG. 6, for clarity. However, the Xilinx Virtex-II FPGA is described in detail in pages 33-75 of the "Virtex-II Platform FPGA Handbook", published December, 2000, available from Xilinx, Inc., 2100 Logic Drive, San Jose, Calif. 95124, which pages are incorporated herein by reference.

FIG. 7 is a simplified block diagram of a Virtex-II CLB. CLB 700 includes four "slices" SLICE-0-3, each slice including the logic shown in FIG. 7 for SLICE-0. (Other logic in the slice not relevant to the present application is omitted from FIG. 7, for clarity.) Each slice includes two function generators 701-702. Each function generator can be programmed to function as any of a 4-input lookup table, a 16-bit shift register, and 16 bits of random access memory (RAM) in any of several configurations. When the function generators are configured to function as RAM, a write strobe generator circuit 711 is active, and controls the write functions of the RAM.

Multiplexer MUX1 passes either the output of function generator 701 or an independent input signal Reg-DI-1 to 1-bit register 721. Register 721 can be configured as either a flip-flop or a latch. The outputs of function generator 701 and register 721 are both optionally provided as outputs of the slice (OUT1 and Q1, respectively).
Thus, the function generator and 1-bit register can be used independently of each other or can be coupled together so the register stores the function generator output signal. The elements in the other half of the slice, including function generator 702, multiplexer MUX2, and 1-bit register 722, are coupled together in a similar manner.

Thus, it can be seen that a Virtex-II CLB includes eight function generators that can optionally be configured as RAM blocks. Each function generator can be configured, for example, as a 16*1 single-port RAM. The two function generators of a single CLB slice can also be configured to work together as a 16*1 dual-port RAM, as described on pages 48-50 of the "Virtex-II Platform FPGA Handbook", referenced above. By combining all eight function generators in a CLB, the function generators can be used to implement a 16*4 RAM, i.e., a RAM that includes 16 words of 4 bits each.

As shown in FIG. 6, a Virtex-II FPGA also includes blocks of dedicated RAM (BRAM or Block RAM). Large memories are most efficiently implemented (in terms of resource usage and resulting operating speed) in the dedicated RAM blocks. However, in some applications that include only small memories it can be advantageous to implement memory circuits in the much smaller function generators of the CLBs. The accumulator-based load-store CPU architecture described above provides one such application.

FIG. 8 shows a relatively straightforward implementation of an accumulator-based load-store CPU architecture in a Virtex-II FPGA. The CPU includes an instruction register 801, an accumulator pointer 805, an operation block 803, and a register file 804. As described above, this CPU architecture is well suited to small instruction sizes. Therefore, an instruction size of eight bits is assumed for exemplary purposes. The instruction register, being 8 bits wide, is implemented in this example in the eight 1-bit registers of a single CLB.

The register file in this example includes 16 words.
Therefore, the accumulator pointer must be able to address one of 16 locations, and is consequently four bits wide. Even with supporting logic (if needed), the accumulator pointer can also be implemented in this example in a single CLB. However, the operation block includes enough registers and combinatorial logic to require several CLBs.

The exemplary register file includes 16 words of 8 bits. A Block RAM in the Virtex-II FPGA can easily implement a register file of this size. Thus, the register file in this example is implemented in a single Block RAM logic block.

There are drawbacks to this implementation, however. First, although only a portion of the Block RAM is needed for this application, the entire Block RAM has been allocated and is now unavailable for other purposes. Second, a large dedicated RAM block is designed to implement large memories. Therefore, it may be unnecessarily slow when used to implement smaller memories.

FIG. 9 shows another implementation of the accumulator-based load-store CPU architecture of FIG. 5. This implementation takes advantage of the properties of the Virtex-II function generator to implement the entire CPU using CLBs, without using a dedicated RAM block.

As in the implementation of FIG. 8, the instruction register 901, accumulator pointer 905, and operation block 903 are implemented using CLBs. However, in implementation 900 of FIG. 9, the register file 904 is implemented in CLBs as well.

As described above, the eight function generators of a CLB can be used to implement a 16*4 dual-port RAM, i.e., a dual-port RAM that includes 16 words of 4 bits each. Therefore, the exemplary 16*8 register file can be implemented using two CLBs. (In implementations with larger register files, more than two Virtex-II CLBs are required. Register files that are 16*4 or smaller can be implemented in a single CLB.)

The embodiment of FIG. 9 has several advantages.
One advantage is that the pictured implementation can be used in FPGAs that do not include dedicated Block RAM blocks. Another is that the implementation is smaller than that of FIG. 8, because only two CLBs are used instead of an entire Block RAM block. Another advantage is that the CLBs can be accessed using special fast routing resources called "direct connects", and other CLB routing that can be faster than the routing used to access the Block RAM.

FIG. 10 shows a series of steps to be followed when implementing an accumulator-based load-store CPU using a CLB implementation such as that shown in FIG. 9. The order of the steps shown in FIG. 10 is immaterial. The steps can be performed in any order, or simultaneously.

In step 1001, at least a first logic block (e.g., a CLB) is configured to implement an instruction register. In step 1002, at least a second logic block is configured to implement an accumulator pointer. In step 1003, at least a third logic block is configured to implement an operation block. In step 1004, at least a fourth logic block is configured as a register file by configuring one or more function generators in the logic block as RAM blocks that will be used to store the register file data. In step 1005, routing resources are used to make the interconnections as shown in FIG. 5.

In one embodiment, the first, second, third, and fourth logic blocks are all distinct from one another. However, in some instances the logic can be combined into the same CLB. For example, the operation block includes a large amount of combinatorial logic, while the instruction register is conveniently implemented in the one-bit registers of a CLB. Thus, the combinatorial logic of the operation block can be implemented using the function generators of a logic block configured as lookup tables, while the instruction register is implemented using the one-bit registers of the same logic block. Referring to FIG.
10, in this example the first and third logic blocks are the same logic block. Similarly, the second and third logic blocks can be the same logic block. Because the register file is implemented using the function generators of the fourth logic block, the fourth logic block can be the same, for example, as the first or second logic block.

The exemplary register file described herein is 16*8, i.e., it includes 16 registers of eight bits each. However, register files of other sizes can be used. For example, a 16*16 register file can be implemented in four Virtex-II CLBs, and a 16*4 register file can be implemented in a single Virtex-II CLB. Register files of other sizes can also be implemented using this technique, although the Block RAM implementation is more efficient for large register files.

The methods of the present invention can be performed in either hardware, software, or any combination thereof, as those terms are currently known in the art. In particular, the present methods can be carried out by software, firmware, or microcode operating on a computer or computers of any type. Additionally, software embodying the methods of the present invention can comprise computer instructions in any form (e.g., source code, object code, interpreted code, etc.) stored in any computer-readable medium (e.g., ROM, RAM, magnetic media, punched tape or card, compact disc (CD) in any form, DVD, etc.). Further, such software can also be in the form of a computer data signal embodied in a carrier wave, such as that found within the well-known Web pages transferred among computers connected to the Internet. Accordingly, the present invention is not limited to any particular platform, unless specifically stated otherwise in the present disclosure.

Those having skill in the relevant arts of the invention will now perceive various modifications and additions that can be made as a result of the disclosure herein.
For example, PLDs, FPGAs, logic blocks, CLBs, function generators, registers, accumulator pointers, instruction registers, register files, operation blocks, and other components other than those described herein can be used to implement the invention.

Moreover, some components are shown directly connected to one another while others are shown connected via intermediate components. In each instance the method of interconnection establishes some desired electrical communication between two or more circuit nodes. Such communication may often be accomplished using a number of circuit configurations, as will be understood by those of skill in the art.

Accordingly, all such modifications and additions are deemed to be within the scope of the invention, which is to be limited only by the appended claims and their equivalents.
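The CLB-counting arithmetic described above (eight function generators per Virtex-II CLB, each configurable as a 16*1 RAM, with pairs combined for dual-port operation to yield a 16*4 dual-port RAM per CLB) can be sketched numerically. This is a hedged illustration: the helper name and the assumption that depth scales cleanly in 16-word units (a deeper RAM would also need extra multiplexing logic) are this example's, not the patent's or Xilinx's:

```python
import math

FUNC_GENS_PER_CLB = 8                             # eight function generators per Virtex-II CLB
DUAL_PORT_BITS_PER_CLB = FUNC_GENS_PER_CLB // 2   # each dual-port bit uses a pair of
                                                  # function generators -> 16 x 4 per CLB

def clbs_for_register_file(words, width):
    """CLBs needed for a dual-port register file of `words` x `width` bits.

    Assumes each CLB provides a 16-word x 4-bit dual-port RAM and that depth
    simply tiles in 16-word units (an illustrative simplification).
    """
    depth_units = math.ceil(words / 16)
    width_units = math.ceil(width / DUAL_PORT_BITS_PER_CLB)
    return depth_units * width_units

print(clbs_for_register_file(16, 8))   # 2 CLBs: the 16*8 file of FIG. 9
print(clbs_for_register_file(16, 16))  # 4 CLBs
print(clbs_for_register_file(16, 4))   # 1 CLB
```

The three printed values match the CLB counts the description gives for 16*8, 16*16, and 16*4 register files.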
Described herein is mask design and modeling for a set of masks to be successively imaged to print a composite pattern on a substrate, such as a semiconductor wafer. Further described herein is a method of double patterning a substrate with the set of masks. Also described herein is a method of correcting a drawn pattern of one of the mask levels based on a predicted pattern contour of the other of the mask levels. Also described herein is a method of modeling a resist profile contour for a mask level in which photoresist is applied onto an inhomogeneous substrate, as well as a method of predicting a resist profile of a Boolean operation of two masks.
CLAIMS

What is claimed is:

1. A method for composing polygons of a first mask level and a second mask level into a target pattern of polygons to be formed on a substrate by sequentially printing the first mask level and the second mask level, the method comprising:

receiving a design layout defining the target pattern;

synthesizing, based on the target pattern, a sacrificial enabling pattern which, when added to the target pattern, increases a regularity of edges to approximate a diffraction grating;

generating, as the first mask level, a grating pattern to be printed on the substrate with an exposure wavelength, the grating pattern being inclusive of the target pattern and the enabling pattern; and

generating, as the second mask level, a plug pattern to be printed on the substrate with the exposure wavelength, the plug pattern to remove substantially all of the sacrificial enabling pattern from the substrate while retaining the target pattern on the substrate.

2. The method as in claim 1, wherein generating the plug pattern further comprises generating plug polygons to fully cover polygons of the sacrificial enabling pattern without covering the target pattern.

3. The method as in claim 1, wherein synthesizing the sacrificial enabling pattern further comprises generating a plurality of enabling polygons, all of which can be removed by the plug pattern without generating a plug polygon touching a longest edge of a target polygon having a shortest edge drawn to a minimum dimension design rule for the target pattern.

4. The method as in claim 1, wherein the target pattern is bi-directional to include a first subset of target polygons having a longest length along a first mask dimension and a second subset of target polygons having a longest length along a second mask dimension.

5.
The method as in claim 1, wherein synthesizing the sacrificial enabling pattern further comprises extending a first length of a first target polygon with a first enabling polygon, wherein the first enabling polygon has a second length that is no greater than a second length of the first target polygon.

6. The method as in claim 5, wherein the first enabling polygon joins the first target polygon to a second target polygon to eliminate an end-to-end space between the first and second target polygons drawn to a minimum dimension design rule for the target pattern.

7. A mask set comprising multiple masks which are to be sequentially printed, with a same lithography technique, onto a substrate to form a target pattern of polygons, the mask set comprising:

a first mask having a first level pattern approximating a diffraction grating pattern that is inclusive of the target pattern and a sacrificial enabling pattern, wherein the target pattern is bi-directional to include a first subset of target polygons having a longest length along a first mask dimension and a second subset of target polygons having a longest length along a second mask dimension; and

a second mask having a second level pattern to remove from the substrate substantially all of the sacrificial enabling pattern while retaining the target pattern.

8. The mask set as in claim 7, wherein the plug pattern removes the sacrificial enabling pattern from the grating pattern without printing a plug polygon touching a longest edge of a target polygon having a shortest edge drawn to a minimum dimension design rule for the target pattern.

9.
A device manufacturing method for forming a target pattern of polygons on a substrate by sequentially printing, with a same exposure wavelength, a first mask level and a second mask level, the method comprising:

providing a substrate having a first layer of photo-sensitive material disposed thereon;

printing, in the first layer of photo-sensitive material, a grating pattern inclusive of the target pattern and a synthesized sacrificial enabling pattern which, when printed along with the target pattern, increases a regularity of edges;

disposing a second layer of photo-sensitive material over the printed grating pattern; and

printing, in the second layer of photo-sensitive material, a plug pattern to remove substantially all of the sacrificial enabling pattern from the substrate while retaining the target pattern on the substrate.

10. The device manufacturing method as in claim 9, wherein printing the grating pattern further comprises forming trenches in the first layer of photo-sensitive material; wherein printing the plug pattern to remove substantially all of the sacrificial enabling pattern comprises filling, with the second layer of photo-sensitive material, a subset of the trenches corresponding to the sacrificial enabling pattern; and wherein retaining the target pattern on the substrate further comprises removing the second layer of photo-sensitive material from a subset of the trenches corresponding to the target pattern.

11.
A method for synthesizing polygons of a first mask level and a second mask level into a target pattern of polygons to be formed on a substrate by sequentially printing the first mask level and the second mask level, the method comprising:

synthesizing, based on the target pattern, a sacrificial enabling pattern;

performing a first optical proximity correction (OPC) process on the first mask level pattern to be printed, wherein the first mask level pattern is inclusive of the target pattern and the sacrificial enabling pattern;

determining a printing performance of the first mask level pattern; and

sizing polygons of a second mask level pattern with a second OPC process bounded by the determined printing performance of the first mask level pattern and the target pattern to remove the sacrificial enabling pattern while retaining the target pattern.

12. The method as in claim 11, wherein determining the printing performance of the first mask level pattern further comprises generating a model predicted first mask level pattern contour based on the first OPC process.

13. The method as in claim 12, wherein the second mask level pattern fully encloses the sacrificial enabling pattern and wherein the second OPC process further comprises sizing the second mask level pattern to cover portions of the model predicted first mask level pattern contour not enclosed by either the target pattern or the sacrificial enabling pattern.

14. The method as in claim 11, wherein performing the second OPC process further comprises:

segmenting polygons of the second mask level pattern;

determining, for each segment, a distance between the first mask level pattern contour and second mask level pattern contour;

determining, for each segment, a distance to the nearest target pattern edge; and

displacing the segment depending on the relationship of the determined distances to the target pattern and to the first pattern contour.

15.
A method of computing a resist profile of a patterned object formed by a photolithographic projection system in a photoresist layer disposed over a topographic feature, the method comprising:

calculating a base intensity for the photoresist layer as a function of at least a first homogenous planar substrate material disposed below the photoresist layer;

calculating an optical scattering correction as a function of a distance from an edge of the topographic feature;

calculating a diffusion correction as a function of the distance from the topographic feature edge;

scaling the base intensity with each of the optical scattering correction function and the diffusion correction function to generate a topography dependent blur intensity;

inputting the topography dependent blur intensity into an optical proximity correction algorithm; and

correcting, based on the topography dependent blur intensity, a dimension of a polygon projected by the lithography system to modify the patterned object.

16. The method as in claim 15, further comprising:

calculating the base intensity as a function of a second homogenous planar substrate material disposed below the photoresist layer; wherein the second substrate material is different than the first material and wherein each of the first and second substrate materials are disposed below different areas of the first photoresist layer.

17.
The method as in claim 16, wherein calculating the base intensity as a function of the first and second homogenous planar substrate materials further comprises:

determining a first blur intensity for the photoresist layer disposed over a first homogenous substrate material;

determining a second blur intensity for the photoresist layer disposed over a second homogenous substrate material; and

determining the base intensity as a function of both the first and second blur intensity, wherein determining the base intensity as a function of both the first and second blur intensity further comprises:

calculating a convolution of the Gaussian function with a rectangular function;

scaling the first blur intensity with the convolution;

scaling the second blur intensity with a complement of the convolution; and

summing the scaled first and second intensities.

18. The method as in claim 17, wherein scaling the base intensity with the diffusion correction function further comprises:

determining a first diffusion perturbation component as a function of the diffusion correction function, a slope of the base intensity in a first dimension, and a slope of the topographic edge in the first dimension; and

determining a second diffusion perturbation component as a function of the diffusion correction function, a slope of the base intensity in a second dimension, and a slope of the topographic edge in the second dimension.

19.
A method for predicting a Boolean operation of polygons modeled for a photolithographic mask set for double patterning, the method comprising:

receiving a first image intensity signal for a first mask level of the double patterning mask set;

receiving a second image intensity signal for a second mask level of the double patterning mask set;

modifying at least one of the first and second image intensity signals with a cross term from the other of the first and second image intensity signals;

generating, based on a Boolean operation of the second mask level and the first mask level, a function of the image intensity signals, as modified;

determining a composite contour from the function of the first and second image intensity signals; and

outputting the composite contour as a prediction of an image that results on a substrate from successively printing the first and second mask levels on the substrate.

20. The method as in claim 19, wherein the proximity influence term in the modified image intensity signal is an exponential function of the other image intensity signal, and wherein the proximity influence term increases in magnitude as the first and second image intensity signals become more proximate to each other within the mask area.

21. A computer readable storage media with instructions stored thereon, which when executed by a processor, cause the processor to perform any of the methods of claims 1-6 and 9-20.
MASK DESIGN AND OPC FOR DEVICE MANUFACTURE

TECHNICAL FIELD

This disclosure relates generally to the field of lithographic masks for the manufacture of microelectronic devices, and more particularly to the design of a set of masks to be successively imaged to print a target pattern in a single level of the device (e.g., double patterning).

BACKGROUND

For the past several decades, the scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips.

A photolithographic mask comprises geometric patterns of polygons corresponding to the circuit components to be integrated onto a wafer. The patterns used to create such masks are typically generated utilizing CAD (computer-aided design) programs via an EDA (electronic design automation) process. Most CAD programs follow a set of predetermined design rules in order to position the polygons to create functional masks. These rules are set by manufacturing process and circuit design limitations. For example, design rules define the space tolerance between circuit components (such as gates, capacitors, etc.) or interconnect lines to ensure a high device yield. The design rule limitations are typically referred to as "critical dimensions" (CD). A CD of a circuit can be defined as the smallest length of a line or trench or the smallest space between two lines or two trenches. Thus, the CD determines the overall size and density of the designed circuit.

Recently, device scaling has outpaced the development of lithography systems (e.g., scanners).
Patterning interconnect geometries, for example, at the 22nm node, using a 193 nm wavelength scanner may require a nesting of every narrow drawn line to force a circuit design to an optimal pitch (e.g., having a design-rule minimum space CD for every minimum CD line) with several compliance features and a drawing of large end-to-ends in an effort to reduce the mask error enhancement factor (MEEF) of the lithographic process to an acceptable level for the lithographic technology node. The resulting increase in circuit footprint can, however, negate the benefit of scaling the CD down to the 22nm technology node. One technique to overcome this difficulty is double-patterning technology or "DPT." Conventional double patterning involves decomposing a dense circuit pattern into two separate, less-dense patterns which are then printed separately on a target wafer utilizing two separate masks. One of the two masks is utilized to image one of the less-dense patterns, and the other mask is utilized to image the other less-dense pattern. The second pattern is printed in between the lines of the first pattern such that the imaged wafer has, for example, a feature pitch which is half that found on either of the two masks (i.e., "pitch doubling"). Another conventional technique forms a unidirectional pattern in each interconnect layer.
These conventional double patterning techniques, however, require a very strict set of design rules, are ultimately still exposed to the problems of increased circuit footprint and an increased risk of "escapes" during design rule checking, and double the cost of forming each functional interconnect layer. Accordingly, a procedure for designing a set of masks to be successively imaged to print a composite pattern in a manner which can overcome these difficulties is advantageous.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates a flow diagram depicting selected operations in a successive imaging of the set of masks to lithographically print a composite pattern on a substrate, in accordance with an embodiment. Figure 2 illustrates a flow diagram depicting selected operations in the design generation of a set of masks to be successively imaged to print a composite pattern, in accordance with an embodiment. Figure 3A illustrates a layout view of an exemplary drawn target pattern, in accordance with an embodiment. Figures 3B and 3C illustrate layout views of a set of masks to be generated and successively imaged to print the target pattern illustrated in Figure 3A on a substrate, in accordance with an embodiment. Figure 3D illustrates a layout view of the set of masks depicted in Figures 3B-3C and showing a composite of the target pattern depicted in Figure 3A, in accordance with an embodiment. Figures 4A, 4B and 4D illustrate layout views of portions of the patterns depicted in Figures 3A, 3B and 3D, respectively, in accordance with an embodiment.
Figures 4C and 4E illustrate intensity contours of a first and second level pattern, as corrected from their drawn state, in accordance with an embodiment. Figure 5 illustrates a flow diagram depicting selected operations in an optical proximity correction process for a second mask of the set of masks to be successively imaged during a double patterning lithographic process, in accordance with an embodiment. Figure 6 illustrates a layout view of a portion of a first level intensity contour being utilized for OPC of a second level polygon, in accordance with an embodiment. Figure 7A is a flow diagram of an OPC algorithm which makes corrections to a second level pattern based on a second level resist profile model as a function of first level pattern topography, in accordance with an embodiment. Figure 7B is a flow diagram for modeling a resist profile for an inhomogeneous substrate, in accordance with an embodiment. Figure 7C illustrates a layout view of a second level mask pattern contour modeled based on a first level pattern and a second level pattern. Figure 8A illustrates a flow diagram for predicting a Boolean operation of polygons modeled for a photolithographic mask set, in accordance with an embodiment. Figures 8B and 8C illustrate a composite contour resulting from a Boolean operation of polygons modeled for a photolithographic mask set, in accordance with an embodiment. Figure 8D illustrates a layout view of a first level mask pattern contour and second level mask pattern contour and a pinch violation. Figure 8E illustrates a flow diagram for predicting a Boolean operation of polygons modeled for a photolithographic mask set with reduced false violations, in accordance with an embodiment. Figure 9 illustrates a block diagram of an exemplary computer system used to practice embodiments of the present invention.

DETAILED DESCRIPTION

Described herein is mask design and modeling for a set of masks to be successively imaged to print a composite pattern on a substrate, such
as a semiconductor wafer. Further described herein is a method of double patterning a substrate with the set of masks. Also described herein is a method of correcting a drawn pattern of one of the mask levels based on a predicted pattern contour of the other of the mask levels. Further described herein is a method of modeling a resist profile contour for a mask level in which photoresist is applied onto an inhomogeneous substrate, as well as a method of predicting a resist profile of a Boolean operation of two masks. An algorithm is here, and generally, considered to be a self-consistent sequence of acts or operations leading to a desired result. These include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, levels, numbers or the like.
It should be understood, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "processing," "computing," "calculating," "determining," or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices. The multiple patterning mask design embodiments described herein generate first and second masks which are to be successively printed on a substrate at a first masking level and a second masking level, respectively. Generally, the first mask pattern is synthesized through an addition of a sacrificial enabling pattern, at least some of which is to be printed onto the substrate by the lithography system at a first masking level, to a drawn design target pattern, thereby approximating a grating pattern. This grating pattern is then used to create a first level mask pattern that includes the target pattern, as a subset of the grating pattern. The second mask is generated based on at least the enabling pattern to eliminate, from the printed first level, the sacrificial enabling pattern. As such, it should be appreciated that the double patterning described herein is distinct from conventional techniques which decompose a target pattern into pattern subsets which are printed at separate levels.
Rather, embodiments herein print the entire target pattern as well as the sacrificial enabling pattern in a single patterning level and then use the second patterning level to remove the sacrificial enabling pattern from a substrate. Thus, rather than seeking to reduce a target pattern's density through the use of multiple mask imaging, embodiments herein leverage a second patterning operation to enable the imaging performance for a target pattern to be improved (e.g., by increasing a regularity of edges) through the printing of sacrificial features onto the substrate. Absent the second patterning operation, such enabling features would not be sacrificial and the device design rules would need to be modified to incorporate permanent patterning artifacts. However, as described further elsewhere herein, embodiments of the present invention require little, if any, modification of device design rules because the sacrificial enabling features are ultimately removed from the substrate.

As one of ordinary skill in the art will appreciate, the multiple pattern mask designs described herein may be utilized to print a double pattern on a substrate using a wide variety of photolithographic patterning techniques and systems. As one example, Figure 1 depicts a "grating and plug" double patterning method 100. Figure 1 illustrates selected operations of the grating and plug double patterning method 100 along with an exemplary cross-section representation of a substrate as it is processed through the selected operations. The grating and plug double patterning method 100 begins at operation 110 with coating a wafer 111 with a first layer of photoresist 112. Any conventional coating process and photoresist (negative or positive) known in the art may be employed. At operation 115, a first mask is printed as the first patterning level to expose a trench region 113 of the first photoresist layer 112.
In the exemplary embodiment, the first level pattern approximates a grating and includes a target pattern comprising bidirectional polygons, as discussed in detail elsewhere herein. Any lithography system known in the art may be used to image the first patterning level on the wafer 111, such as, but not limited to, conventional UV-illuminated steppers or scanners employing a 193 nm wavelength radiation source. In alternative embodiments, shorter wavelengths (e.g., 157 nm or an extreme ultraviolet (EUV)) may also be employed. At operation 120, the first photoresist layer 112 is developed and/or baked to form the first level pattern topography. The trench 121 is formed in the first photoresist layer 112 having a longest dimension L1 longer than the target pattern with a shortest dimension (CD) equal to a fraction of the imaging wavelength λ/x, where x is, for example, at least 4. As discussed further elsewhere herein, features having a smaller CD (and resulting in a higher MEEF) have been subsumed into the grating approximation. At operation 130, the wafer 111 is coated with a second layer of photoresist 131, substantially filling the patterned topography (e.g., trenches) present in the first layer of photoresist 112. Depending on the embodiment, the second layer of photoresist 131 may be of the same or different composition as that employed for the first layer of photoresist 112. At operation 140, a second mask is printed as the second patterning level to expose a portion 141 of the second photoresist layer 131. In the exemplary embodiment, the second level pattern reduces the grating printed at the first pattern layer to the target pattern, as further discussed elsewhere herein. In the grating and plug double patterning method 100 where the first level pattern is to form trench regions 113, the second level pattern "plugs" the portion of the trench regions 113 which do not correspond to a target pattern.
Prior to the operation 140, the first photoresist layer 112 is rendered non-photosensitive using various techniques, such as a hard bake or chemical treatment. Any lithography system known in the art may be employed at operation 140, such as, but not limited to, UV-based steppers or scanners employing 193 nm, 157 nm or an extreme ultraviolet (EUV) wavelength radiation source. In the preferred embodiment, the wavelength employed at operation 140 is the same as that employed at operation 115. At operation 150, the second photoresist layer 131 is developed and/or baked to form the target pattern with the remaining portions of both the first and second resist layers 112, 131. As shown, the second level pattern has a trench 151 with a longest length L2 that is now to be equal to that of the target polygon. With the shortened length (minimum end-to-ends are now present), the trench 151 is to retain the shortest length CD equal to that which was printed at the first level (e.g., a fraction of the imaging wavelength). Yet, because this CD and end-to-end spacing is a result of overlap between the first and second resist layers 112, 131, the second level patterning may also have a low MEEF (e.g., edges having dimensions greater than those of the trench 151). Following operation 150, the grating and plug double patterning method 100 is substantially complete and the wafer may be further processed as known in the art (etch, etc.). While the exemplary grating and plug double patterning method 100 illustrates one application whereby trenches formed in the first level patterning are subsequently filled and partially reopened, the multiple patterning techniques described herein may also be applied to double patterning techniques employing a single layer of resist.
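The grating-and-plug flow described above reduces, at the level of printed geometry, to simple set algebra: the first exposure opens the grating (target plus sacrificial enabling features), and the second exposure plugs the sacrificial portion. The following is a minimal illustrative sketch, not taken from the patent; the pixel-grid representation and the example coordinates are assumptions made for clarity.

```python
# Hedged sketch: the grating-and-plug double patterning flow modeled as
# set algebra on a coarse pixel grid. Patterns are sets of (x, y) cells;
# coordinates below are illustrative only.

def double_pattern(target, enabling, plug):
    """Return the trench pattern remaining after both exposures.

    Level 1 prints the grating (target OR enabling) as trenches;
    level 2 "plugs" (refills) every cell covered by the plug pattern.
    """
    grating = target | enabling      # level-1 exposure (Boolean OR)
    return grating - plug            # level-2 plug removes sacrificial cells

# Toy example: a narrow line (target) extended by a sacrificial stub
# (enabling) that an ideal plug later removes exactly.
target = {(x, 0) for x in range(4)}
enabling = {(x, 0) for x in range(4, 7)}
plug = enabling

assert double_pattern(target, enabling, plug) == target
```

In practice the plug is oversized relative to the enabling pattern to absorb overlay error, which is why the second-level OPC described later must bound the plug against the target polygons.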
For example, the first mask level described herein may also be applied to print a grating approximation comprising unexposed lines which include both the target pattern and the sacrificial enabling pattern, and the second mask level described may also be applied to remove the sacrificial unexposed lines to arrive at the target pattern. Figure 2 illustrates a mask design and generation method 200 for composing polygons of a first mask level and a second mask level into a target pattern of polygons to be formed on a substrate by sequentially printing the first mask level and the second mask level. Method 200 begins at operation 210 with receipt of a drawn design layout, for example as generated by a layout description language such as GDS-II or OASIS. Figure 2 also illustrates an exemplary target pattern 211 representing a portion of an interconnect design, for example, as received from an IC designer, which is to be reproduced onto a level of a semiconductor wafer. The target pattern 211 includes the polygons 212 which are eventually to be faithfully imaged onto the wafer at the designed dimensions by lithographic printing. In an embodiment, the target pattern 211 is bidirectional, including polygons having a longest length in two directions. Figure 3A further illustrates an exemplary target pattern 311, as drawn. As shown, the polygon 312 has a longest length L1 along the x-dimension and a shortest length CD1 along the y-dimension. The polygon 313 has a longest length along the y-dimension and a shortest length along the x-dimension such that the target pattern 311 is bidirectional. In the exemplary embodiment depicted in Figure 3A, the target pattern 311 further includes polygon 314 with segments which run along both the x and y dimensions. Returning to Figure 2, at operation 215 a sacrificial enabling pattern is synthesized, based on the target pattern.
The sacrificial enabling pattern is to be added to the target pattern (e.g., with a Boolean OR operation) to improve imaging performance of the first mask level for a given lithographic tool beyond what would be achieved with a mask including only the target pattern. For example, a sacrificial enabling pattern 216 containing a single polygon is generated to have a particular position, shape and size based on the input target pattern 211. Although a number of rules and objectives may control generation of the sacrificial enabling pattern, in one embodiment, the generation of the enabling pattern 216 is performed with an objective function that drives the target pattern 211 to increase a regularity of edges toward an approximation of a diffraction grating. In a particular embodiment, at operation 215, a plurality of sacrificial enabling polygons are generated which result in both the grating pattern (including the sacrificial polygons) and the second level pattern used to remove the sacrificial polygons from the grating pattern having a mask error enhancement factor (MEEF) below that of the target pattern. Within this bound, sacrificial polygons may be synthesized to drive an arbitrary target pattern to converge at a grating pattern which can be imaged and then "cleaned up" with a second patterning more easily than directly imaging only the target pattern. At operation 220, a Boolean OR operation is performed on the drawn target pattern 211 and the sacrificial enabling pattern 216 to generate a synthesized grating pattern 221. As further depicted in Figure 3B, a synthesized diffraction grating pattern 321 is a Boolean OR of the drawn target pattern 311 of Figure 3A and a sacrificial enabling pattern 316 (e.g., generated at operation 215 of Figure 2). In one particular embodiment, the enclosed polygons are to form open regions in a mask with the surrounding region to be of lower transmittance (i.e., chrome).
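One way to picture the edge-regularity objective that drives enabling-pattern synthesis is to count contiguous runs per layout row: a pattern is more grating-like when each occupied row forms a single unbroken run. The metric below is an illustrative assumption, not the patent's actual objective function; the grid representation and coordinates are likewise hypothetical.

```python
# Hedged sketch of one possible edge-regularity metric (an assumption,
# not the patent's cost function): fewer horizontal runs per row means
# a pattern closer to an ideal grating.

def runs_per_row(cells):
    """Count contiguous horizontal runs over all occupied rows.

    cells: set of (x, y) grid cells.
    """
    rows = {}
    for x, y in cells:
        rows.setdefault(y, set()).add(x)
    total = 0
    for xs in rows.values():
        # A run starts wherever the cell to the left is unoccupied.
        total += sum(1 for x in xs if x - 1 not in xs)
    return total

# Two collinear target lines separated by an end-to-end gap...
target = {(0, 0), (1, 0), (4, 0), (5, 0)}
# ...joined by a connector polygon (cf. enabling polygon 318).
enabling = {(2, 0), (3, 0)}

assert runs_per_row(target) == 2
assert runs_per_row(target | enabling) == 1   # gap eliminated
```

A synthesis loop could add candidate enabling cells while this count decreases and the MEEF bound described above is respected.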
The synthesized diffraction grating pattern 321 includes polygons having a minimum pitch resolvable by the optical projection system, which defines a shortest length of all polygons in one of the two mask dimensions (e.g., CD1). It will be appreciated by those of ordinary skill in the art that a diffraction grating pattern can be printed with a minimum pitch smaller than is possible for a non-periodic pattern. It is therefore possible to print the target polygons 312, as embedded within a grating pattern, with a reduced CD1 (or with a reduced spacing, S1, along the y-dimension between adjacent polygons 312). In one embodiment, as illustrated in Figure 3B, the sacrificial enabling pattern 316 extends a first length of the target polygon 312 with an enabling polygon 317 that has a shortest length (CD) no greater than that of the target polygon 312. As shown, the enabling polygon 317 and the target polygon 312 are drawn to substantially the same critical dimension (CD1). Extension of the target polygon 312 with the enabling polygon 317 serves to improve the edge regularity or periodicity of the synthesized grating pattern 321 by extending the longest edge of the target polygon 312 to be adjacent to the entirety of the longest edge of the target polygon 327. In another embodiment, also illustrated in Figure 3B, an enabling polygon 318 joins the target polygon 312 to an adjacent target polygon 319 to eliminate the end-to-end space 320 present in the drawn target pattern 311 (Figure 3A). With the joining of adjacent target polygons, the sacrificial enabling pattern 316 may greatly reduce the MEEF of the first pattern level, particularly where the end-to-end space 320 is sized at the design rule minimum space. This elimination of minimum end-to-end spaces, along with the ability to print grating structures at a reduced pitch, is an important aspect of the double patterning methods described herein.
In further embodiments, end-to-end spaces between target polygons having different CDs are also advantageously eliminated. Enabling polygon 325, for example, eliminates an end-to-end space between the target polygon 326 and the target polygon 327 even though the target polygon 326 has a larger CD than the target polygon 327 and is also offset from the target polygon 327 in the y-dimension. Nevertheless, a diffraction grating is approximated by drawing the enabling polygon 325 with a first length to join the target polygons 326, 327 and with a second length equal to the y-dimension overlap between the target polygons 326, 327. As such, the shortest length of the enabling polygon 325 is smaller than that of either of the target polygons 326, 327. For particular embodiments where the shortest length of the polygon 327 is equal to the design rule minimum dimension (e.g., CD1), the shortest drawn length of the enabling polygon 325 is below the minimum dimension design rule for the drawn target pattern 311. Such sub-design rule connectors are created to eliminate pullback concerns due to high MEEF as well as eliminate mask manufacturing inspection constraints. In particular embodiments, sub-design rule connectors like the enabling polygon 325 are sized during a subsequent correction step (e.g., OPC operation 225 discussed elsewhere herein) to the minimum size required at mask manufacture. Because the enabling polygon 325 will likely be sub-resolution when imaged onto a wafer (e.g., operation 115 of Figure 1), the enabling polygon 325 patterns marginally, if at all, and is nevertheless cleaned out in the subsequent level two patterning (e.g., operation 140 of Figure 1). In another embodiment, a sacrificial enabling pattern is synthesized to nest a target polygon with one or more enabling polygons having an edge positioned adjacent to the longest edge of the target polygon.
As an example, in Figure 3B a target polygon 330 is nested with the enabling polygon 331 having an edge adjacent to the longest edge of the target polygon 330. Such nesting may allow the target polygon 330 to print with the reduced minimum dimension CD1 because the synthesized grating pattern 321 is extended past the target polygon 330. In another embodiment, an enabling polygon joins less than all of a longest length of a first target polygon to a second target polygon to provide a continuous edge that is proximate to a longest edge of a third target polygon. For example, in Figure 3B, a portion of the longest edge of the target polygon 332 is joined to a portion of the longest edge of the target polygon 333 by the enabling polygon 334. The shortest edges of the target polygons 332, 333 combine with an edge of the enabling polygon 334 to form a continuous edge of the synthesized grating pattern 321. This continuous edge, in turn, benefits the imaging of the adjacent target polygon 330, which is particularly advantageous where the target polygon 330 is a narrow feature having a shortest edge drawn to a minimum dimension design rule for the target pattern (e.g., CD1). Returning to Figure 2, with the grating pattern 221 synthesized, the method 200 proceeds to correct the drawn grating at the OPC operation 225. The OPC performed is to compute and adjust extension of edges of the opaque/non-transparent (e.g., chrome), very low transmittance (e.g., 6% in embedded phase shift masks), etc. areas of a mask that will be used to print the synthesized grating pattern 221. The OPC treatment at operation 225 may include the addition of assist features 227 (i.e., scatter bars), serif structures, and the like to the synthesized grating pattern 221 to generate a corrected grating pattern 226. At operation 230, a synthesized second level (plug) pattern 231 is generated from the synthesized enabling pattern 216. As with the synthesized enabling pattern 216,
the plug pattern 231 does not form a part of the electrically active set of polygons, and so polygons comprising the plug pattern 231 are therefore also referred to herein as "synthesized." In one embodiment, as illustrated in Figure 2, the plug pattern is synthesized at operation 230 to fully cover the sacrificial enabling polygons while maintaining a mask error enhancement factor (MEEF) below that of the target pattern. In a further embodiment, the polygons of the plug pattern 231 are synthesized to overlap the drawn sacrificial enabling polygons. Figure 3C further depicts an exemplary plug pattern 351 generated to remove the sacrificial enabling pattern 316 from the synthesized grating pattern 321. In one particular embodiment, the polygons 352 form high transmittance regions in a mask with the surrounding regions 353 being opaque. One of skill in the art will appreciate, however, that the methods described herein may be adapted to utilize a plug pattern 351 having an opposite polarity as that depicted in the exemplary embodiment depicted in Figure 3C. Figure 3D depicts the plug pattern 351 overlaid on the synthesized grating pattern 321 to illustrate how the two mask levels are to compose the target pattern 311. In one particular embodiment where the enclosed polygons 312 are to form open regions in the first level mask corresponding to the target pattern, the enclosed polygons 322 are to form open regions in the first level mask corresponding to the sacrificial enabling structures, and the enclosed polygons 352 are to form closed regions in the second level mask, the double pattern 310 results in a trench in photoresist rendering the target pattern. Such an embodiment is advantageous for a metal interconnect level, for example. As shown in Figure 3D, the plug pattern 351 essentially fully covers the sacrificial enabling pattern 316 without covering any substantial portion of the target pattern 311.
The plug pattern 351, although not complex, removes the sacrificial enabling pattern 316 without generating a plug polygon that touches a longest edge of a target polygon having a shortest edge drawn to a minimum dimension design rule (CD1) for the target pattern 311. Notably, the plug pattern 351 need only touch longest edges of target polygons oriented along the y-axis. Scumming concerns may be reduced with polygons oriented along the y-axis being less narrow than those oriented along the x-axis of the grating. At operation 240, an OPC process is performed on the drawn plug pattern. Because the double patterning mask design has stringent overlay requirements, in one embodiment, the OPC process performed at operation 240 is based on a predicted grating pattern contour 251. The contour 251 may be generated by a Constant Threshold Model or a Variable Threshold Resist (VTR) model. For such embodiments, the OPC performed on the second level pattern is dependent on the OPC performed on the first level patterning. Generally, the OPC performed at operation 240 takes, as an input, the output of the first level (grating pattern) OPC and generates the predicted grating pattern contour 251. At operation 250, the expected or predicted grating pattern contour 251 is generated from the corrected grating pattern 226. In a particular embodiment, the grating pattern contour 251 is a model prediction of the grating pattern's imaging performance. The model prediction may be based on one or more statistical and phenomenological models characterizing a lithographic process that is to be used to print the grating pattern (e.g., at operation 115 of Figure 1), such as, but not limited to, diffractive, diffusive and fluidic mechanics. In the exemplary embodiment depicted in Figure 2, the model predicted grating pattern contour 251 is utilized in the correction of the second level pattern (e.g., "plug" pattern) to bound a sizing of the plug pattern at operation 260.
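The plug-synthesis constraint described above — fully cover the sacrificial polygons, with some overlay margin, while never covering the target — can be sketched as a grow-then-clip operation. This is a simplified illustration under assumed conditions: the one-cell margin stands in for an overlay specification, and the grid representation is hypothetical.

```python
# Hedged sketch: synthesize a plug that covers the sacrificial enabling
# cells with an overlay margin, clipped so it never covers the target.
# The margin value is an illustrative stand-in for an overlay spec.

def synthesize_plug(enabling, target, margin=1):
    """Grow the enabling pattern by `margin` cells, then clip off target."""
    grown = {(x + dx, y + dy)
             for (x, y) in enabling
             for dx in range(-margin, margin + 1)
             for dy in range(-margin, margin + 1)}
    return grown - target

target = {(x, 0) for x in range(4)}
enabling = {(x, 0) for x in range(5, 8)}
plug = synthesize_plug(enabling, target)

assert enabling <= plug        # plug fully covers the sacrificial cells
assert not (plug & target)     # and touches no target cell
```

A production flow would additionally check the clipped plug edges against the minimum-dimension design rule for the narrow target polygons noted above.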
It should be noted the exemplary plug OPC embodiments herein avoid a rule-based approach to pre-compensating the plug pattern for potential grating upsizing or grating pattern contour shifts because such rule-based approaches may require significantly more die area. Figure 4A illustrates a layout view of a portion 411 of the target pattern 311 depicted in Figure 3A; a corresponding portion 421 of the synthesized grating pattern 321 is further shown in Figure 4B. Figure 4C illustrates a predicted grating contour 451 for the grating pattern portion 421. The grating contour may deviate substantially from the synthesized grating pattern, as shown with contour regions 452 and 453, for example. Figure 4D illustrates a plug pattern 431 synthesized to fully cover the sacrificial enabling polygons (as synthesized). Figure 4E illustrates a predicted plug contour 471 generated in accordance with an embodiment of the present invention which corrects the plug pattern 431 based on the predicted grating pattern contour 451. As such, the plug OPC process is made aware of where the grating does not converge exactly to the synthesized grating target so that the plug grows out to meet a specification for coverage or overlap of the sacrificial enabling structures. Following this paradigm, the second level OPC described herein avoids forming a bridge 472 between two adjacent target structures or an artifact 473 of the sacrificial enabling pattern that may have formed had the plug pattern 431 not been corrected in a manner "aware" of the predicted grating contour 451. Figure 5 illustrates a flow diagram depicting selected operations in the OPC process 500 performed at operation 240, in accordance with an embodiment. Generally, the OPC process 500 is to determine a distance between the plug contour and grating contour, and then assign a corresponding displacement to the synthesized plug pattern to ensure the plug contour encloses the sacrificial grating contour.
As shown, the second-level OPC process begins at operation 510 by reading in the OPC output for the grating pattern (first level pattern). The grating contour is then generated at operation 520. At operation 512, the synthesized plug pattern (second level pattern) is read in. In the exemplary embodiment, polygons of the synthesized plug pattern are segmented at operation 522. For example, each polygon may be segmented into n segments where the number n may depend on the complexity of the plug polygon. Beginning with a first segment of a first plug polygon, a distance between the nearest plug contour and the nearest grating contour is determined. In the exemplary embodiment shown in Figure 6, an outside distance Dout, as measured externally from the drawn plug edge to the nearest grating contour 451, is determined. In a further embodiment, an inside distance Din, as measured internally from the drawn plug edge to the nearest grating contour 451, is also determined for each segment. The inside and outside distances are then used to bound and drive a displacement of the plug segment from its drawn position. Returning to Figure 5, at operation 513, the drawn target pattern and synthesized grating pattern are read in. At operation 530, an outside distance between the plug segment and the nearest drawn target polygon edge is determined so that the plug segment is not displaced over a target polygon in the effort to expand the plug polygon edges to meet a nearest grating contour 451. At operation 540, the plug polygon segment is displaced toward the nearest grating contour without covering a drawn target pattern, based on the distances determined at operations 525 and 530 which define a relationship (inside/outside/collinear) between the drawn plug segment and both the drawn target and the grating contour. One or more objective functions may be applied at operation 540.
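The per-segment sizing step can be reduced, in one dimension, to "move the edge out to the nearest grating contour, clamped so it never encroaches on a target edge." The sketch below is an illustrative simplification of the displacement logic, not the patent's algorithm; the `min_space` clamp stands in for the process-tolerance objective mentioned above, and the scalar geometry is an assumption.

```python
# Hedged 1-D sketch of the segment-sizing step in OPC method 500: a
# plug-edge segment is displaced outward toward the nearest predicted
# grating contour, but clamped so it never crosses a drawn target edge.
# min_space is an illustrative stand-in for a process tolerance.

def displace_segment(edge, grating_contour, target_edge, min_space=2):
    """Return the new edge position along the outward displacement axis.

    Assumes edge < grating_contour <= target_edge (outward growth).
    """
    desired = grating_contour           # grow out to enclose the contour
    limit = target_edge - min_space     # never cover the target pattern
    return min(desired, limit)

# Case 1: contour well clear of the target -> edge lands on the contour.
assert displace_segment(edge=0, grating_contour=5, target_edge=10) == 5
# Case 2: contour too close to the target -> displacement is clamped.
assert displace_segment(edge=0, grating_contour=9, target_edge=10) == 8
```

In the full 2-D flow this decision is made per segment, iterated over all segments of every plug polygon until the objective function is satisfied.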
For example, a segment may be displaced to maintain a minimum distance between the segment and a target pattern polygon edge. In an alternative embodiment, a plug segment is displaced to be outside of the sacrificial enabling pattern if the determined distances indicate the nearest grating contour is outside of the sacrificial enabling pattern and the target pattern by more than a threshold tolerance. Such a threshold tolerance may be based on an empirically determined lithographic process tolerance. Segment displacement proceeds iteratively until all segments of a plug polygon have met the objective function, and the OPC method 500 completes when a similar sizing algorithm has been executed for all plug polygons. In a further embodiment, the second level OPC is based on a model predicted pattern contour which is derived from a second level resist profile model as a function of underlying topography (e.g., topography resulting from the first level pattern). A resist profile is an output of a simulation of the chemically altered portion of the resist layer corresponding to the spatial distribution of a radiation intensity through the thickness of the resist layer as would be formed by a patterned mask. Because the second level photoresist is applied over the first level pattern, topographical effects are much more significant than for the first level photoresist, which is applied to a more ideally planar substrate surface as typically encountered in contemporary photolithography. The modeled resist profile embodiments described herein are three to four orders of magnitude faster than a rigorous computation of the radiation intensity, which may be performed with a 3D field solver using a finite-difference time-domain (FDTD) method combined with a photoresist chemistry solver. As such, the modeled resist profile embodiments described herein are full-chip capable.
Rigorous methods are prohibitively slow for chip-scale OPC due to the enormous number of image calculations that OPC algorithms must carry out. For OPC, multiple intensity calculations are made to characterize the image rendering any given one of the hundreds of millions of features in a mask level, and pattern adjustments are also typically iterated to accommodate the interaction of each adjusted feature with its neighbors. In addition to being very fast, because the modeled resist profile embodiments described herein are physics-based rather than rule-based, they may be applied to arbitrary polygon shapes and arbitrary topographies. In contrast, rule-based models, or models which adopt empirically derived topography rules that do not otherwise account for topography, are inherently limited to a finite set of tabulated correction factors. Furthermore, because bulk intensity and photoresist development models contain many fitting parameters which do not correspond to measurable physical phenomena relating to topography, they are difficult to adjust to accommodate topographic effects.

While the photoresist profile model is described herein for the specific application of a multi-level patterning process such as illustrated in Figure 1, it should be appreciated that the methodology is readily applicable to modeling of any resist profile where the substrate is inhomogeneous. For example, the above two-component model addresses conditions where no topographic feature is present under the photoresist to be modeled, but there is a variation in the composition of the underlying materials across a given area of photoresist, presenting a different index contrast that may impact the resist profile. As another example, underlying topography may impact the resist profile even where a single layer of photoresist is applied, because the substrate may not be completely planar (e.g., after short recess etches, cleans, etc.).
In this situation too, the photoresist profile model described herein advantageously improves an OPC calculation which assumes planarity or is limited to an application of an empirically-based topographic rule set. As such, the resist profiling embodiments described herein are not limited to multiple-patterning processes.

Generally, the topography dependent resist profile model employs a two-step approach. First, a base intensity is computed as a function of both a first intensity modeled for a first homogeneous substrate and a second intensity modeled for a second homogeneous substrate. Second, the base intensity is corrected for edge effects on each of scattering and diffusion. As such, the resist profile is model predicted in a manner which accounts for topography based on two distinct spaces. The first space, optical scattering, is affected by index contrast at the interface between the photoresist and the topographic material (e.g., first level photoresist feature). The second space, chemical diffusion, is affected by the physical barriers presented by the surface of the topographic material (e.g., first level photoresist feature). Both optical scattering and diffusion effects are modeled as edge-driven perturbations from the nominal bulk value which are correlated to the known location of the underlying topographic edges. In the exemplary double-patterning embodiment, these underlying topographic edges are known from a model of the first level pattern contour and may be input into the second level OPC algorithm to model the second level photoresist profile.
In alternative embodiments, because substrate inhomogeneity is usually a direct result of a previous lithographic operation, the relevant prior layer pattern contour may be used to identify the edge locations for determining the subsequent photoresist profile. It has been found that separating the model into a diffusion and a scattering correction function is accurate as compared to a rigorous method, and yet fast enough for full-chip OPC applications. Furthermore, bifurcating the profile computation into the above two steps enables the model to be generated very quickly, because only the first step need be calculated for evaluation points sufficiently far from an edge of a topographic feature. Also, in treating each of these effects separately, physically meaningful fitting parameters may be included in each correction function to tune a resist profile model to the particular lithographic process employed in wafer manufacture.

Figure 7A illustrates a flow diagram of an exemplary method 700 for modeling a resist profile in the presence of topography. The method 700 begins at operation 701 with reading in the first level pattern, from which edge locations may be determined using any technique known in the art for such purposes. In one embodiment, the model-predicted grating pattern contour 251 (Figure 2) is read in at operation 701. At operation 702, the uncorrected second level pattern is read in on a first iteration of the method 700, or a second level pattern as corrected by a previous iteration of method 700 is read in for a subsequent iteration. At operation 750, the resist profile for the second level pattern (e.g., plug pattern 231 of Figure 2) is computed based on the first level pattern. Operation 750 is expanded in Figure 7B. Referring to Figure 7B, at operations 755 and 756, first and second blur intensities are computed.
Each of the first and second blur intensities may be computed at each evaluation point using conventional techniques known in the art for a homogeneous substrate. In one embodiment, a commercially available resist profile modeling package is utilized to first generate the first blur intensity and then generate the second blur intensity. The first blur intensity is computed from a model that assumes the substrate is a planar homogeneous film of a first type (e.g., the first level grating pattern photoresist). The second blur intensity is then computed from a model that assumes the substrate is a planar homogeneous film of a second type which is present where the underlying topographic feature is absent (e.g., the substrate material upon which the grating pattern is disposed). At operation 760, a base intensity as a function of each of the first and second blur intensities is computed. The "mixture" of blur intensities calculated at operation 760 represents a nominal intensity for the inhomogeneous substrate present at the second patterning level, where each of the first and second substrate materials is disposed below different areas of the first photoresist layer. In one embodiment, the base intensity is a sum of the first and second intensities scaled based on the topographic pattern. In the exemplary embodiment:

I_base(x) = I_1(x)C_1(x) + I_2(x)(1 − C_1(x)), (1)

where I_base is the base intensity computed as a function of the first blur intensity (I_1) obtained on top of the first substrate (grating level), the second blur intensity (I_2) obtained on the second substrate (with no grating present), and the convolution (C_1) of the topography layer (e.g., a grating represented by one or more rectangular functions) with a Gaussian function. With the base intensity computed, the method 700 proceeds to operation 765 to apply an optical scattering correction function, F_scat, to the base intensity.
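The mixture of Eq. (1) can be sketched in one dimension as follows. This is an illustrative sketch only: the grid spacing, blur width, and function names are assumptions, and a production implementation would operate on 2-D intensity grids from a resist simulator.

```python
import numpy as np

# Sketch of the base-intensity mixture of Eq. (1):
#   I_base(x) = I1(x)*C1(x) + I2(x)*(1 - C1(x))
# where C1 is the convolution of the topography indicator (grating
# rectangles) with a Gaussian. Sigma is an illustrative assumption.

def base_intensity(i1, i2, topo, sigma_px):
    """Blend two homogeneous-substrate blur intensities by a
    Gaussian-smoothed topography weight (1-D sketch)."""
    # Build a normalized Gaussian kernel a few sigmas wide.
    half = int(4 * sigma_px)
    x = np.arange(-half, half + 1, dtype=float)
    g = np.exp(-0.5 * (x / sigma_px) ** 2)
    g /= g.sum()
    # C1: smoothed topography indicator in [0, 1].
    c1 = np.convolve(topo, g, mode="same")
    return i1 * c1 + i2 * (1.0 - c1)
```

Far from any topographic edge the weight C_1 saturates at 1 (over the grating) or 0 (over bare substrate), so the base intensity reduces to the corresponding homogeneous-substrate model, as the text describes.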
In the exemplary embodiment, operation 765 determines a multiple of the base intensity and the optical scattering correction function at the evaluation point:

I_base(x) · F_scat(d), (2)

The optical scattering correction is a function of a distance, d, between the evaluation point and the topographic edge to which the scattering is attributed (F_scat(d)) and generally represents a perturbation of the base intensity induced by the index contrast of the topographic edge at the evaluation point. The optical scattering correction function may take any number of forms, depending on the fitting parameters utilized. In an exemplary embodiment, the optical scattering correction function is an exponential function. However, one of skill in the art may determine other suitable functions (square, polynomial, etc.) by fitting rigorous field computation results to various functions scaling the base intensity.

In the exemplary embodiment, at operation 770, a diffusion correction function, F_diff, is applied to the base intensity. Like the optical scattering correction function, the diffusion correction function may take any number of forms, depending on the fitting parameters utilized. In an exemplary embodiment, the diffusion correction function is a sinusoidal function. However, one of skill in the art may determine other suitable functions (exponential, polynomial, etc.) by quantifying a photoresist chemistry solver's modification of an output from a rigorous field computation and fitting that quantification to various functions scaling the base intensity. In the exemplary embodiment, the diffusion correction factor is applied to the base intensity in dimensional components to arrive at the diffusion perturbation resulting from an edge. Generally, to model how an exposure chemistry gradient would deviate from the nominal case (no topography), one needs to know where a gradient of the second level exposure chemistry coincides with an edge of a topographic feature.
At that location, the nominal diffusion function is hindered by the presence of the edge surface/interface. In one embodiment, therefore, a first diffusion perturbation component is determined at operation 770 based on the diffusion correction function, a slope of the base intensity in a first dimension, and a slope of the topographic edge, in the x-dimension:

ΔI_diff,x = F_diff(d) · (∂I_base/∂x) · G_x/(G_x + G_y), (3)

and in the y-dimension:

ΔI_diff,y = F_diff(d) · (∂I_base/∂y) · G_y/(G_x + G_y), (4)

where G_x is the topography x slope and G_y is the topography y slope. In further embodiments, a z-dimension (thickness) diffusion perturbation component may also be determined at operation 770. At operation 775, the optical scattering perturbation (Eq. (2)) and diffusion perturbations (Eqs. (3) and (4)) are summed with the base intensity (Eq. (1)) to compute the topography dependent blur intensity:

I(x) = I_base(x) + I_base(x)·F_scat(d) + ΔI_diff,x + ΔI_diff,y. (5)

This topography dependent blur intensity may then be input, at operation 780, into the optical proximity correction algorithm as would a non-topography dependent blur intensity. Returning to Figure 7A, at operation 751, a correction to a dimension of a polygon in the second level pattern is adjusted based on the second level resist profile computed in the presence of the first level topography. The OPC method 700 then ends at operation 752 with an output of the corrected second level pattern. As will be appreciated by one familiar with the art, the method 700 may be iterated a number of times for each of a plurality of polygon segments for each of the polygons in the second level mask to arrive at a full-chip OPC second level pattern.

An example of an output from the OPC method 700 is illustrated in Figure 7C. The second patterning level contour 271 (solid line) is generated based on an embodiment of the topography dependent resist profile model described herein. As shown, the topography from grating pattern 221 results in a significant resist break at the second level (as compared to the uncorrected synthesized plug pattern 231).
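Assembling Eqs. (2) through (5) at a single evaluation point can be sketched as below. The exponential scattering form and damped sinusoidal diffusion form follow the text's exemplary embodiments, but the fitting parameters and their values are illustrative assumptions, not values from the patent.

```python
import math

# Sketch of the topography-dependent blur intensity of Eqs. (2)-(5)
# at one evaluation point. Correction-function forms follow the text
# (exponential scattering, sinusoidal diffusion); all fitting
# parameters are illustrative.

def f_scat(d, a=0.1, length=50.0):
    # Exponential scattering correction: decays with edge distance d.
    return a * math.exp(-d / length)

def f_diff(d, b=0.05, k=0.02):
    # Sinusoidal diffusion correction, damped far from the edge.
    return b * math.cos(k * d) * math.exp(-d / 100.0)

def topo_intensity(i_base, d, slope_x, slope_y, gx, gy):
    """Eq. (5): base intensity plus scattering and diffusion perturbations."""
    scat = i_base * f_scat(d)                      # Eq. (2)
    denom = gx + gy
    diff_x = f_diff(d) * slope_x * (gx / denom)    # Eq. (3)
    diff_y = f_diff(d) * slope_y * (gy / denom)    # Eq. (4)
    return i_base + scat + diff_x + diff_y         # Eq. (5)
```

Consistent with the text's speed argument, for evaluation points far from any topographic edge both correction terms decay toward zero and only the base intensity of Eq. (1) need be computed.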
The dashed line 272 represents a second patterning level contour output by a conventional resist profile simulator which does not model topography (rule-based). As shown, the dashed line 272 does not display any substantial resist breaking. A second patterning level contour 273 (dotted line), as generated by a rigorous 3D simulation, is also depicted in Figure 7C. The improved performance of the topography dependent resist profile model algorithms described herein is evident from the better fit of the patterning level contour 271 to the rigorous contour 273. Returning to Figure 2, at operation 270, a plug contour, such as that depicted in Figure 7C, may then be generated for the corrected plug pattern 261 at points where the modeled resist profile crosses a threshold.

In one embodiment of the present invention, the verification algorithm employed at operation 280 is configured for a multiple patterning process, which captures the interplay of the first and second masks (e.g., grating and plug masks). Proper verification of layers in double patterning processes relies on a computation of a resist profile that results from a composite effect of two masks. For example, for the grating and plug mask patterns described herein, the composite effect of the first and second mask levels is a subtraction of the plug level from the grating level. At verification, therefore, a subtraction of the modeled contour of the plug mask from the modeled contour of the grating mask should arrive at the target pattern and, if not, indicate a violation. In an alternative pitch doubling embodiment where a final mask is a sum of two masks, a final contour is a sum of two contours, i.e., a Boolean OR operation with the contours of each mask. In still another embodiment, the Boolean of two contours is employed in an algorithm to verify differences between two models which may be due to two different processes.
The difference function contour can provide information on how different the two contours or processes are at different patterning regions. Although software packages exist which are capable of performing geometric Booleans of arbitrarily shaped contours, such a computation is a relatively slow process. For the purposes of mask OPC and verification of a double patterning mask set, performing a Boolean of the billions of polygons present in the mask patterns is prohibitive.

In an embodiment of the present invention, the verification algorithm employed at operation 280 generates a composite contour 281 based on a function of the resist signals from each of at least two mask patterns. Figure 8A illustrates an exemplary fast method 800 for performing a Boolean operation of arbitrarily shaped contours. Method 800 begins with reading in a first resist signal ("A") for the first mask level of a double patterning process at operation 801 and reading in a second resist signal ("B") for the second mask level of the double patterning process at operation 802. The image intensity signal may be generated using a conventional grid-based aerial image model that calculates an expected image intensity I(x,y) at a regular series of fixed grid points. At operation 805, a Boolean function of the first and second level resist patterns is mapped to a function, f(A,B), of the first and second resist signals. The functional form of f(A,B) is based on the Boolean operation being performed by the double patterning and the polarity of the mask patterns. For example, in the grating and plug double patterning embodiment described elsewhere herein, the second level plug mask is a deduction from the first level grating mask. Because the polarities of the masks for the exemplary grating and plug embodiment are the same, the mask level relationship is:

Grating Contour − Plug Contour, (6)

so that f(A,B) takes the form (A − B).
For other mask polarities, the signage of the signals A and B may change so that f(A,B) takes the form (A + B), (B − A), etc. At operation 810, f(A,B) is mapped to the functions which are evaluated by the resist simulator for computing a pattern contour. The functions to which f(A,B) is mapped depend on the type of model being used to generate the contour. As one example, where a VTR model is employed with a grid of evaluation points positioned across the mask area, local image intensities, various derivatives of the image intensity, and the variable resist threshold with which the resist material will be cleared are functions to which f(A,B) may be mapped. While any method of generating a contour from such image data may be used, in an exemplary embodiment a marching squares algorithm is used to detect threshold crossings. The marching squares algorithm performs its function as known in the art, and evaluates f(A,B) to output a zero crossing which corresponds to the Boolean operation mapped to the resist modeling functions.

At operation 815, the composite contour is output by the resist simulator as a result of mapping the Boolean function to the functions from which the pattern contour is generated through thresholding. As such, the method 800 modifies the modeling functions at the resist signal level to generate a Boolean contour of the signals, which can be performed much more quickly than a geometric Boolean of two arbitrarily shaped contours. Figure 8B illustrates an exemplary output in which f(A,B) takes the form (A − B) and a difference contour 881 is generated. The first level contour 251 (A signal, or grating contour), the second level contour 271 (B signal, or plug contour), and the patterns 321, 351 from which the resist models were generated are also depicted. As shown in Figure 8B, the difference contour 881 produced by the marching squares algorithm corresponds to regions where the first level contour 251 extends beyond the second level contour 271.
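The signal-level Boolean can be sketched as below. This is a deliberately minimal sketch: a full marching squares implementation would also interpolate the exact crossing point along each cell edge, whereas this version only classifies which grid cells the f(A,B) = A − B contour passes through. The signal grids and threshold are illustrative assumptions.

```python
import numpy as np

# Sketch of the signal-level Boolean of method 800: instead of a
# geometric Boolean of two contours, form f(A, B) = A - B on the
# resist-signal grid and locate its threshold crossings with a
# minimal marching-squares cell classification.

def difference_cells(a, b, threshold=0.0):
    """Return the set of grid cells crossed by the f(A,B) = A - B contour."""
    f = a - b
    inside = f > threshold
    crossed = set()
    for i in range(f.shape[0] - 1):
        for j in range(f.shape[1] - 1):
            corners = (inside[i, j], inside[i, j + 1],
                       inside[i + 1, j], inside[i + 1, j + 1])
            # A marching-squares cell holds a contour segment whenever
            # its four corners are not all on the same side.
            if any(corners) and not all(corners):
                crossed.add((i, j))
    return crossed
```

Because this works directly on the gridded signals, its cost scales with the grid size rather than with the number and complexity of the contour polygons, which is the speed advantage the text claims for method 800.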
The region 822, where the second level contour 271 extends beyond the first level contour 251, is thresholded out (e.g., A − B is below the zero crossing). Figure 8D illustrates an example of a topographic effect which may not be accurately captured by a verification process absent the embodiments described herein, which are designed for multi-level composite patterns. In the layout view shown, the first level contour 251 predicts the imaging performance of the synthesized grating pattern 321, while the second level contour 271 predicts the imaging performance of the plug pattern 352. As shown, upon arriving at the composite effect of the two mask levels (either by the method 800 or by a conventional geometric Boolean operation of the pattern contours), the sliver 882 would result in a pinch error violation because portions of the synthesized grating pattern 321 are bridged. However, if contour 271 should snap toward the second level pattern 352 because of effects of the double-patterning process, a verification algorithm which merely operates on pattern contours generated as independent layers will result in numerous false violations.

To reduce such false violations, the method 800 is expanded in a further embodiment to include an additional operation. One or more of the image intensity signals is first modified with a cross term from the other of the image intensity signals before generating, based on a Boolean operation of the second mask level and the first mask level, a function of the image intensity signals. Figure 8E illustrates an exemplary method 850 which begins with operations 851 and 852, at which the first and second resist signals ("A" and "B") are read in, as discussed elsewhere herein. At operation 853, at least one of the signals is modified with a cross term from the other resist signal.
In the exemplary embodiment, both resist signals are modified with a cross term. Generally, the cross term is to impart some dependency of one or both of the masks on the other in a double patterning mask set, to capture the sequence- and topography-based lithographic effects that occur during double-patterning processes. For example, as depicted in Figure 8D, where a false violation results, the resist signal from which the first level contour 251 was generated is modified at operation 853 to include a cross term from the resist signal B, to arrive at a modified signal A′. A similar modification can be made to the resist signal of the second patterning level to introduce a cross term from the first level resist signal, to arrive at a modified signal B′.

In one embodiment, the cross term comprises a proximity influence term which increases in magnitude as the first and second image intensity signals become more proximate to each other within the mask area. This causes the magnitude of the modification to increase with proximity (i.e., as the first and second level patterns approach collinearity, the modified signal deviates from the nominal by a greater amount). In a particular embodiment, the proximity influence term in the modified image intensity signal is an exponential function of the other image intensity signal (e.g., for the first level signal, a term of the form e^B). In a further embodiment, the cross term is applied to the resist signal as a function of a curvature or slope of the resist signal, to capture second order effects that are not captured by proximity alone. Generally, this term limits signal modification to a subset of the greatest topography of the mask level to which the signal being modified corresponds. In one embodiment, the cross term (e.g., the proximity influence term) is multiplied by a curvature of the electric field (slope of signal A); for example, for a modification to the first pattern level signal, a term of the form ∇A · e^B.
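One possible reading of the cross-term modification can be sketched as below. The exact functional form in the source text is partly garbled, so this assumes a common-sense composition of the pieces the text does give (an exponential proximity term in B, weighted by the local slope of A); the combining rule and the `weight` coefficient are illustrative assumptions, not the patent's formula.

```python
import numpy as np

# Sketch of the cross-term modification of operation 853, assuming
# A' = A + weight * |dA/dx| * exp(B): the exp(B) factor grows as B
# becomes proximate, and the |dA/dx| factor confines the modification
# to the vicinity of A's own topography (1-D sketch; coefficients
# are illustrative).

def modify_with_cross_term(a, b, weight=0.1):
    """Return the modified signal A' with a slope-weighted cross term from B."""
    slope_a = np.gradient(a)          # slope (curvature proxy) of signal A
    proximity = np.exp(b)             # proximity influence term from signal B
    return a + weight * np.abs(slope_a) * proximity
```

A symmetric call with the arguments swapped would produce B′, after which the modified signals feed the same f(A,B) Boolean mapping as the unmodified case.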
Similar treatments may be applied to the signal B in further embodiments. Next, at operation 855, a function of the image intensity signals, as modified, is generated based on a Boolean operation of the second mask level and the first mask level, substantially as described elsewhere herein for an unmodified signal. At operation 860, the composite contour from the function of the first and second image intensity signals is determined, and it is output at operation 865 as a modified prediction of an image that results on a substrate from successively printing the first and second mask levels on the substrate. Figure 8C illustrates a difference contour 881 employing the method 850. As shown, many of the areas that are identified as a difference between the first and second patterns 321, 352 in Figure 8B are no longer so identified. In this manner, the method 850 may be employed to incorporate composite mask effects into a composite contour, to arrive at a better model of a double-patterning process and reduce the number of false violations.

Returning to Figure 2, upon verification, the mask design and generation process outputs instructions from which a mask shop can manufacture the first level (grating) mask at operation 290 and the second level (plug) mask at operation 295. The masks generated may then be utilized by a microelectronics manufacturer to practice a double patterning process, such as that depicted in Figure 1.

Embodiments of the present invention may include apparatuses for performing the operations herein. The simulation algorithms of the present invention may be implemented on a stand-alone or networked computer system based on a number of instructions that are executed by the computer(s) to simulate how a mask pattern will print on a wafer.
From the estimate of how the mask pattern will print, one or more of the resolution enhancement techniques, such as OPC or subresolution assist features (SRAFs), can be used in order to produce mask pattern data that will be used to generate a mask which is projected onto a wafer to print the target pattern into a thin film layer of a microelectronic device.

An apparatus may be specially constructed for the desired purposes, or it may comprise a general purpose computing device selectively activated or reconfigured by a program stored in the device. Such a program may be stored on a storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, compact disc read only memories (CD-ROMs), magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions, and capable of being coupled to a system bus for a computing device.

The present invention may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present invention. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices, etc.), a machine (e.g., computer) readable transmission medium (electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.)), etc.

Figure 9 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 900 within which a set of instructions, for causing the machine to perform any one or more of the design and modeling methodologies discussed herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a Local Area Network (LAN), an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines (e.g., computers) that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The exemplary computer system 900 includes a processor 902, a main memory 904 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 906 (e.g., flash memory, static random access memory (SRAM), etc.), and a secondary memory 918 (e.g., a data storage device), which communicate with each other via a bus 930. Processor 902 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 902 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processor 902 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processor 902 is configured to execute the processing logic 926 for performing the operations and steps discussed herein. The computer system 900 may further include a network interface device 908.
The computer system 900 also may include a video display unit 910 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 912 (e.g., a keyboard), a cursor control device 914 (e.g., a mouse), and a signal generation device 916 (e.g., a speaker).

The secondary memory 918 may include a machine-accessible storage medium (or more specifically a computer-readable storage medium) 931 on which is stored one or more sets of instructions (e.g., software 922) embodying any one or more of the methodologies or functions described herein. The software 922 may also reside, completely or at least partially, within the main memory 904 and/or within the processor 902 during execution thereof by the computer system 900, the main memory 904 and the processor 902 also constituting machine-readable storage media. The software 922 may further be transmitted or received over a network 920 via the network interface device 908.

The machine-accessible storage medium 931 may store sets of instructions (e.g., software 922) embodying any one or more of the methodologies or functions described herein. While the machine-accessible storage medium 931 is shown in an exemplary embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention.
The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.In the foregoing specification, embodiments of the invention have been described with reference to specific exemplary features thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and figures are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
This disclosure provides systems, methods, and apparatus, including computer programs encoded on computer storage media, for displaying information in various display regions within wearable display devices in a manner that enhances user experience. The wearable display devices may include a flexible display region that may be capable of operating in a wrinkled state. In one aspect, a wearable display device includes one or more sensors configured to provide information regarding the position of one or more display regions on the wearable display device. In some aspects, the device includes a processor that is capable of selecting where to display image data on the device. In some aspects, the selection of an appropriate display region is based at least in part on a privacy level associated with the image data and/or the positioning of the wearable device in space with respect to the user. |
CLAIMS

What is claimed is:

1. A wearable electronic device comprising:
a display, including:
a first display region and a second display region;
at least one sensor configured to provide information regarding the position of the first and second display regions; and
a processor capable of selecting one of the first and second display regions to display first image data, wherein the selection of a display region is based at least in part on the position of the first and second display regions and a privacy level associated with the first image data.

2. The wearable electronic device of claim 1, wherein at least a portion of the first display region faces a first direction and at least a portion of the second display region faces a second direction that is opposite of the first direction.

3. The wearable electronic device of claim 1, wherein the processor is also capable of determining a privacy level associated with the first image data.

4. The wearable electronic device of claim 3, wherein the privacy level is based at least in part on one of content of the image data, source of the image data, and user selection.

5. The wearable electronic device of claim 1, wherein the display is capable of operating in a curved state.

6. The wearable electronic device of claim 1, wherein the display is capable of operating in a wrinkled state.

7. A wearable electronic device comprising:
a display capable of being operated in a curved state, the display including:
a first display region, and
a second display region facing a different direction than the first display region when the display is in the curved state; and
a processor capable of:
determining a privacy level of first image data to be displayed on the display; and
selecting one of the first and second display regions to display the image data based at least in part on the privacy level of the image data.

8.
The wearable electronic device of claim 7, wherein the wearable display device is configured to be worn on a user's arm, and wherein the display is capable of being operated while curved around a user's arm.

9. The wearable electronic device of claim 8, wherein the first display region is configured to face a user's body.

10. The wearable electronic device of claim 9, wherein the second display region is configured to face away from a user's body.

11. The wearable electronic device of claim 7, further comprising at least one sensor configured to determine the orientation of the first and second display regions relative to the user.

12. The wearable electronic device of claim 11, wherein the display is flexible and the boundaries of the first and second display regions are determined at least in part on the orientation and deformation of the flexible display.

13. The wearable electronic device of claim 7, wherein the processor is capable of comparing the privacy level of the first image data with the privacy level of second image data.

14. A method of displaying data on a wearable display comprising:
designating a first display region and a second display region on the wearable display based at least in part on how the wearable display is oriented in space;
determining a privacy level of image data to be displayed on the wearable display; and
displaying the image data on the first display region or the second display region depending on the privacy level of the image data.

15. The method of claim 14, further comprising determining how the wearable display is oriented in space.

16. The method of claim 14, further comprising adjusting the boundaries of the first and second display regions based at least in part on the orientation of the wearable display.

17.
The method of claim 14, wherein the wearable display is a flexible display and the method further comprises adjusting the boundaries of the first and second display regions based at least in part on the deformation of the flexible display.

18. The method of claim 14, wherein designating a first display region and a second display region on the wearable display based at least in part on how the wearable display is oriented in space includes designating a first display region that faces a user's body and designating a second display region that faces away from the user's body.

19. The method of claim 18, further comprising displaying first image data on the first display region and displaying second image data on the second display region.

20. The method of claim 19, wherein the privacy level of the first image data is higher than the privacy level of the second image data.
CONTROLLED DISPLAY OF CONTENT ON WEARABLE DISPLAYS

TECHNICAL FIELD

[0001] This disclosure relates to wearable devices. More particularly, this disclosure is directed to devices, systems, and methods for displaying information in various display regions within wearable display devices in a manner that enhances user experience and extends battery life.

DESCRIPTION OF THE RELATED TECHNOLOGY

[0002] Mobile and/or portable electronic devices with touch screen displays have become ubiquitous. Screen sizes of such devices have increased over time, and flexible displays may become widely available. Some display devices are available in a wearable form or can be adapted to a wearable form, in which the display device can be releasably attached to a user's wrist, forearm, or the like. As screen flexibility and durability increase, display devices which are wearable and highly flexible may become more common. Power consumption of such devices will be an important consideration, as larger displays will require more power. As such, a need exists for devices, systems, and methods for displaying content in a user-friendly and energy-efficient manner.

SUMMARY

[0003] The systems, methods, and devices of this disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.

[0004] In some aspects, a wearable display device includes a display. Driver circuitry may be in electrical communication with the display. The driver circuitry may be configured to drive a first region of the display at a first image quality and a second region of the display at a second image quality different than the first image quality. The device may include a processor. The processor may be capable of selecting a region of the display in which image data will be displayed. The processor may be capable of directing the driver circuitry to display image data in the selected region of the display.
The selection of the region of the display in which image data will be displayed may be based at least in part on one or more of the following: a content type associated with the image data, an image format associated with the image data, a priority associated with the image data, user preference associated with one or more user profiles, and biometric information indicative of a current user. In some aspects, the processor is capable of assigning an image quality for a region of the display. The image quality may be one or more of the following: color gamut, resolution, range of colors, frame rate, size and shape of the image, refresh rate, and the like.

[0005] In some aspects, a wearable display device having a display area with at least two sub-regions configured to display at least two different image qualities may be operated by a method including receiving a command to display image content, selecting an appropriate sub-region of the display area for displaying the content, and displaying the content in the selected sub-region of the display area. In some aspects, a command is received from a software application. The software application may have a priority level associated with it, and the selection of the appropriate sub-region of the display area may be based at least in part on the priority level. In some aspects, selecting the appropriate sub-region is based at least in part on information relating to the usage history of the software application. In some aspects, selecting the appropriate sub-region is based at least in part on remaining battery life of the wearable display device.

[0006] In some aspects, a wearable display device includes a display area. The display area may include at least a first display region and a second display region. Driver circuitry may be in electrical communication with the first display region and the second display region.
The driver circuitry may be capable of displaying images within the first display region at a first image quality and displaying images within the second display region at a second image quality different than the first image quality. In some aspects, the first display region has a first pixel density and the second display region has a second pixel density different than the first pixel density. In some aspects, the driver circuitry is configured to drive the first display region at a first refresh rate and drive the second display region at a second refresh rate different than the first refresh rate. In some aspects, the first display region is capable of displaying a first color gamut and the second display region is capable of displaying a second color gamut different than the first color gamut.

[0007] In some aspects, a wearable electronic device includes a display. The display may include a first display region, a second display region, at least one sensor configured to provide information regarding the position of the first and second display regions, and a processor capable of selecting one of the first and second display regions to display first image data. The selection of a display region may be based at least in part on the position of the first and second display regions and a privacy level associated with the first image data. The processor may be capable of determining a privacy level associated with the first image data. The privacy level may be based at least in part on one or more of the following: content of the image data, source of the image data, and user selection.

[0008] In some aspects, a wearable electronic device includes a display capable of being operated in a curved state. The display may include a first display region, a second display region facing a different direction than the first display region when the display is in the curved state, and a processor.
The processor may be capable of determining a privacy level of first image data to be displayed on the display and selecting one of the first and second display regions to display the image data based at least in part on the privacy level of the image data. The processor may be capable of comparing the privacy level of the first image data with the privacy level of second image data. In some aspects, the device includes at least one sensor configured to determine the orientation of the first and second display regions relative to the user. In some aspects, the display is flexible and the boundaries of the first and second display regions are determined at least in part on the orientation and deformation of the flexible display.

[0009] In some aspects, a method of displaying data on a wearable display includes designating a first display region and a second display region on the wearable display. The designation may be based at least in part on how the wearable display is oriented in space. The method may include determining a privacy level of image data to be displayed on the wearable display. The method may include displaying the image data on the first display region or the second display region depending on the privacy level of the image data. In some aspects, the method includes determining how the wearable display is oriented in space. The method may include adjusting the boundaries of the first and second display regions based at least in part on the orientation of the wearable display.

[0010] In some aspects, a wearable electronic device includes a display, a plurality of sensors coupled to the display and configured to determine the state of the display, and a processor in electrical communication with the plurality of sensors. The processor may be configured to provide image data to the display. The processor may be capable of changing at least one characteristic of the image data provided to the display based at least in part on input received from the sensors.
The changing of the at least one characteristic of the image data may include one or more of the following: resizing the image data, reshaping the image data, adjusting the resolution of the image data, and altering the brightness of the image data.

[0011] In some aspects, a method of displaying content on a flexible display includes displaying content on a flexible display, receiving electrical signals from one or more deformation sensors coupled to the flexible display, and altering the displayed content based at least in part on the received electrical signals. In some aspects, altering the displayed content includes increasing a font size of text within the displayed content.

[0012] In some aspects, a wearable electronic device includes a display. The display may include a plurality of ambient light sensors. A processor may be in electrical communication with the plurality of sensors. The processor may be configured to deactivate at least a portion of the display based at least in part on input received from the sensors. In some aspects, the display is capable of bending over at least a 180 degree arc. The display may be capable of being operated in a curved and/or wrinkled state.

[0013] Details of one or more implementations of the subject matter described in this disclosure are set forth in the accompanying drawings and the description below. Although the examples provided in this disclosure are primarily described in terms of wearable and flexible displays configured to be worn on a user's arm, the concepts provided herein may apply to other types of displays such as, for example, liquid crystal displays, organic light-emitting diode ("OLED") displays, and field emission displays. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims.
Note that the relative dimensions of the following figures may not be drawn to scale.

[0014] It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular implementation described herein. For example, aspects of certain implementations may be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested by other implementations. Furthermore, the aspects and features described herein are not limited to any particular implementation of a wearable display device. Thus, wearable display devices may include more or fewer of the features and advantages herein described. Moreover, the various aspects and features from different implementations may be interchangeable. For example, the features of the wearable display devices in the various implementations may be switched between implementations.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] The following is a brief description of each of the drawings. From figure to figure, the same reference numerals are used to designate the same steps or components of the illustrated example implementations.
[0016] FIG. 1 illustrates an implementation of a large-format wearable display device.
[0017] FIGS. 2A-2D depict various example implementations of displays including a plurality of display regions.
[0018] FIGS. 3A and 3B illustrate an implementation in which a mobile phone serves as a complementary device for a wearable display device.
[0019] FIGS. 4A and 4B illustrate an implementation in which a smart watch serves as a complementary device for a wearable display device.
[0020] FIG. 5 is a flow diagram illustrating an example method for operating a wearable display device.
[0021] FIGS. 6A and 6B illustrate an example of content reorganization on a wearable display device.
[0022] FIGS.
7A and 7B show another example of content reorganization on a wearable display.
[0023] FIGS. 8A and 8B show another example of content reorganization on a wearable display.
[0024] FIGS. 9A and 9B show an example of content display on a wearable device.
[0025] FIGS. 10A and 10B show another example of content display and reorganization on a wearable device.
[0026] FIGS. 11A and 11B show another example of content display on a wearable device.
[0027] FIGS. 12A-12D show another implementation in which a mobile phone serves as a complementary device for a wearable display device.
[0028] FIG. 13 is a flow diagram illustrating an example method for displaying content in display regions of a wearable display device.
[0029] FIGS. 14A-14C show an implementation of a wearable device having a plurality of deformation sensors.
[0030] FIGS. 15A-15B show an example of content display and reorganization on a wearable device having a plurality of deformation sensors.
[0031] FIGS. 16A-16B show an implementation of a wearable device having a plurality of light sensors.
[0032] FIG. 17 is a flow diagram illustrating another example method for displaying content in display regions of a wearable display device.

DETAILED DESCRIPTION

[0033] The present disclosure provides systems, methods, and devices that may be used in connection with a wearable display. In some implementations, the device can be flexible. In some implementations, the device can be configured to be worn on or secured relative to a user's arm. For example, the device may include a sleeve, which may be semi-elastic and flexible. In other implementations, the display can be rigid or not substantially flexible. In some implementations, the device includes one or more rigid sections which may be planar or curved in a convex or concave manner.

[0034] In addition to a flexible display, the display device may include other components of varying flexibility.
Such components may include one or more display screens, microphones, speakers, antennas, batteries, sensors, haptic feedback elements, processors, integrated circuits, input or output ports, and other components of a mobile computing device. In some implementations, the device may be a fully independent mobile computing device, while in other implementations the display device may be a companion device for use with a separate mobile computing device, such as a smartphone.

[0035] In some implementations, the device may have more than one display region, and be capable of displaying images at different image qualities in separate display regions. In other words, rather than having a display region capable of displaying images at a single image quality across the entire display, different display regions within the display can simultaneously display images at different image qualities. In some implementations, a flexible display may be subdivided into two separate display regions, but in other implementations, a flexible display may be subdivided into more than two separate display regions.

[0036] These display regions may differ structurally from one another, or may be driven in such a manner that the displayed images are of different quality. In one implementation, a first display region may have a higher pixel density than a second display region, and the first display region may be capable of displaying higher resolution images than the second display region. In some implementations, for example, the device includes a first display region capable of displaying relatively complex image data, such as high-resolution images or video, and a second display region which is used to display relatively simple image data at a lower image quality, such as static text.
In this way, information may be displayed from a multitude of sources and/or at a variety of different image qualities in appropriate display regions of a display device.

[0037] The wearable display device may also be configured to determine and/or select a display region in which specific image content is displayed. For example, the wearable display device may display text in a display region that is best suited to display text and display video in a display region that is best suited to display video. In some implementations, the wearable display device may be configured to move the displayed content from a first display region to a second display region. For example, a video may be displayed in a first display region capable of displaying images at a high quality. When the video is paused, the device may move the video to a second display region having a second image quality that is less than the high image quality of the first display region. In this way, the content can be arranged and displayed in a more efficient manner.

[0038] The wearable display device may be capable of simultaneously running multiple applications or programs, or may be used in conjunction with a companion device with such capabilities. Each of these programs or applications may simultaneously display image data on respective portions of the wearable display device. The applications may be configured and/or assigned to display data in particular display regions. For example, the applications may display data in a particular display region depending on the type of image data that the application wishes to display. In some implementations, applications can be launched in a particular display region based on the content displayed by the applications. In some implementations, the display device can assign applications to specific display regions based on a display priority of the applications.
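Paragraph [0038] above describes assigning applications to display regions based on a display priority. A minimal sketch of one such assignment policy might look like the following; the class names, priority values, and region identifiers (which loosely mirror display regions 120, 130, and 140 of FIG. 1) are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch of priority-based region assignment; all names and
# numeric values are invented for illustration.
from dataclasses import dataclass

@dataclass
class DisplayRegion:
    name: str
    quality: int  # higher value = higher image quality

@dataclass
class Application:
    name: str
    display_priority: int  # higher value = benefits more from a good region

def assign_regions(apps, regions):
    """Give higher-priority applications the higher-quality regions."""
    ranked_apps = sorted(apps, key=lambda a: a.display_priority, reverse=True)
    ranked_regions = sorted(regions, key=lambda r: r.quality, reverse=True)
    return {app.name: region.name for app, region in zip(ranked_apps, ranked_regions)}

regions = [DisplayRegion("region_120", quality=3),
           DisplayRegion("region_130", quality=1),
           DisplayRegion("region_140", quality=1)]
apps = [Application("video_player", display_priority=10),
        Application("weather", display_priority=2),
        Application("stocks", display_priority=1)]

print(assign_regions(apps, regions))  # video_player gets region_120
```

Here the highest-priority application simply receives the highest-quality region; a real device could additionally weight usage history or remaining battery life, as paragraph [0005] suggests.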
For example, applications that will benefit from a higher quality display may launch in or be moved to display regions having appropriate display capabilities.

[0039] Techniques according to this disclosure may enable a wearable display device to selectively display public or private information based on the physical orientation of the wearable device with respect to the user. That is to say, the wearable computing device may be configured to display private information in areas of the display that are unlikely, or impossible, for someone other than the user to view. In some implementations, the wearable display device may display private information in areas that can only be seen by the user. In this way, individuals other than the user may be prevented from observing a user's private information displayed on the wearable display device.

[0040] The wearable display device may also be configured such that the device can determine the orientation of the device with respect to the user. That is to say, one or more sensors may enable the device to determine which portions of the device are facing towards a user's body and which portions are facing away from the user's body. In some implementations, the device may be configured to display more private information on display regions facing toward a user's body and less private information on display regions facing away from the user's body.

[0041] The wearable display device may also be configured such that the device can deactivate portions of the display that are obscured from a user's view. In this way, the device can reduce energy consumption. For example, the wearable display device may include one or more sensors configured to determine when portions of the display are covered by, for example, a shirt sleeve. These covered portions may be deactivated to reduce power consumption.
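The orientation-based privacy behavior of paragraphs [0039] and [0040] can be sketched as a simple selection rule; the PRIVATE/PUBLIC levels and the faces_user flag are assumptions invented for this example:

```python
# Illustrative sketch of privacy-based region selection; the privacy levels
# and region descriptors are hypothetical, not from the disclosure.
PUBLIC, PRIVATE = 0, 1

def select_region(regions, privacy_level):
    """Prefer a region facing the user's body for private image data."""
    for region in regions:
        if privacy_level == PRIVATE and region["faces_user"]:
            return region["name"]
        if privacy_level == PUBLIC and not region["faces_user"]:
            return region["name"]
    return regions[0]["name"]  # fall back to any available region

regions = [{"name": "outer", "faces_user": False},
           {"name": "inner", "faces_user": True}]

assert select_region(regions, PRIVATE) == "inner"
assert select_region(regions, PUBLIC) == "outer"
```

In practice the faces_user flag would be derived from the orientation sensors described above, and the regions themselves could be re-bounded as the arm twists.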
In other implementations, the wearable display device can be configured to deactivate portions of the display that cannot be seen from a user's vantage point.

[0042] In some implementations, the wearable display device is configured to determine the status of the display and adjust the displayed content accordingly. For example, the device may include one or more deformation sensors configured to determine the physical deformation of the display area. In response to excessive wrinkling, for example, the displayed content may be moved to areas of the device that are less wrinkled and/or the relative size of the displayed content may be increased such that the readability and/or visualization of the displayed content is enhanced.

[0043] Particular implementations of the subject matter described in this disclosure can be implemented to realize one or more of the following potential advantages. By providing a device with varying display capabilities across a large-format display, the device may more efficiently render and display image data across a large screen. For example, different display regions in a display that are configured to display image data at different qualities may reduce the overall power consumption of the device. It may also reduce manufacturing costs, and may reduce the computing power required to drive the display. The various display regions may also enhance the user experience.

[0044] Flexible displays exist in the art. Such flexible displays may be available from, for example, Samsung, Sony, Sharp, Plastic Logic, and/or Arizona State University. These displays may be deformed, bent, curved, folded, wrinkled, unwrinkled, rolled, twisted, and/or stretched. As flexible display technology improves and prices decrease, new techniques of using and interacting with the flexible displays are needed to, for example, encourage widespread adoption of the technology.
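The deformation-driven adjustment of paragraph [0042] might, for the font-size case, be sketched as follows; the sensor scale (0.0 flat to 1.0 fully wrinkled) and the threshold are assumptions made for the example, not values from the disclosure:

```python
# Hypothetical sketch: scale font size with measured wrinkle severity.
def adjust_font_size(base_size_pt, deformation_readings, threshold=0.5):
    """Increase the font size when deformation sensors report heavy wrinkling.

    deformation_readings: per-sensor values, 0.0 = flat, 1.0 = fully wrinkled.
    """
    severity = max(deformation_readings)
    if severity <= threshold:
        return base_size_pt
    # Grow the text up to 2x as the display approaches a fully wrinkled state.
    return round(base_size_pt * (1.0 + (severity - threshold) / (1.0 - threshold)))

print(adjust_font_size(12, [0.1, 0.2]))  # flat display: size unchanged, 12
print(adjust_font_size(12, [0.9, 0.4]))  # heavily wrinkled: larger text
```

A fuller implementation could also relocate content to less-wrinkled regions, as the paragraph above describes.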
As such, one or more aspects of the present disclosure may prove especially useful as the flexibility of displays continues to increase. However, it is to be understood that the implementations described herein may be implemented in devices which do not include flexible displays.

[0045] Other aspects of the disclosed devices may also be flexible. Flexible printed circuit boards, antennas, sensors, and/or batteries may also be included in the wearable display. In some implementations, the flexible, wearable display devices may include some components or portions that are less flexible and more rigid than other components or portions. For example, more rigid components may be positioned in relatively un-deformed orientations and more flexible components may be placed in areas that are subject to more bending and/or curvature. As such, portions of the display device along, for example, the top and bottom of the flexible display that extend parallel to the length of a user's arm may be less flexible than the portions that are generally perpendicular to the length of the arm. In this way, more rigid components may be employed in a flexible display device. In some implementations, more rigid components may be housed in a separate companion component, such as a watch, bracelet, or smartphone.

[0034] FIG. 1 illustrates an implementation of a large-format wearable display device. As shown, the wearable display device 100 may be sized and shaped to snugly fit on a user's forearm between the wrist and the elbow. Thus, the wearable display device 100 may be flexible and/or somewhat elastic. However, as described above, one or more portions of the wearable display device 100 may be rigid.
For example, in some implementations, the wearable display device 100 comprises a rigid display that is configured to fit around at least a portion of a user's arm in a bent or curved state, or a display with multiple rigid sections which can be moved independently of one another to allow the display to fit around at least a portion of a user's arm in a bent or curved state. In some implementations, the wearable display device 100 may include a supporting member 190, such as a band, strap, or sleeve, which supports a display area 110 configured to display image data, such as text, images, social media, video, user interface elements, and/or other data that may be displayed to a user.

[0046] In some implementations, the wearable display device 100 may comprise a glove or other article of clothing. For example, the wearable display device 100 may be part of a long sleeved shirt, sweater, or jacket. In some implementations, the wearable display device 100 includes at least one display region that is capable of operating in a curved state. In some implementations, the wearable display device 100 includes at least one display region that is capable of operating in a wrinkled state.

[0047] As shown in the illustrated example in FIG. 1, the display area 110 is subdivided into three display regions 120, 130, and 140. The wearable display device 100 may also include one or more of the following exemplary components: memory, a processor, an audio processing unit, a speaker, a microphone, a communication system, input and output controllers, a touch screen, input devices, and the like. Each display region 120, 130, and 140 is capable of operating in a curved state.

[0048] In the illustrated implementation, the display area 110 may cover only a portion of the surface area of the wearable display device 100, as illustrated in FIG. 1.
However, in other implementations, the display area or region 110 may extend further around the user's arm and may even encompass the entire wearable display device 100, such that the wearable display device 100 is configured to display images on its entire surface area. In other words, while only one side of the wearable display device 100 is illustrated, portions of the device 100 that are hidden from view in FIG. 1 may also be configured to display images. In one implementation, the display area 110 of the wearable display device 100 may be roughly 8 inches in length and have a width of 6 inches at its widest point, but other shapes and sizes may also be used as appropriate.

[0049] In some implementations, the wearable display device 100 includes one or more input mechanisms, such as touch screen functionality, gesture sensing functionality, buttons, dials, or knobs. In one implementation, the wearable display device 100 includes a touch-sensitive region aligned with the display area 110, although in some other implementations the touch-sensitive regions may extend outside of the display area 110 or may cover only a portion of the display area 110.

[0050] The input mechanisms allow a user to interact with the wearable display device 100. A user may, for example, launch applications, interact with one or more applications, or rearrange applications on the display area 110 by touching the display area 110 and/or the wearable display device 100 or otherwise interacting with an input mechanism of the wearable display device 100. In some implementations, the wearable display device 100 can be configured as a "rolling" display, in which a user may scroll the displayed images around the circumference of the user's arm (e.g. up and down as shown in FIG. 1).
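The "rolling" display behavior described above can be sketched as modular arithmetic over the pixel rows that wrap around the arm; the row count and function names below are illustrative assumptions:

```python
# Hypothetical sketch of scrolling content around the circumference of the
# arm: rows wrap modulo the number of physical rows around the band.
ROWS_AROUND_ARM = 480  # assumed physical pixel rows around the circumference

def rolled_row(logical_row, scroll_offset):
    """Map a logical content row to a physical row after scrolling."""
    return (logical_row + scroll_offset) % ROWS_AROUND_ARM

assert rolled_row(0, 100) == 100
assert rolled_row(470, 20) == 10  # content wraps back around the arm
```

The same wrap-around mapping would let hidden content scroll into the visible portion of the display as the user swipes.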
In some implementations, the user may be able to zoom in and out within a display region, or may zoom to expand or reduce the number of display regions used to display an application or user interface element.

[0051] In some implementations, the wearable display device 100 includes at least one component that is capable of bending over at least a 180 degree arc. In other implementations, the wearable display device 100 includes at least one component that is capable of bending over at least a 270 degree arc. In other implementations, the wearable display device 100 includes at least one component that is capable of bending over at least a 360 degree arc. In some implementations, the at least one component includes a display. In some implementations, the wearable display device 100 includes at least one display that can be deformed into a substantially tubular shape.

[0052] Furthermore, not every component of the wearable display device 100 is required to be curved and/or flexible. In some implementations, the wearable display device 100 includes one or more substantially planar surfaces. In some implementations, the wearable display device 100 includes one or more rigid components. The one or more rigid components may be curved. In some implementations, the wearable display device 100 may be semi-rigid, such that it is capable of being bent and/or flexed and of remaining in the bent and/or flexed state.

[0053] The wearable display devices 100 disclosed herein may include one or more sensors that are configured to determine the orientation of the display. The sensors may include, for example, motion sensors, accelerometers, gyroscopes, light detectors, gaze detectors, thermal sensors, cameras, and the like. In some implementations, the sensors may be embedded into the device 100.
In some implementations, the sensors are embedded into the display area 110.

[0054] The display area 110 may be configured such that information is displayed on sections of the display area 110 based on their visibility to a user. For example, the devices 100 described herein may be configured to determine a display area within a wearable display that is currently visible to the user and then to display content in the display area that is currently visible. In an implementation in which the display area 110 wraps more than 180 degrees around the arm of a user, the displayed image data may move to remain visible to the user as the user's arm moves or twists.

[0055] It can be seen in FIG. 1 that the display area 110 is displaying image data from three different applications in display regions 120, 130, and 140 of display area 110. Image data in the form of video content 101 from a first application is being displayed in display region 120, image data from a second application in the form of weather information 103 is being displayed in display region 130, and image data from a third application in the form of stock market information 105 is being displayed in display region 140. In the illustrated implementation, display region 120 may be capable of displaying image data at a higher quality than display regions 130 and 140.

[0056] The wearable display device 100 may include driver circuitry configured to display image data within the various display regions 120, 130, and 140 of display area 110, and may provide part of an interface between a processor generating image data and the display region on which the image data is displayed. In some implementations, discrete driver circuitry may be associated with each of the display regions 120, 130, or 140, or with a subgroup including multiple display regions.
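The discrete per-region driver circuitry of paragraph [0056] might be modeled as one driver object per display region; the Driver class, the refresh rates, and the push_frame interface are invented for this sketch and only loosely mirror the regions of FIG. 1:

```python
# Hypothetical sketch of per-region driver dispatch; API names are assumptions.
class Driver:
    def __init__(self, region_name, refresh_hz):
        self.region_name = region_name
        self.refresh_hz = refresh_hz  # each region may run at its own rate
        self.frames = []

    def push_frame(self, image_data):
        """Accept one frame of image data destined for this region."""
        self.frames.append(image_data)

drivers = {
    "region_120": Driver("region_120", refresh_hz=60),  # high-quality video
    "region_130": Driver("region_130", refresh_hz=15),  # mostly static text
    "region_140": Driver("region_140", refresh_hz=15),  # mostly static text
}

def display(region, image_data):
    """Route image data from the processor to the region's driver circuitry."""
    drivers[region].push_frame(image_data)

display("region_120", "video frame 0")
display("region_130", "weather: sunny, 72F")
```

A single driver configured to produce two different image qualities within one display area, as the next paragraphs describe, could be modeled the same way with one object handling several regions.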
For example, the driver circuitry may include a first driver circuitry in electrical communication with a first display area and a second driver circuitry in electrical communication with a second display area. In other implementations, the driver circuitry may include a first driver circuitry in electrical communication with a first display sub-region of a display area and a second driver circuitry in electrical communication with a second display sub-region of the display area. In other implementations, a single display driver may be configured to display at least two different image qualities within the display area 110.[0057] In the illustrated implementation, the display region 120 is capable of displaying data at a first image quality, and the second and third display regions 130 and 140 are capable of displaying data at a second image quality. In some implementations, differences in the capabilities of the various display regions to display image data are due to physical differences between the various display regions. For example, two or more discrete subdisplays with differing characteristics may be positioned close to one another to form display area 110, with each individual subdisplay serving as a display region or being subdivided into multiple display regions. In a particular implementation, a first subdisplay may form display region 120, and a second subdisplay may be divided into display regions 130 and 140. In other implementations, differences in driver circuitry or image data may cause the display regions to display data at different image qualities. In further implementations, display region 130 may be capable of displaying image data at a second image quality while display region 140 may be capable of displaying image data at a third image quality different than the first and second image qualities. 
In still further implementations, the display area 110 may include additional display regions capable of displaying image data at any of a range of image qualities.[0058] The term image quality is used herein to refer to a variety of display characteristics, any number of which can differ between various display regions. For example, differences in image quality can include, but are not limited to, differences in pixel density, image resolution, frame rate, color depth, color gamut, sharpness, brightness, and contrast.[0059] For example, in some implementations, the first display region 120 has a higher pixel density than the second display region 130 and the third display region 140, allowing the first display region 120 to display image data at a higher resolution than the second display region 130 and the third display region 140. For example, the first display region 120 may have about 300 pixels per inch (PPI) while the second display region 130 and the third display region 140 may have a lower pixel density, such as 100 PPI. A display area 110 which includes lower density pixel regions may decrease the overall cost of the wearable display device 100. The lower pixel density regions may also consume less power than the higher pixel density regions, as it may require less power to both generate and display the image data for the lower pixel density regions.[0060] In other implementations the physical pixel density is substantially uniform across the display area 110, but certain display regions may be driven with a lower effective pixel density. For example, the first display region 120 may have about the same pixel density as the second display region 130 and the third display region 140. In such implementations, the driver hardware and/or software may be configured to provide a lower effective pixel density in the second display region 130 and the third display region 140. 
For example, multiple adjacent pixels can be driven together as a single pixel, lowering the effective resolution of the display, or alternating pixels or lines may be left undriven. In such an implementation, the display area 110 may be divided into two or more display regions in a dynamic manner, in which the display regions need not be located in a fixed position within the display area 110. Rather, the display regions may be moved around within the display area 110 and re-sized and/or reshaped as desired.[0061] In some implementations, the first display region 120 is capable of displaying image data having a first color depth and the second and third display regions 130 and 140 are capable of displaying image data having a second and/or third color depth. Color depth, also referred to as bit depth, may refer to the number of bits used to indicate the color of a single pixel or the number of bits used for each color component of a single pixel. For example, in some implementations the first color depth may be 16-bit color and the second color depth may be 8-bit color. In some implementations, at least a portion of the display area 110 may be configured to display 1-bit or 2-bit color. For example, simple text may be displayed in 1-bit or 2-bit color to conserve energy and promote battery life. Thus, the wearable display device 100 may include various sub-regions capable of displaying image data at a variety of color depths.[0062] In some implementations, the first display region 120 is capable of displaying image data at a first frame rate and the second and third display regions 130 and 140 are capable of displaying image data at a second and/or third frame rate. For example, in some implementations the first frame rate may be 30 frames per second and the second frame rate may be 15 frames per second. Thus, the wearable display device 100 may include various display regions that are capable of displaying content at a variety of different frame rates. 
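Driving multiple adjacent pixels together as a single pixel, as described above, can be illustrated with a simple binning sketch. The function name and the averaging strategy are assumptions made for illustration; hardware drivers may group pixels differently.

```python
def bin_pixels(frame, factor=2):
    """Drive each factor x factor block of adjacent pixels as a single
    pixel by averaging the block and writing the average back to every
    pixel in it, lowering the effective resolution of the region."""
    rows, cols = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for r in range(0, rows, factor):
        for c in range(0, cols, factor):
            block = [frame[r + dr][c + dc]
                     for dr in range(factor) for dc in range(factor)
                     if r + dr < rows and c + dc < cols]
            avg = sum(block) // len(block)
            for dr in range(factor):
                for dc in range(factor):
                    if r + dr < rows and c + dc < cols:
                        out[r + dr][c + dc] = avg
    return out

# A 2x4 grayscale frame binned 2x2: each block collapses to its average.
frame = [[10, 20, 30, 40],
         [30, 40, 50, 60]]
print(bin_pixels(frame))  # [[25, 25, 45, 45], [25, 25, 45, 45]]
```

Because the binning factor is just a parameter, the same region could be driven at full resolution for video and at a reduced effective resolution for static text, consistent with the dynamic re-sizing of display regions described above.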
In some implementations, the differences in frame rate may be due to physical differences between the display regions, or the capabilities of the associated driver circuitry or other device components. In such implementations, the display regions may be fixed, such that a first display region is only capable of displaying image data at a first maximum frame rate, and a second display region is only capable of displaying image data at a second maximum frame rate. In other implementations, the frame rate may be a function of the image data provided to the display area 110 and may be variable and/or dynamic across the display regions.[0063] In some implementations, the first display region 120 is capable of displaying image data having a first color gamut and the second and third display regions 130 and 140 are capable of displaying image data at a second and/or third color gamut which may be different than the first color gamut. In general, color gamut refers to the complete subset of colors that the driver circuitry is configured to display. In some implementations, for example, a first sub-region may be capable of displaying a first color gamut that is broader than a second color gamut associated with second sub-region of the wearable display device 100. In some implementations, certain display regions of the display area 110 may only be capable of displaying a given color gamut, which may be due to physical characteristics of the display region or may be due to the associated driver circuitry or other device components.[0064] In some implementations, the first display region 120 is capable of displaying image data having a first brightness level and the second and third display regions 130 and 140 are capable of displaying image data at a second and/or third brightness level which may be different than the first brightness level. 
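The per-region capabilities enumerated in the preceding paragraphs (pixel density, color depth, frame rate, color gamut, brightness) could be modeled as a simple capability record. The class and field names below are illustrative assumptions, not part of any actual device interface.

```python
from dataclasses import dataclass

@dataclass
class RegionCapabilities:
    """Hypothetical capability model for one display region."""
    pixels_per_inch: int
    max_frame_rate: int       # frames per second
    color_depth_bits: int
    gamut: str                # e.g. "srgb" or "wide"
    max_brightness_nits: int

    def can_display(self, content):
        # A region can display content whose demands do not exceed
        # the region's frame-rate and color-depth capabilities.
        return (content["fps"] <= self.max_frame_rate
                and content["bits"] <= self.color_depth_bits)

# Capabilities loosely modeled on the 300 PPI / 100 PPI example above.
primary = RegionCapabilities(300, 30, 16, "wide", 500)
secondary = RegionCapabilities(100, 15, 8, "srgb", 300)

video = {"fps": 30, "bits": 16}
print(primary.can_display(video), secondary.can_display(video))  # True False
```

A driver or processor holding one such record per region could then decide, per the discussion above, whether a region may be driven at its full quality or deliberately driven lower.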
In some implementations, certain display regions of the display area 110 may only be capable of displaying images at a given brightness level. The range of brightness which can be displayed by a portion of a device may be constrained by physical characteristics of a display region, or by the associated driver circuitry or other device components.[0065] Even in some implementations in which physical differences in the display regions or associated driver circuitry limit the image quality of some display regions relative to other display regions, a display region capable of displaying image data at a high image quality may nevertheless be driven at a lower image quality if desired. In some implementations, the wearable display device 100 can dynamically adjust the image quality. Such an adjustment may be made in response to one or more of user input, application output, and content of the image data to be displayed. For example, when relatively static text is displayed in the first display region 120, the content may be displayed at a relatively low frame rate. However, when video is displayed in the same display region 120, the frame rate may be increased to a relatively higher frame rate.[0066] In some implementations, one aspect of image quality may be higher in a first display region while a different aspect of image quality may be higher in a second display region. For example, a first display region 120 may be capable of displaying image data at a frame rate higher than that of a second display region 130, while the second display region 130 may be capable of displaying image data at a resolution higher than that of the first display region 120. Such a configuration may allow various display regions to be optimized to display specific types of data.[0067] Various sizes and shapes of displays and display regions can be used. FIGS. 2A-2D depict various implementations of displays including a plurality of display regions. FIG. 
2A illustrates a rectangular display 210A which is subdivided into a first display region 120 and a second display region 130, with the facing edges of the first and second display regions 120 and 130 abutting one another. The first display region 120 is displaying video content 101 while the second display region 130 is displaying weather information 103.[0068] Due to the rectangular shape of the display 210A, the upper edge 112 and lower edge 114 of the display area 110 are two substantially parallel lines formed by the upper and lower edges of the display regions 120 and 130. Although the two display regions 120 and 130 are referred to as abutting one another, there may be some spacing between the two display regions 120 and 130 for additional components, particularly in implementations in which two different displays are combined to form display area 110.[0069] FIG. 2B illustrates another implementation of a display 210B which is subdivided into a first display region 120 and a second display region 130. The display 210B differs from the display 210A of FIG. 2A in that the display 210B is generally in the shape of a parallelogram instead of a rectangle, with the upper and lower edges 112 and 114 of the display 210B oriented at an angle to one another. A tapering display such as display 210B may be well-suited for use as a component of a wearable display, due to the narrowing shape of a user's forearm. In some implementations, one or more display regions taper in size from the elbow to the wrist of a user.[0070] FIG. 2C illustrates another implementation of a display 210C which is subdivided into a first display region 120 and a second display region 130. Like display 210B of FIG. 2B, the display 210C of FIG. 2C is also a tapering shape, but differs in that the upper and lower edges 112 and 114 are concave lines, rather than straight lines. 
In other implementations, other non-straight edges may be used, such as convex lines, or lines with an angle or other discontinuity.[0071] FIG. 2D illustrates a rectangular display 210D which is subdivided into a first display region 120, a second display region 130, and a third display region 140. Unlike the display regions of FIG. 1, the third display region 140 in the display 210D of FIG. 2D extends horizontally across the entire length of the display 210D. In the particular implementation illustrated in FIG. 2D, the third display region 140 is thin compared to the other display regions 120 and 130, but in other implementations can be made thicker or thinner than the implementation illustrated in FIG. 2D.[0072] In other implementations, additional display regions may also be used. In other implementations, the boundaries between display regions need not be generally horizontal or vertical straight lines as depicted in certain illustrations herein, but may be in any appropriate orientation or shape.[0073] FIGS. 3A and 3B illustrate an implementation in which a mobile phone serves as a complementary device 200 for a wearable display device 100. The wearable display device 100 may be configured to be physically or wirelessly coupled to a complementary device 200 such as a smartphone. The complementary device 200 can be any suitable device such as a computer, laptop, smartphone, or smart-television. In this way, the display of the images in the display area 110 of the wearable display device 100 may be at least partially controlled by the complementary device 200. In some implementations, the wearable display device 100 may mirror the display of the complementary device 200. In other implementations, the wearable display device 100 displays at least a portion of the image data being displayed by the complementary device 200. 
In some implementations, the wearable display device 100 may be dependent upon the complementary device 200 and may be connected either directly or wirelessly to the complementary device 200. In other implementations, the wearable display device 100 may be fully functional without the complementary device 200 but can also be used in a mode in which the complementary device 200 interacts with the wearable display device 100.[0074] In some implementations, the wearable display device 100 may be capable of interacting with and/or at least partially controlling the complementary device 200. For example, in some implementations, the wearable display device 100 may be configured such that a user can select a display region and/or application displayed on the wearable display device 100 and choose to have that display region and/or application displayed on a complementary device 200.[0075] In the implementation illustrated in FIGS. 3A and 3B, an application or other user interface element may be selected on the complementary device 200 and moved onto the display area 110 of the wearable display device 100, such as by swiping across the display of the complementary device 200. The complementary device 200 may be capable of running one or more applications or programs capable of displaying image data on the wearable display device 100 or otherwise communicating with an application or program on the wearable display device 100 to display image data on the wearable display device 100. In some implementations, the wearable display device 100 is running one or more applications or programs capable of displaying image data on the wearable display device 100. That is to say, the display area 110 of the wearable display device 100 is capable of displaying image data that is output and/or rendered by an application or program running either on the wearable display device 100 or on an associated complementary device 200.[0076] FIGS. 
4A and 4B illustrate an implementation in which a smart watch serves as a complementary device 200 for a wearable display device 100. In some implementations, the watch may be a complementary device 200 which houses components that are not flexible enough to be included in the wearable display device 100. In some implementations, the watch may house components which are heavier and/or less-breathable than other components of the wearable display device 100.[0077] FIGS. 4A and 4B also illustrate that the display 110 may include a first display region 120 and a second display region 130. As discussed above, the first display region 120 may be capable of or configured to display images within the first display region 120 at a first image quality, while the second display region 130 may be capable of or configured to display images within the second display region 130 at a second image quality. The second image quality may be different from the first image quality. In the particular implementation illustrated in FIGS. 4A and 4B, the second display region 130 is configured to display images at an image quality that is less than the image quality that can be displayed in the first display region 120. However, in some implementations the second display region 130 may be configured to display higher-quality images than the first display region 120.[0078] While the illustrated implementation of FIGS. 4A and 4B illustrates the display regions 120 and 130 as regions that are divided in a direction extending roughly perpendicular to an axis extending roughly parallel to a user's forearm, the display regions may be divided up in any manner. If the display 110 wraps around the user's forearm, the display regions may be similar in shape to bands or rings circling a part of the user's forearm. In other implementations, the display is divided into a plurality of elongated display regions that run roughly parallel to the user's forearm.[0079] It can be seen in FIGS. 
4A and 4B that display regions 120 and 130 each include an upper boundary running roughly parallel to the user's forearm on the top side (e.g. the side closer to a user's thumb), which together form the upper edge of the display 110. The display regions 120 and 130 may also include a lower boundary running roughly parallel to the user's forearm on the bottom side (e.g. the side closer to a user's pinky), which together form the lower edge of the display 110. The display regions 120 and 130 share a boundary line extending roughly perpendicular between the upper and lower boundary that roughly divides the display 110 generally in half to define the two display regions 120 and 130. In some implementations, the boundary lines defining the sub-regions are dynamic. In other words, the sub-regions may be re-sized and/or re-configured. In other implementations, the boundary lines are static and unchanging, and may be the boundaries between two discrete displays combined to form display 110.[0080] As shown in FIG. 4A, the first display region 120 may display video content 101 and the second display region 130 may display weather content 103. Because the weather content 103 is generally static and primarily text-based while the video content 101 is more dynamic in comparison to the weather content 103, the video content 101 may be displayed in a first display region 120 that is capable of or configured to display content at a different image quality than the second display 130 region. In some implementations, the first display region 120 may be capable of or configured to display content at a higher resolution, frame rate, color depth, color range, and/or color gamut than the second display region 130. In this way, the video content 101 may be displayed in a different manner from the weather content 103. Thus, in some implementations, content that will benefit from being displayed at a higher image quality (e.g. 
video content) may be displayed in the first display region 120 and other content may be displayed on the second display region 130.[0081] As shown in FIGS. 4A and 4B, the wearable display device 100 may interact with a complementary device 200. As shown in FIG. 4A, a first video content 101 is being displayed in the first display region 120, weather content 103 is being displayed in the second display region 130, and a second video content 102 is being displayed on the display of the complementary device 200. In some implementations, the user may wish to view the second video content 102 in a larger format. As such, in some implementations, the user may cause the second video content 102 to be displayed on the wearable display device 100 by interacting with the mobile device 200. In turn, the wearable display device 100 and/or the mobile device 200 may cause the second video content 102 to be displayed in the first display region 120 as shown in FIG. 4B. The second display region 130 may be dynamically divided into a second display region 130 and a third display region 140, as shown. The first video content 101 may be paused and/or displayed in the second display region 130, and the weather content 103 moved into the third display region 140 when the second video content 102 is selected for viewing in the first display region 120.[0082] In some implementations, one or both of the wearable display device 100 and the complementary device 200 may be configured to select a region of the display 110 in which to display particular image data. One or both of the wearable display device 100 and the complementary device 200 may include a processor configured to select a region of the display 110 to display the selected image data. 
In some implementations, the processor is configured to select a region within the display 110 to display data based at least in part on the content of the data to be displayed. That is to say, the processor may be configured to determine the type of image data or content that an application will display. For example, the processor may be configured to determine if the application will display static text, scrolling text, photos, animations, and/or videos. In some implementations the processor determines the type of content at least in part by the file type extension of the content to be displayed. The processor can then determine which region within the display 110 is best suited to display the content.[0083] In some implementations, the processor is configured to select a region within the flexible display area 110 to display data based at least in part on the type of software application that wishes to display content. In some implementations, the processor is configured to select a region within the flexible display area 110 to display data based at least in part on the rate and/or amount of data being received from the application. For example, multiple commands to refresh the display may be indicative of animation and/or video. In response, the processor may be configured to direct the display data to the optimal sub-region of the display for displaying animation and/or video.[0084] Other aspects of the display data may be used to at least partially determine the optimal sub-region for displaying various types of content on the wearable display. For example, in some implementations, the processor is capable of determining, for example, the number of colors or range of colors to be displayed, the frame rate, the level of detail in the frequency domain, the size, shape, and/or area of the content window, and/or the brightness level of the content to be displayed. 
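The attributes enumerated above (color count, frame rate, level of detail) could feed a simple region-selection routine. The sketch below is a hypothetical illustration only: the function, the capability fields, and the power-saving tie-break (prefer the least-capable region that still satisfies the content) are assumptions, not the disclosed algorithm.

```python
def pick_region(content, regions):
    """Return the least-capable sub-region that still satisfies the
    content's demands, falling back to the most capable region when
    nothing fully satisfies them. Preferring the weakest sufficient
    region mirrors the power-saving rationale described elsewhere."""
    sufficient = [name for name, caps in regions.items()
                  if caps["colors"] >= content["colors"]
                  and caps["fps"] >= content["fps"]
                  and caps["ppi"] >= content["detail_ppi"]]
    if sufficient:
        return min(sufficient, key=lambda n: regions[n]["ppi"])
    return max(regions, key=lambda n: regions[n]["ppi"])

# Hypothetical capability tables for two display regions.
regions = {
    "region_120": {"colors": 65536, "fps": 30, "ppi": 300},
    "region_130": {"colors": 256, "fps": 15, "ppi": 100},
}
video = {"colors": 65536, "fps": 30, "detail_ppi": 300}
text = {"colors": 2, "fps": 1, "detail_ppi": 100}
print(pick_region(video, regions))  # region_120
print(pick_region(text, regions))   # region_130
```

Video content lands in the high-quality region, while simple text is routed to the lower-power region, consistent with the content-based selection described above.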
The processor can then determine which region within the display 110 is best suited to display the content.[0085] In some implementations, the processor may compare one or more aspects of the display data from one or more applications and determine the optimal sub-regions for displaying the data. The processor may compare, for example, the number of colors in the image data from a first application to the number of colors in image data from a second application and direct the image data from the application generating image data which includes more colors to a sub-region of the display that is capable of displaying more colors. In another example, the processor may compare the relative sizes and/or shapes of the content windows from applications and determine the sub-region that is best utilized for the size and/or shape of the content. For example, a content window that is relatively narrow and elongated may best be displayed on a sub-region of the display that has a corresponding shape and/or in a sub-region that has a relatively greater curvature, while a content window that is substantially square may best be displayed in a sub-region that has a corresponding shape and/or in a sub-region that has less curvature. In another example, the processor may perform a fast Fourier transform ("FFT") algorithm on two or more sets of image data to determine the relative sharpness of the images to be displayed and direct the sharper images to a display region that is capable of displaying higher resolution images. In another example, the processor may compare the refresh rates and/or the level of complexity in the image data in order to determine which content would most benefit from a display region having better display capabilities.[0086] In some implementations, the relative battery life of the wearable display and/or companion device may be used to at least partially determine where and/or how various types of content are displayed on the display 110. 
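As a hypothetical illustration of such battery-aware behavior, drive settings might be degraded once the remaining charge falls below a threshold. The function name, field names, and scaling factors below are all assumptions made for the sketch.

```python
def apply_battery_policy(battery_pct, threshold_pct, settings):
    """Below the threshold, degrade the drive settings to extend
    battery life: dim the backlight, halve the frame rate, and cap
    the color depth. Above the threshold, leave settings unchanged."""
    if battery_pct >= threshold_pct:
        return dict(settings)
    return {
        "brightness": settings["brightness"] // 2,
        "fps": max(1, settings["fps"] // 2),
        "color_bits": min(settings["color_bits"], 8),
    }

normal = {"brightness": 400, "fps": 30, "color_bits": 16}
print(apply_battery_policy(15, 20, normal))
# {'brightness': 200, 'fps': 15, 'color_bits': 8}
```

The threshold itself could be user-selectable, and the same policy could also trigger moving content into sub-regions that consume less power.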
For example, if the battery life is determined to be below a threshold value, the processor may direct display data to sub-regions of the display 110 that consume less power. In some implementations, the threshold value may be selected by the user. In some implementations, the driver circuitry may be configured to drive the display 110 at a lower image quality when the battery is low, such as by decreasing brightness levels, decreasing the number of colors displayed, and/or decreasing the frame rate. In some implementations, displayed content may be moved from sub-regions that consume more power to sub-regions that consume less power in order to prolong the use of the wearable display.[0087] While the processor may be capable of determining a preferable or optimal sub-region of the display to display various types of content without user input, selection of a display region can in some implementations also be made by a user selecting a display region where content is displayed. For example, with reference to FIG. 4B, a user may wish to view the weather content 103 in the first display region 120 even if the processor determines that the weather content 103 is best displayed in the second display region 130. In some implementations, when a user opens an application, a pop-up window may prompt the user to select where to display the application content. In some implementations, a pop-up window recommends where to display the application content and the user can either agree or disagree and select where to display the content. In other implementations, user input such as touch or gesture input can be used to move content to other regions of the display.[0088] FIG. 5 is a flow diagram illustrating an example method 400 for operating a wearable display device 100. 
While the steps in the flow diagrams depicted in the present disclosure are illustrated in a particular order, the various steps are only arranged in these orders as examples and are not limited to the specific order or hierarchy presented. In addition, not all of the steps may be necessary. Moreover, the methods disclosed herein may include additional steps that are not explicitly illustrated by the flow diagrams. As shown in FIG. 5, the method 400 may begin at block 401 by receiving a command to display content. The command may be received from a user interface, or may be triggered without immediate user interaction, such as when an email is received or an alarm goes off. In some implementations, the command is received from one or more software applications.[0089] The method 400 may continue at block 403 by identifying a display region within a wearable display suitable for displaying the content. The wearable display may include a plurality of different display regions, with different display regions capable of or configured to display different image qualities. The image quality may include one or more display characteristics such as resolution, frame rate, color gamut, and color depth. The identification of the appropriate display region may be based at least in part on the software application that desires to display the content.[0090] In some implementations software applications may have an associated image quality priority ranking. In such implementations the identification of the appropriate display region may be based at least in part on the priority ranking of the application relative to other applications simultaneously displaying image data. In some implementations, the priority ranking may be user selected or based at least in part on user preferences. User preferences may be part of a user profile that may be stored on the wearable display device 100 or a companion device. 
In this way, more than one user may customize the wearable display device 100 and/or the display area 110. For example, a first user may prefer to have particular content displayed in the same sub-region of the display at all times, while a second user may have different preferences. In some implementations, the wearable display device 100 may be configured to utilize one or more biometric parameters to identify which of a plurality of users is currently wearing the wearable display device 100 and select one of a plurality of user profiles. For example, the wearable display device 100 may be configured to select a particular user profile based at least in part on how much the wearable display device 100 is stretched when worn by the user. In another example, the wearable display device 100 may be configured to select a particular user profile based at least in part on another biometric parameter, such as average heart rate, temperature, and/or blood pressure. In another example, the wearable display device 100 may use one or more sensors to determine the relative size of the arm that the wearable display device 100 is wrapped around in order to determine which user is wearing the device 100. For example, one or more strain gauges or pressure sensors could be used to determine the relative approximate diameters of one or more sections of the wearable display device 100 when a user is wearing the device 100 in order to identify the wearer and select a user profile associated with the user.[0091] In some implementations the priority ranking may change dynamically. For example, an application that receives a notification or desires to display a notification to a user may be given higher priority when the application receives the notification or desires to display that notification. In some implementations, higher priority data may be displayed in a specific location, such as in display regions that are closer to a user's wrist. 
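The dynamic priority ranking described above can be sketched as base priorities plus a temporary boost for applications with pending notifications. The function, the base values, and the boost magnitude are illustrative assumptions.

```python
def rank_applications(apps, pending_notifications):
    """Order applications for display placement. 'apps' is a list of
    (name, base_priority) pairs; an app with a pending notification
    receives a temporary boost so it lands in a more visible display
    region (e.g. closer to the user's wrist)."""
    BOOST = 100  # assumed boost; large enough to outrank any base value

    def effective(app):
        name, base = app
        return base + (BOOST if name in pending_notifications else 0)

    return [name for name, _ in sorted(apps, key=effective, reverse=True)]

apps = [("video", 50), ("weather", 20), ("stocks", 10)]
print(rank_applications(apps, {"stocks"}))
# ['stocks', 'video', 'weather']
```

Once the notification is dismissed, a subsequent call without the boost would return the applications to their base ordering.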
In this way, higher priority data can be seen more easily by a user than lower priority data.[0092] In some implementations, a priority ranking may be determined at least in part by the processor. The processor may consider, for example, the usage history of an application and/or the usage history associated with a specific user of the wearable display device 100. Usage history may include, for example, how many times a user launches an application, how many times a user interacts with an application, how often the application refreshes and/or receives an alert, the amount of time a user displays content from a particular application, as well as the size and/or display characteristics of the content to be displayed. In this way, the processor can determine the relative priority level of the applications and arrange the higher priority applications accordingly. Alternatively and/or in addition, the processor may be configured to determine where to display content based at least in part on the privacy level of the content, as explained in further detail below.[0093] As a non-limiting example, first image content may be output from a first application and second image content may be output from a second application. The wearable display device 100 may include two display regions. The first display region may be positioned along a relatively planar surface extending across the top side of a user's forearm while the second display region may be positioned below the first display region on a relatively curved surface extending across the side of the user's forearm. A processor may need to choose how to best display the two image contents. In other words, the processor may compare the first image content with the second image content in order to determine which sub-region to display the content in. The processor may make the selection based in part on one or more factors. 
For example, the processor may consider which of the two applications is used most frequently, which of the two has the most running time to date, which application best fits the physical sizes of the first and second display regions, as well as one or more image qualities that are to be displayed. The first display region may be preferred for applications that are used more frequently and/or display content at a higher frequency because the first display region may be more easily seen by a user and is more easily accessible.[0094] In some implementations the identification of the appropriate sub-region may be based at least in part on the desired image quality for the content to be displayed. For example, in some implementations the identification of the appropriate sub-region may be based at least in part on the number of different colors that need to be displayed. In some implementations the identification of the appropriate sub-region may be based at least in part on the frame rate of image data being output by an application.[0095] The method 400 may move to a block 405 by displaying the content in the selected display region of the wearable display device 100. In some implementations, displaying the content in the identified display region includes moving other content currently being displayed from one display region to another display region.[0096] In some implementations a user may actively select the display region within the display 110 in which the content is to be displayed. In other implementations, the wearable display device 100 and/or mobile complementary device 200 may automatically select the sub-region in which the content is to be displayed. In some implementations the wearable display device 100 and/or the complementary device 200 may dynamically determine which display regions will display various image data. 
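One simple way to realize the comparison described above is to rank content by display demand and assign it to regions ordered best-first. The demand weights and field names below are assumptions for illustration, not the method specified in this description:

```python
# Illustrative sketch: pair content items with display regions by ranking
# each item's display demand (usage frequency plus color depth) and
# assigning the most demanding item to the most capable region.

def assign_regions(contents, regions):
    """Pair each content item with a region.

    `contents`: dicts with 'name', 'usage_freq', and 'colors'.
    `regions`: list ordered best-first (most visible/capable region first).
    Returns {content_name: region_name}.
    """
    def demand(c):
        return c["usage_freq"] + 0.01 * c["colors"]
    ordered = sorted(contents, key=demand, reverse=True)
    return {c["name"]: r["name"] for c, r in zip(ordered, regions)}

contents = [
    {"name": "video", "usage_freq": 10, "colors": 4096},
    {"name": "stocks", "usage_freq": 4, "colors": 16},
]
regions = [{"name": "region_120"}, {"name": "region_130"}]
assignment = assign_regions(contents, regions)
```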
As such, the wearable display device 100 and/or the complementary device 200 may be configured to actively move content from one display region to another, either automatically or in response to user input.[0097] FIGS. 6A and 6B illustrate an example of content reorganization on a wearable display device 100. The wearable display device 100 may be configured to operate as a stand-alone device or may be configured to operate in conjunction with another device such as a computer, laptop, smartphone, smart-television, and the like. As shown in FIG. 6A, first video content 101 is playing in first display region 120. Second video content 102 is paused and displayed in a second display region 130, while weather content 103 is displayed in a third display region 140. In such an implementation, a user may wish to resume watching the second video content 102. Thus the user may select the second video content 102, such as by contacting the area of the second display region 130 where the second video content 102 is displayed. The selection of the second video content 102 may cause the wearable display device 100 to move the second video content 102 from the second display region 130 to the first display region 120 and to move the first video content 101 from the first display region 120 to the second display region 130 as shown in FIG. 6B. The first video content 101 may be paused when moved to the second display region 130.[0098] FIGS. 7A and 7B show another example of content reorganization on a wearable display. The wearable display device 100 of FIGS. 7A and 7B includes three different display regions 120, 130, and 140, each illustrated for the purpose of convenience as having approximately the same width. 
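The swap behavior of FIGS. 6A-6B can be sketched as follows; the dict-based model of regions and content, and the region names, are assumptions made for illustration:

```python
# Minimal sketch of the FIG. 6A -> 6B behavior: selecting paused content in a
# secondary region moves it (resumed) to the primary region, and the
# displaced primary content is paused in the vacated region.

def select_content(regions, selected, primary="region_120"):
    """Return a new region -> content mapping after a selection gesture."""
    if selected == primary:
        return dict(regions)  # already in the primary region; nothing moves
    new = dict(regions)
    new[primary] = dict(regions[selected], paused=False)   # resume playback
    new[selected] = dict(regions[primary], paused=True)    # pause displaced
    return new

regions = {
    "region_120": {"name": "first_video", "paused": False},
    "region_130": {"name": "second_video", "paused": True},
}
after = select_content(regions, "region_130")
```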
In the illustrated implementation, the display regions may have display capabilities that decrease with respect to at least one display characteristic from left to right, so that the first display region 120 is capable of displaying image data at a higher quality than the second display region 130, which in turn is capable of displaying image data at a higher quality than the third display region 140. However, the image quality levels may be arranged in another order or configuration in other implementations.[0099] As shown in FIG. 7A, video content 101 is being displayed in the first display region 120, scrolling stock content 105 is being displayed in the second display region 130, and weather content 103 is being displayed in the third display region 140. This arrangement may be due to a determination made by the wearable display device 100 that the video content 101 is best suited for display in the display region with the greatest display capability, or may be due to previous user input. Similarly, the wearable display device 100 may have determined that the scrolling stock content 105 is best suited for display in the display region with the second greatest display capability, and may have determined that the weather content 103 is best suited for display in the display region with the lowest display capability. In this way, the display of three different types of content may be optimized.[0100] If the video content 101 is paused, as is shown in FIG. 7B, the wearable display device 100 may reassign or move the display of the video content 101 to the third display region 140. When paused, the display of the video content 101 may no longer need to be refreshed and/or displayed at a high resolution. As such, it can be moved to the display region with the lowest display capability when paused. The first display region 120 may then be powered off to reduce power consumption and extend battery life.[0101] FIGS. 
8A and 8B show another example of content reorganization on a wearable display device 100. The wearable display device 100 is again shown having three display regions with varying display capabilities. As shown in FIG. 8A, a web browser 109 is being displayed in the second display region 130 and scrolling stock content 105 is being displayed in the third display region. The web browser 109 may include a link to an embedded video 119. A user may wish to watch the embedded video 119. As such, when the user selects the embedded video 119, the wearable display device 100 may open the video application such that the video content 101 is displayed in the first display region 120 as shown in FIG. 8B.[0102] FIGS. 9A-12D show illustrative implementations of displaying content on a wearable display device 100 based at least in part on the relative privacy level of the content to be displayed. These implementations may be used alternatively and/or in addition to the implementations described above. As will be described in further detail below, the wearable display device 100 may include a public display region and a private display region. Certain displayed content may be constrained to a display region designated as a private display region based at least in part on a privacy level associated with the display content. The privacy levels may be dynamic and based on the type of content being displayed and/or the application from which the content arrives. The private and/or public regions may be dynamic depending on the location, position, and orientation of the display and/or the user. The wearable display device 100 may include one or more sensors to determine the position and/or orientation of the display and/or user. In some implementations, multiple levels of privacy may be used.[0103] In some implementations, the wearable display device 100 includes a first region having a first privacy level or threshold and a second region having a second privacy level or threshold. 
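The threshold scheme just introduced could be sketched as a simple numeric gate; the region names and the convention that higher numbers mean more private are assumptions for illustration:

```python
# Hedged sketch: constrain content to display regions whose privacy level
# meets or exceeds the privacy level required by the content.

def permitted_regions(content_privacy, region_levels):
    """Return names of regions whose privacy level is at least `content_privacy`.

    `region_levels` maps region name -> numeric privacy level (higher = more
    private); `content_privacy` is the minimum level the content requires.
    """
    return [name for name, level in region_levels.items()
            if level >= content_privacy]

region_levels = {"top_public": 0, "side_semi_private": 1, "inner_wrist_private": 2}
```

Email or SMS content might carry level 2 and thus be confined to the private region, while sports scores at level 0 could appear anywhere.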
The first and second privacy levels or thresholds may be different. Restrictions or allowances may be placed on displayed content based on privacy thresholds. In other words, in some implementations, applications can include restrictions which prevent the application from being launched or displayed in specific regions of the wearable display based on the privacy level of the information that the application intends to display. For example, emails, SMS, Facebook, and other private information may be displayed in a private region while less sensitive information such as news, stock prices, twitter feeds, and sports scores may be displayed in a public or less-private region.[0104] In some implementations, the designation of sub-regions of the display as public or private may be dynamic, and the relative sizes and locations of the public and private sub-regions can be adjusted based on, for example, how the user positions their arm. Designation of privacy regions may change based on context and/or user position. For example, the size and positioning of a private sub-region may rotate away from other viewers and towards the body of the user dynamically based on the user's movement and/or positioning (e.g., walking, sitting, or driving). The device may include self-learning features in order to customize the size and positioning of the private and public sub-regions for each user based on, for example, the size and positioning of the user's arms. Privacy permissions can also be dynamic based on the physical location of the device, with privacy thresholds being lowered when a user is at home, or in another location designated as private or semi-private, such as an office or a vehicle.[0105] With reference to FIGS. 9A-9B, the display of public and private information on a wearable display device 100 is illustrated. FIG. 9A illustrates a top side view of the wearable display device 100 having a public display region 110a. FIG. 9B is a bottom side view of FIG. 
9A and illustrates a bottom side view of the wearable display device 100 having a private display region 110b. As such, in the illustrated implementation, the wearable display device 100 includes a top side that is designated for the display of public information and a bottom side that is designated for the display of private information. In general, the public display area faces away from the user while the private display area faces towards the user's body so that it is difficult for people other than the user to see.[0106] Continuing with FIGS. 9A-9B, the wearable display device 100 may include an inactive region 170. The inactive region 170 may be less flexible than the display areas. The inactive region 170 may provide a space for components other than the display. For example, the inactive region 170 may include batteries, processors, sensors, memory, antennas, and the like. As shown, the inactive region 170 is located along a narrow section on the lower and/or underside of the wearable display device 100 when a user's palm is facing up (as shown in FIG. 9B) as this portion of the wearable display device 100 is less visible from the point of view of the user when the device is worn. However, the inactive region 170 may be placed anywhere on the wearable display device 100, and more than one inactive region 170 may be included. In some implementations, the wearable display device 100 does not include an inactive region 170.[0107] As shown in FIG. 9A, the public display region 110a is subdivided into three display regions 120, 130, and 140. As discussed above, the three display regions 120, 130, and 140 may be capable of displaying various image qualities that may be the same or different from one region to another. In some implementations, the three display regions 120, 130, and 140 may be capable of being driven differently to provide varying image qualities across the public display region 110a. 
As shown, video content 101 from a first application is being displayed in display region 120, image data from a second application in the form of weather information 103 is being displayed in display region 130, and image data from a third application in the form of stock market information 105 is being displayed in display region 140. More or fewer than three display regions may be employed.[0108] As shown in FIG. 9B, the private display region 110b is located on the underside of the wearable display device 100 and includes two display regions 160 and 180. More or fewer than two display regions may be employed. Display regions 160 and 180 may be capable of displaying various image qualities that may be the same or different from one region to another. In some implementations, the two display regions 160 and 180 may be capable of being driven differently to provide varying image qualities across the private display region 110b. As shown, image data in the form of personal contacts 111 is being displayed in display region 160 and image data in the form of personal messages 115 is being displayed in display region 180.[0109] The wearable display device 100 may be configured such that the wearable display device 100 can determine the orientation of the wearable display in space. For example, the wearable display device 100 may include one or more sensors. The one or more sensors may include, for example, motion sensors, accelerometers, gyroscopes, light detectors, gaze detectors, thermal sensors, deformation sensors, pressure sensors, cameras, and the like. In some implementations, the sensors are disposed within the inactive region 170. In some implementations, the sensors are embedded into portions of the display regions. The sensors may be configured to provide information regarding the positions of the display regions 120, 130, 140, 160, and 180 and/or information regarding the positions of the public display region 110a and/or the private display region 110b. 
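A crude geometric model of the orientation sensing described above might classify each region as facing toward or away from the user's body from a forearm rotation angle. The angle convention (a region's nominal angle measured from the outward-facing direction, 0-360 degrees) and the 90-270 degree window are assumptions, not details from this description:

```python
# Illustrative sketch (assumed model): a region "faces the user" if its
# surface normal, after forearm rotation, points within 90 degrees of the
# user's body. Angles are in degrees.

def facing_user(region_angle_deg, forearm_rotation_deg):
    effective = (region_angle_deg + forearm_rotation_deg) % 360
    return 90 < effective < 270
```

With such a predicate, the private display region 110b could be re-designated as the set of regions for which `facing_user` is true after each rotation update.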
[0110] In some implementations, the sensors can be used in determining which portions of the wearable display device 100 are facing towards a user and which portions are facing away from the user. For example, while FIGS. 9A-9B illustrate the private display region 110b generally located on the palm up facing side of the wearable display device 100 and the public display region 110a generally located on the palm down facing side of the wearable display device 100, the private display region 110b and the public display region 110a may switch positions as the wearable display device 100 is rotated. In other words, when the wearable display device 100 is moved in space with respect to the user, the orientation of the private display region 110b and the public display region 110a may change. In some implementations, the wearable display device 100 is configured to display more private information on display regions facing toward a user's body and less private information on display regions facing away from the user's body. The wearable display device 100 may include one or more processors that are electronically coupled to one or more sensors. In some implementations, the processor(s) is configured to select where content is to be displayed on the wearable display device 100.[0111] In some implementations, the wearable display device 100 is configured such that the private display region 110b is inactivated when it is moved to face away from the user, to a position where the user cannot see it, and/or to a position where someone other than the user may see it. In some implementations the wearable display device 100 is configured such that the private display region 110b is resized and/or reshaped in response to information provided by the one or more sensors of the wearable display device 100.[0112] FIGS. 
10A-10B illustrate the display of public and private information on a wearable display device 100 according to another implementation. As shown in FIG. 10A, the private display region 110b may be located in a generally rectangular area overlapping a user's inner wrist. The public display region 110a may encompass the remainder of the display area. In another implementation, shown in FIG. 10B, the private display region 110b may be located along a relatively thin and rectangular strip extending over a user's inner forearm while the public display region 110a encompasses the remainder of the display area. The relative size and/or positioning of the public and private display regions 110a, 110b may be fixed. [0113] In other implementations, the relative size and/or positioning of the public and private display regions 110a, 110b is dynamic. For example, as shown in FIGS. 10A-10B, the private display region 110b may be resized and reshaped in response to additional content displayed on the public display region 110a. In FIG. 10A, personal messages 115 is being displayed within the private display region 110b and no content is being displayed in the public display region 110a. In FIG. 10B, the private display region 110b is resized and reshaped after video content 101 and weather content 103 are displayed within the public display region 110a.[0114] FIGS. 11A-11B illustrate the display of public, private, and semi-private information on a wearable display device 100 according to another implementation. FIG. 11A shows a side view of the wearable display device 100 being worn by a user. FIG. 11B illustrates the wearable display device 100 of FIG. 11A in a position where the user's arm has been rotated approximately 90° such that the user's palm is facing upward, exposing the underside of the wearable display device 100. As shown in FIGS. 
11A and 11B, the public display region 110a extends around the outside of the user's forearm, while the private display region 110b and semi-private display region 110c are located along the inside of the user's forearm. The private display region 110b may be generally disposed over the user's inner wrist, while the semi-private region 110c may be located along the inner forearm of the user, distal to the wrist and the private display region 110b. Public content such as, for example, video content 101, weather content 103, and stock content 105 may be displayed within the public display region 110a. Private content such as, for example, personal messages 115 and health information 121 may be displayed within the private display region 110b. Semi-private content such as, for example, email content 125 may be displayed within the semi-private display region 110c.[0115] The boundaries of the public display region 110a, private display region 110b, and/or semi-private display region 110c may be fixed and/or dynamic. In other words, the various display regions within the display may be fixed sizes and shapes, and/or the sizes, shapes, and orientations may change during use. In some implementations, the boundaries may be resized, reshaped, or reoriented in response to information provided by one or more sensors. For example, the boundaries may be determined and/or altered based at least in part on the orientation of the wearable display device 100. In some implementations, the boundaries may be determined and/or altered based at least in part on the amount of content that is to be displayed. For example, the size of the private display region 110b may increase if a user wishes to view a large amount of private information or may shrink and/or may no longer be displayed if the user wishes to display less private content or does not wish to display any private content.[0116] The relative privacy level of content may be user selected. 
In other implementations, a processor may be configured to determine a privacy level of the content. The privacy level may be based in part on one or more of, for example, user input, the application that wishes to display content, the file extension, or the type of media. In some implementations, the processor may compare one or more aspects of the display data from one or more applications and determine the optimal sub-regions for displaying the data based at least in part on the privacy level of the content.[0117] As shown in FIGS. 12A-12D, the wearable display device 100 may interact with a complementary device 200. As illustrated, and similar to the implementation shown in FIGS. 9A-9B, the wearable display device 100 includes a private display region 110b generally located on the palm up facing side of the wearable display device 100, a public display region 110a generally located on the palm down facing side of the wearable display device 100, and an inactive region 170 located along a section of the wearable display device 100 extending along the underside of the user's forearm.[0118] FIGS. 12A-12D illustrate an example implementation of the wearable display device 100 in use with a complementary device 200. As shown, the complementary device 200 may display multiple types of content at once. The complementary device 200 may include one or more applications that are used to display content. In FIG. 12A, the complementary device 200 is shown displaying weather content 103 and email content 125. The weather content 103 may have a first associated privacy level and the email content 125 may have a second privacy level. The privacy level may be predetermined or determined by a processor. The processor may be located within the complementary device 200 and/or within the wearable display device 100.[0119] Continuing with FIG. 12A, the user may select the weather content 103 for display on the wearable display device 100. 
Thus, as shown, the user may select the weather content 103 by dragging and dropping the weather content 103 onto the wearable display device 100. The processor may direct the weather content 103 onto the public display region 110a as shown in FIG. 12B based at least in part on the privacy level associated with the weather content 103.[0120] Turning to FIG. 12C, the user may select the email content 125 for display on the wearable display device 100. Thus, as shown, the user may select the email content 125 by dragging and dropping the email content 125 onto the wearable display device 100. The processor may direct the email content 125 onto the private display region 110b as shown in FIG. 12D based at least in part on the privacy level associated with the email content 125.[0121] FIG. 13 is a flow diagram illustrating an example method 1300 for displaying content in display regions of a wearable display device 100. The method 1300 may include a first block 1301 in which at least a first display region and at least a second display region are designated on a wearable display device such as the wearable display device 100 of FIG. 1. The first and second regions may be sub-regions within a larger display area, such as the display area 110 of FIG. 1. The first display region and the second display region may be designated based at least in part on the orientation of the wearable display device in space. In some implementations, the first display region is located in a location on the wearable display device that cannot be seen by a person who is not wearing the device.[0122] In some implementations, the method 1300 may optionally include a determination of the position of the wearable display device, such as a determination as to how the wearable display device is oriented in space. 
The designation of the first and second display areas may be based at least in part on the orientation of the wearable display, and the method 1300 may in some further implementations optionally include periodically adjusting the location and/or boundaries of the first and second display regions during use based at least in part on determinations of the orientation of the wearable display. For example, the method 1300 may optionally include designating a region of a display area that faces a user's body as the first display region and designating a region of a display area that faces away from a user's body as the second display region. In some implementations, the designation of the first and second display areas may be based at least in part on the location and degree of deformation of the wearable display, and the method 1300 may in some further implementations optionally include adjusting the location and/or boundaries of the first and second display regions during use based at least in part on the location and degree of deformation of the wearable display. [0123] The method 1300 may then move to block 1303, at which a privacy level associated with the image data to be displayed on the wearable display device is determined. Although illustrated in FIG. 13 as occurring after the designation of at least a first display region and at least a second display region at block 1301, the determination of a privacy level associated with the image data to be displayed can be determined prior to or simultaneously with the designation of the first display region and the second display region.[0124] The method 1300 may then move to block 1305 at which the image data is displayed on at least one of the first or second display regions, depending at least in part on the privacy level associated with the image data to be displayed on the wearable display device. 
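The flow of blocks 1301, 1303, and 1305 can be sketched as follows; the function names, the boolean facing model, and the binary private/public levels are illustrative assumptions:

```python
# Minimal sketch of method 1300: designate regions from orientation
# (block 1301), determine the content's privacy level (block 1303), then
# pick a matching region for display (block 1305).

def designate_regions(faces_body):
    """Block 1301: regions facing the body are private; others public."""
    return {name: ("private" if faces else "public")
            for name, faces in faces_body.items()}

def choose_region(content, faces_body):
    designations = designate_regions(faces_body)                   # block 1301
    needed = "private" if content.get("is_private") else "public"  # block 1303
    for name, kind in designations.items():                        # block 1305
        if kind == needed:
            return name
    return None  # no suitable region; content is withheld

faces_body = {"outer_forearm": False, "inner_wrist": True}
```

Re-running `designate_regions` whenever the orientation sensors report movement captures the periodic boundary adjustment described in paragraph [0122].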
Some or all of the blocks of method 1300 may be performed repeatedly during use of the wearable display device, and may be triggered, for example, at preset intervals, by user input, or by movement of the wearable display device.[0125] With reference now to FIGS. 14A-16B, in some implementations, the wearable display devices 100 disclosed herein include a plurality of deformation sensors configured to determine and/or monitor the state of the flexible display. In some implementations, the sensors may be configured to detect physical deformation of the wearable display device 100. For example, the sensors may be configured to detect deformation or distortion of the display, such as crimps, folds, wrinkles, and/or stretched areas of the display. The deformation sensors may in some implementations be pressure sensors, although other appropriate types of sensors may also be used. For example, the sensors could include force collection sensors, piezoresistive strain gauges, capacitive type sensors, electromagnetic sensors, piezoelectric sensors, optical sensors, potentiometric sensors, and the like.[0126] In response to sensor output, at least one characteristic of the display may be changed. The characteristic of the display may include the brightness, size, shape, resolution, and the like. In other words, the wearable display device 100 may be able to identify deformation of the display or determine the physical shape of the display and adjust the displayed content accordingly.[0127] As shown, for example, in FIGS. 14A-14C, the size of the text or other display elements may be increased when the wearable display device 100 is deformed. FIG. 14A illustrates a plurality of deformation sensors 405 embedded within a grid-like pattern within the display 110. The deformation sensors 405 may be in electrical communication with one or more processors. As shown in FIG. 
14A, text content 107 is being displayed on the display 110.[0128] While the deformation sensors 405 are illustrated in a grid-like pattern, any suitable arrangement of deformation sensors 405 may be employed. More or fewer sensors than shown may also be included. In some implementations, the deformation sensors 405 include one or more pressure sensors. In some implementations, the wearable display device 100 includes a pressure membrane disposed within at least a substantial portion of the wearable display device 100. Information from the pressure membrane may be used at least in part to determine the relative level of folding or wrinkling across the wearable display device 100. For example, wherever there is a fold or wrinkle, the local pressure will increase due to the structure of the membrane pushing against itself. In this way, the location and degree of relative wrinkling of one or more sections of the wearable display device 100 may be determined.[0129] Turning to FIG. 14B, the wearable display device 100 is shown in a deformed state. That is to say, the wearable display device 100 includes a plurality of folds or wrinkles which may tend to obscure and/or impair the readability of the text content 107. The level of folding and/or wrinkling may be determined at least in part by one or more of the plurality of deformation sensors 405. In response to the information provided by the deformation sensors 405, the processor may change at least one characteristic of the displayed text content 107. In some implementations, the processor is configured to adjust at least one image characteristic of the displayed content in response to signals indicating that the relative deformation has exceeded one or more threshold levels. For example, as shown in FIG. 14C, in response to the deformation, the size of the text content is increased. 
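The threshold response just described might look like the following; the normalized deformation units, threshold value, and scale factor are assumptions for illustration:

```python
# Hypothetical sketch: when the mean deformation reported by the sensor
# grid exceeds a threshold, scale the displayed text size up so it remains
# readable despite folds or wrinkles.

def adjusted_font_size(base_size, sensor_readings, threshold=0.3, scale=1.5):
    """Return the font size to use given normalized deformation readings
    (0.0 = flat, 1.0 = fully folded) from the sensor grid."""
    mean_deformation = sum(sensor_readings) / len(sensor_readings)
    if mean_deformation > threshold:
        return round(base_size * scale)
    return base_size
```

The same pattern applies to icon size or image resolution: compare an aggregate deformation measure against a threshold and adjust the display characteristic accordingly.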
The increase in text size may allow for the text content to be read by a user even when the wearable display device 100 is in a deformed state. In another example, in response to the deformation, the size of the icons displayed on the wearable display device 100 is increased. In another example, in response to the deformation, the image resolution displayed on the wearable display device 100 is adjusted.[0130] In another implementation, shown for example in FIGS. 15A-15B, the processor may resize and/or reshape displayed content in response to information received from one or more deformation sensors 405. FIG. 15A illustrates the side-by-side display of video content 101 and weather content 103 on the display area 110 of the wearable display device 100. FIG. 15B is the same as FIG. 15A except that the distal and proximal ends of the wearable display device 100 are deformed. In response, the video content 101 and weather content 103 are moved and resized such that they are located in an area of the display 110 that is less deformed. In this way, the video content 101 and weather content 103 can be more easily viewed by a user. That is to say, as wrinkles obstruct one or more portions of the display, the effective resolution may decrease, and in response, the wearable display device 100 may automatically re-scale content (e.g., photos and videos) to fit within the remaining pixels rather than be cropped or obscured.[0131] In another implementation, as shown for example in FIGS. 16A-16B, the wearable display device 100 may include sensors 300 configured to detect light. In some implementations the sensors may be configured to detect ambient light. Thus, when the wearable display is partially or fully covered, by a shirt sleeve for example, the regions of the display that do not detect incident light can be deactivated in order to conserve power. 
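The light-based power gating just described can be sketched as a per-region threshold test; the lux threshold and region names are assumptions made for illustration:

```python
# Illustrative sketch: mark display regions as active or powered off based
# on whether their ambient-light sensors read above a darkness threshold
# (readings below it suggest the region is covered, e.g. by a sleeve).

def active_regions(light_levels_lux, covered_below=5.0):
    """Map each region to True (drive normally) or False (power off)."""
    return {region: lux >= covered_below
            for region, lux in light_levels_lux.items()}

lux = {"region_120": 180.0, "region_130": 0.4, "region_140": 120.0}
states = active_regions(lux)
```

In practice a hysteresis band around the threshold would avoid flickering a region on and off as the sleeve edge shifts.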
In other words, a user may wish to cover a portion or the entirety of the wearable display, with, for example, a sleeve of an article of clothing, when the user is not using the wearable display device 100. These covered portions of the wearable display device 100 may be deactivated and/or powered off automatically. That is to say, less than the entire display of the wearable display device 100 may be driven when it is determined that less than the entire display is visible to the user.[0132] In particular, FIG. 16A illustrates a wearable display device 100 which includes sensors 300 disposed at various locations across the surface of the wearable display device 100 and configured to detect light. The sensors 300 may in some implementations include photodiodes, but may also include any other components suitable for detection of light. In FIG. 16A, the wearable display device 100 is not covered by the user's sleeve 301. In FIG. 16B, the user's sleeve has been pulled down the user's forearm so as to partially obscure the wearable display device 100. An obscured portion 302 may be deactivated or powered off automatically, or may be driven at a lower image quality, such as a lower brightness, when the portion 302 of the wearable display device 100 is obscured. The unobscured portion 303 may be driven as normal, or may be adjusted to compensate for the deactivation of the obscured portion 302.[0133] The portion of the wearable display device 100 which remains active can be reorganized to display all or a portion of the content previously displayed on the deactivated portion. For example, the content can be reduced in size and rearranged so that all of the image content previously being displayed on the wearable display device 100 is still being displayed on the smaller active area of the wearable display device 100. In FIG. 
15B it can be seen that the video content 101 is no longer being displayed, and the weather content 103 has been resized to fit within the active area of the devices. In other implementations, content having a higher priority can be moved to the active area of the display, such as the applications or other image data with which the user most recently interacted.[0134] The deactivated portion of the wearable display device 100 need not correspond exactly with the obscured portion 302. When discrete portions of the wearable display device 100 are driven as discrete within the wearable display device 100, a partially obscured display region can remain active, be resized, or be deactivated. The treatment of a partially-obscured display region can be based, for example, on the degree to which that display region is obscured, or on user preferences.[0135] In another implementation, the wearable display device 100 may include one or more proximity sensors. The proximity sensors may be configured to determine if the wearable display device 100 is covered by, for example, an article of clothing or otherwise obscured. As such, when the proximity sensors detect an object in close proximity to at least a portion of the display, that portion of the display may be deactivated in order to reduce power consumption.[0136] It is also contemplated that the content displayed on the wearable display device 100 may be reorganized, resized, and/or reconfigured in response to information from the light sensors and/or proximity sensors. For example, displayed content from two applications may be displayed side-by-side on the wearable display device 100. A portion of the wearable display device 100 may be covered such that one of the two applications is covered as well. In turn, the displayed content may be resized to fit side-by-side on the uncovered portion of the wearable display device 100. 
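The light-sensor-driven reflow just described (deactivate covered regions, then keep the highest-priority content on what remains) can be sketched as follows. The threshold, region names, and priority scheme are assumptions for illustration only:

```python
AMBIENT_THRESHOLD = 0.2  # assumed normalized reading below which a region counts as covered

def reflow(regions, content, threshold=AMBIENT_THRESHOLD):
    """Reassign content to uncovered display regions.

    regions: dict mapping region name -> normalized light reading (0.0 dark, 1.0 bright).
    content: list of (priority, item) pairs; higher priority is kept first.
    Returns (active_region_names, {region: item}).
    """
    # Regions whose sensors see enough light stay active; the rest are powered off.
    active = [name for name, light in regions.items() if light >= threshold]
    # Keep only as many items as there are active regions, highest priority first.
    kept = sorted(content, key=lambda pc: pc[0], reverse=True)[:len(active)]
    assignment = {region: item for region, (_, item) in zip(active, kept)}
    return active, assignment
```

For example, with one of three regions covered by a sleeve, only the two highest-priority content items remain displayed, one per active region.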
In another implementation, the displayed content may be reorganized such that the content displayed by the two applications is shown as one content window on top of a second content window rather than side-by-side in order to fit within the uncovered portion of the wearable display.

[0137] FIG. 17 is a flow diagram illustrating another example method 1700 for displaying content in display regions of a wearable display device 100. The method 1700 may include a first block 1701 at which content is displayed on a flexible display that may be subject to wrinkling or similar deformation. The method 1700 can then move to block 1703 at which electrical signals are received from one or more deformation sensors coupled to the flexible display. The method 1700 can then move to block 1705 at which the displayed content is altered based at least in part on the received electrical signals. Altering the displayed content may include increasing a font size of text within the displayed content, reducing the area in which the displayed content is displayed, and/or deactivating a portion of the flexible display.

[0138] Other implementations may utilize various combinations of content analysis, predefined preferences, or user input to reorganize displayed image data across multiple regions of a display. For example, in some implementations, image data being displayed in a particular display region may be expanded automatically or in response to user input to cover multiple display regions, or a display region may be dynamically subdivided into two sub-regions to effectively display image content. A variety of other combinations of the methods and components discussed herein are also possible.

[0139] As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members.
As an example, "at least one of: a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.

[0140] The various illustrative logics, logical blocks, modules, circuits and algorithm steps described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally, in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and steps described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system.

[0141] The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
In some implementations, particular steps and methods may be performed by circuitry that is specific to a given function.

[0142] In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware, including the structures disclosed in this specification and their structural equivalents, or in any combination thereof. Implementations of the subject matter described in this specification also can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a computer storage medium for execution by, or to control the operation of, data processing apparatus.

[0143] If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that can be enabled to transfer a computer program from one place to another. A storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection can be properly termed a computer-readable medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Combinations of the above also may be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine-readable medium and computer-readable medium, which may be incorporated into a computer program product.

[0144] Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the claims are not intended to be limited to the implementations shown herein, but are to be accorded the widest scope consistent with this disclosure, the principles and the novel features disclosed herein. Additionally, a person having ordinary skill in the art will readily appreciate that relative terms such as "upper" and "lower" are sometimes used for ease of describing the figures, and indicate relative positions corresponding to the orientation of the figure on a properly oriented page, and may not reflect the proper orientation of a particular component as implemented or during use.

[0145] Certain features that are described in this specification in the context of separate implementations also can be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also can be implemented in multiple implementations separately or in any suitable subcombination.
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

[0146] Similarly, while operations are depicted in the drawings in a particular order, a person having ordinary skill in the art will readily recognize that such operations need not be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Further, the drawings may schematically depict one or more example processes in the form of a flow diagram. However, other operations that are not depicted can be incorporated in the example processes that are schematically illustrated. For example, one or more additional operations can be performed before, after, simultaneously, or between any of the illustrated operations. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results.
For an application specifying a software portion for implementation within a data processing engine (DPE) array of a device and a hardware portion having High-Level Synthesis (HLS) kernels for implementation within programmable logic (PL) of the device, a first interface solution is generated that maps logical resources used by the software portion to hardware resources of an interface block coupling the DPE array and the PL. A connection graph specifying connectivity among the HLS kernels and nodes of the software portion to be implemented in the DPE array is generated, and a block diagram is generated based on the connection graph and the HLS kernels. The block diagram is synthesizable. An implementation flow is performed on the block diagram based on the first interface solution. The software portion of the application is compiled for implementation in one or more DPEs of the DPE array.
CLAIMS

What is claimed is:

1. A method, comprising:
for an application specifying a software portion for implementation within a data processing engine (DPE) array of a device and a hardware portion having high-level synthesis (HLS) kernels for implementation within programmable logic of the device, generating, using a processor, a first interface solution mapping logical resources used by the software portion to hardware resources of an interface circuit block coupling the DPE array and the programmable logic;
generating, using the processor, a connection graph specifying connectivity among the HLS kernels and nodes of the software portion to be implemented in the DPE array;
generating, using the processor, a block diagram based on the connection graph and the HLS kernels, wherein the block diagram is synthesizable;
performing, using the processor, an implementation flow on the block diagram based on the first interface solution; and
compiling, using the processor, the software portion of the application for implementation in one or more DPEs of the DPE array.

2. The method of claim 1, wherein the generating the block diagram comprises:
performing HLS on the HLS kernels to generate synthesizable versions of the HLS kernels; and
constructing the block diagram using the synthesizable versions of the HLS kernels.

3. The method of claim 1, wherein the generating the block diagram is performed based on a description of an architecture of a System-on-Chip in which the application is to be implemented.

4. The method of claim 3, wherein the generating the block diagram further comprises connecting the block diagram with a base platform.

5. The method of claim 1, further comprising:
during the implementation flow, executing a hardware compiler that builds the block diagram and performs the implementation flow by exchanging design data with a DPE compiler configured to compile the software portion.

6. The method of claim 5, further comprising:
the hardware compiler exchanging further design data with a Network-on-Chip (NoC) compiler; and
the hardware compiler receiving a first NoC solution configured to implement routes through a NoC of the device that couples the DPE array to the programmable logic of the device.

7. The method of claim 1, further comprising:
in response to a hardware compiler configured to build the block diagram and perform the implementation flow determining that an implementation of the block diagram does not meet a design metric for the hardware portion, providing a constraint for the interface circuit block to a DPE compiler configured to compile the software portion; and
the hardware compiler receiving, from the DPE compiler, a second interface solution generated by the DPE compiler based on the constraint.

8. The method of claim 7, wherein the performing the implementation flow is performed based on the second interface solution.

9. A system, comprising:
a processor configured to initiate operations including:
for an application specifying a software portion for implementation within a data processing engine (DPE) array of a device and a hardware portion having high-level synthesis (HLS) kernels for implementation within programmable logic of the device, generating a first interface solution mapping logical resources used by the software portion to hardware resources of an interface circuit block coupling the DPE array and the programmable logic;
generating a connection graph specifying connectivity among the HLS kernels and nodes of the software portion to be implemented in the DPE array;
generating a block diagram based on the connection graph and the HLS kernels, wherein the block diagram is synthesizable;
performing an implementation flow on the block diagram based on the first interface solution; and
compiling the software portion of the application for implementation in one or more DPEs of the DPE array.

10. The system of claim 9, wherein the generating the block diagram comprises:
performing HLS on the HLS kernels to generate synthesizable versions of the HLS kernels; and
constructing the block diagram using the synthesizable versions of the HLS kernels.

11. The system of claim 9, wherein the generating the block diagram is performed based on a description of an architecture of a System-on-Chip in which the application is to be implemented.

12. The system of claim 11, wherein the generating the block diagram further comprises connecting the block diagram with a base platform.

13. The system of claim 9, wherein the processor is configured to initiate operations further comprising:
during the implementation flow, executing a hardware compiler that builds the block diagram and performs the implementation flow by exchanging design data with a DPE compiler configured to compile the software portion.

14. The system of claim 13, wherein the processor is configured to initiate operations further comprising:
the hardware compiler exchanging further design data with a Network-on-Chip (NoC) compiler; and
the hardware compiler receiving a first NoC solution configured to implement routes through a NoC of the device that couples the DPE array to the programmable logic of the device.

15. The system of claim 9, wherein the processor is configured to initiate operations further comprising:
in response to a hardware compiler configured to build the block diagram and perform the implementation flow determining that an implementation of the block diagram does not meet a design metric for the hardware portion, providing a constraint for the interface circuit block to a DPE compiler configured to compile the software portion; and
the hardware compiler receiving, from the DPE compiler, a second interface solution generated by the DPE compiler based on the constraint.
HARDWARE-SOFTWARE DESIGN FLOW WITH HIGH-LEVEL SYNTHESIS FOR HETEROGENEOUS AND PROGRAMMABLE DEVICES

RESERVATION OF RIGHTS IN COPYRIGHTED MATERIAL

[0001] A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

TECHNICAL FIELD

[0002] This disclosure relates to integrated circuits (ICs) and, more particularly, to implementing applications that include hardware and software portions within heterogeneous and programmable ICs.

BACKGROUND

[0003] A programmable integrated circuit (IC) refers to a type of IC that includes programmable logic. An example of a programmable IC is a field programmable gate array (FPGA). An FPGA is characterized by the inclusion of programmable circuit blocks. Examples of programmable circuit blocks include, but are not limited to, input/output blocks (IOBs), configurable logic blocks (CLBs), dedicated random access memory blocks (BRAM), digital signal processing blocks (DSPs), processors, clock managers, and delay lock loops (DLLs).

[0004] Modern programmable ICs have evolved to include programmable logic in combination with one or more other subsystems. For example, some programmable ICs have evolved into System-on-Chips or "SoCs" that include both programmable logic and a hardwired processor system. Other varieties of programmable ICs include additional and/or different subsystems. The growing heterogeneity of subsystems included in programmable ICs presents challenges for implementing applications within these devices. Traditional design flows for ICs having both hardware and software-based subsystems (e.g., programmable logic circuitry and a processor) have relied on hardware designers first creating a monolithic hardware design for the IC.
The hardware design is used as the platform
upon which the software design is then created, compiled, and executed. This approach is often unduly limiting.

[0005] In other cases, the software and hardware design processes may be decoupled. Decoupling hardware and software design processes, however, provides no indication of software requirements or the placement of interfaces between the various subsystems in the IC. As such, the hardware and software design processes may fail to converge on a workable implementation of the application in the IC.

SUMMARY

[0006] In one aspect, a method can include, for an application specifying a software portion for implementation within a data processing engine (DPE) array of a device and a hardware portion for implementation within programmable logic (PL) of the device, generating, using a processor, a logical architecture for the application and a first interface solution specifying a mapping of logical resources to hardware of an interface circuit block between the DPE array and the programmable logic. The method can include building a block diagram of the hardware portion based on the logical architecture and the first interface solution and performing, using the processor, an implementation flow on the block diagram. The method can include compiling, using the processor, the software portion of the application for implementation in one or more DPEs of the DPE array.

[0007] In another aspect, a system includes a processor configured to initiate operations. The operations can include, for an application specifying a software portion for implementation within a DPE array of a device and a hardware portion for implementation within PL of the device, generating a logical architecture for the application and a first interface solution specifying a mapping of logical resources to hardware of an interface circuit block between the DPE array and the PL.
The operations can include building a block diagram of the hardware portion based on the logical architecture and the first interface solution, performing an implementation flow on the block diagram, and compiling the software portion of the application for implementation in one or more DPEs of the DPE array.

[0008] In another aspect, a computer program product includes a computer readable storage medium having program code stored thereon. The program code is executable by computer hardware to initiate operations. The operations can
include, for an application specifying a software portion for implementation within a DPE array of a device and a hardware portion for implementation within PL of the device, generating a logical architecture for the application and a first interface solution specifying a mapping of logical resources to hardware of an interface circuit block between the DPE array and the PL. The operations can include building a block diagram of the hardware portion based on the logical architecture and the first interface solution, performing an implementation flow on the block diagram, and compiling the software portion of the application for implementation in one or more DPEs of the DPE array.

[0009] In another aspect, a method can include, for an application having a software portion for implementation in a DPE array of a device and a hardware portion for implementation in PL of the device, performing, using a processor executing a hardware compiler, an implementation flow on the hardware portion based on an interface block solution that maps logical resources used by the software portion to hardware of an interface block coupling the DPE array to the PL. The method can include, in response to not meeting a design metric during the implementation flow, providing, using the processor executing the hardware compiler, an interface block constraint to a DPE compiler. The method can also include, in response to receiving the interface block constraint, generating, using the processor executing the DPE compiler, an updated interface block solution and providing the updated interface block solution from the DPE compiler to the hardware compiler.

[0010] In another aspect, a system includes a processor configured to initiate operations.
The operations can include, for an application having a software portion for implementation in a DPE array of a device and a hardware portion for implementation in PL of a device, performing, using a hardware compiler, an implementation flow on the hardware portion based on an interface block solution that maps logical resources used by the software portion to hardware of an interface block coupling the DPE array to the PL. The operations can include, in response to not meeting a design metric during the implementation flow, providing, using the hardware compiler, an interface block constraint to a DPE compiler. The operations further can include, in response to receiving the interface block constraint, generating, using the DPE compiler, an updated interface block solution and providing the updated interface block solution from the DPE compiler to the
hardware compiler.

[0011] In another aspect, a computer program product includes a computer readable storage medium having program code stored thereon. The program code is executable by computer hardware to initiate operations. The operations can include, for an application having a software portion for implementation in a DPE array of a device and a hardware portion for implementation in PL of a device, performing, using a hardware compiler, an implementation flow on the hardware portion based on an interface block solution that maps logical resources used by the software portion to hardware of an interface block coupling the DPE array to the PL. The operations can include, in response to not meeting a design metric during the implementation flow, providing, using the hardware compiler, an interface block constraint to a DPE compiler. The operations further can include, in response to receiving the interface block constraint, generating, using the DPE compiler, an updated interface block solution and providing the updated interface block solution from the DPE compiler to the hardware compiler.

[0012] In another aspect, a method can include, for an application specifying a software portion for implementation within a DPE array of a device and a hardware portion having HLS kernels for implementation within PL of the device, generating, using a processor, a first interface solution mapping logical resources used by the software portion to hardware resources of an interface block coupling the DPE array and the PL. The method can include generating, using the processor, a connection graph specifying connectivity among the HLS kernels and nodes of the software portion to be implemented in the DPE array and generating, using the processor, a block diagram based on the connection graph and the HLS kernels, wherein the block diagram is synthesizable.
The method further can include performing, using the processor, an implementation flow on the block diagram based on the first interface solution and compiling, using the processor, the software portion of the application for implementation in one or more DPEs of the DPE array.

[0013] In another aspect, a system includes a processor configured to initiate operations. The operations can include, for an application specifying a software portion for implementation within a DPE array of a device and a hardware portion having HLS kernels for implementation within PL of the device, generating a first interface solution mapping logical resources used by the software portion to
hardware resources of an interface block coupling the DPE array and the PL. The operations can include generating a connection graph specifying connectivity among the HLS kernels and nodes of the software portion to be implemented in the DPE array and generating a block diagram based on the connection graph and the HLS kernels, wherein the block diagram is synthesizable. The operations further can include performing an implementation flow on the block diagram based on the first interface solution and compiling the software portion of the application for implementation in one or more DPEs of the DPE array.

[0014] In another aspect, a computer program product includes a computer readable storage medium having program code stored thereon. The program code is executable by computer hardware to initiate operations. The operations can include, for an application specifying a software portion for implementation within a DPE array of a device and a hardware portion having HLS kernels for implementation within PL of the device, generating a first interface solution mapping logical resources used by the software portion to hardware resources of an interface block coupling the DPE array and the PL. The operations can include generating a connection graph specifying connectivity among the HLS kernels and nodes of the software portion to be implemented in the DPE array and generating a block diagram based on the connection graph and the HLS kernels, wherein the block diagram is synthesizable.
The operations further can include performing an implementation flow on the block diagram based on the first interface solution and compiling the software portion of the application for implementation in one or more DPEs of the DPE array.

[0015] This Summary section is provided merely to introduce certain concepts and not to identify any key or essential features of the claimed subject matter. Other features of the inventive arrangements will be apparent from the accompanying drawings and from the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] The inventive arrangements are illustrated by way of example in the accompanying drawings. The drawings, however, should not be construed to be limiting of the inventive arrangements to only the particular implementations shown. Various aspects and advantages will become apparent upon review of the following detailed description and upon reference to the drawings.
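The constraint-feedback exchange described in the Summary (the hardware compiler fails a design metric, sends an interface block constraint to the DPE compiler, and receives an updated interface block solution) can be sketched as an iteration loop. All class names, method names, and the iteration bound below are hypothetical stand-ins, not part of the disclosure:

```python
def negotiate(dpe_compiler, hardware_compiler, max_iterations=10):
    """Iterate until the hardware implementation meets its design metrics.

    Each round, the hardware compiler attempts an implementation of the current
    interface block solution; on failure it derives a constraint that the DPE
    compiler uses to produce an updated solution.
    """
    solution = dpe_compiler.initial_interface_solution()
    for _ in range(max_iterations):
        result = hardware_compiler.implement(solution)
        if result.meets_design_metrics:
            return solution
        # Feed an interface-block constraint back to the DPE compiler.
        constraint = hardware_compiler.derive_constraint(result)
        solution = dpe_compiler.update_interface_solution(constraint)
    raise RuntimeError("no interface solution met the design metrics")
```

The key design point, mirrored from the text, is that neither compiler works in isolation: the loop converges (or fails) on an interface solution acceptable to both.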
[0017] FIG. 1 illustrates an example of a computing node for use with one or more embodiments described herein.
[0018] FIG. 2 illustrates an example architecture for a System-on-Chip (SoC) type of integrated circuit (IC).
[0019] FIG. 3 illustrates an example architecture for a data processing engine (DPE) of the DPE array of FIG. 2.
[0020] FIG. 4 illustrates further aspects of the example architecture of FIG. 3.
[0021] FIG. 5 illustrates another example architecture for a DPE array.
[0022] FIG. 6 illustrates an example architecture for tiles of the SoC interface block of the DPE array.
[0023] FIG. 7 illustrates an example implementation of the Network-on-Chip (NoC) of FIG. 1.
[0024] FIG. 8 is a block diagram depicting connections between endpoint circuits in the SoC of FIG. 1 through the NoC.
[0025] FIG. 9 is a block diagram depicting the NoC according to another example.
[0026] FIG. 10 illustrates an example method of programming the NoC.
[0027] FIG. 11 illustrates another example method of programming the NoC.
[0028] FIG. 12 illustrates an example data path through the NoC between endpoint circuits.
[0029] FIG. 13 illustrates an example method of processing read/write requests and responses relating to the NoC.
[0030] FIG. 14 illustrates an example implementation of a NoC master unit.
[0031] FIG. 15 illustrates an example implementation of an NoC slave unit.
[0032] FIG. 16 illustrates an example software architecture that is executable by the system described in connection with FIG. 1.
[0033] FIGs. 17A and 17B illustrate an example of an application mapped onto an SoC using a system as described in connection with FIG. 1.
[0034] FIG. 18 illustrates an example implementation of another application that has been mapped onto an SoC.
[0035] FIG. 19 illustrates another example software architecture executable by the system described in connection with FIG. 1.
[0036] FIG. 20 illustrates an example method of performing a design flow to implement an application in an SoC.
[0037] FIG. 21 illustrates another example method of performing a design flow to implement an application in an SoC.
[0038] FIG. 22 illustrates an example method of communication between a hardware compiler and a DPE compiler.
[0039] FIG. 23 illustrates an example method of handling SoC interface block solutions.
[0040] FIG. 24 illustrates another example of an application for implementation in an SoC.
[0041] FIG. 25 illustrates an example of an SoC interface block solution generated by the DPE compiler.
[0042] FIG. 26 illustrates an example of routable SoC interface block constraints received by the DPE compiler.
[0043] FIG. 27 illustrates an example of un-routable SoC interface block constraints.
[0044] FIG. 28 illustrates an example where the DPE compiler ignores the soft type SoC interface block constraints from FIG. 27.
[0045] FIG. 29 illustrates another example of un-routable SoC interface block constraints.
[0046] FIG. 30 illustrates an example mapping of the DPE nodes of FIG. 29.
[0047] FIG. 31 illustrates another example of un-routable SoC interface block constraints.
[0048] FIG. 32 illustrates an example mapping of the DPE nodes of FIG. 31.
[0049] FIG. 33 illustrates another example software architecture executable by the system of FIG. 1.
[0050] FIG. 34 illustrates another example method of performing a design flow to implement an application in an SoC.
[0051] FIG. 35 illustrates another example method of performing a design flow to implement an application in an SoC.

DETAILED DESCRIPTION

[0052] While the disclosure concludes with claims defining novel features, it is believed that the various features described within this disclosure will be better understood from a consideration of the description in conjunction with the drawings. The process(es), machine(s), manufacture(s) and any variations thereof described
herein are provided for purposes of illustration. Specific structural and functional details described within this disclosure are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the features described in virtually any appropriately detailed structure. Further, the terms and phrases used within this disclosure are not intended to be limiting, but rather to provide an understandable description of the features described.
[0053] This disclosure relates to integrated circuits (ICs) and, more particularly, to implementing applications that include hardware and software portions within heterogeneous and programmable ICs. An example of a heterogeneous and programmable IC is a device, e.g., an integrated circuit, that includes programmable circuitry referred to herein as "programmable logic" or "PL" and a plurality of hardwired and programmable data processing engines (DPEs). The plurality of DPEs may be arranged in an array that is communicatively linked to the PL of the IC through a System-on-Chip (SoC) interface block. As defined within this disclosure, a DPE is a hardwired and programmable circuit block that includes a core capable of executing program code and a memory module coupled to the core. The DPEs are capable of communicating with one another as described in greater detail within this disclosure.
[0054] An application that is intended for implementation in a device as described includes a hardware portion that is implemented using the PL of the device and a software portion that is implemented in, and executed by, the DPE array of the device. The device may also include a hardwired processor system or "PS" capable of executing further program code, e.g., another software portion of the application. As an example, the PS includes a central processing unit or "CPU" or other hardwired processor capable of executing program code.
As such, the application may also include a further software portion that is intended for execution by the CPU of the PS.
[0055] In accordance with the inventive arrangements described within this disclosure, design flows are provided that may be performed by a data processing system. The design flows are capable of implementing both the hardware and the software portions of an application within a heterogeneous and programmable IC that includes a PL, a DPE array, and/or a PS. The IC may also include a Network-on-Chip (NoC) that is programmable.
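The partitioning of an application into hardware and software portions targeting different subsystems can be sketched schematically. The following Python sketch is purely illustrative: the kernel names, the `Kernel` type, and the `partition` helper are invented for this example and do not reflect any actual tool or API.

```python
from dataclasses import dataclass

# Hypothetical sketch: an application as a set of kernels, each targeted
# at one of the heterogeneous subsystems named in the text (PL, DPE, PS).
@dataclass(frozen=True)
class Kernel:
    name: str
    target: str  # one of "PL", "DPE", "PS"

def partition(kernels):
    """Group kernels by the subsystem that will implement them."""
    groups = {"PL": [], "DPE": [], "PS": []}
    for k in kernels:
        groups[k.target].append(k.name)
    return groups

app = [
    Kernel("fir_filter", "PL"),    # hardware portion, synthesized into PL
    Kernel("fft_stage", "DPE"),    # software portion, compiled for a DPE core
    Kernel("control_loop", "PS"),  # software portion, run on the PS CPU
]

print(partition(app))
# {'PL': ['fir_filter'], 'DPE': ['fft_stage'], 'PS': ['control_loop']}
```

In a real design flow, each of these groups would then be handed to the corresponding subsystem-specific compiler, as the following paragraphs describe.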
[0056] In some implementations, the application is specified as a data flow graph that includes a plurality of interconnected nodes. Nodes of the data flow graph are designated for implementation within the DPE array or within the PL. A node implemented in a DPE, for example, is ultimately mapped to a particular DPE in the DPE array. Object code that is executed by each DPE of the array that is used for the application is generated to implement the node(s). A node implemented in the PL, for example, may be synthesized and implemented in the PL or implemented using a pre-built core (e.g., a Register Transfer Level or "RTL" core).
[0057] The inventive arrangements provide example design flows capable of coordinating the building and integration of the different portions of the application for implementation in the different heterogeneous subsystems of the IC. Different stages within the example design flows are targeted to particular subsystems. For example, one or more stages of the design flows are targeted to implementing the hardware portion of the application in the PL, while one or more other stages of the design flows are targeted to implementing the software portion of the application in the DPE array. Still, one or more other stages of the design flows are targeted to implementing another software portion of the application in the PS. Still other stages of the design flows are targeted to implementing routes or data transfers among different subsystems and/or circuit blocks through the NoC.
[0058] The different stages of the example design flows corresponding to the different subsystems can be performed by different compilers that are subsystem specific. For example, the software portions may be implemented using a DPE compiler and/or a PS compiler. The hardware portion to be implemented in the PL may be implemented by a hardware compiler. Routes for the NoC may be implemented by a NoC compiler.
The various compilers are capable of communicating and interacting with one another while implementing the respective subsystems specified by the application in order to converge to a solution where the application is viably implemented in the IC. For example, the compilers are capable of exchanging design data during operation to converge to a solution where the design metrics specified for the application are met. Further, the solution (e.g., implementation of the application in the device) that is achieved is one where the various portions of the application are mapped to respective subsystems in the
device and the interfaces between the different subsystems are consistent and mutually agreed upon.
[0059] Using the example design flows described within this disclosure, a system is able to implement an application within a heterogeneous and programmable IC in less time (e.g., less runtime) than would otherwise be the case, e.g., where all portions of the application are implemented on the device jointly. Further, the example design flows described within this disclosure achieve feasibility and quality for the resulting implementation of the application in the heterogeneous and programmable IC (e.g., closure of design metrics such as timing, area, power, etc.) that is often superior to results obtained using other conventional techniques where each portion of the application is mapped completely independently and then stitched or combined together. The example design flows achieve these results, at least in part, through the loosely-coupled joint convergence techniques described herein that rely on shared interface constraints among the different subsystems.
[0060] Further aspects of the inventive arrangements are described below in greater detail with reference to the figures. For purposes of simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numbers are repeated among the figures to indicate corresponding, analogous, or like features.
[0061] FIG. 1 illustrates an example of a computing node 100. Computing node 100 may include a host data processing system (host system) 102 and a hardware acceleration board 104. Computing node 100 is only one example implementation of a computing environment that may be used with a hardware acceleration board.
In this regard, computing node 100 may be used in a standalone capacity, as a bare metal server, as part of a computing cluster, or as a cloud computing node within a cloud computing environment. FIG. 1 is not intended to suggest any limitation as to the scope of use or functionality of the examples described herein. Computing node 100 is an example of a system and/or computer hardware that is capable of performing the various operations described within this disclosure relating to implementing an application within an SoC 200. For example, computing
node 100 may be used to implement an Electronic Design Automation (EDA) system.
[0062] Host system 102 is operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of computing systems, environments, and/or configurations that may be suitable for use with host system 102 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
[0063] As illustrated, host system 102 is shown in the form of a computing device, e.g., a computer or server. Host system 102 can be practiced as a standalone device, in a cluster, or in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. The components of host system 102 may include, but are not limited to, one or more processors 106 (e.g., central processing units), a memory 108, and a bus 110 that couples various system components including memory 108 to processor 106. Processor(s) 106 may include any of a variety of processors that are capable of executing program code.
Example processor types include, but are not limited to, processors having an x86 type of architecture (IA-32, IA-64, etc.), Power Architecture, ARM processors, and the like.
[0064] Bus 110 represents one or more of any of several types of communication bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of available bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, Peripheral Component Interconnect (PCI) bus, and PCI Express (PCIe) bus.
[0065] Host system 102 typically includes a variety of computer readable media. Such media may be any available media that is accessible by host system 102 and
may include any combination of volatile media, non-volatile media, removable media, and/or non-removable media.
[0066] Memory 108 may include computer readable media in the form of volatile memory, such as random-access memory (RAM) 112 and/or cache memory 114. Host system 102 may also include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example, storage system 116 may be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a "hard drive"). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each may be connected to bus 110 by one or more data media interfaces. As will be further depicted and described below, memory 108 may include at least one computer program product having a set (e.g., at least one) of program modules (e.g., program code) that are configured to carry out the functions and/or operations described within this disclosure.
[0067] Program/utility 118, having a set (at least one) of program modules 120, may be stored in memory 108 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Program modules 120 generally carry out the functions and/or methodologies of embodiments of the invention as described herein. For example, program modules 120 may include one or more applications and a driver or daemon for communicating with hardware acceleration board 104 and/or SoC 200.
[0068] Program/utility 118 is executable by processor 106.
Program/utility 118 and any data items used, generated, and/or operated upon by processor 106 are functional data structures that impart functionality when employed by processor 106. As defined within this disclosure, a "data structure" is a physical implementation of a data model's organization of data within a physical memory. As such, a data structure is formed of specific electrical or magnetic structural elements in a memory. A data structure imposes physical organization on the data stored in the memory as used by an application program executed using a processor.
[0069] Host system 102 may include one or more Input/Output (I/O) interfaces 128 communicatively linked to bus 110. I/O interface(s) 128 allow host system 102 to communicate with external devices, couple to external devices that allow user(s) to interact with host system 102, couple to external devices that allow host system 102 to communicate with other computing devices, and the like. For example, host system 102 may be communicatively linked to a display 130 and to hardware acceleration board 104 through I/O interface(s) 128. Host system 102 may be coupled to other external devices such as a keyboard (not shown) via I/O interface(s) 128. Examples of I/O interfaces 128 may include, but are not limited to, network cards, modems, network adapters, hardware controllers, etc.
[0070] In an example implementation, the I/O interface 128 through which host system 102 communicates with hardware acceleration board 104 is a PCIe adapter. Hardware acceleration board 104 may be implemented as a circuit board, e.g., a card, that couples to host system 102. Hardware acceleration board 104 may, for example, be inserted into a card slot, e.g., an available bus and/or PCIe slot of host system 102.
[0071] Hardware acceleration board 104 includes an SoC 200. The SoC 200 is a heterogeneous and programmable IC and, as such, has a plurality of heterogeneous subsystems. An example architecture for the SoC 200 is described in greater detail in connection with FIG. 2. Hardware acceleration board 104 also includes volatile memory 134 coupled to SoC 200 and a non-volatile memory 136 also coupled to the SoC 200. Volatile memory 134 may be implemented as a RAM and is considered a "local memory" of SoC 200, whereas memory 108, being within host system 102, is not considered local to SoC 200, but rather local to host system 102. In some implementations, volatile memory 134 may include multiple gigabytes of RAM, e.g., 64 GB of RAM.
An example of non-volatile memory 136 includes flash memory.
[0072] In the example of FIG. 1, computing node 100 is capable of operating on an application for SoC 200 and implementing the application within SoC 200. The application may include hardware and software portions corresponding to the different heterogeneous subsystems available in SoC 200. In general, computing node 100 is capable of mapping the application onto the SoC 200 for execution by the SoC 200.
[0073] FIG. 2 illustrates an example architecture for SoC 200. SoC 200 is an example of a programmable IC and an integrated programmable device platform. In the example of FIG. 2, the various, different subsystems or regions of the SoC 200 illustrated may be implemented on a single die provided within a single integrated package. In other examples, the different subsystems may be implemented on a plurality of interconnected dies provided as a single, integrated package.
[0074] In the example, the SoC 200 includes a plurality of regions having circuitry with different functionalities. In the example, the SoC 200 optionally includes a data processing engine (DPE) array 202. SoC 200 includes programmable logic (PL) regions 214 (hereafter PL region(s) or PL), a processing system (PS) 212, a Network-on-Chip (NoC) 208, and one or more hardwired circuit blocks 210. DPE array 202 is implemented as a plurality of interconnected, hardwired, and programmable processors having an interface to the other regions of the SoC 200.
[0075] PL 214 is circuitry that may be programmed to perform specified functions. As an example, PL 214 may be implemented as field programmable gate array type of circuitry. PL 214 can include an array of programmable circuit blocks. Examples of programmable circuit blocks within PL 214 include, but are not limited to, configurable logic blocks (CLBs), dedicated random access memory blocks (BRAM and/or UltraRAM or URAM), digital signal processing blocks (DSPs), clock managers, and/or delay lock loops (DLLs).
[0076] Each programmable circuit block within PL 214 typically includes both programmable interconnect circuitry and programmable logic circuitry. The programmable interconnect circuitry typically includes a large number of interconnect wires of varying lengths interconnected by programmable interconnect points (PIPs).
Typically, the interconnect wires are configured (e.g., on a per wire basis) to provide connectivity on a per-bit basis (e.g., where each wire conveys a single bit of information). The programmable logic circuitry implements the logic of a user design using programmable elements that may include, for example, look-up tables, registers, arithmetic logic, and so forth. The programmable interconnect and programmable logic circuitries may be programmed by loading configuration data into internal configuration memory cells that define how the programmable elements are configured and operate.
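The look-up tables mentioned above can be modeled schematically. The sketch below is a generic, vendor-neutral illustration (the function name and table layout are invented for this example): a k-input LUT is simply 2**k configuration memory cells, and the k input bits form the address that selects which cell drives the output.

```python
# Illustrative model of a k-input LUT: config_bits holds the contents of
# the configuration memory cells (the truth table), and the inputs index
# into that table.
def lut_eval(config_bits, inputs):
    """config_bits: list of 2**k 0/1 cells; inputs: list of k 0/1 values."""
    index = 0
    for bit in inputs:          # the input bits form the address
        index = (index << 1) | bit
    return config_bits[index]

# "Loading configuration data" amounts to choosing the table contents.
# Here a 2-input LUT is configured as XOR: outputs for (a, b) = 00, 01, 10, 11.
xor_lut = [0, 1, 1, 0]
assert lut_eval(xor_lut, [1, 0]) == 1
assert lut_eval(xor_lut, [1, 1]) == 0
```

Reprogramming the same physical LUT as, say, AND is just a different table (`[0, 0, 0, 1]`), which is the sense in which loading configuration data "defines how the programmable elements are configured and operate."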
[0077] The PS 212 is implemented as hardwired circuitry that is fabricated as part of the SoC 200. The PS 212 may be implemented as, or include, any of a variety of different processor types each capable of executing program code. For example, PS 212 may be implemented as an individual processor, e.g., a single core capable of executing program code. In another example, PS 212 may be implemented as a multi-core processor. In still another example, PS 212 may include one or more cores, modules, co-processors, interfaces, and/or other resources. PS 212 may be implemented using any of a variety of different types of architectures. Example architectures that may be used to implement PS 212 may include, but are not limited to, an ARM processor architecture, an x86 processor architecture, a GPU architecture, a mobile processor architecture, a DSP architecture, other suitable architectures capable of executing computer-readable instructions or program code, and/or a combination of different processors and/or processor architectures.
[0078] NoC 208 includes an interconnecting network for sharing data between endpoint circuits in SoC 200. The endpoint circuits can be disposed in DPE array 202, PL regions 214, PS 212, and/or in hardwired circuit blocks 210. NoC 208 can include high-speed data paths with dedicated switching. In an example, NoC 208 includes horizontal paths, vertical paths, or both horizontal and vertical paths. The arrangement and number of regions shown in FIG. 2 is merely an example. The NoC 208 is an example of the common infrastructure that is available within the SoC 200 to connect selected components and/or subsystems.
[0079] NoC 208 provides connectivity to PL 214, PS 212, and to selected ones of the hardwired circuit blocks 210. NoC 208 is programmable.
In the case of a programmable NoC used with other programmable circuitry, the nets and/or data transfers that are to be routed through NoC 208 are unknown until a user circuit design is created for implementation within the SoC 200. NoC 208 may be programmed by loading configuration data into internal configuration registers that define how elements within NoC 208 such as switches and interfaces are configured and operate to pass data from switch to switch and among the NoC interfaces.
[0080] NoC 208 is fabricated as part of the SoC 200 and while not physically modifiable, may be programmed to establish connectivity between different master circuits and different slave circuits of a user circuit design. NoC 208, for example,
may include a plurality of programmable switches that are capable of establishing a packet switched network connecting user specified master circuits and slave circuits. In this regard, NoC 208 is capable of adapting to different circuit designs, where each different circuit design has different combinations of master circuits and slave circuits implemented at different locations in the SoC 200 that may be coupled by NoC 208. NoC 208 may be programmed to route data, e.g., application data and/or configuration data, among the master and slave circuits of the user circuit design. For example, NoC 208 may be programmed to couple different user-specified circuitry implemented within PL 214 with PS 212, and/or DPE array 202, with different hardwired circuit blocks, and/or with different circuits and/or systems external to the SoC 200.
[0081] The hardwired circuit blocks 210 may include input/output (I/O) blocks, and/or transceivers for sending and receiving signals to circuits and/or systems external to SoC 200, memory controllers, or the like. Examples of different I/O blocks may include single-ended and pseudo differential I/Os and high-speed differentially clocked transceivers. Further, the hardwired circuit blocks 210 may be implemented to perform specific functions. Additional examples of hardwired circuit blocks 210 include, but are not limited to, cryptographic engines, digital-to-analog converters, analog-to-digital converters, and the like. The hardwired circuit blocks 210 within the SoC 200 may be referred to herein from time-to-time as application-specific blocks.
[0082] In the example of FIG. 2, PL 214 is shown in two separate regions. In another example, PL 214 may be implemented as a unified region of programmable circuitry. In still another example, PL 214 may be implemented as more than two different regions of programmable circuitry. The particular organization of PL 214 is not intended as a limitation.
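The NoC programming model described above, in which loaded configuration data establishes routes between user-specified master and slave circuits through programmable switches, can be sketched abstractly. Every name below (the class, the endpoint and switch identifiers) is invented for illustration and does not correspond to any actual NoC interface.

```python
# Schematic model of NoC route programming: "loading configuration data"
# is modeled as filling a routing table that connects a master circuit to
# a slave circuit through a list of switches.
class NocModel:
    def __init__(self):
        self.routes = {}  # (master, slave) -> list of switches on the path

    def program_route(self, master, slave, switch_path):
        self.routes[(master, slave)] = switch_path

    def send(self, master, slave, payload):
        path = self.routes.get((master, slave))
        if path is None:
            # With no programmed route, the transfer cannot occur.
            raise ValueError(f"no programmed route {master} -> {slave}")
        # A packet-switched NoC forwards the payload hop by hop.
        return {"hops": path, "delivered_to": slave, "payload": payload}

noc = NocModel()
noc.program_route("PL_kernel0", "mem_ctrl", ["sw_0_0", "sw_0_1", "sw_1_1"])
result = noc.send("PL_kernel0", "mem_ctrl", b"\x01\x02")
print(result["hops"])  # ['sw_0_0', 'sw_0_1', 'sw_1_1']
```

The key property the model captures is that the switch fabric is fixed in silicon while the routing table, like the configuration registers in the text, is rewritable per user design.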
In this regard, SoC 200 includes one or more PL regions 214, PS 212, and NoC 208.
[0083] In other example implementations, the SoC 200 may include two or more DPE arrays 202 located in different regions of the IC. In still other examples, the SoC 200 may be implemented as a multi-die IC. In that case, each subsystem may be implemented on a different die. The different dies may be communicatively linked using any of a variety of available multi-die IC technologies such as stacking the dies side-by-side on an interposer, using a stacked-die architecture where the IC is implemented as a Multi-Chip Module (MCM), or the like. In the multi-die IC
example, it should be appreciated that each die may include a single subsystem, two or more subsystems, a subsystem and another partial subsystem, or any combination thereof.
[0084] DPE array 202 is implemented as a two-dimensional array of DPEs 204 that includes SoC interface block 206. DPE array 202 may be implemented using any of a variety of different architectures to be described herein in greater detail below. For purposes of illustration and not limitation, FIG. 2 illustrates DPEs 204 arranged in aligned rows and aligned columns. In other embodiments, however, DPEs 204 may be arranged where DPEs in selected rows and/or columns are horizontally inverted or flipped relative to DPEs in adjacent rows and/or columns. In one or more other embodiments, rows and/or columns of DPEs may be offset relative to adjacent rows and/or columns. One or more or all DPEs 204 may be implemented to include one or more cores each capable of executing program code. The number of DPEs 204, particular arrangement of DPEs 204, and/or orientation of DPEs 204 is not intended to be limiting.
[0085] SoC interface block 206 is capable of coupling DPEs 204 to one or more other subsystems of SoC 200. In one or more embodiments, SoC interface block 206 is coupled to adjacent DPEs 204. For example, SoC interface block 206 may be directly coupled to each DPE 204 in the bottom row of DPEs in DPE array 202. In illustration, SoC interface block 206 may be directly connected to DPE 204-1, 204-2, 204-3, 204-4, 204-5, 204-6, 204-7, 204-8, 204-9, and 204-10.
[0086] FIG. 2 is provided for purposes of illustration. In other embodiments, SoC interface block 206 may be located at the top of DPE array 202, to the left of DPE array 202 (e.g., as a column), to the right of DPE array 202 (e.g., as a column), or at multiple locations in and around DPE array 202 (e.g., as one or more intervening rows and/or columns within DPE array 202).
Depending on the layout and location of SoC interface block 206, the particular DPEs coupled to SoC interface block 206 may vary.
[0087] For purposes of illustration, if SoC interface block 206 is located to the left of DPEs 204, SoC interface block 206 may be directly coupled to the left column of DPEs including DPE 204-1, DPE 204-11, DPE 204-21, and DPE 204-31. If SoC interface block 206 is located to the right of DPEs 204, SoC interface block 206 may be directly coupled to the right column of DPEs including DPE 204-10, DPE 204-20, DPE 204-30, and DPE 204-40. If SoC interface block 206 is located
at the top of DPEs 204, SoC interface block 206 may be coupled to the top row of DPEs including DPE 204-31, DPE 204-32, DPE 204-33, DPE 204-34, DPE 204-35, DPE 204-36, DPE 204-37, DPE 204-38, DPE 204-39, and DPE 204-40. If SoC interface block 206 is located at multiple locations, the particular DPEs that are directly connected to SoC interface block 206 may vary. For example, if SoC interface block is implemented as a row and/or column within DPE array 202, the DPEs that are directly coupled to SoC interface block 206 may be those that are adjacent to SoC interface block 206 on one or more or each side of SoC interface block 206.
[0088] DPEs 204 are interconnected by DPE interconnects (not shown), which, when taken collectively, form a DPE interconnect network. As such, SoC interface block 206 is capable of communicating with any DPE 204 of DPE array 202 by communicating with one or more selected DPEs 204 of DPE array 202 directly connected to SoC interface block 206 and utilizing the DPE interconnect network formed of DPE interconnects implemented within each respective DPE 204.
[0089] SoC interface block 206 is capable of coupling each DPE 204 within DPE array 202 with one or more other subsystems of SoC 200. For example, SoC interface block 206 is capable of coupling DPE array 202 to the NoC 208 and PL 214. As such, the DPE array 202 is capable of communicating with circuit blocks implemented in PL 214, the PS 212, and/or any of the hardwired circuit blocks 210. For example, SoC interface block 206 is capable of establishing connections between selected DPEs 204 and PL 214. SoC interface block 206 is also capable of establishing connections between selected DPEs 204 and NoC 208. Through NoC 208, the selected DPEs 204 are capable of communicating with PS 212 and/or hardwired circuit blocks 210. Selected DPEs 204 are capable of communicating with hardwired circuit blocks 210 via SoC interface block 206 and PL 214.
In particular embodiments, SoC interface block 206 may be coupled directly to one or more subsystems of SoC 200. For example, SoC interface block 206 may be coupled directly to PS 212 and/or to hardwired circuit blocks 210.
[0090] In one or more embodiments, DPE array 202 includes a single clock domain. Other subsystems such as NoC 208, PL 214, PS 212, and the various hardwired circuit blocks 210 may be in one or more separate or different clock domain(s). Still, DPE array 202 may include additional clocks that may be used for interfacing with other ones of the subsystems. In particular embodiments, SoC
interface block 206 includes a clock signal generator that is capable of generating one or more clock signals that may be provided or distributed to DPEs 204 of DPE array 202.
[0091] DPE array 202 may be programmed by loading configuration data into internal configuration memory cells (also referred to herein as "configuration registers") that define connectivity among DPEs 204 and SoC interface block 206 and how DPEs 204 and SoC interface block 206 operate. For example, for a particular DPE 204 or group of DPEs 204 to communicate with a subsystem, the DPE(s) 204 and SoC interface block 206 are programmed to do so. Similarly, for one or more particular DPEs 204 to communicate with one or more other DPEs 204, the DPEs are programmed to do so. DPE(s) 204 and SoC interface block 206 may be programmed by loading configuration data into configuration registers within DPE(s) 204 and SoC interface block 206, respectively. In another example, the clock signal generator, being part of SoC interface block 206, may be programmable using configuration data to vary the clock frequencies provided to DPE array 202.
[0092] FIG. 3 illustrates an example architecture for a DPE 204 of DPE array 202 of FIG. 2. In the example of FIG. 3, DPE 204 includes a core 302, a memory module 304, and DPE interconnect 306. Each DPE 204 is implemented as a hardwired and programmable circuit block on the SoC 200.
[0093] Core 302 provides the data processing capabilities of DPE 204. Core 302 may be implemented as any of a variety of different processing circuits. In the example of FIG. 3, core 302 includes an optional program memory 308. In an example implementation, core 302 is implemented as a processor that is capable of executing program code, e.g., computer readable instructions. In that case, program memory 308 is included and is capable of storing instructions that are executed by core 302.
Core 302, for example, may be implemented as a CPU, a GPU, a DSP, a vector processor, or other type of processor that is capable of executing instructions. Core 302 may be implemented using any of the various CPU and/or processor architectures described herein. In another example, core 302 is implemented as a very long instruction word (VLIW) vector processor or DSP.
[0094] In particular implementations, program memory 308 is implemented as a dedicated program memory that is private to core 302 (e.g., accessed exclusively
by core 302). Program memory 308 may only be used by the core of the same DPE 204. Thus, program memory 308 may only be accessed by core 302 and is not shared with any other DPE or component of another DPE. Program memory 308 may include a single port for read and write operations. Program memory 308 may support program compression and is addressable using the memory mapped network portion of DPE interconnect 306 described in greater detail below. Via the memory mapped network of DPE interconnect 306, for example, program memory 308 may be loaded with program code that may be executed by core 302.
[0095] Core 302 may include configuration registers 324. Configuration registers 324 may be loaded with configuration data to control operation of core 302. In one or more embodiments, core 302 may be activated and/or deactivated based upon configuration data loaded into configuration registers 324. In the example of FIG. 3, configuration registers 324 are addressable (e.g., may be read and/or written) via the memory mapped network of DPE interconnect 306 described in greater detail below.
[0096] In one or more embodiments, memory module 304 is capable of storing data that is used by and/or generated by core 302. For example, memory module 304 is capable of storing application data. Memory module 304 may include a read/write memory such as a random-access memory (RAM). Accordingly, memory module 304 is capable of storing data that may be read and consumed by core 302. Memory module 304 is also capable of storing data (e.g., results) that are written by core 302.
[0097] In one or more other embodiments, memory module 304 is capable of storing data, e.g., application data, that may be used by and/or generated by one or more other cores of other DPEs within the DPE array. One or more other cores of DPEs may also read from and/or write to memory module 304.
In particular embodiments, the other cores that may read from and/or write to memory module 304 may be cores of one or more neighboring DPEs. Another DPE that shares a border or boundary with DPE 204 (e.g., that is adjacent) is said to be a "neighboring" DPE relative to DPE 204. By allowing core 302 and one or more other cores from neighboring DPEs to read and/or write to memory module 304, memory module 304 implements a shared memory that supports communication among the different DPEs and/or cores capable of accessing memory module 304.
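The neighbor-access rule can be sketched with a small model. The numbering convention below is an assumption made purely for illustration: DPE ids increase by 1 per column and by 10 per row, and each memory module is assumed to sit on the east side of its core, so the owner plus the east, north, and south neighbors reach the module while the west neighbor is blocked by the owner's core.

```python
# Hypothetical sketch of which cores can access a given DPE's memory
# module, under the id-numbering and module-placement assumptions stated
# in the lead-in (not an actual device rule).
def memory_accessors(dpe_id, row_stride=10):
    owner = dpe_id
    east = dpe_id + 1            # neighbor adjacent to the memory module
    north = dpe_id + row_stride  # neighbor above
    south = dpe_id - row_stride  # neighbor below
    return {owner, east, north, south}

accessors = memory_accessors(15)
print(sorted(accessors))  # [5, 15, 16, 25]
assert 14 not in accessors  # the west neighbor cannot reach the module
```

Under these assumptions the model reproduces the FIG. 2 example discussed next: DPEs 204-16, 204-5, and 204-25 (plus DPE 204-15 itself) reach the memory module of DPE 204-15, while DPE 204-14 does not.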
[0098] Referring to FIG. 2, for example, DPEs 204-14, 204-16, 204-5, and 204-25 are considered neighboring DPEs of DPE 204-15. In one example, the core within each of DPEs 204-16, 204-5, and 204-25 is capable of reading and writing to the memory module within DPE 204-15. In particular embodiments, only those neighboring DPEs that are adjacent to the memory module may access the memory module of DPE 204-15. For example, DPE 204-14, while adjacent to DPE 204-15, may not be adjacent to the memory module of DPE 204-15 since the core of DPE 204-15 may be located between the core of DPE 204-14 and the memory module of DPE 204-15. As such, in particular embodiments, the core of DPE 204-14 may not access the memory module of DPE 204-15.[0099] In particular embodiments, whether a core of a DPE is able to access the memory module of another DPE depends upon the number of memory interfaces included in the memory module and whether such cores are connected to an available one of the memory interfaces of the memory module. In the example above, the memory module of DPE 204-15 includes four memory interfaces, where the core of each of DPEs 204-16, 204-5, and 204-25 is connected to such a memory interface. Core 302 within DPE 204-15 itself is connected to the fourth memory interface. Each memory interface may include one or more read and/or write channels. In particular embodiments, each memory interface includes multiple read channels and multiple write channels so that the particular core attached thereto is capable of reading and/or writing to multiple banks within memory module 304 concurrently.[00100] In other examples, more than four memory interfaces may be available. Such other memory interfaces may be used to allow DPEs on a diagonal to DPE 204-15 to access the memory module of DPE 204-15.
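The adjacency rule described above (only cores wired to one of the memory module's four memory interfaces may access it) can be illustrated with a short sketch. The DPE identifiers follow FIG. 2; the dictionary layout and function name are hypothetical, not part of the disclosure.

```python
# Illustrative model: the memory module of DPE 204-15 exposes four
# memory interfaces; one serves its own core (core 302) and the
# others serve the cores of DPEs 204-16, 204-5, and 204-25, which
# are adjacent to the memory module itself.
MEMORY_INTERFACES = {
    "204-15": "own core (core 302)",
    "204-16": "core of neighboring DPE 204-16",
    "204-5":  "core of neighboring DPE 204-5",
    "204-25": "core of neighboring DPE 204-25",
}

def can_access_memory_module(dpe_id: str) -> bool:
    """A core may access the memory module of DPE 204-15 only if it
    is connected to one of the module's memory interfaces."""
    return dpe_id in MEMORY_INTERFACES

# DPE 204-14 neighbors DPE 204-15 but sits on the core side rather
# than the memory-module side, so it has no memory interface here.
assert can_access_memory_module("204-25")
assert not can_access_memory_module("204-14")
```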
For example, if the cores in DPEs such as DPEs 204-14, 204-24, 204-26, 204-4, and/or 204-6 are also coupled to an available memory interface of the memory module in DPE 204-15, such other DPEs would also be capable of accessing the memory module of DPE 204-15.[00101] Memory module 304 may include configuration registers 336. Configuration registers 336 may be loaded with configuration data to control operation of memory module 304. In the example of FIG. 3, configuration registers 336 (and 324) are addressable (e.g., may be read and/or written) via the memory mapped network of DPE interconnect 306 described in greater detail below.
[00102] In the example of FIG. 3, DPE interconnect 306 is specific to DPE 204. DPE interconnect 306 facilitates various operations including communication between DPE 204 and one or more other DPEs of DPE array 202 and/or communication with other subsystems of the SoC 200. DPE interconnect 306 further enables configuration, control, and debugging of DPE 204.[00103] In particular embodiments, DPE interconnect 306 is implemented as an on-chip interconnect. An example of an on-chip interconnect is an Advanced Microcontroller Bus Architecture (AMBA) eXtensible Interface (AXI) bus (or switch). An AMBA AXI bus is an embedded microcontroller bus interface for use in establishing on-chip connections between circuit blocks and/or systems. An AXI bus is provided herein as an example of interconnect circuitry that may be used with the inventive arrangements described within this disclosure and, as such, is not intended as a limitation. Other examples of interconnect circuitry may include other types of buses, crossbars, and/or other types of switches.[00104] In one or more embodiments, DPE interconnect 306 includes two different networks. The first network is capable of exchanging data with other DPEs of DPE array 202 and/or other subsystems of the SoC 200. For example, the first network is capable of exchanging application data. The second network is capable of exchanging data such as configuration, control, and/or debugging data for the DPE(s).[00105] In the example of FIG. 3, the first network of DPE interconnect 306 is formed of stream switch 326 and one or more stream interfaces (not shown). For example, stream switch 326 includes a stream interface for connecting to each of core 302, memory module 304, memory mapped switch 332, a DPE above, a DPE to the left, a DPE to the right, and a DPE below.
Each stream interface may include one or more masters and one or more slaves.[00106] Stream switch 326 is capable of allowing non-neighboring DPEs and/or DPEs that are not coupled to a memory interface of memory module 304 to communicate with core 302 and/or memory module 304 via the DPE interconnect network formed by the DPE interconnects of the respective DPEs 204 of DPE array 202.[00107] Referring again to FIG. 2 and using DPE 204-15 as a point of reference, stream switch 326 is coupled to, and capable of communicating with, another stream switch located in the DPE interconnect of DPE 204-14. Stream switch 326
is coupled to, and capable of communicating with, another stream switch located in the DPE interconnect of DPE 204-25. Stream switch 326 is coupled to, and capable of communicating with, another stream switch located in the DPE interconnect of DPE 204-16. Stream switch 326 is coupled to, and capable of communicating with, another stream switch located in the DPE interconnect of DPE 204-5. As such, core 302 and/or memory module 304 are also capable of communicating with any of the DPEs within DPE array 202 via the DPE interconnects in the DPEs.[00108] Stream switch 326 may also be used to interface to subsystems such as PL 214 and/or NoC 208. In general, stream switch 326 is programmed to operate as a circuit-switching stream interconnect or a packet-switched stream interconnect. A circuit-switching stream interconnect is capable of implementing point-to-point, dedicated streams that are suitable for high-bandwidth communication among DPEs. A packet-switching stream interconnect allows streams to be shared to time-multiplex multiple logical streams onto one physical stream for medium bandwidth communication.[00109] Stream switch 326 may include configuration registers (abbreviated as "CR" in FIG. 3) 334. Configuration data may be written to configuration registers 334 by way of the memory mapped network of DPE interconnect 306. The configuration data loaded into configuration registers 334 dictates which other DPEs and/or subsystems (e.g., NoC 208, PL 214, and/or PS 212) DPE 204 will communicate with and whether such communications are established as circuit-switched point-to-point connections or as packet-switched connections.[00110] The second network of DPE interconnect 306 is formed of memory mapped switch 332. Memory mapped switch 332 includes a plurality of memory mapped interfaces (not shown). Each memory mapped interface may include one or more masters and one or more slaves.
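The two stream-switch modes described in paragraph [00108] can be sketched in a few lines: a circuit-switched route dedicates a physical stream to a single logical connection, while a packet-switched route time-multiplexes several logical streams onto one physical stream. The class and method names below are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch of circuit-switched vs. packet-switched
# stream routes, as configured through configuration registers 334.
class StreamRoute:
    def __init__(self, mode: str):
        assert mode in ("circuit", "packet")
        self.mode = mode
        self.logical_streams = []

    def attach(self, stream_id: str) -> bool:
        """Try to attach a logical stream to this physical route."""
        if self.mode == "circuit" and self.logical_streams:
            return False  # dedicated point-to-point: one stream only
        self.logical_streams.append(stream_id)
        return True

shared = StreamRoute("packet")
assert shared.attach("dma-out") and shared.attach("trace")  # multiplexed
dedicated = StreamRoute("circuit")
assert dedicated.attach("core-to-core")
assert not dedicated.attach("second-stream")  # rejected in circuit mode
```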
For example, memory mapped switch 332 includes a memory mapped interface for connecting to each of core 302, memory module 304, the memory mapped switch in the DPE above DPE 204, and the memory mapped switch in the DPE below DPE 204.[00111] Memory mapped switch 332 is used to convey configuration, control, and debugging data for DPE 204. In the example of FIG. 3, memory mapped switch 332 is capable of receiving configuration data that is used to configure DPE 204. Memory mapped switch 332 may receive configuration data from a DPE located below DPE 204 and/or from SoC interface block 206. Memory mapped switch 332
is capable of forwarding received configuration data to one or more other DPEs above DPE 204, to core 302 (e.g., to program memory 308 and/or to configuration registers 324), to memory module 304 (e.g., to memory within memory module 304 and/or to configuration registers 336), and/or to configuration registers 334 within stream switch 326.[00112] DPE interconnect 306 is coupled to the DPE interconnect of each neighboring DPE and/or SoC interface block 206 depending upon the location of DPE 204. Taken collectively, DPE interconnects of DPEs 204 form a DPE interconnect network (which may include the stream network and/or the memory mapped network). The configuration registers of the stream switches of each DPE may be programmed by loading configuration data through the memory mapped switches. Through configuration, the stream switches and/or stream interfaces are programmed to establish connections, whether packet-switched or circuit-switched, with other endpoints, whether in one or more other DPEs 204 and/or in SoC interface block 206.[00113] In one or more embodiments, DPE array 202 is mapped to the address space of a processor system such as PS 212. Accordingly, any configuration registers and/or memories within DPE 204 may be accessed via a memory mapped interface. For example, memory in memory module 304, program memory 308, configuration registers 324 in core 302, configuration registers 336 in memory module 304, and/or configuration registers 334 may be read and/or written via memory mapped switch 332.[00114] In the example of FIG. 3, memory mapped switch 332 is capable of receiving configuration data for DPE 204. The configuration data may include program code that is loaded into program memory 308 (if included), configuration data for loading into configuration registers 324, 334, and/or 336, and/or data to be loaded into memory (e.g., memory banks) of memory module 304. In the example of FIG. 
3, configuration registers 324, 334, and 336 are shown as being located within the particular circuit structures that the configuration registers are intended to control, e.g., core 302, stream switch 326, and memory module 304. The example of FIG. 3 is for purposes of illustration only and illustrates that elements within core 302, memory module 304, and/or stream switch 326 may be programmed by way of loading configuration data into the corresponding configuration registers. In other embodiments, the configuration registers may be consolidated within a particular
region of DPE 204 despite controlling operation of components distributed throughout DPE 204.[00115] Accordingly, stream switch 326 may be programmed by loading configuration data into configuration registers 334. The configuration data programs stream switch 326 to operate in a circuit-switching mode between two different DPEs and/or other subsystems or in a packet-switching mode between selected DPEs and/or other subsystems. Thus, connections established by stream switch 326 to other stream interfaces and/or switches are programmed by loading suitable configuration data into configuration registers 334 to establish actual connections or application data paths within DPE 204, with other DPEs, and/or with other subsystems of IC 300.[00116] FIG. 4 illustrates further aspects of the example architecture of FIG. 3. In the example of FIG. 4, details relating to DPE interconnect 306 are not shown. FIG. 4 illustrates connectivity of core 302 with other DPEs through shared memory. FIG. 4 also illustrates additional aspects of memory module 304. For purposes of illustration, FIG. 4 refers to DPE 204-15.[00117] As pictured, memory module 304 includes a plurality of memory interfaces 402, 404, 406, and 408. Within FIG. 4, memory interfaces 402 and 408 are abbreviated as "MI." Memory module 304 further includes a plurality of memory banks 412-1 to 412-N. In particular embodiments, memory module 304 includes eight memory banks. In other embodiments, memory module 304 may include fewer or more memory banks 412. In one or more embodiments, each memory bank 412 is single-ported thereby allowing up to one access to each memory bank each clock cycle. In the case where memory module 304 includes eight memory banks 412, such a configuration supports eight parallel accesses each clock cycle. In other embodiments, each memory bank 412 is dual-ported or multi-ported thereby allowing a larger number of parallel accesses each clock cycle.[00118] In the example of FIG.
4, each of memory banks 412-1 through 412-N has a respective arbiter 414-1 through 414-N. Each arbiter 414 is capable of generating a stall signal in response to detecting conflicts. Each arbiter 414 may include arbitration logic. Further, each arbiter 414 may include a crossbar. Accordingly, any master is capable of writing to any particular one or more of memory banks 412. As noted in connection with FIG. 3, memory module 304 is connected to memory mapped switch 332 thereby facilitating reading and writing of
data to memory banks 412. As such, the particular data stored in memory module 304 may be controlled, e.g., written, as part of a configuration, control, and/or debugging process through memory mapped switch 332.[00119] Memory module 304 further includes a direct memory access (DMA) engine 416. In one or more embodiments, DMA engine 416 includes at least two interfaces. For example, one or more interfaces are capable of receiving input data streams from DPE interconnect 306 and writing the received data to memory banks 412. One or more other interfaces are capable of reading data from memory banks 412 and sending the data out via a stream interface (e.g., a stream switch) of DPE interconnect 306. For example, DMA engine 416 may include a stream interface for accessing stream switch 326 of FIG. 3.[00120] Memory module 304 is capable of operating as a shared memory that may be accessed by a plurality of different DPEs. In the example of FIG. 4, memory interface 402 is coupled to core 302 via core interface 428 included in core 302. Memory interface 402 provides core 302 with access to memory banks 412 through arbiters 414. Memory interface 404 is coupled to the core of DPE 204-25. Memory interface 404 provides the core of DPE 204-25 with access to memory banks 412. Memory interface 406 is coupled to the core of DPE 204-16. Memory interface 406 provides the core of DPE 204-16 with access to memory banks 412. Memory interface 408 is coupled to the core of DPE 204-5. Memory interface 408 provides the core of DPE 204-5 with access to memory banks 412. Accordingly, in the example of FIG. 4, each DPE that has a shared boundary with memory module 304 of DPE 204-15 is capable of reading and writing to memory banks 412. In the example of FIG. 4, the core of DPE 204-14 does not have direct access to memory module 304 of DPE 204-15.[00121] Core 302 is capable of accessing memory modules of other neighboring DPEs via core interfaces 430, 432, and 434. In the example of FIG.
4, core interface 434 is coupled to a memory interface of DPE 204-25. Accordingly, core 302 is capable of accessing the memory module of DPE 204-25 via core interface 434 and the memory interface contained within the memory module of DPE 204-25. Core interface 432 is coupled to a memory interface of DPE 204-14. Accordingly, core 302 is capable of accessing the memory module of DPE 204-14 via core interface 432 and the memory interface contained within the memory module of DPE 204-14. Core interface 430 is coupled to a memory interface within DPE 204-
5. Accordingly, core 302 is capable of accessing the memory module of DPE 204-5 via core interface 430 and the memory interface contained within the memory module of DPE 204-5. As discussed, core 302 is capable of accessing memory module 304 within DPE 204-15 via core interface 428 and memory interface 402.[00122] In the example of FIG. 4, core 302 is capable of reading and writing to any of the memory modules of DPEs that share a boundary with core 302 in DPE 204-15 (e.g., DPEs 204-25, 204-14, and 204-5). In one or more embodiments, core 302 is capable of viewing the memory modules within DPEs 204-25, 204-15, 204-14, and 204-5 as a single, contiguous memory (e.g., as a single address space). As such, the process of core 302 reading and/or writing to memory modules of such DPEs is the same as core 302 reading and/or writing to memory module 304. Core 302 is capable of generating addresses for reads and writes presuming this contiguous memory model. Core 302 is capable of directing the read and/or write requests to the appropriate core interface 428, 430, 432, and/or 434 based upon the addresses that are generated.[00123] As noted, core 302 is capable of mapping read and/or write operations in the correct direction through core interface 428, 430, 432, and/or 434 based upon the addresses of such operations. When core 302 generates an address for a memory access, core 302 is capable of decoding the address to determine the direction (e.g., the particular DPE to be accessed) and forwards the memory operation to the correct core interface in the determined direction.[00124] Accordingly, core 302 is capable of communicating with the core of DPE 204-25 via a shared memory which may be the memory module within DPE 204-25 and/or memory module 304 of DPE 204-15. Core 302 is capable of communicating with the core of DPE 204-14 via a shared memory which is the memory module within DPE 204-14.
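The contiguous-memory model of paragraphs [00122]-[00123] amounts to a simple address decode: core 302 maps a single address space onto the four reachable memory modules and forwards each access through the corresponding core interface. The window size, window ordering, and base addresses below are invented for illustration only; the disclosure does not specify them.

```python
# Hypothetical sketch of core 302's address decode across core
# interfaces 428, 430, 432, and 434.
WINDOW = 0x8000  # assumed size of one memory module window

# Assumed order of windows in the combined address space:
DIRECTIONS = [
    "DPE 204-5 via core interface 430",
    "DPE 204-14 via core interface 432",
    "DPE 204-25 via core interface 434",
    "local memory module 304 via core interface 428",
]

def decode(addr: int) -> str:
    """Return the core interface that should service this access."""
    index = addr // WINDOW
    if not 0 <= index < len(DIRECTIONS):
        raise ValueError("address outside the shared-memory windows")
    return DIRECTIONS[index]

assert decode(0x0100) == "DPE 204-5 via core interface 430"
assert decode(3 * WINDOW + 4) == "local memory module 304 via core interface 428"
```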
Core 302 is capable of communicating with the core of DPE 204-5 via a shared memory which may be the memory module within DPE 204-5 and/or memory module 304 of DPE 204-15. Further, core 302 is capable of communicating with the core of DPE 204-16 via a shared memory which is memory module 304 within DPE 204-15.[00125] As discussed, DMA engine 416 may include one or more stream-to- memory interfaces. Through DMA engine 416, application data may be received from other sources within the SoC 200 and stored in memory module 304. For example, data may be received from other DPEs that do and/or do not share a
boundary with DPE 204-15 by way of stream switch 326. Data may also be received from other subsystems of the SoC (e.g., NoC 208, hardwired circuit blocks 210, PL 214, and/or PS 212) by way of SoC interface block 206 through the stream switches of the DPEs. DMA engine 416 is capable of receiving such data from the stream switches and writing the data to an appropriate memory bank or memory banks 412 within memory module 304.[00126] DMA engine 416 may include one or more memory-to-stream interfaces. Through DMA engine 416, data may be read from memory bank or memory banks 412 of memory module 304 and sent to other destinations via the stream interfaces. For example, DMA engine 416 is capable of reading data from memory module 304 and sending such data to other DPEs that do and/or do not share a boundary with DPE 204-15 by way of the stream switches. DMA engine 416 is also capable of sending such data to other subsystems (e.g., NoC 208, hardwired circuit blocks 210, PL 214, and/or PS 212) by way of the stream switches and SoC interface block 206.[00127] In one or more embodiments, DMA engine 416 is programmed by memory mapped switch 332 within DPE 204-15. For example, DMA engine 416 may be controlled by configuration registers 336. Configuration registers 336 may be written using memory mapped switch 332 of DPE interconnect 306. In particular embodiments, DMA engine 416 may be controlled by the stream switch 326 within DPE 204-15. For example, DMA engine 416 may include control registers that may be written by stream switch 326 connected thereto. Streams received via stream switch 326 within DPE interconnect 306 may be connected to DMA engine 416 in memory module 304 and/or directly to core 302 depending upon the configuration data loaded into configuration registers 324, 334, and/or 336. 
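DMA engine 416's two roles described in paragraphs [00125]-[00126], writing incoming streams into memory banks 412 and reading the banks back out onto a stream, can be sketched as a small model. The class name, method names, and bank layout are assumptions for illustration, not the hardware interface.

```python
# Minimal sketch of the stream-to-memory and memory-to-stream
# interfaces of a DMA engine such as DMA engine 416.
class DmaEngineSketch:
    def __init__(self, num_banks: int = 8):
        # eight banks mirrors the example memory module configuration
        self.banks = [[] for _ in range(num_banks)]

    def stream_to_memory(self, words, bank: int) -> None:
        """Stream-to-memory interface: store received words in a bank."""
        self.banks[bank].extend(words)

    def memory_to_stream(self, bank: int):
        """Memory-to-stream interface: drain a bank onto a stream."""
        words, self.banks[bank] = self.banks[bank], []
        return words

dma = DmaEngineSketch()
dma.stream_to_memory([1, 2, 3], bank=0)
assert dma.memory_to_stream(0) == [1, 2, 3]
assert dma.memory_to_stream(0) == []  # bank drained
```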
Streams may be sent from DMA engine 416 (e.g., memory module 304) and/or core 302 depending upon the configuration data loaded into configuration registers 324, 334, and/or 336.[00128] Memory module 304 further may include hardware synchronization circuitry 420 (abbreviated as "HSC" in FIG. 4). In general, hardware synchronization circuitry 420 is capable of synchronizing operation of different cores (e.g., cores of neighboring DPEs), core 302 of FIG. 4, DMA engine 416, and other external masters (e.g., PS 212) that may communicate via DPE interconnect 306. As an illustrative and non-limiting example, hardware synchronization circuitry 420 is capable of synchronizing two different cores, stream switches, memory
mapped interfaces, and/or DMAs in DPE 204-15 and/or different DPEs accessing the same, e.g., a shared, buffer in memory module 304.[00129] In the case where two DPEs are not neighbors, the two DPEs do not have access to a common memory module. In that case, application data may be transferred via a data stream (the terms "data stream" and "stream" may be used interchangeably from time-to-time within this disclosure). As such, the local DMA engine is capable of converting the transfer from a local memory-based transfer to a stream-based transfer. In that case, core 302 and DMA engine 416 are capable of synchronizing using hardware synchronization circuitry 420.[00130] PS 212 is capable of communicating with core 302 via memory mapped switch 332. PS 212, for example, is capable of accessing memory module 304 and hardware synchronization circuitry 420 by initiating memory reads and writes. In another embodiment, hardware synchronization circuitry 420 may also send an interrupt to PS 212 when status of a lock changes to avoid polling by PS 212 of hardware synchronization circuitry 420. PS 212 is also capable of communicating with DPE 204-15 via the stream interfaces.[00131] In addition to communicating with neighboring DPEs through shared memory modules and neighboring and/or non-neighboring DPEs via DPE interconnect 306, core 302 may include cascade interfaces. In the example of FIG. 4, core 302 includes cascade interfaces 422 and 424 (abbreviated as "Cl" in FIG. 4). Cascade interfaces 422 and 424 are capable of providing direct communication with other cores. As pictured, cascade interface 422 of core 302 receives an input data stream directly from the core of DPE 204-14. The data stream received via cascade interface 422 may be provided to the data processing circuitry within core 302. Cascade interface 424 of core 302 is capable of sending an output data stream directly to the core of DPE 204-16.[00132] In the example of FIG. 
4, each of cascade interface 422 and cascade interface 424 may include a first-in-first-out (FIFO) interface for buffering. In particular embodiments, cascade interfaces 422 and 424 are capable of conveying data streams that may be hundreds of bits in width. The particular bit width of cascade interfaces 422 and 424 is not intended as a limitation. In the example of FIG. 4, cascade interface 424 is coupled to an accumulator register 436 (abbreviated as "AC" within FIG. 4) within core 302. Cascade interface 424 is capable of outputting the contents of accumulator register 436 and may do so each
clock cycle. Accumulator register 436 may store data that is generated and/or being operated upon by data processing circuitry within core 302.[00133] In the example of FIG. 4, cascade interfaces 422 and 424 may be programmed based upon configuration data loaded into configuration registers 324. For example, based upon configuration registers 324, cascade interface 422 may be activated or deactivated. Similarly, based upon configuration registers 324, cascade interface 424 may be activated or deactivated. Cascade interface 422 may be activated and/or deactivated independently of cascade interface 424.[00134] In one or more other embodiments, cascade interfaces 422 and 424 are controlled by core 302. For example, core 302 may include instructions to read/write to cascade interfaces 422 and/or 424. In another example, core 302 may include hardwired circuitry that is capable of reading and/or writing to cascade interfaces 422 and/or 424. In particular embodiments, cascade interfaces 422 and 424 may be controlled by an entity outside of core 302.[00135] Within the embodiments described within this disclosure, DPEs 204 do not include cache memories. By omitting cache memories, DPE array 202 is capable of achieving predictable, e.g., deterministic, performance. Further, significant processing overhead is avoided since maintaining coherency among cache memories located in different DPEs is not required.[00136] In accordance with one or more embodiments, cores 302 of DPEs 204 do not have input interrupts. Thus, cores 302 of DPEs 204 are capable of operating uninterrupted. Omitting input interrupts to cores 302 of DPEs 204 also allows DPE array 202 to achieve predictable, e.g., deterministic, performance.[00137] FIG. 5 illustrates another example architecture for a DPE array. In the example of FIG. 5, SoC interface block 206 provides an interface between DPEs 204 and other subsystems of the SoC 200. SoC interface block 206 integrates DPEs into the device.
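Returning to the shared-buffer synchronization of paragraphs [00128]-[00130], the handshake that hardware synchronization circuitry 420 enables between a producer (e.g., DMA engine 416) and a consumer (e.g., core 302) can be sketched as a lock. The acquire/release API below is a software analogy for illustration only, not the hardware protocol.

```python
# Hedged sketch of a lock serializing two masters on a shared buffer,
# analogous to the role of hardware synchronization circuitry 420.
class LockSketch:
    def __init__(self):
        self.owner = None

    def acquire(self, who: str) -> bool:
        """Grant the lock if free; otherwise the requester stalls."""
        if self.owner is None:
            self.owner = who
            return True
        return False

    def release(self, who: str) -> None:
        assert self.owner == who, "only the owner may release"
        self.owner = None

lock = LockSketch()
assert lock.acquire("DMA engine 416")   # producer fills the buffer
assert not lock.acquire("core 302")     # consumer must wait
lock.release("DMA engine 416")
assert lock.acquire("core 302")         # consumer may now read
```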
SoC interface block 206 is capable of conveying configuration data to DPEs 204, conveying events from DPEs 204 to other subsystems, conveying events from other subsystems to DPEs 204, generating and conveying interrupts to entities external to DPE array 202, conveying application data between other subsystems and DPEs 204, and/or conveying trace and/or debug data between other subsystems and DPEs 204.[00138] In the example of FIG. 5, SoC interface block 206 includes a plurality of interconnected tiles. For example, SoC interface block 206 includes tiles 502, 504,
506, 508, 510, 512, 514, 516, 518, and 520. In the example of FIG. 5, tiles 502-520 are organized in a row. In other embodiments, tiles may be arranged in a column, in a grid, or in another layout. For example, SoC interface block 206 may be implemented as a column of tiles on the left of DPEs 204, on the right of DPEs 204, between columns of DPEs 204, or the like. In another embodiment, SoC interface block 206 may be located above DPE array 202. SoC interface block 206 may be implemented so that tiles are located in any combination of below DPE array 202, to the left of DPE array 202, to the right of DPE array 202, and/or above DPE array 202. In this regard, FIG. 5 is provided for purposes of illustration and not limitation.[00139] In one or more embodiments, tiles 502-520 have a same architecture. In one or more other embodiments, tiles 502-520 may be implemented with two or more different architectures. In particular embodiments, different architectures may be used to implement tiles within SoC interface block 206 where each different tile architecture supports communication with a different type of subsystem or combination of subsystems of SoC 200.[00140] In the example of FIG. 5, tiles 502-520 are coupled so that data may be propagated from one tile to another. For example, data may be propagated from tile 502 through tiles 504, 506, and on down the line of tiles to tile 520. Similarly, data may be propagated in the reverse direction from tile 520 to tile 502. In one or more embodiments, each of tiles 502-520 is capable of operating as an interface for a plurality of DPEs. For example, each of tiles 502-520 is capable of operating as an interface for a subset of the DPEs 204 of DPE array 202. 
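In the column-based arrangement described below, each tile serves one column of DPEs and passes data addressed to other columns along the row of tiles. A minimal sketch of that forwarding idea follows; the column assignments and record format are assumptions for illustration.

```python
# Illustrative sketch: tiles of SoC interface block 206 each serve a
# column of DPEs and forward data addressed elsewhere to the next tile.
TILE_COLUMN = {"tile 502": "A", "tile 504": "B", "tile 506": "C"}
TILE_ORDER = ["tile 502", "tile 504", "tile 506"]

def deliver(first_tile: str, dest_column: str) -> list:
    """Return the tiles the data traverses until reaching the tile
    that operates as the interface for the destination column."""
    path = []
    for tile in TILE_ORDER[TILE_ORDER.index(first_tile):]:
        path.append(tile)
        if TILE_COLUMN[tile] == dest_column:
            return path
    raise ValueError("no tile serves this column")

assert deliver("tile 502", "A") == ["tile 502"]
assert deliver("tile 502", "C") == ["tile 502", "tile 504", "tile 506"]
```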
The subset of DPEs to which each tile provides an interface may be mutually exclusive such that no DPE is provided with an interface by more than one tile of SoC interface block 206.[00141] In one example, each of tiles 502-520 provides an interface for a column of DPEs 204. For purposes of illustration, tile 502 provides an interface to the DPEs of column A. Tile 504 provides an interface to the DPEs of column B, etc. In each case, the tile includes a direct connection to an adjacent DPE in the column of DPEs, which is the bottom DPE in this example. Referring to column A, for example, tile 502 is directly connected to DPE 204-1. Other DPEs within column A may communicate with tile 502 but do so through the DPE interconnects of the intervening DPEs in the same column.[00142] For example, tile 502 is capable of receiving data from another source such as PS 212, PL 214, and/or another hardwired circuit block 210 such as an
application-specific circuit block. Tile 502 is capable of providing those portions of the data addressed to DPEs in column A to such DPEs while sending data addressed to DPEs in other columns (e.g., DPEs for which tile 502 is not an interface) on to tile 504. Tile 504 may perform the same or similar processing where data received from tile 502 that is addressed to DPEs in column B is provided to such DPEs, while sending data addressed to DPEs in other columns on to tile 506, and so on.[00143] In this manner, data may propagate from tile to tile of SoC interface block 206 until reaching the tile that operates as an interface for the DPEs to which the data is addressed (e.g., the "target DPE(s)"). The tile that operates as an interface for the target DPE(s) is capable of directing the data to the target DPE(s) using the memory mapped switches of the DPEs and/or the stream switches of the DPEs.[00144] As noted, the use of columns is an example implementation. In other embodiments, each tile of SoC interface block 206 is capable of providing an interface to a row of DPEs of DPE array 202. Such a configuration may be used in cases where SoC interface block 206 is implemented as a column of tiles, whether on the left, right, or between columns of DPEs 204. In other embodiments, the subset of DPEs to which each tile provides an interface may be any combination of fewer than all DPEs of DPE array 202. For example, DPEs 204 may be apportioned to tiles of SoC interface block 206. The particular physical layout of such DPEs may vary based upon connectivity of the DPEs as established by DPE interconnects. For example, tile 502 may provide an interface to DPEs 204-1, 204-2, 204-11, and 204-12. Another tile of SoC interface block 206 may provide an interface to four other DPEs, and so forth.[00145] FIG. 6 illustrates an example architecture for tiles of SoC interface block 206. In the example of FIG. 6, two different types of tiles for SoC interface block 206 are shown.
Tile 602 is configured to serve as an interface between DPEs and only PL 214. Tile 610 is configured to serve as an interface between DPEs and NoC 208 and between DPEs and PL 214. SoC interface block 206 may include a combination of tiles using both architectures as illustrated for tile 602 and for tile 610 or, in another example, only tiles having an architecture as illustrated for tile 610.[00146] In the example of FIG. 6, tile 602 includes a stream switch 604 connected to a PL interface 606 and to a DPE such as DPE 204-1 immediately
above. PL interface 606 connects to Boundary Logic Interface (BLI) circuit 620 and BLI circuit 622 each located in PL 214. Tile 610 includes a stream switch 612 connected to NoC and PL interface 614 and to a DPE such as DPE 204-5 immediately above. NoC and PL interface 614 connects to BLI circuits 624 and 626 in the PL 214 and also to NoC Master Unit (NMU) 630 and NoC Slave Unit (NSU) 632 of the NoC 208.[00147] In the example of FIG. 6, each stream switch 604 is capable of outputting six different 32-bit data streams to, and receiving four different 32-bit data streams from, the DPE coupled thereto. Each of PL interface 606 and NoC and PL interface 614 is capable of providing six different 64-bit data streams to PL 214 by way of BLI 620 and BLI 624, respectively. In general, each of BLIs 620, 622, 624, and 626 provides an interface or connection point within PL 214 to which PL interface 606 and/or NoC and PL interface 614 connect. Each of PL interface 606 and NoC and PL interface 614 is capable of receiving eight different 64-bit data streams from PL 214 by way of BLI 622 and BLI 626, respectively.[00148] NoC and PL interface 614 is also connected to NoC 208. In the example of FIG. 6, NoC and PL interface 614 connects to one or more NMUs 630 and to one or more NSUs 632. In one example, NoC and PL interface 614 is capable of providing two different 128-bit data streams to NoC 208, wherein each data stream is provided to a different NMU 630. NoC and PL interface 614 is capable of receiving two different 128-bit data streams from NoC 208, where each data stream is received from a different NSU 632.[00149] Stream switches 604 in adjacent tiles are connected.
In an example, stream switches 604 in adjacent tiles are capable of communicating by way of four different 32-bit data streams in each of the left and right directions (e.g., so long as a tile exists to the right or to the left, as the case may be).[00150] Tiles 602 and 610 each may include one or more memory mapped switches to convey configuration data. For purposes of illustration, the memory mapped switches are not shown. The memory mapped switches, for example, are capable of connecting vertically to a memory mapped switch of the DPE immediately above, to memory mapped switches in other adjacent tiles in SoC interface block 206 in the same or similar manner as stream switches 604, to configuration registers in tiles 602 and 610 (not shown), and/or to PL interface 606 or NoC and PL interface 614 as the case may be.
[00151] The various bit widths and numbers of data streams described in connection with the various switches included in the DPEs 204 and/or the tiles 602 and/or 610 of the SoC interface block 206 are provided for purposes of illustration and are not intended to be limiting of the inventive arrangements described within this disclosure.[00152] FIG. 7 illustrates an example implementation of NoC 208. NoC 208 includes NMUs 702, NSUs 704, a network 714, NoC peripheral interconnect (NPI) 710, and registers 712. Each NMU 702 is an ingress circuit that connects an endpoint circuit to the NoC 208. Each NSU 704 is an egress circuit that connects the NoC 208 to an endpoint circuit. The NMUs 702 are connected to the NSUs 704 through the network 714. In an example, the network 714 includes NoC packet switches 706 (NPSs) and routing 708 between the NPSs 706. Each NPS 706 performs switching of NoC packets. The NPSs 706 are connected to each other and to the NMUs 702 and NSUs 704 through the routing 708 to implement a plurality of physical channels. The NPSs 706 also support multiple virtual channels per physical channel.[00153] The NPI 710 includes circuitry to program the NMUs 702, NSUs 704, and NPSs 706. For example, the NMUs 702, NSUs 704, and NPSs 706 can include registers 712 that determine functionality thereof. The NPI 710 includes a peripheral interconnect coupled to the registers 712 for programming thereof to set functionality. The registers 712 in the NoC 208 support interrupts, Quality of Service (QoS), error handling and reporting, transaction control, power management, and address mapping control. The registers 712 can be initialized in a usable state before being reprogrammed, such as by writing to the registers 712 using write requests.
Configuration data for the NoC 208 can be stored in a non-volatile memory (NVM), e.g., as part of a programming device image (PDI), and provided to the NPI 710 for programming the NoC 208 and/or other endpoint circuits.[00154] The NMUs 702 are traffic ingress points. The NSUs 704 are traffic egress points. Endpoint circuits coupled to the NMUs 702 and NSUs 704 can be hardened circuits (e.g., hardwired circuit blocks 210) or circuits implemented in PL 214. A given endpoint circuit can be coupled to more than one NMU 702 or more than one NSU 704.
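Paragraph [00153] describes setting the functionality of the NMUs 702, NSUs 704, and NPSs 706 by writing their registers 712 through the NPI 710. As a minimal illustrative sketch only (the NocElement and NpiModel types, the register offsets, and the field meanings below are invented for illustration and are not part of the disclosed hardware), the idea of programming a NoC element through register writes can be modeled as:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Hypothetical model: each NoC element (NMU, NSU, or NPS) exposes a bank of
// 32-bit configuration registers that are written through the NPI before the
// element carries traffic. Offsets and bit fields are invented.
struct NocElement {
    std::map<uint32_t, uint32_t> regs; // register offset -> value
    bool enabled() const {
        auto it = regs.find(0x0);      // offset 0x0: hypothetical enable bit
        return it != regs.end() && (it->second & 0x1u);
    }
};

struct NpiModel {
    // A write request through the NPI targets one element's register.
    static void write(NocElement &e, uint32_t offset, uint32_t value) {
        e.regs[offset] = value;
    }
};

// Program one hypothetical NMU: enable it and set an invented QoS class field.
inline NocElement programExampleNmu() {
    NocElement nmu;
    NpiModel::write(nmu, 0x0, 0x1); // enable
    NpiModel::write(nmu, 0x4, 0x3); // hypothetical QoS class
    return nmu;
}
```

The model only captures the register-write programming pattern; real register maps are device-specific.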
[00155] FIG. 8 is a block diagram depicting connections between endpoint circuits in the SoC 200 through the NoC 208 according to an example. In the example, endpoint circuits 802 are connected to endpoint circuits 804 through the NoC 208. The endpoint circuits 802 are master circuits, which are coupled to NMUs 702 of the NoC 208. The endpoint circuits 804 are slave circuits coupled to the NSUs 704 of the NoC 208. Each endpoint circuit 802 and 804 can be a circuit in the PS 212, a circuit in a PL region 214, or a circuit in another subsystem (e.g., hardwired circuit blocks 210).[00156] The network 714 includes a plurality of physical channels 806. The physical channels 806 are implemented by programming the NoC 208. Each physical channel 806 includes one or more NPSs 706 and associated routing 708. An NMU 702 connects with an NSU 704 through at least one physical channel 806. A physical channel 806 can also have one or more virtual channels 808.[00157] Connections through the network 714 use a master-slave arrangement. In an example, the most basic connection over the network 714 includes a single master connected to a single slave. However, in other examples, more complex structures can be implemented.[00158] FIG. 9 is a block diagram depicting the NoC 208 according to another example. In the example, the NoC 208 includes vertical portions 902 (VNoC) and a horizontal portion 904 (HNoC). Each VNoC 902 is disposed between PL regions 214. The HNoC 904 is disposed between the PL regions 214 and the I/O banks 910 (e.g., I/O blocks and/or transceivers corresponding to hardwired circuit blocks 210). The NoC 208 is connected to the memory interfaces 908 (e.g., hardwired circuit blocks 210). The PS 212 is coupled to the HNoC 904.[00159] In the example, the PS 212 includes a plurality of NMUs 702 coupled to the HNoC 904. The VNoC 902 includes both NMUs 702 and NSUs 704, which are disposed in the PL regions 214.
The memory interfaces 908 include NSUs 704 coupled to the HNoC 904. Both the HNoC 904 and the VNoC 902 include NPSs 706 connected by routing 708. In the VNoC 902, the routing 708 extends vertically. In the HNoC 904, the routing extends horizontally. In each VNoC 902, each NMU 702 is coupled to an NPS 706. Likewise, each NSU 704 is coupled to an NPS 706. NPSs 706 are coupled to each other to form a matrix of switches. Some NPSs 706 in each VNoC 902 are coupled to other NPSs 706 in the HNoC 904.
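Paragraphs [00156]-[00159] describe a physical channel 806 as a routed sequence of NPSs 706 between one NMU 702 and one NSU 704, optionally carrying several virtual channels 808. As a rough sketch under those definitions (the PhysicalChannel type, the buildChannel helper, and the string identifiers are invented for illustration):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical model of a physical channel 806: an ingress NMU, an egress
// NSU, the NPSs along the route, and the number of virtual channels 808
// multiplexed onto the physical channel.
struct PhysicalChannel {
    std::string nmu;               // ingress point, e.g. "NMU0" (invented name)
    std::string nsu;               // egress point, e.g. "NSU3" (invented name)
    std::vector<std::string> npss; // switches along the routed path
    int virtualChannels;           // virtual channels carried on this channel
};

inline PhysicalChannel buildChannel(std::string nmu, std::string nsu,
                                    std::vector<std::string> route, int vcs) {
    // A physical channel includes one or more NPSs and associated routing,
    // and supports at least one virtual channel.
    assert(!route.empty() && vcs >= 1);
    return PhysicalChannel{nmu, nsu, route, vcs};
}
```

A channel crossing from a VNoC into the HNoC would simply list vertical NPSs followed by horizontal ones in its route.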
[00160] Although only a single HNoC 904 is shown, in other examples, the NoC 208 can include more than one HNoC 904. In addition, while two VNoCs 902 are shown, the NoC 208 can include more than two VNoCs 902. Although memory interfaces 908 are shown by way of example, it is to be understood that other hardwired circuit blocks 210 can be used in place of, or in addition to, the memory interfaces 908.[00161] FIG. 10 illustrates an example method 1000 of programming the NoC 208. Though described independently of the other subsystems of the SoC 200, method 1000 may be included and/or used as part of a larger boot or programming process for SoC 200.[00162] At block 1002, a Platform Management Controller (PMC) implemented in the SoC 200 receives NoC programming data at boot time. The NoC programming data may be a part of a PDI. The PMC is responsible for managing the SoC 200. The PMC is capable of maintaining a safe and secure environment, booting the SoC 200, and managing the SoC 200 during normal operations.[00163] At block 1004, the PMC loads the NoC programming data to the registers 712 through the NPI 710 to create physical channels 806. In an example, the programming data can also include information for configuring routing tables in the NPSs 706. At block 1006, the PMC boots the SoC 200. In this manner, the NoC 208 includes at least configuration information for the physical channels 806 between NMUs 702 and NSUs 704. Remaining configuration information for the NoC 208 can be received during runtime, as described further below. In another example, all or a portion of the configuration information described below as being received during runtime can be received at boot time.[00164] FIG. 11 illustrates an example method 1100 of programming the NoC 208. At block 1102, the PMC receives NoC programming data during runtime. At block 1104, the PMC loads the programming data to NoC registers 712 through the NPI 710.
In an example, at block 1106, the PMC configures routing tables in the NPSs 706. At block 1108, the PMC configures QoS paths over the physical channels 806. At block 1110, the PMC configures address space mappings. At block 1112, the PMC configures ingress/egress interface protocol, width, and frequency. The QoS paths, address space mappings, routing tables, and ingress/egress configuration are discussed further below.
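The ordering of blocks 1102-1112 of method 1100 can be expressed as a simple step sequence. The PmcModel type and its log are invented for illustration; the sketch only records the order in which a PMC model would perform the described steps:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative only: method 1100 (blocks 1102-1112) as an ordered sequence
// of steps recorded by a hypothetical PMC model.
struct PmcModel {
    std::vector<std::string> log;
    void programNocAtRuntime() {
        log.push_back("receive NoC programming data");                        // block 1102
        log.push_back("load registers 712 through NPI 710");                  // block 1104
        log.push_back("configure NPS routing tables");                        // block 1106
        log.push_back("configure QoS paths");                                 // block 1108
        log.push_back("configure address space mappings");                    // block 1110
        log.push_back("configure ingress/egress protocol, width, frequency"); // block 1112
    }
};
```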
[00165] FIG. 12 illustrates an example data path 1200 through the NoC 208 between endpoint circuits. The data path 1200 includes an endpoint circuit 1202, an AXI master circuit 1204, an NMU 1206, NPSs 1208, an NSU 1210, an AXI slave circuit 1212, and an endpoint circuit 1214. The endpoint circuit 1202 is coupled to the AXI master circuit 1204. The AXI master circuit 1204 is coupled to the NMU 1206. In another example, the AXI master circuit 1204 is part of the NMU 1206.[00166] The NMU 1206 is coupled to an NPS 1208. The NPSs 1208 are coupled to each other to form a chain of NPSs 1208 (e.g., a chain of five NPSs 1208 in the present example). In general, there is at least one NPS 1208 between the NMU 1206 and the NSU 1210. The NSU 1210 is coupled to one of the NPSs 1208. The AXI slave circuit 1212 is coupled to the NSU 1210. In another example, the AXI slave circuit 1212 is part of the NSU 1210. The endpoint circuit 1214 is coupled to the AXI slave circuit 1212.[00167] The endpoint circuits 1202 and 1214 can each be a hardened circuit (e.g., a PS circuit, a hardwired circuit 210, one or more DPEs 204) or a circuit configured in the PL 214. The endpoint circuit 1202 functions as a master circuit and sends read/write requests to the NMU 1206. In the example, the endpoint circuits 1202 and 1214 communicate with the NoC 208 using an AXI protocol. While AXI is described in the example, it is to be understood that the NoC 208 may be configured to receive communications from endpoint circuits using other types of protocols known in the art. For purposes of clarity by example, the NoC 208 is described as supporting the AXI protocol herein. The NMU 1206 relays the request through the set of NPSs 1208 to reach the destination NSU 1210. The NSU 1210 passes the request to the attached AXI slave circuit 1212 for processing and distribution of data to the endpoint circuit 1214. The AXI slave circuit 1212 can send read/write responses back to the NSU 1210.
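The packetize/de-packetize halves of this data path can be sketched abstractly. The packet format, the destination-ID field, and the payload-splitting helper below are invented for illustration and are not the disclosed packet format; the sketch only shows that splitting a request into packets at the NMU and reassembling it at the NSU is lossless:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical packet: a destination ID (used by each switch to pick an
// output port) plus a payload fragment. Sizes and fields are invented.
struct Packet {
    uint16_t dest;
    std::vector<uint8_t> payload;
};

// NMU side: split request data into a stream of packets.
inline std::vector<Packet> packetize(uint16_t dest,
                                     const std::vector<uint8_t> &data,
                                     std::size_t maxPayload) {
    std::vector<Packet> pkts;
    for (std::size_t i = 0; i < data.size(); i += maxPayload) {
        std::size_t n = std::min(maxPayload, data.size() - i);
        Packet p;
        p.dest = dest;
        p.payload.assign(data.begin() + i, data.begin() + i + n);
        pkts.push_back(p);
    }
    return pkts;
}

// NSU side: reassemble the original data from the packet stream.
inline std::vector<uint8_t> depacketize(const std::vector<Packet> &pkts) {
    std::vector<uint8_t> out;
    for (const Packet &p : pkts)
        out.insert(out.end(), p.payload.begin(), p.payload.end());
    return out;
}
```

Responses travel the reverse path with the same pattern, per method 1300.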
The NSU 1210 can forward the responses to the NMU 1206 through the set of NPSs 1208. The NMU 1206 communicates the responses to the AXI master circuit 1204, which distributes the data to the endpoint circuit 1202.[00168] FIG. 13 illustrates an example method 1300 of processing read/write requests and responses. The method 1300 begins at block 1302, where the endpoint circuit 1202 sends a request (e.g., a read request or a write request) to the NMU 1206 through the AXI master 1204. At block 1304, the NMU 1206 processes the request. In an example, the NMU 1206 performs asynchronous
crossing and rate-matching between the clock domain of the endpoint circuit 1202 and the NoC 208. The NMU 1206 determines a destination address of the NSU 1210 based on the request. The NMU 1206 can perform address remapping in case virtualization is employed. The NMU 1206 also performs AXI conversion of the request. The NMU 1206 further packetizes the request into a stream of packets.[00169] At block 1306, the NMU 1206 sends the packets for the request to the NPSs 1208. Each NPS 1208 performs a table lookup for a target output port based on the destination address and routing information. At block 1308, the NSU 1210 processes the packets of the request. In an example, the NSU 1210 de-packetizes the request, performs AXI conversion, and performs asynchronous crossing and rate-matching from the NoC clock domain to the clock domain of the endpoint circuit 1214. At block 1310, the NSU 1210 sends the request to the endpoint circuit 1214 through the AXI slave circuit 1212. The NSU 1210 can also receive a response from the endpoint circuit 1214 through the AXI slave circuit 1212.[00170] At block 1312, the NSU 1210 processes the response. In an example, the NSU 1210 performs asynchronous crossing and rate-matching from the clock domain of the endpoint circuit 1214 to the clock domain of the NoC 208. The NSU 1210 also packetizes the response into a stream of packets. At block 1314, the NSU 1210 sends the packets through the NPSs 1208. Each NPS 1208 performs a table lookup for a target output port based on the destination address and routing information. At block 1316, the NMU 1206 processes the packets. In an example, the NMU 1206 de-packetizes the response, performs AXI conversion, and performs asynchronous crossing and rate-matching from the NoC clock domain to the clock domain of the endpoint circuit 1202. At block 1318, the NMU 1206 sends the response to the endpoint circuit 1202 through the AXI master circuit 1204.[00171] FIG.
14 illustrates an example implementation of an NMU 702. The NMU 702 includes an AXI master interface 1402, packetizing circuitry 1404, an address map 1406, de-packetizing circuitry 1408, QoS circuitry 1410, VC mapping circuitry 1412, and clock management circuitry 1414. The AXI master interface 1402 provides an AXI interface to the NMU 702 for an endpoint circuit. In other examples, a different protocol can be used and thus the NMU 702 can have a different master interface that complies with a selected protocol. The NMU 702 routes inbound traffic to the packetizing circuitry 1404, which generates packets
from the inbound data. The packetizing circuitry 1404 determines a destination ID from the address map 1406, which is used to route the packets. The QoS circuitry 1410 can provide ingress rate control to control the injection rate of packets into the NoC 208. The VC mapping circuitry 1412 manages QoS virtual channels on each physical channel. The NMU 702 can be configured to select which virtual channel the packets are mapped to. The clock management circuitry 1414 performs rate matching and asynchronous data crossing to provide an interface between the AXI clock domain and the NoC clock domain. The de-packetizing circuitry 1408 receives return packets from the NoC 208 and is configured to de-packetize the packets for output by the AXI master interface 1402.[00172] FIG. 15 illustrates an example implementation of an NSU 704. The NSU 704 includes an AXI slave interface 1502, clock management circuitry 1504, packetizing circuitry 1508, de-packetizing circuitry 1506, and QoS circuitry 1510. The AXI slave interface 1502 provides an AXI interface to the NSU 704 for an endpoint circuit. In other examples, a different protocol can be used and thus the NSU 704 can have a different slave interface that complies with a selected protocol. The NSU 704 routes inbound traffic from the NoC 208 to the de-packetizing circuitry 1506, which generates de-packetized data. The clock management circuitry 1504 performs rate matching and asynchronous data crossing to provide an interface between the AXI clock domain and the NoC clock domain. The packetizing circuitry 1508 receives return data from the slave interface 1502 and is configured to packetize the return data for transmission through the NoC 208. The QoS circuitry 1510 can provide ingress rate control to control the injection rate of packets into the NoC 208.[00173] FIG. 16 illustrates an example software architecture that is executable by the system described in connection with FIG. 1. For example, the architecture of FIG.
16 may be implemented as one or more of the program modules 120 of FIG. 1. The software architecture of FIG. 16 includes a DPE compiler 1602, a NoC compiler 1604, and a hardware compiler 1606. FIG. 16 illustrates an example of the various types of design data that may be exchanged among the compilers during operation (e.g., performing a design flow to implement an application in the SoC 200).[00174] The DPE compiler 1602 is capable of generating, from the application, one or more binaries that may be loaded into one or more DPEs and/or subsets of
DPEs 204 of DPE array 202. Each binary may include object code that is executable by the core(s) of the DPE(s), optionally application data, and configuration data for the DPEs. The NoC compiler 1604 is capable of generating a binary including the configuration data that is loaded into the NoC 208 to create data paths therein for the application. Hardware compiler 1606 is capable of compiling a hardware portion of the application to generate a configuration bitstream for implementation in the PL 214.[00175] FIG. 16 illustrates an example of how the DPE compiler 1602, the NoC compiler 1604, and the hardware compiler 1606 communicate with one another during operation. The respective compilers communicate in a coordinated manner by exchanging design data to converge to a solution. The solution is an implementation of the application within the SoC 200 that meets design metrics and constraints and includes common interfaces through which the various heterogeneous subsystems of the SoC 200 communicate.[00176] As defined within this disclosure, the term "design metric" defines an objective or requirement of an application to be implemented in SoC 200. Examples of design metrics include, but are not limited to, a power consumption requirement, a data throughput requirement, a timing requirement, or the like. Design metrics may be provided via user input, a file, or another manner to define higher or system level requirements of the application. As defined within this disclosure, a "design constraint" is a requirement that an EDA tool may or may not follow to achieve a design metric or requirement. Design constraints may be specified as compiler directives and typically specify lower level requirements or suggestions to be followed by the EDA tool (e.g., compiler(s)).
Design constraints may be specified by way of user input(s), files containing one or more design constraints, command line input, and the like.[00177] In one aspect, the DPE compiler 1602 is capable of generating a logical architecture and an SoC interface block solution for the application. The DPE compiler 1602, for example, is capable of generating the logical architecture based on high-level, user-defined metrics for the software portion of the application to be implemented in the DPE array 202. Examples of the metrics can include, but are not limited to, data throughput, latency, resource utilization, and power consumption. Based on the metrics and the application (e.g., the particular nodes
to be implemented in the DPE array 202), the DPE compiler 1602 is capable of generating the logical architecture.[00178] The logical architecture is a file or data structure that can specify hardware resource block information required by the various portions of the application. For example, the logical architecture can specify the number of DPEs 204 that are needed to implement the software portion of the application, any Intellectual Property (IP) cores needed in the PL 214 to communicate with the DPE array 202, any connections that need to be routed through the NoC 208, and port information for the DPE array 202, the NoC 208 and the IP cores in the PL 214. An IP core is a reusable block or portion of logic, cells, or IC layout design that may be used in a circuit design as a reusable block of circuitry capable of performing a particular function or operation. The IP core may be specified in a format that may be incorporated into a circuit design for implementation within the PL 214. While this disclosure refers to various types of cores, the term "core" without any other modifier is intended to refer to such different types of cores generically.[00179] Example 1 within this disclosure located at the end of the detailed description illustrates an example schema that may be used to specify the logical architecture for the application. Example 1 illustrates various types of information included in the logical architecture for the application. 
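The kinds of information the logical architecture captures can be sketched as a data structure. The field names, types, and the example values below are invented for illustration; the actual schema is the one shown in Example 1:

```cpp
#include <string>
#include <utility>
#include <vector>

// Invented sketch of what paragraph [00178] says a logical architecture
// specifies: DPE resource needs, PL IP cores, NoC-routed connections, and
// port information. None of these names come from the disclosure.
struct PortInfo {
    std::string name;
    std::string kind;  // "stream", "memory-mapped", or "parameter"
    bool isMaster;     // master or slave
    int dataWidth;     // bits (e.g., for PL IP core ports)
};

struct LogicalArchitecture {
    int numDpes;                        // DPEs needed for the software portion
    std::vector<std::string> plIpCores; // IP cores needed in the PL 214
    std::vector<std::pair<std::string, std::string>>
        nocConnections;                 // connections routed through the NoC 208
    std::vector<PortInfo> ports;        // DPE array / NoC / IP core ports
};
```

The hardware compiler can then operate on such a structure (rather than the application itself) when implementing the hardware portion.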
In one aspect, the hardware compiler 1606 is capable of implementing the hardware portion of the application based on, or using, the logical architecture and the SoC interface block solution as opposed to using the application itself.[00180] The port information for the DPE array 202 and the port information for the NoC 208 and the IP cores in the PL 214 may include the logical configuration of the ports, e.g., such as whether each port is a stream data port, a memory mapped port, or a parameter port, and whether the ports are masters or slaves. Other examples of port information for the IP cores include data width of the ports and frequency of operation. Connectivity among the DPE array 202, the NoC 208 and the IP cores in the PL 214 may be specified as logical connections between the ports of the respective hardware resource blocks specified in the logical architecture.[00181] The SoC interface block solution is a data structure or file that specifies a mapping of the connections in and out of the DPE array 202 to the physical data paths (e.g., physical resources) of the SoC interface block 206. For example, the
SoC interface block solution maps the particular logical connections used for data transfers in and out of the DPE array 202 to particular stream channels of the SoC interface block 206, e.g., to particular tiles, stream switches, and/or stream switch interfaces (e.g., ports) of the SoC interface block 206. Example 2 located following Example 1 toward the end of the detailed description illustrates an example schema for the SoC interface block solution for the application.[00182] In one aspect, the DPE compiler 1602 is capable of analyzing or simulating data traffic over the NoC 208 based on the application and the logical architecture. The DPE compiler 1602 is capable of providing the data transfer requirements of the software portion of the application, e.g., the "NoC traffic", to NoC compiler 1604. NoC compiler 1604 is capable of generating a routing for the data paths through the NoC 208 based on the NoC traffic received from the DPE compiler 1602. The result from the NoC compiler 1604, shown as the "NoC solution", may be provided to the DPE compiler 1602.[00183] In one aspect, the NoC solution may be an initial NoC solution that specifies only ingress and/or egress points of the NoC 208 to which nodes of the application that connect to the NoC 208 are to be connected. For example, more detailed routing and/or configuration data for the data paths within the NoC 208 (e.g., between ingress and egress points) may be excluded from the NoC solution for purposes of convergence of the compilers. Example 3 located following Example 2 toward the end of the detailed description illustrates an example schema for the NoC solution for the application.[00184] The hardware compiler 1606 is capable of operating on the logical architecture to implement the hardware portion of the application in the PL 214.
In the event the hardware compiler 1606 is unable to generate an implementation of the hardware portion of the application (e.g., using the logical architecture) that meets established design constraints (e.g., for timing, power, data throughput, etc.), the hardware compiler 1606 is capable of generating one or more SoC interface block constraints and/or receiving one or more user-specified SoC interface block constraints. The hardware compiler 1606 is capable of providing the SoC interface block constraints to the DPE compiler 1602 as requests. The SoC interface block constraints effectively remap one or more portions of the logical architecture to different stream channels of the SoC interface block 206. The SoC interface block constraints provided from the hardware compiler 1606 are more favorable for the
hardware compiler 1606 to generate an implementation of the hardware portion of the application in the PL 214 that meets the design metrics. Example 4 located following Example 3 toward the end of the detailed description illustrates example constraints for the SoC interface block and/or the NoC for the application.[00185] In another aspect, the hardware compiler 1606 is also capable of generating and providing NoC traffic to the NoC compiler 1604 based on the application and the logical architecture. The hardware compiler 1606, for example, may analyze or simulate the hardware portion of the application to determine the data traffic generated by the hardware portion of the design that will be conveyed over the NoC 208 to the PS 212, the DPE array 202, and/or other portions of the SoC 200. The NoC compiler 1604 is capable of generating and/or updating the NoC solution based on the information received from the hardware compiler 1606. The NoC compiler 1604 is capable of providing the NoC solution or an updated version thereof to the hardware compiler 1606 and also to the DPE compiler 1602. In this regard, the DPE compiler 1602 is capable of updating the SoC interface block solution and providing the updated solution to the hardware compiler 1606 in response to receiving a NoC solution or an updated NoC solution from NoC compiler 1604 and/or in response to receiving one or more SoC interface block constraints from the hardware compiler 1606. The DPE compiler 1602 generates the updated SoC interface block solution based on the SoC interface block constraint(s) received from the hardware compiler 1606 and/or from the updated NoC solution from NoC compiler 1604.[00186] It should be appreciated that the data flows among the compilers shown in the example of FIG. 16 are for purposes of illustration only. In this regard, the exchange of information among the compilers may be performed at various stages of the example design flows described within this disclosure. 
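The negotiation pattern of FIG. 16, in which the hardware compiler repeatedly requests updated solutions until its design metrics are met, can be sketched generically. The callback types and the integer "solution" stand in for the real solution data structures and are invented for illustration:

```cpp
#include <functional>

// Invented sketch of the convergence loop: the hardware compiler checks
// whether the current SoC interface block / NoC solution lets it meet its
// design metrics; if not, it sends constraints as a request and receives a
// regenerated solution, iterating up to a limit.
inline int convergeOnSolution(
        int initialSolution,
        const std::function<bool(int)> &meetsMetrics,      // hardware compiler check
        const std::function<int(int)> &regenerateSolution, // DPE/NoC compiler update
        int maxIterations) {
    int solution = initialSolution;
    for (int i = 0; i < maxIterations && !meetsMetrics(solution); ++i)
        solution = regenerateSolution(solution); // constraints sent as a request
    return solution;
}
```

In the disclosure's terms, regenerateSolution corresponds to the DPE compiler updating the SoC interface block solution (or the NoC compiler updating the NoC solution) from received constraints.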
In other aspects, the exchange of design data among the compilers may be performed in an iterative manner so that each compiler may continually refine the implementation of the part of the application handled by that compiler based on received information from the other compilers to converge to a solution.[00187] In one particular example, the hardware compiler 1606, after receiving the logical architecture and the SoC interface block solution from the DPE compiler 1602 and the NoC solution from the NoC compiler 1604, may determine that generating an implementation of the hardware portion of the application that meets
established design metrics is not possible. The initial SoC interface block solution generated by the DPE compiler 1602 is generated based on the DPE compiler's 1602 knowledge of the portion of the application to be implemented in the DPE array 202. Likewise, the initial NoC solution generated by the NoC compiler 1604 is generated based on the initial NoC traffic provided by the DPE compiler 1602 to the NoC compiler 1604. Example 5 located following Example 4 toward the end of the detailed description illustrates an example schema for the NoC traffic for the application. It should be appreciated that while schemas are used in Examples 1-5, other formatting and/or data structures may be used to specify the information illustrated.[00188] The hardware compiler 1606 attempts to perform an implementation flow on the hardware portion of the application including synthesis (if required), placement, and routing. As such, the initial SoC interface block solution and the initial NoC solution may result in a placement and/or routes within the PL 214 that do not meet established timing constraints. In other cases, the SoC interface block solution and the NoC solution may not have a sufficient number of physical resources such as wires to accommodate the data that must be conveyed resulting in congestion in the PL 214. In such cases, the hardware compiler 1606 is capable of generating one or more different SoC interface block constraints and/or receiving one or more user-specified SoC interface block constraints and providing the SoC interface block constraints to the DPE compiler 1602 as a request for regenerating the SoC interface block solution. Likewise, the hardware compiler 1606 is capable of generating one or more different NoC constraints and/or receiving one or more user-specified NoC constraints and providing the NoC constraints to the NoC compiler 1604 as a request for regenerating the NoC solution.
In this manner, the hardware compiler 1606 invokes the DPE compiler 1602 and/or the NoC compiler 1604.[00189] The DPE compiler 1602 is capable of taking the received SoC interface block constraints from the hardware compiler 1606 and updating the SoC interface block solution using the received SoC interface block constraints, if possible, and providing the updated SoC interface block solution back to the hardware compiler 1606. Similarly, the NoC compiler 1604 is capable of taking the received NoC constraints from the hardware compiler 1606 and updating the NoC solution using the received NoC constraints, if possible, and providing the updated NoC solution
back to the hardware compiler 1606. The hardware compiler 1606 may then continue the implementation flow to generate the hardware portion of the application for implementation within the PL 214 using the updated SoC interface block solution received from the DPE compiler 1602 and the updated NoC solution received from the NoC compiler 1604.[00190] In an aspect, the hardware compiler 1606 invoking the DPE compiler 1602 and/or the NoC compiler 1604 by providing one or more SoC interface block constraints and one or more NoC constraints respectively may be part of a validation process. The hardware compiler 1606, for example, is seeking validation from the DPE compiler 1602 and/or the NoC compiler 1604 that the SoC interface block constraints and the NoC constraints provided from the hardware compiler 1606 can be used or integrated into a routable SoC interface block solution and/or NoC solution.[00191] FIG. 17A illustrates an example of an application 1700 mapped onto an SoC 200 using a system as described in connection with FIG. 1. For purposes of illustration, only a subset of the different subsystems of the SoC 200 are shown. Application 1700 includes nodes A, B, C, D, E, and F having the connectivity shown. Example 6 below illustrates example source code that may be used to specify application 1700.

Example 6

using namespace cardano; // class library with graph building primitives
class radio : cardano::graph { // an example graph class
public:
  input_port in;
  output_port out;
  kernel a,b,c,d,e,f;
  radio() { // graph constructor
    a = kernel::create(polarclip);
    b = kernel::create(feedback);
    c = kernel::create(equalizer);
    d = kernel::create(fir_tap11);
    e = kernel::create(fir_tap7);
    f = kernel::create(scale);
    fabric<fpga>(a); fabric<fpga>(f);
    runtime<ratio>(b) = 0.6; runtime<ratio>(c) = 0.2;
    runtime<ratio>(d) = 0.8; runtime<ratio>(e) = 0.1;
    connect<stream, window<64,8> > ( a.out[0], b.in[0] );
    connect<window<32> > ( b.out[0], c.in[0] );
    connect<window<32, 24> > ( c.out[0], d.in[0] );
    connect<window<32, 16> > ( d.out[1], e.in[0] );
    connect<window<32, 8> > ( e.out[0], async(b.in[1]) );
    connect<window<16>, stream > ( d.out[0], f.in[0] );
    connect<stream> ( in, a.in[0] );
    connect<stream> ( f.out[0], out );
  }
};

radio mygraph; // top level testbench
simulation::platform<1,1> platform("in.txt", "out.txt");
connect<> net0(platform.src[0], mygraph.in);
connect<> net1(platform.sink[0], mygraph.out);

int main(void) { // control program for PS
  mygraph.init();
  mygraph.run();
  mygraph.end();
  return 0;
}

[00192] In one aspect, application 1700 is specified as a data flow graph that includes a plurality of nodes. Each node represents a computation, which corresponds to a function as opposed to a single instruction. The nodes are interconnected by edges that represent data flows. The hardware implementation of a node may only execute in response to receiving data from each of the inputs to that node. Nodes generally execute in a non-blocking manner. The data flow graph specified by application 1700 represents a parallel specification to be implemented
in the SoC 200 as opposed to a sequential program. The system is capable of operating on application 1700 (e.g., in graph form as illustrated in Example 6) to map the various nodes to the appropriate subsystems of the SoC 200 for implementation therein.[00193] In one example, application 1700 is specified in a high-level programming language (HLL) such as C and/or C++. As noted, though specified in an HLL, which is conventionally used to create sequential programs, application 1700, being a data flow graph, is a parallel specification. The system is capable of providing a class library that is used to build data flow graphs and, as such, application 1700. The data flow graph is defined by the user and compiled onto the architecture of the SoC 200. The class library may be implemented as a helper library with pre-defined classes and constructors for graphs, nodes, and edges that can be used to build application 1700. Application 1700 effectively executes on the SoC 200 and includes delegated objects that execute in the PS 212 of the SoC 200. The objects of application 1700 that execute in the PS 212 may be used to direct and monitor actual computations that are running on the SoC 200, e.g., in the PL 214, in the DPE array 202, and/or in hardwired circuit blocks 210.[00194] In accordance with the inventive arrangements described within this disclosure, accelerators (e.g., PL nodes) may be represented as objects in the data flow graph (e.g., application). The system is capable of automatically synthesizing the PL nodes and connecting the synthesized PL nodes for implementation in the PL 214. By comparison, in conventional EDA systems, users specify applications for hardware acceleration that utilize sequential semantics. The function that is hardware accelerated is specified through a function call.
The interface to the hardware accelerated function (e.g., the PL node in this example) is defined by the function call and the various arguments provided in the function call as opposed to the connections on the data flow graph.[00195] As illustrated in the source code of Example 6, nodes A and F are designated for implementation in the PL 214, while nodes B, C, D, and E are designated for implementation within the DPE array 202. Connectivity of the nodes is specified by the data transfer edges in the source code. The source code of Example 6 also specifies a top level testbench and a control program that is executed in the PS 212.
[00196] Returning to FIG. 17A, application 1700 is mapped onto the SoC 200. As pictured, nodes A and F are mapped onto the PL 214. The shaded DPEs 204-13 and 204-4 represent the DPEs 204 onto which nodes B, C, D, and E are mapped. For example, nodes B and C are mapped onto DPE 204-13, while nodes D and E are mapped onto DPE 204-4. Nodes A and F are implemented in the PL 214 and are connected to DPEs 204-13 and 204-4 via routing through the PL 214, particular tiles and switches of the SoC interface block 206, switches in the DPE interconnect of intervening DPEs 204, and using particular memories of selected neighboring DPEs 204.

[00197] The binary generated for DPE 204-13 includes the necessary object code for DPE 204-13 to implement the computations corresponding to nodes B and C and configuration data to establish data paths between DPE 204-13 and DPE 204-14 and between DPE 204-13 and DPE 204-3. The binary generated for DPE 204-4 includes the necessary object code for DPE 204-4 to implement the computations corresponding to nodes D and E and configuration data to establish data paths with DPE 204-14 and DPE 204-5.

[00198] Other binaries are generated for other DPEs 204 such as DPEs 204-3, 204-5, 204-6, 204-7, 204-8, and 204-9 to connect DPEs 204-13 and 204-4 to the SoC interface block 206. Appreciably, such binaries will include any object code should such other DPEs 204 implement other computations (have nodes of the application assigned thereto).

[00199] In this example, the hardware compiler 1606 is unable to generate an implementation of the hardware portion that meets timing constraints due to the long route connecting DPE 204-4 and node F. Within this disclosure, a particular state of the implementation of the hardware portion of the application may be referred to as a state of a hardware design, where the hardware design is generated and/or updated throughout an implementation flow.
The SoC interface block solution, for example, may allocate the signal crossing for node F to the tile of the SoC interface block below DPE 204-9. In that case, the hardware compiler 1606 is capable of providing a requested SoC interface block constraint to the DPE compiler 1602 requesting that the crossing through the SoC interface block 206 for node F be moved closer to DPE 204-4. For example, the requested SoC interface block constraint from the hardware compiler 1606 may request that the logical connections for DPE 204-4 be mapped to a tile immediately below DPE 204-4
within the SoC interface block 206. This remapping would allow the hardware compiler to place node F much closer to DPE 204-4 to improve timing.

[00200] FIG. 17B illustrates another example mapping of application 1700 onto the SoC 200. FIG. 17B illustrates an alternative, more detailed example than that illustrated in FIG. 17A. FIG. 17B, for example, illustrates the mapping of nodes of application 1700 to particular DPEs 204 of the DPE array 202, the connectivity established between the DPEs 204 to which nodes of application 1700 are mapped, the allocation of memory in memory modules of DPEs 204 to nodes of application 1700, and the mapping of data transfers to memory and core interfaces (e.g., 428, 430, 432, 434, 402, 404, 406, and 408) of DPEs 204 (represented with dual-headed arrows) and/or to stream switches in the DPE interconnect 306, as performed by the DPE compiler 1602.

[00201] In the example of FIG. 17B, memory modules 1702, 1706, 1710, 1714, and 1718 are shown along with cores 1704, 1708, 1712, 1716, and 1720. Cores 1704, 1708, 1712, 1716, and 1720 include program memories 1722, 1724, 1726, 1728, and 1730, respectively. In the upper row, core 1704 and memory module 1706 form a DPE 204, while core 1708 and memory module 1710 form another DPE 204. In the lower row, memory module 1714 and core 1716 form a DPE 204, while memory module 1718 and core 1720 form another DPE 204.

[00202] As illustrated, nodes A and F are mapped to the PL 214. Node A is connected to memory banks (e.g., shaded portions of memory banks) in memory module 1702 by way of stream switches and an arbiter in memory module 1702. Nodes B and C are mapped to core 1704. Instructions for implementing nodes B and C are stored in program memory 1722. Nodes D and E are mapped to core 1716, with instructions for implementing nodes D and E stored in program memory 1728.
Node B is allocated and accesses the shaded portions of memory banks in memory module 1702 via the core-memory interfaces, while node C is allocated and accesses the shaded portions of memory banks in memory module 1706 via the core-memory interfaces. Nodes B, C, and E are allocated and capable of accessing the shaded portions of memory banks in memory module 1714 via the core-memory interfaces. Node D is capable of accessing the shaded portions of memory banks in memory module 1718 via the core-memory interfaces. Node F is connected to memory module 1718 via an arbiter and stream switches.
[00203] FIG. 17B illustrates that connectivity between nodes of the application may be implemented using memory and/or core interfaces sharing memories among cores and using the DPE interconnect 306.[00204] FIG. 18 illustrates an example implementation of another application that has been mapped onto the SoC 200. For purposes of illustration, only a subset of the different subsystems of the SoC 200 are shown. In this example, connections to nodes A and F, each being implemented in the PL 214, are routed through the NoC 208. The NoC 208 includes ingress/egress points 1802, 1804, 1806, 1808, 1810, 1812, 1814, and 1816 (e.g., NMUs/NSUs). The example of FIG. 18 illustrates the case where node A is placed relatively close to ingress/egress point 1802, while node F, which accesses volatile memory 134, has a long route through the PL 214 to reach the ingress/egress point 1816. If the hardware compiler 1606 is unable to place node F closer to the ingress/egress point 1816, the hardware compiler 1606 may request an updated NoC solution from the NoC compiler 1604. In that case, the hardware compiler 1606 is capable of invoking the NoC compiler 1604 with a NoC constraint to generate an updated NoC solution specifying a different ingress/egress point for node F, e.g. the ingress/egress point 1812. A different ingress/egress point for node F would allow the hardware compiler 1606 to place node F closer to the newly designated ingress/egress point specified in the updated NoC solution and take advantage of the faster data paths available in the NoC 208.[00205] FIG. 19 illustrates another example software architecture 1900 executable by the system described in connection with FIG. 1. For example, architecture 1900 may be implemented as one or more of the program modules 120 of FIG. 1. In the example of FIG. 19, application 1902 is intended for implementation within SoC 200.[00206] In the example of FIG. 
19, a user is capable of interacting with a user interface 1906 provided by the system. In interacting with user interface 1906, the user may specify or provide an application 1902, performance and partitioning constraints 1904 for application 1902, and a base platform 1908.

[00207] Application 1902 may include a plurality of different portions each corresponding to a different subsystem available in the SoC 200. Application 1902 may be specified as described in connection with Example 6, for example. Application 1902 includes a software portion that is to be implemented in the DPE array 202 and a hardware portion that is to be implemented in the PL 214.
Application 1902 may optionally include an additional software portion that is to be implemented in the PS 212 and a portion that is to be implemented in the NoC 208.

[00208] The partitioning constraints (of the performance and partitioning constraints 1904) optionally specify the location or subsystem in which the various nodes of application 1902 are to be implemented. For example, partitioning constraints may indicate, on a per node basis for application 1902, whether the node is to be implemented in the DPE array 202 or in the PL 214. In other examples, location constraints are capable of providing more specific or detailed information to the DPE compiler 1602 to perform mapping of kernels to DPEs, networks or data flows to stream switches, and buffers to the memory modules and/or banks of memory modules of DPEs.

[00209] As an illustrative example, implementation of an application may require specific mapping. For instance, in an application where multiple copies of a kernel are to be implemented in the DPE array and each copy of the kernel operates on a different data set concurrently, it is preferable to have the data sets be located at the same relative address (location in memory) for every copy of the kernel executing in a different DPE of the DPE array. This may be accomplished using a location constraint. If this condition is not upheld by the DPE compiler 1602, each copy of the kernel must be programmed separately or independently rather than replicating the same programming across a plurality of different DPEs in the DPE array.

[00210] Another illustrative example is placing a location constraint on an application that utilizes the cascade interfaces among DPEs.
Since the cascade interfaces flow in one direction in each row, it may be preferable to have the start of a chain of DPEs coupled using the cascade interfaces not begin in a DPE having a missing cascade interface (e.g., a corner DPE) or in a position that cannot be easily replicated elsewhere in the DPE array (e.g., the last DPE in a row). The location constraint can force the start of the chain of DPEs of the application to begin at a particular DPE.

[00211] The performance constraints (of the performance and partitioning constraints 1904) may specify various metrics such as power requirements, latency requirements, timing, and/or data throughput to be achieved by the implementation of the node whether in the DPE array 202 or in the PL 214.
[00212] Base platform 1908 is a description of the infrastructure circuitry that is to be implemented in the SoC 200 that interacts with and/or connects to the circuitry on the circuit board to which the SoC 200 is coupled. The base platform 1908 may be synthesizable. Base platform 1908, for example, specifies the circuitry that is to be implemented within the SoC 200 that receives signals from outside of the SoC 200 (e.g., external to the SoC 200) and provides signals to systems and/or circuitry outside of the SoC 200. As an example, base platform 1908 may specify circuit resources such as a Peripheral Component Interconnect Express (PCIe) node for communicating with the host system 102 and/or computing node 100 of FIG. 1, a memory controller or controllers for accessing volatile memory 134 and/or non-volatile memory 136, and/or other resources such as internal interfaces coupling the DPE array 202 and/or the PL 214 with the PCIe node. The circuitry specified by base platform 1908 is available for any application that may be implemented in the SoC 200 given a particular type of circuit board. In this regard, base platform 1908 is specific to the particular circuit board to which the SoC 200 is coupled.

[00213] In one example, partitioner 1910 is capable of separating out the different portions of application 1902 based on the subsystem of SoC 200 in which each portion of application 1902 is to be implemented. In an example implementation, partitioner 1910 is implemented as a user-directed tool where the user provides input indicating which of the different portions (e.g., nodes) of application 1902 corresponds to each of the different subsystems of the SoC 200. The input provided, for example, may be the performance and partitioning constraints 1904.
For purposes of illustration, partitioner 1910 partitions application 1902 into a PS portion 1912 that is to execute on the PS 212, a DPE array portion 1914 that is to execute on the DPE array 202, a PL portion 1916 that is to be implemented in the PL 214, and a NoC portion 1936 that is implemented in the NoC 208. In one aspect, the partitioner 1910 is capable of generating each of the PS portion 1912, the DPE array portion 1914, the PL portion 1916, and the NoC portion 1936 as separate files or separate data structures.[00214] As pictured, each of the different portions corresponding to different subsystems is processed by a different compiler that is subsystem specific. For example, PS compiler 1918 is capable of compiling PS portion 1912 to generate one or more binaries that include object code executable by the PS 212. DPE compiler 1602 is capable of compiling DPE array portion 1914 to generate one or
more binaries that include object code executable by different DPEs 204, application data, and/or configuration data. Hardware compiler 1606 is capable of performing an implementation flow on PL portion 1916 to generate a configuration bitstream that can be loaded into the SoC 200 to implement PL portion 1916 in the PL 214. As defined herein, the term "implementation flow" means a process in which place and route and optionally synthesis are performed. The NoC compiler 1604 is capable of generating a binary specifying configuration data for the NoC 208 that, when loaded into the NoC 208, creates data paths therein connecting the various masters and slaves of the application 1902. These different outputs generated by compilers 1918, 1602, 1604, and/or 1606 are illustrated as binaries and configuration bitstreams 1924.[00215] In particular implementations, certain ones of compilers 1918, 1602, 1604, and/or 1606 are capable of communicating with one another during operation. By communicating at various stages during the design flow operating on application 1902, compilers 1918, 1602, 1604, and/or 1606 are capable of converging to a solution. In the example of FIG. 19, DPE compiler 1602 and hardware compiler 1606 are capable of communicating during operation while compiling portions 1914 and 1916, respectively, of application 1902. The hardware compiler 1606 and NoC compiler 1604 are capable of communicating during operation while compiling portions 1916 and 1936, respectively, of application 1902. The DPE compiler 1602 may also invoke the NoC compiler 1604 for obtaining a NoC routing solution and/or an updated NoC routing solution.[00216] The resulting binaries and configuration bitstreams 1924 may be provided to any of a variety of different targets. For example, the resulting binaries and configuration bitstream(s) 1924 may be provided to a simulation platform 1926, a hardware emulation platform 1928, an RTL simulation platform 1930, and/or to the target IC 1932. 
In the case of the RTL simulation platform 1930, the hardware compiler 1606 may be configured to output RTL for the PL portion 1916 that may be simulated in RTL simulation platform 1930.

[00217] Results obtained from the simulation platform 1926, the emulation platform 1928, the RTL simulation platform 1930, and/or from implementation of application 1902 in target IC 1932 may be provided to performance profiler and debugger 1934. Results from performance profiler and debugger 1934 may be
provided to user interface 1906, where the user may view the results of executing and/or simulating application 1902.

[00218] FIG. 20 illustrates an example method 2000 of performing a design flow to implement an application in the SoC 200. Method 2000 may be performed by a system as described in connection with FIG. 1. The system may execute a software architecture as described in connection with FIG. 16 or FIG. 19.

[00219] In block 2002, the system receives an application. The application may specify a software portion for implementation within the DPE array 202 of SoC 200 and a hardware portion for implementation within the PL 214 of the SoC 200.

[00220] In block 2004, the system is capable of generating a logical architecture for the application. For example, the DPE compiler 1602, as executed by the system, is capable of generating the logical architecture based on the software portion of the application to be implemented in the DPE array 202 and any high-level, user-specified metrics. The DPE compiler 1602 is also capable of generating an SoC interface block solution specifying a mapping of the connections in and out of the DPE array 202 to the physical data paths of the SoC interface block 206.

[00221] In another aspect, in generating the logical architecture and the SoC interface block solution, the DPE compiler 1602 is capable of generating an initial mapping of nodes of the application to be implemented in the DPE array 202 (referred to as "DPE nodes") to particular DPEs 204. The DPE compiler 1602 optionally generates an initial mapping and routing of the global memory data structures of the application to global memory (e.g., volatile memory 134) by providing the NoC traffic for the global memory to the NoC compiler 1604. As discussed, the NoC compiler 1604 is capable of generating a NoC solution from the received NoC traffic.
Using the initial mappings and routings, the DPE compiler 1602 is capable of simulating the DPE portion to validate the initial implementation of the DPE portion. The DPE compiler 1602 is capable of outputting the data generated by the simulation to the hardware compiler 1606 corresponding to each stream channel used in the SoC interface block solution.[00222] In one aspect, generating the logical architecture, as performed by the DPE compiler 1602, implements the partitioning previously described in connection with FIG. 19. The various example schemas illustrate how the different compilers (DPE compiler 1602, hardware compiler 1606, and the NoC compiler 1604) in FIG. 19 exchange decisions and constraints while compiling the portion of the
application that is allocated to each respective compiler. The various example schemas further illustrate how the decisions and/or constraints are propagated logically across the different subsystems of the SoC 200.

[00223] In block 2006, the system is capable of building a block diagram of the hardware portion. For example, the hardware compiler 1606, as executed by the system, is capable of generating a block diagram. The block diagram incorporates the hardware portion of the application, as specified by the logical architecture, with the base platform for the SoC 200. For example, the hardware compiler 1606 is capable of connecting the hardware portion and the base platform in generating the block diagram. Further, the hardware compiler 1606 is capable of generating the block diagram to connect IP cores corresponding to the hardware portion of the application to the SoC interface block based on the SoC interface block solution.

[00224] For example, each node in the hardware portion of the application, as specified by the logical architecture, may be mapped to a particular RTL core (e.g., a user-provided or specified portion of custom RTL) or an available IP core. With the mappings of the nodes to cores being specified by the user, the hardware compiler 1606 is capable of building the block diagram to specify the various circuit blocks of the base platform, any IP cores of the PL 214 needed to interface with the DPE array 202 per the logical architecture, and/or any additional user-specified IP cores and/or RTL cores that are to be implemented in the PL 214. Examples of the additional IP cores and/or RTL cores that may be manually inserted by the user include, but are not limited to, data-width conversion blocks, hardware buffers, and/or clock domain logic. In one aspect, each block of the block diagram can correspond to a particular core (e.g., circuit block) that is to be implemented in the PL 214.
The block diagram specifies the connectivity of the cores to be implemented in the PL and the connectivity of the cores with physical resources of the NoC 208 and/or the SoC interface block 206 as determined from the SoC interface block solution and the logical architecture.

[00225] In one aspect, the hardware compiler 1606 is also capable of creating logical connections between the cores of the PL 214 and the global memory (e.g., volatile memory 134) by creating NoC traffic as per the logical architecture and executing the NoC compiler 1604 to obtain the NoC solution. In one example, the hardware compiler 1606 is capable of routing the logical connections to validate the capacity of the PL 214 to implement the block diagram and the logical connections.
In another aspect, the hardware compiler 1606 is capable of using SoC interface block traces (e.g., described below in greater detail) with one or more data traffic generators as part of a simulation to validate the functionality of the block diagram with actual data traffic.[00226] In block 2008, the system performs an implementation flow on the block diagram. For example, the hardware compiler is capable of performing an implementation flow involving synthesis if needed, placement, and routing on the block diagram to generate a configuration bitstream that may be loaded into the SoC 200 to implement the hardware portion of the application in the PL 214.[00227] The hardware compiler 1606 is capable of performing the implementation flow on the block diagram using the SoC interface block solution and the NoC solution. For example, since the SoC interface block solution specifies particular stream channels of the SoC interface block 206 over which particular DPEs 204 communicate with the PL 214, the placer is capable of placing blocks of the block diagram that have connections to the DPEs 204 through the SoC interface block 206 close (e.g., within a particular distance) to the particular stream channels of the SoC interface block 206 to which the blocks are to connect. The ports of the blocks, for example, may be correlated with the stream channels specified by the SoC interface block solution. 
The hardware compiler 1606 is also capable of routing connections between the ports of blocks of the block diagram that connect to the SoC interface block 206 by routing signals input to and/or output from the ports to the BLIs of the PL 214 that connect to the particular stream channel(s) coupled to the ports as determined from the SoC interface block solution.[00228] Similarly, since the NoC solution specifies particular ingress/egress points to which circuit blocks in the PL 214 are to connect, the placer is capable of placing blocks of the block diagram that have connections to the NoC 208 close (e.g., within a particular distance) to the particular ingress/egress points to which the blocks are to connect. The ports of the blocks, for example, may be correlated with the ingress/egress points of the NoC solution. The hardware compiler 1606 is also capable of routing connections between the ports of blocks of the block diagram that connect to ingress/egress points of the NoC 208 by routing signals input to and/or output from the ports to the ingress/egress points of the NoC 208 logically coupled to the ports as determined from the NoC solution. The hardware compiler 1606 is further capable of routing any signals that connect ports of blocks
in the PL 214 to one another. In some applications, however, the NoC 208 may not be used to convey data between the DPE array 202 and the PL 214.[00229] In block 2010, during the implementation flow, the hardware compiler optionally exchanges design data with the DPE compiler 1602 and/or the NoC compiler 1604. For example, the hardware compiler 1606, the DPE compiler 1602, and the NoC compiler 1604 are capable of exchanging design data as described in connection with FIG. 16 on a one-time basis, as needed, or on an iterative or repeated basis. Block 2010 may be optionally performed. The hardware compiler 1606 is capable of exchanging design data with the DPE compiler 1602 and/or the NoC compiler 1604 prior to, or during, building of the block diagram, prior to and/or during placement, and/or prior to and/or during routing, for example.[00230] In block 2012, the system exports the final hardware design generated by the hardware compiler 1606 as a hardware package. The hardware package contains the configuration bitstream used to program the PL 214. The hardware package is generated according to the hardware portion of the application.[00231] In block 2014, the user configures a new platform using the hardware package. The user initiates generation of the new platform based on the user- provided configuration. The platform, as generated by the system using the hardware package, is used to compile the software portion of the application.[00232] In block 2016, the system compiles the software portion of the application for implementation in the DPE array 202. For example, the system executes the DPE compiler 1602 to generate one or more binaries that may be loaded into the various DPEs 204 of the DPE array 202. 
The binaries for the DPEs 204 can include the object code, application data, and the configuration data for the DPEs 204. Once the configuration bitstream and binaries are generated, the system is capable of loading the configuration bitstream and binaries into the SoC 200 to implement the application therein.

[00233] In another aspect, the hardware compiler 1606 is capable of providing the hardware implementation to the DPE compiler 1602. The DPE compiler 1602 is capable of extracting the final SoC interface block solution that was relied on by the hardware compiler 1606 in performing the implementation flow. The DPE compiler 1602 performs the compilation using the same SoC interface block solution used by the hardware compiler 1606.
[00234] In the example of FIG. 20, each portion of the application is solved by a subsystem-specific compiler. The compilers are capable of communicating design data, e.g., constraints and/or proposed solutions, to ensure that interfaces between the various subsystems (e.g., the SoC interface block), as implemented for the application, are compliant and consistent. Though not specifically shown in FIG. 20, the NoC compiler 1604 may also be invoked to generate a binary for programming the NoC 208 if used in the application.

[00235] FIG. 21 illustrates another example method 2100 of performing a design flow to implement an application in the SoC 200. Method 2100 may be performed by a system as described in connection with FIG. 1. The system may execute a software architecture as described in connection with FIG. 16 or FIG. 19. Method 2100 may begin in block 2102, where the system receives an application. The application may be specified as a data flow graph to be implemented in the SoC 200. The application may include a software portion for implementation in the DPE array 202, a hardware portion for implementation in the PL 214, and data transfers for implementation in the NoC 208 of the SoC 200. The application may also include a further software portion for implementation in the PS 212.

[00236] In block 2104, the DPE compiler 1602 is capable of generating a logical architecture, an SoC interface block solution, and SoC interface block traces from the application. The logical architecture may be based on the DPEs 204 required to implement the software portion of the application designated for implementation within the DPE array 202 and any IP cores to be implemented in the PL 214 needed to interface with the DPEs 204. As noted, the DPE compiler 1602 is capable of generating an initial DPE solution in which the DPE compiler 1602 performs an initial mapping of nodes (of the software portion of the application) to the DPE array 202.
The DPE compiler 1602 is capable of generating an initial SoC interface block solution that maps the logical resources to physical resources (e.g., stream channels) of the SoC interface block 206. In one aspect, the SoC interface block solution may be generated using an initial NoC solution generated by the NoC compiler 1604 from the data transfers. The DPE compiler 1602 is further capable of simulating the initial DPE solution with the SoC interface block solution to simulate data flows through the SoC interface block 206. The DPE compiler 1602 is capable of capturing the data transfers through the SoC interface block
during the simulation as "SoC interface block traces" for subsequent use during the design flow illustrated in FIG. 21.[00237] In block 2104, the hardware compiler 1606 generates a block diagram of the hardware portion of the application to be implemented in the PL 214. The hardware compiler 1606 generates the block diagram based on the logical architecture and the SoC interface block solution and, optionally, additional IP cores specified by the user that are to be included in the block diagram with the circuit blocks specified by the logical architecture. In one aspect, the user manually inserts such additional IP cores and connects the IP cores to the other circuit blocks of the hardware description specified in the logical architecture.[00238] In block 2106, the hardware compiler 1606 optionally receives one or more user-specified SoC interface block constraints and provides the SoC interface block constraints to the DPE compiler 1602.[00239] In one aspect, prior to implementing the hardware portion of the application, the hardware compiler 1606 is capable of evaluating the physical connections defined between the NoC 208, the DPE array 202, and the PL 214 based on the block diagram and the logical architecture. The hardware compiler 1606 is capable of performing an architecture simulation of the block diagram to evaluate the connections between the block diagram (e.g., PL portion of the design) and the DPE array 202 and/or the NoC 208. For example, the hardware compiler 1606 is capable of performing a simulation using the SoC interface block traces generated by the DPE compiler 1602. As an illustrative and non-limiting example, the hardware compiler 1606 is capable of performing a SystemC simulation of the block diagram. 
In the simulation, data traffic is generated for the block diagram and for the stream channels (e.g., physical connections) between the PL 214 and the DPE array 202 (by way of the SoC interface block 206) and/or the NoC 208 using the SoC interface block traces. The simulation generates system performance and/or debugging information that is provided to the hardware compiler 1606.[00240] The hardware compiler 1606 is capable of evaluating the system performance data. If, for example, the hardware compiler 1606 determines, from the system performance data, that one or more design metrics for the hardware portion of the application are not met, the hardware compiler 1606 is capable of generating one or more SoC interface block constraints under the direction of the
user. The hardware compiler 1606 provides the SoC interface block constraints as a request to the DPE compiler 1602.

[00241] The DPE compiler 1602 is capable of performing an updated mapping of the DPE portion of the application to DPEs 204 of the DPE array 202 that utilizes the SoC interface block constraints provided by the hardware compiler 1606. If, for example, the application is implemented where the hardware portion in the PL 214 connects to the DPE array 202 directly through the SoC interface block 206 (e.g., without traversing through the NoC 208), the DPE compiler 1602 may generate an updated SoC interface block solution for the hardware compiler 1606 without involving the NoC compiler 1604.

[00242] In block 2108, the hardware compiler 1606 optionally receives one or more user-specified NoC constraints and provides the NoC constraints to the NoC compiler 1604 for validation. The hardware compiler 1606 may also provide NoC traffic to the NoC compiler 1604. The NoC compiler 1604 is capable of generating an updated NoC solution using the received NoC constraints and/or the NoC traffic. If, for example, the application is implemented where the hardware portion of the PL 214 connects to the DPE array 202, the PS 212, the hardwired circuit blocks 210, or the volatile memory 134 through the NoC 208, the hardware compiler 1606 is capable of calling the NoC compiler 1604 by providing the NoC constraints and/or NoC traffic to the NoC compiler 1604. The NoC compiler 1604 is capable of updating routing information for data paths through the NoC 208 as the updated NoC solution. The updated routing information may specify updated routes and particular ingress/egress points for the routes. The hardware compiler 1606 may obtain the updated NoC solution and, in response, generate updated SoC interface block constraints that are provided to the DPE compiler 1602. The process may be iterative in nature.
The DPE compiler 1602 and the NoC compiler 1604 may operate concurrently as illustrated by blocks 2106 and 2108.[00243] In block 2110, the hardware compiler 1606 is capable of performing synthesis on the block diagram. In block 2112, the hardware compiler 1606 performs place and route on the block diagram. In block 2114, while performing place and/or route, the hardware compiler is capable of determining whether the implementation of the block diagram, e.g., the current state of implementation of the hardware portion (e.g., the hardware design) at any of these different stages of the implementation flow, meets design metrics for the hardware portion of the
application. For example, the hardware compiler 1606 is capable of determining whether the current implementation meets the design metrics prior to placement, during placement, prior to route, or during route. In response to determining that the current implementation of the hardware portion of the application does not meet a design metric, method 2100 continues to block 2116. Otherwise, method 2100 continues to block 2120.[00244] In block 2116, the hardware compiler is capable of providing one or more user-specified SoC interface block constraints to the DPE compiler 1602. The hardware compiler 1606 is capable of optionally providing one or more NoC constraints to the NoC compiler 1604. As discussed, the DPE compiler 1602 generates an updated SoC interface block solution using the SoC interface block constraint(s) received from the hardware compiler 1606. The NoC compiler 1604 optionally generates an updated NoC solution. For example, the NoC compiler 1604 can be invoked if one or more data paths between the DPE array 202 and the PL 214 flow through the NoC 208. In block 2118, the hardware compiler 1606 receives the updated SoC interface block solution and optionally the updated NoC solution. After block 2118, method 2100 continues to block 2112 where the hardware compiler 1606 continues to perform place and/or route using the updated SoC interface block solution and optionally the updated NoC solution.[00245] FIG. 21 illustrates that the exchange of design data between the compilers may be performed in an iterative manner. For example, at any of a plurality of different points during the place and/or route stages, the hardware compiler 1606 is capable of determining whether the current state of the implementation of the hardware portion of the application meets the established design metrics.
If not, the hardware compiler 1606 may initiate the exchange of design data as described to obtain an updated SoC interface block solution and an updated NoC solution that the hardware compiler 1606 uses for purposes of placement and routing. It should be appreciated that the hardware compiler 1606 need only invoke the NoC compiler 1604 in cases where the configuration of the NoC 208 is to be updated (e.g., where data from the PL 214 is provided to and/or received from other circuit blocks through the NoC 208).[00246] In block 2120, in the case where the hardware portion of the application meets the design metrics, the hardware compiler 1606 generates a configuration bitstream specifying an implementation of the hardware portion within the PL 214.
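The iterative exchange of blocks 2110-2118 might be sketched as the following loop. The compiler objects and their method names here are illustrative assumptions made for this sketch, not an actual tool API.

```python
def run_implementation_flow(hw, dpe, noc, max_iterations=10):
    """Hypothetical sketch of method 2100's place-and-route loop: retry with
    updated SoC interface block (and optionally NoC) solutions until the
    hardware portion meets its design metrics."""
    sib_solution = dpe.initial_sib_solution()
    noc_solution = noc.initial_noc_solution()
    for _ in range(max_iterations):
        hw.place_and_route(sib_solution, noc_solution)   # blocks 2112-2114
        if hw.meets_design_metrics():
            return hw.generate_bitstream()               # block 2120
        # Blocks 2116-2118: request updated solutions and try again.
        sib_solution = dpe.validate(hw.derive_sib_constraints())
        if hw.uses_noc():
            noc_solution = noc.validate(hw.noc_traffic())
    return None  # design metrics still unmet after the iteration budget
```

The iteration bound stands in for whatever effort limit a real flow would impose; the text itself leaves the stopping condition to the hardware compiler's metric checks.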
The hardware compiler 1606 is further capable of providing the final SoC interface block solution (e.g., the SoC interface block solution used for place and route) to the DPE compiler 1602 and providing the final NoC solution that may have been used for place and route to the NoC compiler 1604.[00247] In block 2122, the DPE compiler 1602 generates binaries for programming the DPEs 204 of the DPE array 202. The NoC compiler 1604 generates a binary for programming the NoC 208. For example, throughout blocks 2106, 2108, and 2116, the DPE compiler 1602 and the NoC compiler 1604 may perform incremental validation functions where the SoC interface block solutions and the NoC solutions used are generated based on validation procedures that may be performed in less runtime than if complete solutions for the SoC interface block and the NoC were determined. In block 2122, the DPE compiler 1602 and the NoC compiler 1604 may generate the final binaries used to program the DPE array 202 and the NoC 208, respectively.[00248] In block 2124, the PS compiler 1918 generates the PS binary. The PS binary includes the object code that is executed by the PS 212. The PS binary, for example, implements the control program executed by the PS 212 to monitor operation of the SoC 200 with the application implemented therein. The DPE compiler 1602 may also generate a DPE array driver that may be compiled by the PS compiler 1918 and executed by the PS 212 to read and/or write to the DPEs 204 of the DPE array 202.[00249] In block 2126, the system is capable of deploying the configuration bitstream and the binaries in the SoC 200. The system, for example, is capable of combining the various binaries and the configuration bitstream into a PDI that may be provided to the SoC 200 and loaded into the SoC 200 to implement the application therein.[00250] FIG. 22 illustrates an example method 2200 of communication between the hardware compiler 1606 and the DPE compiler 1602.
Method 2200 presents an example of how communications between the hardware compiler 1606 and the DPE compiler 1602 as described in connection with FIGs. 16, 19, 20, and 21 may be handled. Method 2200 illustrates an example implementation of a validation call (e.g., a validation procedure) conducted between the hardware compiler 1606 and the DPE compiler 1602. The example of method 2200 provides an alternative to performing full place and route for the DPE array 202 and/or the NoC 208 to
generate updated SoC interface block solutions in response to SoC interface block constraints provided from the hardware compiler 1606. Method 2200 illustrates an incremental approach where re-routing is attempted prior to initiating a mapping and routing of the software portion of the application.[00251] Method 2200 may begin in block 2202 where the hardware compiler 1606 provides one or more SoC interface block constraints to the DPE compiler 1602. The hardware compiler 1606, for example, during the implementation flow and in response to determining that a design metric for the hardware portion of the application is not or will not be met, may receive one or more user-specified SoC interface block constraints and/or generate one or more SoC interface block constraints. The SoC interface block constraints may specify a preferred mapping of the logical resource(s) to the physical stream channels of the SoC interface block 206 that is expected to result in improved Quality of Result (QoS) for the hardware portion of the application.[00252] The hardware compiler 1606 provides the SoC interface block constraints to the DPE compiler 1602. The SoC interface block constraints provided from the hardware compiler 1606 may fall into two different categories. The first category of SoC interface block constraint is a hard constraint. The second category of SoC interface block constraint is a soft constraint. Hard constraints are design constraints that must be satisfied to implement the application within the SoC 200. Soft constraints are design constraints that may be violated in the implementation of the application for the SoC 200.[00253] In one example, hard constraints are user-specified constraints for the hardware portion of the application to be implemented in the PL 214. The hard constraints may include any available constraint types such as location, power, timing, etc., that are user-specified constraints. 
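The two constraint categories just described might be modeled as follows. The class and field names are assumptions made for this sketch rather than terms from any actual tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SibConstraint:
    logical_resource: str  # e.g., a DPE-to-PL data stream
    stream_channel: int    # requested physical channel of the SoC interface block
    hard: bool             # hard constraints must hold; soft ones are best effort

def split_constraints(constraints):
    """Separate hard constraints (must be satisfied) from soft constraints
    (may be violated) ahead of the validation flow."""
    hard = [c for c in constraints if c.hard]
    soft = [c for c in constraints if not c.hard]
    return hard, soft
```

A validation flow would then treat the two lists differently: hard constraints always bind, while soft constraints may be dropped or weakened.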
Soft constraints may include any available constraint that is generated by the hardware compiler 1606 and/or the DPE compiler 1602 throughout the implementation flow such as a constraint specifying a particular mapping of logical resource(s) to stream channels of the SoC interface block 206 as described.[00254] In block 2204, the DPE compiler 1602, in response to receiving the SoC interface block constraint(s), initiates a validation process to incorporate the received SoC interface block constraints in generating an updated SoC interface block solution. In block 2206, the DPE compiler 1602 is capable of differentiating
between hard constraint(s) and soft constraint(s) received from the hardware compiler 1606 relating to the hardware portion of the application.[00255] In block 2208, the DPE compiler 1602 routes the software portion of the application while following both the hard constraint(s) and the soft constraint(s) provided from the hardware compiler. The DPE compiler 1602, for example, is capable of routing connections among the DPEs 204 of the DPE array 202 and the data paths between the DPEs 204 and the SoC interface block 206 to determine which stream channels (e.g., tiles, stream switches, and ports) of the SoC interface block 206 are used for data path crossings between the DPE array 202 and the PL 214 and/or NoC 208. If the DPE compiler 1602 successfully routes the software portion of the application for implementation in the DPE array 202 while following both of the hard constraint(s) and the soft constraint(s), method 2200 continues to block 2218. If the DPE compiler 1602 is not able to generate a route for the software portion of the application in the DPE array while following both of the hard constraint(s) and the soft constraint(s), e.g., the constraints are un-routable, method 2200 continues to block 2210.[00256] In block 2210, the DPE compiler 1602 routes the software portion of the application while following only the hard constraint(s). In block 2210, the DPE compiler 1602 ignores the soft constraint(s) for purposes of the routing operation. If the DPE compiler 1602 successfully routes the software portion of the application for implementation in the DPE array 202 while following only the hard constraint(s), method 2200 continues to block 2218. 
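The fallback order of blocks 2208 through 2216 can be sketched as below. Here `route` and `map_and_route` stand in for the real routing and mapping engines and are assumed to return a routing on success or `None` on failure.

```python
def validate_constraints(route, map_and_route, hard, soft):
    """Try routing with all constraints, fall back to hard constraints only,
    then remap and reroute; raise if every attempt fails (block 2216)."""
    result = route(hard + soft)             # block 2208: honor everything
    if result is None:
        result = route(hard)                # block 2210: ignore soft constraints
    if result is None:
        result = map_and_route(hard, soft)  # blocks 2212-2214: remap, reroute
    if result is None:
        raise RuntimeError("validation failed")
    return result
```

The point of trying routing alone first, as the text notes, is that re-routing is much cheaper than a full re-mapping of DPE nodes.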
If the DPE compiler 1602 is not able to generate a route for the software portion of the application in the DPE array 202 while following only the hard constraint(s), method 2200 continues to block 2212.[00257] Blocks 2208 and 2210 illustrate an approach for the validation operation that seeks to use the SoC interface block constraint(s) provided from the hardware compiler 1606 to create an updated SoC interface block solution in less time than were a full map (e.g., place) and route of the DPE nodes to be performed. As such, blocks 2208 and 2210 involve only routing without attempting to map (e.g., remap) or "place" the DPE nodes to DPEs 204 of the DPE array 202.[00258] Method 2200 continues to block 2212 in the case where routing alone is unable to arrive at an updated SoC interface block solution using the SoC interface block constraint(s) from the hardware compiler. In block 2212, the DPE compiler 1602 is capable of mapping the software portion of the application to DPEs in the
DPE array 202 using both of the hard constraint(s) and the soft constraint(s). The DPE compiler 1602 is also programmed with the architecture (e.g., connectivity) of the SoC 200. The DPE compiler 1602 performs the actual assignment of logical resources to physical channels of the SoC interface block 206 (e.g., to stream channels) and is also capable of modeling the architectural connectivity of the SoC 200.[00259] As an example, consider DPE node A communicating with a PL node B. Each block of the block diagram can correspond to a particular core (e.g., circuit block) that is to be implemented in the PL 214. PL node B communicates with DPE node A through a physical channel X in the SoC interface block 206. Physical channel X carries the data stream(s) between DPE node A and PL node B. The DPE compiler 1602 is capable of mapping DPE node A to a particular DPE Y so that the distance between DPE Y and the physical channel X is minimized.[00260] In some implementations of the SoC interface block 206, one or more of the tiles included therein are not connected to the PL 214. The unconnected tiles may be a result of the placement of particular hardwired circuit blocks 210 in and/or around the PL 214. This architecture, e.g., with unconnected tiles in the SoC interface block 206, complicates routing between the SoC interface block 206 and the PL 214. The connectivity information regarding unconnected tiles is modeled in the DPE compiler 1602. The DPE compiler 1602, as part of performing mapping, is capable of selecting DPE nodes that have connections with the PL 214. The DPE compiler 1602, as part of performing mapping, is capable of minimizing the number of selected DPE nodes that are mapped to DPEs 204 in columns of the DPE array 202 immediately above the unconnected tiles of the SoC interface block 206. 
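The distance-minimizing, column-avoiding preference just described can be illustrated with a toy selection function. The coordinate model and the Manhattan-distance metric are assumptions made for this sketch.

```python
def best_dpe(channel_col, num_cols, num_rows, unconnected_cols, connects_to_pl):
    """Pick the (column, row) of the DPE closest to the SoC interface block
    tile in `channel_col`, keeping PL-connected nodes out of columns that sit
    above unconnected tiles."""
    candidates = []
    for col in range(num_cols):
        if connects_to_pl and col in unconnected_cols:
            continue  # reserve these columns for DPE-to-DPE-only nodes
        for row in range(num_rows):
            # Hops across columns plus hops down to the interface tile row.
            dist = abs(col - channel_col) + (row + 1)
            candidates.append((dist, col, row))
    _, col, row = min(candidates)
    return col, row
```

In the DPE node A / PL node B example above, `channel_col` would be the column of physical channel X, and the chosen DPE plays the role of DPE Y.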
The DPE compiler 1602 maps DPE nodes that do not have connections (e.g., direct connections) to the PL 214 (e.g., nodes that instead connect to other DPEs 204) to the columns of the DPE array 202 positioned above the unconnected tiles of the SoC interface block 206.[00261] In block 2214, the DPE compiler 1602 routes the remapped software portion of the application while following only the hard constraint(s). If the DPE compiler 1602 successfully routes the remapped software portion of the application for implementation in the DPE array 202 while following only the hard constraint(s), method 2200 continues to block 2218. If the DPE compiler 1602 is not able to generate a route for the software portion of the application in the DPE array 202
while following only the hard constraint(s), method 2200 continues to block 2216. In block 2216, the DPE compiler 1602 indicates that the validation operation failed. The DPE compiler 1602 may output a notification and may provide the notification to the hardware compiler 1606.[00262] In block 2218, the DPE compiler 1602 generates an updated SoC interface block solution and a score for the updated SoC interface block solution. The DPE compiler 1602 generates the updated SoC interface block solution based on the updated routing or the updated mapping and routing determined in block 2208, block 2210, or blocks 2212 and 2214.[00263] The score generated by the DPE compiler 1602 indicates the quality of the SoC interface block solution based on the mapping and/or routing operations performed. In one example implementation, the DPE compiler 1602 determines the score based on how many soft constraints were not met and the distance between the stream channel requested in the soft constraint and the actual channel assigned in the updated SoC interface block solution. The number of soft constraints not met and the distance, for example, both may be inversely proportional to the score.[00264] In another example implementation, the DPE compiler 1602 determines the score based on the quality of the updated SoC interface block solution using one or more design cost metrics. These design cost metrics may include the number of data movements supported by the SoC interface block solution, a memory conflict cost, and the latency of the routes. In one aspect, the number of data movements in the DPE array 202 may be quantified by the number of DMA transfers used in the DPE array 202 in addition to those needed to transfer data across the SoC interface block 206. The memory conflict cost may be determined based on the number of concurrent accessing circuits (e.g., DPE or DMA) for each memory bank. 
The latency of the routes may be quantified by the minimum number of cycles required to transfer the data between the SoC interface block 206 ports and the individual source or destination DPE 204. The DPE compiler 1602 determines a higher score when the design cost metrics are lower (e.g., a sum of the design cost metrics are lower).[00265] In another example implementation, the total score of an updated SoC interface block solution is computed as a fraction (e.g., 80/100) where the numerator is reduced from 100 by the sum of the number of additional DMA
transfers, the number of concurrent accessing circuits for each memory bank in excess of two, and the number of hops needed for the routes between the SoC interface block 206 ports and the DPE 204 cores.[00266] In block 2220, the DPE compiler 1602 provides the updated SoC interface block solution and the score to the hardware compiler 1606. The hardware compiler 1606 is capable of evaluating the various SoC interface block solutions received from the DPE compiler 1602 based on the score of each respective SoC interface block solution. In one aspect, the hardware compiler 1606, for example, is capable of retaining prior SoC interface block solutions. The hardware compiler 1606 is capable of comparing the score of the updated SoC interface block solution with the score of a previous solution (e.g., an immediately prior SoC interface block solution) and using the updated SoC interface block solution if the score of the updated SoC interface block solution exceeds the score of the prior SoC interface block solution.[00267] In another example implementation, the hardware compiler 1606 receives an SoC interface block solution from the DPE compiler 1602 with a score of 80/100. The hardware compiler 1606 is unable to arrive at an implementation of the hardware portion of the application within the PL 214 and provides one or more SoC interface block constraints to the DPE compiler 1602. The updated SoC interface block solution received by the hardware compiler 1606 from the DPE compiler 1602 has a score of 20/100. In that case, in response to determining that the score of the newly received SoC interface block solution does not exceed (e.g., is lower than) the score of the prior SoC interface block solution, the hardware compiler 1606 relaxes one or more of the SoC interface block constraints (e.g., soft constraints) and provides the SoC interface block constraints, including the relaxed constraint(s), to the DPE compiler 1602.
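The fractional score of paragraph [00265] suggests a computation along the following lines. The exact weighting is an assumption extrapolated from the 80/100 example in the text.

```python
def score_sib_solution(extra_dma_transfers, accessors_per_bank, route_hops):
    """Score an SoC interface block solution out of 100, deducting the extra
    DMA transfers, the concurrent accessors per memory bank beyond two, and
    the hops needed by the routes."""
    conflict_penalty = sum(max(0, n - 2) for n in accessors_per_bank)
    return max(0, 100 - extra_dma_transfers - conflict_penalty - sum(route_hops))
```

For instance, 4 extra DMA transfers, one bank with three concurrent accessors, and routes totaling 15 hops would yield 100 - 4 - 1 - 15 = 80.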
The DPE compiler 1602 attempts to generate another SoC interface block solution that, in view of the relaxed design constraint(s), has a score higher than 20/100 and/or 80/100.[00268] In another example, the hardware compiler 1606 may choose to use a prior SoC interface block solution with a higher or highest score. The hardware compiler 1606 may revert to an earlier SoC interface block solution at any point such as, for example, in response to receiving an SoC interface block solution having a lower score than an immediately prior SoC interface block solution or in response to receiving an SoC interface block solution with a lower score than a
prior SoC interface block solution after one or more of the SoC interface block constraints have been relaxed.[00269] FIG. 23 illustrates an example method 2300 of handling SoC interface block solutions. Method 2300 may be performed by the hardware compiler 1606 to evaluate received SoC interface block solution(s) and select an SoC interface block solution, referred to as the current best SoC interface block solution, for use in performing the implementation flow on the hardware portion of the application.[00270] In block 2302, the hardware compiler 1606 receives an SoC interface block solution from the DPE compiler 1602. The SoC interface block solution received in block 2302 may be the initial or first SoC interface block solution provided from the DPE compiler 1602. In providing SoC interface block solutions to the hardware compiler 1606, the DPE compiler 1602 further provides the score for the SoC interface block solution. At least initially, the hardware compiler 1606 selects the first SoC interface block solution to be the current best SoC interface block solution.[00271] In block 2304, the hardware compiler 1606 optionally receives one or more hard SoC interface block constraints from the user. In block 2306, the hardware compiler is capable of generating one or more soft SoC interface block constraints for implementing the hardware portion of the application. The hardware compiler generates the soft SoC interface block constraints in an effort to meet hardware design metrics.[00272] In block 2308, the hardware compiler 1606 sends the SoC interface block constraints (e.g., both hard and soft) to the DPE compiler 1602 for validation. In response to receiving the SoC interface block constraints, the DPE compiler is capable of generating an updated SoC interface block solution based on the SoC interface block constraints received from the hardware compiler 1606.
The DPE compiler 1602 provides the updated SoC interface block solution to the hardware compiler 1606. Accordingly, in block 2310, the hardware compiler receives the updated SoC interface block solution.[00273] In block 2312, the hardware compiler 1606 compares the score of the updated SoC interface block solution (e.g., the most recently received SoC interface block solution) with the score of the first (e.g., prior received) SoC interface block solution.
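Taken together, blocks 2310 through 2324 of method 2300 amount to a keep-the-best loop with constraint relaxation. The sketch below assumes hypothetical `dpe_validate` and `relax` callables standing in for the DPE compiler's validation and the hardware compiler's relaxation step.

```python
def negotiate(dpe_validate, relax, constraints, first_solution, first_score,
              budget=5):
    """Track the best-scoring SoC interface block solution; when a new
    solution does not improve the score, relax the soft constraints."""
    best_solution, best_score = first_solution, first_score
    for _ in range(budget):
        solution, score = dpe_validate(constraints)   # blocks 2308-2310
        if score > best_score:                        # blocks 2312-2316
            best_solution, best_score = solution, score
        else:
            constraints = relax(constraints)          # blocks 2320-2322
    return best_solution, best_score                  # block 2324
```

The fixed `budget` is a stand-in for the improvement-goal and time-budget test of block 2318.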
[00274] In block 2314, the hardware compiler 1606 determines whether the score of the updated (e.g., most recently received) SoC interface block solution exceeds the score of the previously received (e.g., first) SoC interface block solution. If so, in block 2316, the hardware compiler 1606 selects the most recently received (e.g., updated) SoC interface block solution as the current best SoC interface block solution.[00275] In block 2318, the hardware compiler 1606 determines whether an improvement goal has been achieved or a time budget has been exceeded. For example, the hardware compiler 1606 is capable of determining whether a current implementation state of the hardware portion of the application is meeting a larger number of design metrics and/or has come closer to meeting one or more design metrics. The hardware compiler 1606 is also capable of determining whether a time budget has been exceeded based on the amount of processing time spent on place and/or route and whether that time exceeds a maximum placement time, a maximum route time, or a maximum amount of time for both place and route. In response to determining that an improvement goal was reached or a time budget exceeded, method 2300 continues to block 2324. If not, method 2300 continues to block 2320.[00276] In block 2324, the hardware compiler 1606 uses the current best SoC interface block solution for implementing the hardware portion of the application.[00277] Continuing with block 2320, the hardware compiler 1606 relaxes one or more of the SoC interface block constraints. The hardware compiler 1606 may, for example, relax or change one or more of the soft constraints. An example of relaxing or changing a soft SoC interface block constraint includes removing (e.g., deleting) the soft SoC interface block constraint. Another example of relaxing or changing a soft SoC interface block constraint includes replacing a soft SoC interface block constraint with a different SoC interface block constraint.
The replacement soft SoC interface block constraint may be less strict than the original being replaced.[00278] In block 2322, the hardware compiler 1606 is capable of sending the SoC interface block constraint(s), including the relaxed SoC interface block constraint(s), to the DPE compiler 1602. After block 2322, method 2300 loops back to block 2310 to continue processing as described. For example, the DPE compiler generates a further updated SoC interface block solution based on the SoC interface block
constraints received from the hardware compiler in block 2322. In block 2310, the hardware compiler receives the further updated SoC interface block solution.[00279] Method 2300 illustrates an example process of choosing an SoC interface block solution from the DPE compiler 1602 to use for performing the implementation flow and the circumstances in which the SoC interface block constraint(s) may be relaxed. It should be appreciated that the hardware compiler 1606 may provide SoC interface block constraints to the DPE compiler 1602 at any of a variety of different points during the implementation flow to obtain an updated SoC interface block solution as part of a reconciliation and/or validation process. For example, at any point in which the hardware compiler 1606 determines (e.g., based on a timing, power, or other check or analysis) that the implementation of the hardware portion of the application, in its current state, does not meet or will not meet a design metric of the application, the hardware compiler 1606 may request an updated SoC interface block solution by providing updated SoC interface block constraint(s) to the DPE compiler 1602.[00280] FIG. 24 illustrates another example of an application 2400 for implementation in the SoC 200. Application 2400 is specified as a directed flow graph. Nodes are shaded and shaped differently to distinguish between PL nodes, DPE nodes, and I/O nodes. In the example shown, the I/O nodes may be mapped onto the SoC interface block 206. The PL nodes are implemented in the PL. The DPE nodes are mapped to particular DPEs. Though not shown in its entirety, the application 2400 includes 36 kernels (e.g., nodes) to be mapped to DPEs 204, 72 PL to DPE array data streams, and 36 DPE array to PL data streams.[00281] FIG. 25 is an example illustration of an SoC interface block solution generated by the DPE compiler 1602. The SoC interface block solution of FIG.
25 may be generated by the DPE compiler 1602 and provided to the hardware compiler 1606. The example of FIG. 25 illustrates a scenario in which the DPE compiler 1602 generates an initial mapping of DPE nodes to DPEs 204 of the DPE array 202. Further, the DPE compiler 1602 successfully routes the initial mapping of DPE nodes. In the example of FIG. 25, only columns 6-17 of the DPE array 202 are shown. Further, each column includes 4 DPEs 204.[00282] FIG. 25 illustrates a mapping of DPE nodes to DPEs 204 of the DPE array 202 and a routing of data streams to SoC interface block 206 hardware. The mapping of DPE nodes 0-35 of the application 2400 to DPEs 204, as determined
by the DPE compiler 1602, is shown with reference to the DPE array 202. The routing of data streams between the DPEs and particular tiles of the SoC interface block 206 is shown as a collection of arrows. For purposes of illustration in describing FIGs. 25-30, the key displayed in FIG. 25 is used to differentiate between data streams controlled by soft constraints, hard constraints, and data streams having no applicable constraints.[00283] With reference to FIGs. 25-30, soft constraints correspond to routings determined by the DPE compiler 1602 and/or the hardware compiler 1606, while hard constraints may include user-specified SoC interface block constraints. All of the constraints shown in FIG. 25 are soft constraints. The example of FIG. 25 illustrates the case where the DPE compiler 1602 successfully determines an initial SoC interface block solution. In one aspect, the DPE compiler 1602 may be configured to attempt, at least initially, to use vertical routes for the SoC interface block solution as shown before attempting to use other routes that traverse along (e.g., left to right) a row of DPEs 204 from one column to another.[00284] FIG. 26 illustrates an example of routable SoC interface block constraints received by the DPE compiler 1602. The DPE compiler 1602 is capable of generating an updated SoC interface block solution specifying an updated routing in the form of updated SoC interface block constraints. In the example of FIG. 26, a larger number of the SoC interface block constraints are hard constraints. In this example, the DPE compiler 1602 successfully routes the data streams of the DPE array 202 while observing each type of constraint shown.[00285] FIG. 27 illustrates an example of un-routable SoC interface block constraints that are to be observed by the DPE compiler 1602. The DPE compiler 1602 is unable to produce an SoC interface block solution that observes the constraints illustrated in FIG. 27.[00286] FIG.
28 illustrates an example where the DPE compiler 1602 ignores the soft type SoC interface block constraints from FIG. 27. In the example of FIG. 28, the DPE compiler 1602 successfully routes the software portion of the application for implementation in the DPE array 202 using only the hard constraints. Those data streams not controlled by constraints may be routed in any way that the DPE compiler 1602 sees fit or is able to do so.[00287] FIG. 29 illustrates another example of un-routable SoC interface block constraints. The example of FIG. 29 has only hard constraints. As such, the DPE
compiler 1602, being unable to ignore the hard constraints, initiates a mapping (or re-mapping) operation.[00288] FIG. 30 illustrates an example mapping of the DPE nodes of FIG. 29. In this example, subsequent to remapping, the DPE compiler 1602 is able to successfully route the DPE nodes to generate an updated SoC interface block solution.[00289] FIG. 31 illustrates another example of un-routable SoC interface block constraints. The example of FIG. 31 has only hard constraints. As such, the DPE compiler 1602, being unable to ignore hard constraints, initiates a mapping operation. For purposes of illustration, the DPE array 202 includes only three rows of DPEs (e.g., 3 DPEs in each column).[00290] FIG. 32 illustrates an example mapping of the DPE nodes of FIG. 31. FIG. 32 illustrates the result obtained from the re-mapping operation initiated as described in connection with FIG. 31. In this example, subsequent to re-mapping, the DPE compiler 1602 is able to successfully route the software portion of the application to generate an updated SoC interface block solution.[00291] In one aspect, the system is capable of performing the mapping illustrated in FIGs. 25-32 by generating an Integer Linear Programming (ILP) formulation of the mapping problem. The ILP formulation may include a plurality of different variables and constraints that define the mapping problem. The system is capable of solving the ILP formulation while also minimizing the cost(s). Costs may be determined, at least in part, based on a number of DMA engines used. In this manner, the system is capable of mapping the DFG onto the DPE array.[00292] In another aspect, the system is capable of ordering nodes of the DFG in decreasing order of priority. The system may decide priority based on one or more factors.
Examples of the factors can include, but are not limited to, the height of the node in the DFG, the total degree of the node (e.g., the sum of all edges entering and leaving the node), and/or the type of edges connected to the node such as memory, stream, and cascade. The system is capable of placing the node on the best DPE available based on affinity and validity. The system is capable of determining validity based on whether all resource requirements of this node can be met on a given DPE (e.g., compute resources, memory buffers, stream resources). The system is capable of determining affinity based on one or more other factors. Examples of affinity factors may include placing the node on the
same DPE or an adjacent DPE where the neighbors of this node have already been placed to minimize DMA communication, architectural constraints such as whether this node is part of a cascade chain, and/or finding a DPE that has maximally free resources. If the node is placed with all constraints being met, the system is capable of increasing priority of neighboring nodes of the placed node so that such nodes are handled next. If no available placement is valid for the current node, the system may try to unplace some other nodes from their best candidate DPE(s) to make room for this node. The system may put the unplaced nodes back on the priority queue to be placed again. The system is capable of limiting the total effort expended in finding a good solution by keeping track of the total number of placements and unplacements performed. It should be appreciated, however, that other mapping techniques may be used and that the examples provided herein are not intended to be limiting.[00293] FIG. 33 illustrates another example software architecture 3300 that is executable by the system described in connection with FIG. 1. For example, the architecture 3300 of FIG. 33 may be implemented as one or more of the program modules 120 of FIG. 1. The example software architecture 3300 of FIG. 33 may be used in cases where the application, e.g., the data flow graph, specifies one or more High-Level Synthesis (HLS) kernels for implementation in the PL 214. For example, the PL nodes of the application reference HLS kernels that require HLS processing. In one aspect, the HLS kernels are specified in a high-level language (HLL) such as C and/or C++.[00294] In the example of FIG. 33, the software architecture 3300 includes the DPE compiler 1602, the hardware compiler 1606, an HLS compiler 3302, and a system linker 3304.
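The priority-driven placement heuristic of paragraph [00292] can be sketched as a greedy priority-queue loop. Priority, validity, and affinity are reduced here to caller-supplied stand-ins, and the real system's unplacement and effort-limiting steps are omitted; this is an illustrative assumption, not the patented algorithm in full.

```python
import heapq

def place_nodes(nodes, priority, candidates, is_valid, affinity):
    """Greedy sketch of the priority-queue placement: take the highest
    priority node, then choose the valid candidate DPE with best affinity."""
    placement = {}
    heap = [(-priority(n), n) for n in nodes]  # max-heap via negated priority
    heapq.heapify(heap)
    while heap:
        _, node = heapq.heappop(heap)
        options = [d for d in candidates(node) if is_valid(node, d, placement)]
        if not options:
            raise RuntimeError(f"no valid DPE for node {node}")
        placement[node] = max(options, key=lambda d: affinity(node, d, placement))
    return placement
```

A fuller implementation would also boost the priority of a placed node's neighbors and unplace nodes when no valid candidate remains, as the text describes.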
The NoC compiler 1604 may be included and used in conjunction with the DPE compiler 1602 to perform the validation check 3306 as previously described within this disclosure.[00295] As illustrated, the DPE compiler 1602 receives an application 3312, an SoC architecture description 3310, and optionally a test bench 3314. The application 3312, as discussed, may be specified as a data flow graph that includes parallel execution semantics. The application 3312 may include interconnected PL nodes and DPE nodes and specify runtime parameters. In this example, the PL nodes reference HLS kernels. The SoC architecture description 3310 may be a data structure or a file that specifies information such as the size and dimensions of
the DPE array 202, the size of the PL 214 and the various programmable circuit blocks available therein, the type of PS 212 such as the type of processors and other devices included in the PS 212, and other physical characteristics of the circuitry in the SoC 200 in which the application 3312 is to be implemented. The SoC architecture description 3310 may also specify connectivity (e.g., interfaces) among the subsystems included therein.[00296] The DPE compiler 1602 is capable of outputting the HLS kernels to the HLS compiler 3302. The HLS compiler 3302 transforms the HLS kernels, which are specified in an HLL, into HLS IPs that may be synthesized by the hardware compiler. For example, the HLS IPs may be specified as register transfer level (RTL) blocks. The HLS compiler 3302, for example, generates an RTL block for each HLS kernel. As pictured, the HLS compiler 3302 outputs the HLS IPs to the system linker 3304.[00297] The DPE compiler 1602 generates additional outputs such as the initial SoC interface block solution and a connection graph. The DPE compiler 1602 outputs the connection graph to the system linker 3304 and the SoC interface block solution to the hardware compiler 1606. The connection graph specifies connectivity between nodes corresponding to HLS kernels to be implemented in PL 214 (now converted to HLS IPs) and nodes to be implemented in the DPE array 202.[00298] As pictured, the system linker 3304 receives the SoC architecture description 3310. System linker 3304 may also receive one or more HLS and/or RTL blocks directly from application 3312 that are not processed through DPE compiler 1602. The system linker 3304 is capable of automatically generating a block diagram corresponding to the hardware portion of the application using the received HLS and/or RTL blocks, HLS IPs, and the connection graph specifying connectivity between the IP kernels and the connectivity between the IP kernels and the DPE nodes. 
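The system linker's automatic block-diagram generation can be illustrated with a minimal sketch that wires blocks according to a connection graph and, as described later for hardware-specific details, inserts a data-width conversion block wherever producer and consumer port widths differ. All names here (`build_block_diagram`, the `dwc_` prefix) are hypothetical illustrations, not actual tool constructs.

```python
def build_block_diagram(connections, widths):
    """Sketch: derive block-diagram edges from a connection graph,
    inserting a data-width conversion block (hypothetical "dwc_" name)
    wherever the producer and consumer port widths differ.

    connections : list of (producer, consumer) pairs
    widths      : dict block -> port width in bits
    Returns (set of blocks, list of edges).
    """
    blocks, edges = set(), []
    for src, dst in connections:
        blocks.update((src, dst))
        if widths[src] != widths[dst]:
            conv = f"dwc_{src}_{dst}"      # auto-inserted converter block
            blocks.add(conv)
            edges.append((src, conv))
            edges.append((conv, dst))
        else:
            edges.append((src, dst))       # direct connection
    return blocks, edges
```

A similar analysis of data types would drive the insertion of hardware buffers or clock domain crossing logic, which in other cases described herein were added manually by the user.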
In one aspect, the system linker 3304 is capable of integrating the block diagram with a base platform (not shown) for the SoC 200. For example, the system linker 3304 is capable of connecting the block diagram to the base platform resulting in an integrated block diagram. The block diagram and the connected base platform may be referred to as a synthesizable block diagram.[00299] In another aspect, HLS IPs and RTL IPs referenced as kernels within the SDF graph (e.g., application 3312) can be compiled into IPs outside of DPE
compiler 1602. The compiled IPs can be provided directly to system linker 3304. System linker 3304 is capable of automatically generating a block diagram corresponding to the hardware portion of the application using the provided IPs.[00300] In one aspect, system linker 3304 is capable of including within the block diagram additional hardware-specific details derived from the original SDF (e.g., application 3312) and generated connection graph. For example, since application 3312 includes software models that are actual HLS models that can be translated into IPs or correlated (e.g., matched) to IPs in a database of such IPs using some mechanism (e.g., by name or other matching/correlation technique), system linker 3304 is capable of automatically generating the block diagram (e.g., without user intervention). In this example, custom IPs may not be used. In automatically generating the block diagram, system linker 3304 is capable of automatically inserting one or more additional circuit blocks such as data-width conversion blocks, hardware buffers, and/or clock domain crossing logic that, in other cases described herein, were manually inserted and connected by the user. System linker 3304, for example, is capable of analyzing the data types and the software model to determine that one or more additional circuit blocks, as described, are needed to create the connections specified by the connection graph.[00301] The system linker 3304 outputs the block diagram to the hardware compiler 1606. The hardware compiler 1606 receives the block diagram and the initial SoC interface block solution generated by the DPE compiler 1602. The hardware compiler 1606 is capable of initiating the validation check 3306 with the DPE compiler 1602 and optionally the NoC compiler 1604 as previously described in connection with block 2010 of FIG. 20, blocks 2106, 2108, 2112, 2114, 2116, and 2118 of FIG. 21, FIG. 22, and FIG. 23. 
The validation may be an iterative process where the hardware compiler provides design data such as the various types of constraints (which may include relaxed/modified constraints in an iterative approach) to the DPE compiler 1602 and optionally to the NoC compiler 1604 and, in return, receives an updated SoC interface block solution from the DPE compiler 1602 and optionally an updated NoC solution from the NoC compiler 1604.[00302] Hardware compiler 1606 is capable of generating a hardware package that includes the configuration bitstream that implements the hardware portion of the application 3312 in the PL 214. The hardware compiler 1606 is capable of outputting the hardware package to the DPE compiler 1602. The DPE compiler
1602 is capable of generating the DPE array configuration data (e.g., one or more binaries) that program the software portion of the application 3312 intended for implementation in the DPE array 202 therein.[00303] FIG. 34 illustrates another example method 3400 of performing a design flow to implement an application in the SoC 200. Method 3400 may be performed by a system as described in connection with FIG. 1. The system may execute a software architecture as described in connection with FIG. 33. In the example of FIG. 34, the application being processed includes nodes that specify HLS kernels for implementation in the PL 214.[00304] In block 3402, the DPE compiler 1602 receives the application, an SoC architecture description of the SoC 200, and optionally a test bench. In block 3404, the DPE compiler 1602 is capable of generating a connection graph and providing the connection graph to the system linker. In block 3406, the DPE compiler 1602 generates an initial SoC interface block solution and provides the initial SoC interface block solution to the hardware compiler 1606. The initial SoC interface block solution can specify an initial mapping of DPE nodes of the application to DPEs 204 of the DPE array 202 and a mapping of the connections in and out of the DPE array 202 to physical data paths of the SoC interface block 206.[00305] In block 3408, the HLS compiler 3302 is capable of performing HLS on the HLS kernels to generate synthesizable IP cores. For example, the DPE compiler 1602 provides the HLS kernels specified by the nodes of the application to the HLS compiler 3302. The HLS compiler 3302 generates an HLS IP for each of the HLS kernels received. The HLS compiler 3302 outputs the HLS IPs to the system linker.[00306] In block 3410, the system linker is capable of automatically generating a block diagram corresponding to the hardware portion of the application using the connection graph, the SoC architecture description, and the HLS IPs. 
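Blocks 3402 through 3410 of method 3400 can be sketched as a simple front-end pipeline over plain data: a connection graph is derived from the application's edges, an initial SoC interface block solution assigns a physical channel to each connection crossing the DPE array boundary, each PL kernel is synthesized into an IP, and the results are assembled into a block diagram. Every structure here (the application dictionary, the channel numbering, the `_ip` suffix) is an illustrative assumption standing in for the compilers' internal representations.

```python
def design_flow_frontend(application, soc_arch):
    """Sketch of blocks 3402-3410 of method 3400 as a plain-data pipeline.

    application : dict with "edges" (list of (src, dst) pairs),
                  "dpe_nodes" (set), and "hls_kernels" (list)
    soc_arch    : opaque SoC architecture description
    """
    # Block 3404: connection graph = edges between PL and DPE nodes.
    connection_graph = [(src, dst) for src, dst in application["edges"]]
    # Block 3406: initial SoC interface block solution = one physical
    # channel per connection crossing the DPE-array boundary.
    crossings = [e for e in connection_graph
                 if (e[0] in application["dpe_nodes"])
                 != (e[1] in application["dpe_nodes"])]
    interface_solution = {e: ch for ch, e in enumerate(crossings)}
    # Block 3408: HLS compilation stands in here for generating an IP
    # per kernel.
    hls_ips = {k: k + "_ip" for k in application["hls_kernels"]}
    # Block 3410: block diagram assembled from the IPs, channels, and
    # architecture description.
    return {"ips": hls_ips, "channels": interface_solution,
            "arch": soc_arch}
```

The remaining blocks of the method (integration with the base platform, the implementation flow, and binary generation) consume this block diagram and interface solution as described in the text that follows.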
In block 3412, the system linker is capable of integrating the block diagram and a base platform for the SoC 200. For example, the hardware compiler 1606 is capable of connecting the block diagram to the base platform resulting in an integrated block diagram. In one aspect, the block diagram and the connected base platform are referred to as a synthesizable block diagram.[00307] In block 3414, the hardware compiler 1606 is capable of performing an implementation flow on the integrated block diagram. During the implementation
flow, the hardware compiler 1606 is capable of performing validation as described herein in cooperation with the DPE compiler 1602 and optionally the NoC compiler 1604 to converge to an implementation of the hardware portion of the application for implementation in the PL. For example, as discussed, the hardware compiler 1606 is capable of invoking the DPE compiler 1602 and optionally the NoC compiler 1604 in response to determining that a current implementation state of the hardware portion of the application does not meet one or more design metrics. The hardware compiler 1606 may invoke the DPE compiler 1602 and optionally the NoC compiler 1604 prior to placement, during placement, prior to routing, and/or during routing.[00308] In block 3416, the hardware compiler 1606 exports the hardware implementation to the DPE compiler 1602. In one aspect, the hardware implementation may be output as a device support archive (DSA) file. The DSA file may include platform metadata, emulation data, one or more configuration bitstreams as generated by the hardware compiler 1606 from the implementation flow, and the like. The hardware implementation may also include the final SoC interface block solution and optionally the final NoC solution used by the hardware compiler 1606 to create the implementation of the hardware portion of the application.[00309] In block 3418, the DPE compiler 1602 completes the software generation for the DPE array. For example, the DPE compiler 1602 generates the binaries used to program the DPEs used in the application. In generating the binaries, the DPE compiler 1602 is capable of using the final SoC interface block solution and optionally the final NoC solution used by the hardware compiler 1606 to perform the implementation flow. 
In one aspect, the DPE compiler is capable of determining the SoC interface block solution used by the hardware compiler through inspection of the configuration bitstream and/or the metadata included in the DSA.[00310] In block 3420, the NoC compiler 1604 generates a binary or binaries for programming the NoC 208. In block 3422, the PS compiler 1918 generates the PS binary. In block 3424, the system is capable of deploying the configuration bitstream and the binaries in the SoC 200.[00311] FIG. 35 illustrates another example method 3500 of performing a design flow to implement an application in the SoC 200. Method 3500 may be performed by a system as described in connection with FIG. 1. The application may be
specified as a data flow graph as described herein and include a software portion for implementation within the DPE array 202 and a hardware portion for implementation within the PL 214.[00312] In block 3502, the system is capable of generating a first interface solution mapping logical resources used by the software portion to hardware resources of an interface block coupling the DPE array 202 and the PL 214. The DPE compiler 1602, for example, may generate the initial, or first, SoC interface block solution.[00313] In block 3504, the system is capable of generating a connection graph specifying connectivity among the HLS kernels and nodes of the software portion to be implemented in the DPE array. In one aspect, the DPE compiler 1602 is capable of generating the connection graph.[00314] In block 3506, the system is capable of generating a block diagram based on the connection graph and the HLS kernels. The block diagram is synthesizable. A system linker, for example, is capable of generating the synthesizable block diagram.[00315] In block 3508, the system is capable of performing an implementation flow on the block diagram using the first interface solution. As discussed, the hardware compiler 1606 is capable of exchanging design data with the DPE compiler 1602 and optionally the NoC compiler 1604 during the implementation flow. The hardware compiler 1606 and the DPE compiler 1602 may iteratively exchange data where the DPE compiler 1602 provides updated SoC interface block solutions to the hardware compiler 1606 in response to being invoked by the hardware compiler 1606. The hardware compiler 1606 may invoke the DPE compiler by providing one or more constraints for the SoC interface block thereto. The hardware compiler 1606 and the NoC compiler 1604 may iteratively exchange data where the NoC compiler 1604 provides updated NoC solutions to the hardware compiler 1606 in response to being invoked by the hardware compiler 1606. 
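The iterative exchange described for block 3508, in which the hardware compiler repeatedly invokes the DPE compiler with SoC interface block constraints and receives updated solutions until the design metrics are met, can be sketched as follows. The object and method names (`implement`, `meets_metrics`, `derive_constraints`, `resolve`) are hypothetical stand-ins for the compilers described above, not actual tool APIs; a parallel loop with the NoC compiler would have the same shape.

```python
def implementation_flow(hw_compiler, dpe_compiler, interface_solution,
                        max_iters=5):
    """Sketch of the iterative hardware-compiler/DPE-compiler exchange.

    Returns (implementation, interface_solution) on convergence, or
    (None, last_solution) if no iteration meets the design metrics.
    """
    for _ in range(max_iters):
        impl = hw_compiler.implement(interface_solution)
        if hw_compiler.meets_metrics(impl):
            return impl, interface_solution        # converged
        # Design metrics not met: derive constraints for the SoC
        # interface block and invoke the DPE compiler with them.
        constraints = hw_compiler.derive_constraints(impl)
        interface_solution = dpe_compiler.resolve(constraints)
    return None, interface_solution                # did not converge
```

The bounded iteration count reflects that the exchange is iterative but must terminate; in practice the hardware compiler may also relax constraints between rounds, as discussed later in this disclosure.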
The hardware compiler 1606 may invoke the NoC compiler 1604 by providing one or more constraints for the NoC 208 thereto.[00316] In block 3510, the system is capable of compiling, using the DPE compiler 1602, the software portion of the application for implementation in one or more DPEs 204 of the DPE array 202. The DPE compiler 1602 may receive the results of the implementation flow in order to use a consistent interface between the
DPE array 202 and the PL 214 (e.g., a same SoC interface block solution used during the implementation flow by the hardware compiler 1606).[00317] For purposes of explanation, specific nomenclature is set forth to provide a thorough understanding of the various inventive concepts disclosed herein. The terminology used herein, however, is for the purpose of describing particular aspects of the inventive arrangements only and is not intended to be limiting.[00318] As defined herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.[00319] As defined herein, the terms "at least one," "one or more," and "and/or," are open-ended expressions that are both conjunctive and disjunctive in operation unless explicitly stated otherwise. For example, each of the expressions "at least one of A, B, and C," "at least one of A, B, or C," "one or more of A, B, and C," "one or more of A, B, or C," and "A, B, and/or C" means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.[00320] As defined herein, the term "automatically" means without user intervention. As defined herein, the term "user" means a human being.[00321] As defined herein, the term "computer readable storage medium" means a storage medium that contains or stores program code for use by or in connection with an instruction execution system, apparatus, or device. As defined herein, a "computer readable storage medium" is not a transitory, propagating signal per se. A computer readable storage medium may be, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. The various forms of memory, as described herein, are examples of computer readable storage media. 
A non-exhaustive list of more specific examples of a computer readable storage medium may include: a portable computer diskette, a hard disk, a RAM, a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an electronically erasable programmable read-only memory (EEPROM), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, or the like.[00322] As defined herein, the term "if" means "when" or "upon" or "in response to" or "responsive to," depending upon the context. Thus, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be construed to
mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]" or "responsive to detecting [the stated condition or event]" depending on the context.[00323] As defined herein, the term "high-level language" or "HLL" means a programming language, or set of instructions, used to program a data processing system where the instructions have a strong abstraction from the details of the data processing system, e.g., machine language. For example, an HLL is capable of automating or hiding aspects of operation of the data processing system such as memory management. Though referred to as HLLs, these languages are typically classified as "efficiency-level languages". HLLs expose hardware-supported programming models directly. Examples of HLLs include, but are not limited to, C, C++, and other suitable languages.[00324] An HLL may be contrasted with a hardware description language (HDL) such as Verilog, System Verilog, and VHDL, which are used to describe digital circuits. HDLs allow a designer to create a definition of a digital circuit design that may be compiled into a register transfer level (RTL) netlist that is typically technology independent.[00325] As defined herein, the term "responsive to" and similar language as described above, e.g., "if," "when," or "upon," means responding or reacting readily to an action or event. The response or reaction is performed automatically. Thus, if a second action is performed "responsive to" a first action, there is a causal relationship between an occurrence of the first action and an occurrence of the second action. 
The term "responsive to" indicates the causal relationship.[00326] As defined herein, the terms "one embodiment," "an embodiment," "one or more embodiments," "particular embodiments," or similar language mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment described within this disclosure. Thus, appearances of the phrases "in one embodiment," "in an embodiment," "in one or more embodiments," "in particular embodiments," and similar language throughout this disclosure may, but do not necessarily, all refer to the same embodiment. The terms "embodiment" and "arrangement" are used interchangeably within this disclosure.
[00327] As defined herein, the term "output" means storing in physical memory elements, e.g., devices, writing to display or other peripheral output device, sending or transmitting to another system, exporting, or the like.[00328] As defined herein, the term "substantially" means that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations, and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.[00329] The terms first, second, etc. may be used herein to describe various elements. These elements should not be limited by these terms, as these terms are only used to distinguish one element from another unless stated otherwise or the context clearly indicates otherwise.[00330] A computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the inventive arrangements described herein. Within this disclosure, the term "program code" is used interchangeably with the term "computer readable program instructions." Computer readable program instructions described herein may be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a LAN, a WAN and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge devices including edge servers. 
A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.[00331] Computer readable program instructions for carrying out operations for the inventive arrangements described herein may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, or either source code or object code written in any combination of one or more programming languages, including an object-oriented programming language and/or procedural
programming languages. Computer readable program instructions may include state-setting data. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a LAN or a WAN, or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some cases, electronic circuitry including, for example, programmable logic circuitry, an FPGA, or a PLA may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the inventive arrangements described herein.[00332] Certain aspects of the inventive arrangements are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products. 
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable program instructions, e.g., program code.[00333] These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the operations specified in the flowchart and/or block diagram block or blocks.[00334] The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operations to be performed on the computer, other programmable
apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.[00335] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the inventive arrangements. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified operations.[00336] In some alternative implementations, the operations noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In other examples, blocks may be performed generally in increasing numeric order while in still other examples, one or more blocks may be performed in varying order with the results being stored and utilized in subsequent or other blocks that do not immediately follow. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, may be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.[00337] The corresponding structures, materials, acts, and equivalents of all means or step plus function elements that may be found in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed.[00338] A method includes, for an application specifying a software portion for implementation within a DPE array of a device and a hardware portion for implementation within PL of the device, generating, using a processor, a logical architecture for the application and a first interface solution specifying a mapping of logical resources to hardware of an interface circuit block between the DPE array and the programmable logic. The method includes building a block diagram of the hardware portion based on the logical architecture and the first interface solution
and performing, using the processor, an implementation flow on the block diagram. The method includes compiling, using the processor, the software portion of the application for implementation in one or more DPEs of the DPE array.[00339] In another aspect, the building the block diagram includes adding to the block diagram at least one IP core for implementation within the programmable logic.[00340] In another aspect, during the implementation flow, a hardware compiler builds the block diagram and performs the implementation flow by exchanging design data with a DPE compiler configured to compile the software portion.[00341] In another aspect, the hardware compiler exchanges further design data with a NoC compiler. The hardware compiler receives a first NoC solution configured to implement routes through a NoC of the device that couples the DPE array to the PL of the device.[00342] In another aspect, the performing the implementation flow is performed based on the exchanged design data.[00343] In another aspect, the compiling the software portion is performed based on an implementation of the hardware portion of the application for implementation in the PL generated from the implementation flow.[00344] In another aspect, in response to a hardware compiler configured to build the block diagram and perform the implementation flow determining that an implementation of the block diagram does not meet a design metric for the hardware portion, providing a constraint for the interface circuit block to a DPE compiler configured to compile the software portion. 
The hardware compiler receives, from the DPE compiler, a second interface solution generated by the DPE compiler based on the constraint.[00345] In another aspect, the performing the implementation flow is performed based on the second interface solution.[00346] In another aspect, the hardware compiler, in response to determining that an implementation of the block diagram does not meet a design metric using a first NoC solution for a NoC, provides a constraint for the NoC to a NoC compiler. The hardware compiler receives, from the NoC compiler, a second NoC solution generated by the NoC compiler based on the constraint for the NoC.[00347] A system includes a processor configured to initiate operations. The operations include, for an application specifying a software portion for
implementation within a DPE array of a device and a hardware portion for implementation within PL of the device, generating a logical architecture for the application and a first interface solution specifying a mapping of logical resources to hardware of an interface circuit block between the DPE array and the PL. The operations include building a block diagram of the hardware portion based on the logical architecture and the first interface solution, performing an implementation flow on the block diagram, and compiling the software portion of the application for implementation in one or more DPEs of the DPE array.[00348] In another aspect, the building the block diagram includes adding to the block diagram at least one IP core for implementation within the PL.[00349] In another aspect, the operations include, during the implementation flow, executing a hardware compiler that builds the block diagram and performs the implementation flow by exchanging design data with a DPE compiler configured to compile the software portion.[00350] In another aspect, the operations include the hardware compiler exchanging further design data with a NoC compiler and the hardware compiler receiving a first NoC solution configured to implement routes through a NoC of the device that couples the DPE array to the PL of the device.[00351] In another aspect, the performing the implementation flow is performed based on the exchanged design data.[00352] In another aspect, the compiling the software portion is performed based on a hardware design for the hardware portion of the application for implementation in the PL generated from the implementation flow.[00353] In another aspect, the operations include, in response to a hardware compiler configured to build the block diagram and perform the implementation flow determining that an implementation of the block diagram does not meet a design constraint for the hardware portion, providing a constraint for the interface circuit block to a DPE 
compiler configured to compile the software portion. The hardware compiler receives, from the DPE compiler, a second interface solution generated by the DPE compiler based on the constraint.[00354] In another aspect, the performing the implementation flow is performed based on the second interface solution.[00355] In another aspect, the hardware compiler, in response to determining that an implementation of the block diagram does not meet a design metric using a first
NoC solution for a NoC, provides a constraint for the NoC to a NoC compiler. The hardware compiler receives, from the NoC compiler, a second NoC solution generated by the NoC compiler based on the constraint for the NoC.[00356] A method includes, for an application having a software portion for implementation in a DPE array of a device and a hardware portion for implementation in PL of the device, performing, using a processor executing a hardware compiler, an implementation flow on the hardware portion based on an interface block solution that maps logical resources used by the software portion to hardware of an interface block coupling the DPE array to the PL. The method includes, in response to not meeting a design metric during the implementation flow, providing, using the processor executing the hardware compiler, an interface block constraint to a DPE compiler. The method also includes, in response to receiving the interface block constraint, generating, using the processor executing the DPE compiler, an updated interface block solution and providing the updated interface block solution from the DPE compiler to the hardware compiler.[00357] In another aspect, the interface block constraint maps the logical resources used by the software portion to physical resources of the interface block.[00358] In another aspect, the hardware compiler continues the implementation flow using the updated interface block solution.[00359] In another aspect, the hardware compiler iteratively provides interface block constraints to the DPE compiler responsive to not meeting design constraints for the hardware portion.[00360] In another aspect, the interface block constraint includes a hard constraint and a soft constraint. 
In that case, the method includes the DPE compiler routing the software portion of the application using both the hard constraint and the soft constraint to generate the updated interface block solution.[00361] In another aspect, the method includes, in response to failing to generate the updated interface block solution using both the hard constraint and the soft constraint, routing the software portion of the application using only the hard constraint to generate the updated interface block solution.[00362] In another aspect, the method includes, in response to failing to generate the updated mapping using only the hard constraint, mapping the software portion using both the hard constraint and the soft constraint and routing the software portion using only the hard constraint to generate the updated interface block
solution.[00363] In another aspect, wherein the interface block solution and the updated interface block solution each has a score, the method includes comparing the scores and, in response to determining that the score for the interface block solution exceeds the score for the updated interface block solution, relaxing the interface block constraint and submitting the relaxed interface block constraint to the DPE compiler to obtain a further updated interface block solution.[00364] In another aspect, the interface block solution and the updated interface block solution each has a score. The method includes comparing the scores and, in response to determining that the score for the updated interface block solution exceeds the score for the interface block solution, using the updated interface block solution for performing the implementation flow.[00365] A system includes a processor configured to initiate operations. The operations include, for an application having a software portion for implementation in a DPE array of a device and a hardware portion for implementation in PL of a device, performing, using a hardware compiler, an implementation flow on the hardware portion based on an interface block solution that maps logical resources used by the software portion to hardware of an interface block coupling the DPE array to the PL. The operations include, in response to not meeting a design metric during the implementation flow, providing, using the hardware compiler, an interface block constraint to a DPE compiler. 
The operations further include, in response to receiving the interface block constraint, generating, using the DPE compiler, an updated interface block solution and providing the updated interface block solution from the DPE compiler to the hardware compiler.[00366] In another aspect, the interface block constraint maps the logical resources used by the software portion to physical resources of the interface block.[00367] In another aspect, the hardware compiler continues the implementation flow using the updated interface block solution.[00368] In another aspect, the hardware compiler iteratively provides interface block constraints to the DPE compiler responsive to not meeting design constraints for the hardware portion.[00369] In another aspect, the interface block constraint includes a hard constraint and a soft constraint. In that case, the processor is configured to initiate operations including the DPE compiler routing the software portion of the
application using both the hard constraint and the soft constraint to generate the updated interface block solution.[00370] In another aspect, the operations include, in response to failing to generate the updated mapping using both the hard constraint and the soft constraint, routing the software portion of the application using only the hard constraint to generate the updated interface block solution.[00371] In another aspect, the operations include, in response to failing to generate the updated mapping using only the hard constraint, mapping the software portion using both the hard constraint and the soft constraint and routing the software portion using only the hard constraint to generate the updated interface block solution.[00372] In another aspect, the interface block solution and the updated interface block solution each has a score. The processor is configured to initiate operations including comparing the scores and, in response to determining that the score for the interface block solution exceeds the score for the updated interface block solution, relaxing the interface block constraint and submitting the relaxed interface block constraint to the DPE compiler to obtain a further updated interface block solution.[00373] In another aspect, the interface block solution and the updated interface block solution each has a score. 
The processor is configured to initiate operations including comparing the scores and, in response to determining that the score for the updated interface block solution exceeds the score for the interface block solution, using the updated interface block solution for performing the implementation flow.[00374] A method includes, for an application specifying a software portion for implementation within a DPE array of a device and a hardware portion having HLS kernels for implementation within PL of the device, generating, using a processor, a first interface solution mapping logical resources used by the software portion to hardware resources of an interface block coupling the DPE array and the PL. The method includes generating, using the processor, a connection graph specifying connectivity among the HLS kernels and nodes of the software portion to be implemented in the DPE array and generating, using the processor, a block diagram based on the connection graph and the HLS kernels, wherein the block diagram is synthesizable. The method further includes performing, using the
processor, an implementation flow on the block diagram based on the first interface solution and compiling, using the processor, the software portion of the application for implementation in one or more DPEs of the DPE array.[00375] In another aspect, the generating the block diagram includes performing HLS on the HLS kernels to generate synthesizable versions of the HLS kernels and constructing the block diagram using the synthesizable versions of the HLS kernels.[00376] In another aspect, the synthesizable versions of the HLS kernels are specified as RTL blocks.[00377] In another aspect, the generating the block diagram is performed based on a description of an architecture of an SoC in which the application is to be implemented.[00378] In another aspect, the generating the block diagram includes connecting the block diagram with a base platform.[00379] In another aspect, the performing the implementation flow includes synthesizing the block diagram for implementation in the PL, and placing and routing the synthesized block diagram based on the first interface solution.[00380] In another aspect, the method includes, during the implementation flow, executing a hardware compiler that builds the block diagram and performs the implementation flow by exchanging design data with a DPE compiler configured to compile the software portion.[00381] In another aspect, the method includes the hardware compiler exchanging further design data with a NoC compiler and the hardware compiler receiving a first NoC solution configured to implement routes through a NoC of the device that couples the DPE array to the PL of the device.[00382] In another aspect, the method includes, in response to a hardware compiler configured to build the block diagram and perform the implementation flow determining that an implementation of the block diagram does not meet a design metric for the hardware portion, providing a constraint for the interface circuit block to a DPE compiler configured to
compile the software portion. The method also includes the hardware compiler receiving, from the DPE compiler, a second interface solution generated by the DPE compiler based on the constraint.[00383] In another aspect, the performing the implementation flow is performed based on the second interface solution.
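The hard/soft-constraint fallback and the score-based acceptance described in the aspects above can be summarized in a small sketch. This is a toy model only: the function names (`route`, `dpe_compile`, `negotiate`), the load-based scoring, and the "drop half the soft constraints" relaxation are all illustrative assumptions, not the actual compiler APIs, and the third fallback (re-mapping before routing) is omitted for brevity.

```python
# Toy model of the hardware-compiler / DPE-compiler negotiation described
# above. All names and the scoring scheme are illustrative assumptions.

def route(hard, soft=()):
    """Toy DPE 'router': succeeds only if the combined constraint load is small."""
    load = len(hard) + len(soft)
    if load > 4:
        return None  # routing failed under these constraints
    return {"constraints": set(hard) | set(soft), "score": 10 - load}

def dpe_compile(hard, soft):
    """DPE-compiler side: try hard+soft constraints, then hard constraints only."""
    sol = route(hard, soft)   # 1) route using both hard and soft constraints
    if sol is None:
        sol = route(hard)     # 2) fall back to the hard constraints alone
    return sol

def negotiate(hard, soft, prev_score):
    """Hardware-compiler side: accept a better-scoring interface solution,
    otherwise relax the (soft) constraints and ask the DPE compiler again."""
    sol = dpe_compile(hard, soft)
    if sol is not None and sol["score"] > prev_score:
        return sol                        # updated solution scores higher: keep it
    relaxed = soft[: len(soft) // 2]      # 'relax' = drop half the soft constraints
    return dpe_compile(hard, relaxed)
```

In this toy model, only soft constraints are ever relaxed, mirroring the distinction drawn above: hard constraints must always be honored, while soft constraints may be dropped when routing fails or the score regresses.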
[00384] A system includes a processor configured to initiate operations. The operations include, for an application specifying a software portion for implementation within a DPE array of a device and a hardware portion having HLS kernels for implementation within PL of the device, generating a first interface solution mapping logical resources used by the software portion to hardware resources of an interface block coupling the DPE array and the PL. The operations include generating a connection graph specifying connectivity among the HLS kernels and nodes of the software portion to be implemented in the DPE array and generating a block diagram based on the connection graph and the HLS kernels, wherein the block diagram is synthesizable. The operations further include performing an implementation flow on the block diagram based on the first interface solution and compiling the software portion of the application for implementation in one or more DPEs of the DPE array.[00385] In another aspect, the generating the block diagram includes performing HLS on the HLS kernels to generate synthesizable versions of the HLS kernels and constructing the block diagram using the synthesizable versions of the HLS kernels.[00386] In another aspect, the synthesizable versions of the HLS kernels are specified as RTL blocks.[00387] In another aspect, the generating the block diagram is performed based on a description of an architecture of an SoC in which the application is to be implemented.[00388] In another aspect, the generating the block diagram includes connecting the block diagram with a base platform.[00389] In another aspect, the performing the implementation flow includes synthesizing the block diagram for implementation in the PL, and placing and routing the synthesized block diagram based on the first interface solution.[00390] In another aspect, the operations include, during the implementation flow, executing a hardware compiler that builds the block diagram and performs
the implementation flow by exchanging design data with a DPE compiler configured to compile the software portion.[00391] In another aspect, the operations include the hardware compiler exchanging further design data with a NoC compiler and the hardware compiler
receiving a first NoC solution configured to implement routes through a NoC of the device that couples the DPE array to the PL of the device.[00392] In another aspect, the operations include, in response to a hardware compiler configured to build the block diagram and perform the implementation flow determining that an implementation of the block diagram does not meet a design metric for the hardware portion, providing a constraint for the interface circuit block to a DPE compiler configured to compile the software portion. The operations also include the hardware compiler receiving, from the DPE compiler, a second interface solution generated by the DPE compiler based on the constraint.[00393] In another aspect, the performing the implementation flow is performed based on the second interface solution.[00394] One or more computer program products are disclosed herein that include a computer readable storage medium having program code stored thereon. The program code is executable by computer hardware to initiate the various operations described within this disclosure.[00395] The description of the inventive arrangements provided herein is for purposes of illustration and is not intended to be exhaustive or limited to the form and examples disclosed. The terminology used herein was chosen to explain the principles of the inventive arrangements, the practical application or technical improvement over technologies found in the marketplace, and/or to enable others of ordinary skill in the art to understand the inventive arrangements disclosed herein. Modifications and variations may be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described inventive arrangements.
Accordingly, reference should be made to the following claims, rather than to the foregoing disclosure, as indicating the scope of such features and implementations.

[00396] Example 1 illustrates an example schema for a logical architecture derived from an application.

Example 1

{
  "$schema": "http://json-schema.org/draft-4/schema#",
  "description": "DPE/IPI Logical Architecture Specification",
  "id": "LogicalArchSchema-0.1",
  "compatible": [ "LogicalArchSchema-0.1" ],
  "definitions": {
    "ArrayString": {
      "type": "array",
      "items": { "type": "string" }
    },
    "LogicalConnection": {
      "type": "object",
      "properties": {
        "type": { "type": "string", "enum": [ "stream", "mem", "event" ] },
        "direction": { "type": "string", "enum": [ "me_to_pl", "pl_to_me", "me_to_noc", "noc_to_me", "noc_to_pl", "pl_to_noc", "noc_to_noc", "pl_to_pl" ] },
        "srcPort": {
          "type": "object",
          "properties": {
            "instName": { "type": "string" },
            "portName": { "type": "string" }
          },
          "additionalProperties": false,
          "required": [ "instName", "portName" ]
        },
        "dstPorts": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "instName": { "type": "string" },
              "portName": { "type": "string" }
            },
            "additionalProperties": false,
            "required": [ "instName", "portName" ]
          }
        },
        "memMode": { "type": "string", "enum": [ "read-only", "write-only", "read-write" ] },
        "addrType": { "type": "string", "enum": [ "virtual", "physical" ] }
      },
      "additionalProperties": false,
      "required": [ "type", "direction", "srcPort", "dstPorts" ]
    },
    "LogicalPort": {
      "type": "object",
      "properties": {
        "type": { "type": "string", "enum": [ "stream", "mem", "event" ] },
        "direction": { "type": "string", "enum": [ "master", "slave" ] },
        "dataWidth": { "type": "integer", "minimum": 1 },
        "clkFreq": { "type": "double" },
        "traceFile": { "type": "string" },
        "annotation": { "$ref": "#/definitions/ArrayString" },
        "hw_annotation": { "type": "string" },
        "sdfioName": { "$ref": "#/definitions/ArrayString" },
        "vlnvName": { "type": "string" },
        "mechannel": { "type": "string" }
      },
      "additionalProperties": false,
      "required": [ "type", "direction", "dataWidth", "clkFreq" ]
    },
    "DPEIP": {
      "type": "object",
      "properties": {
        "vlnvName": { "type": "string" },
        "annotation": { "type": "string" },
        "hw_annotation": { "type": "string" },
        "meshimPorts": {
          "type": "object",
          "properties": { "$ref": "#/definitions/LogicalPort" }
        }
      },
      "additionalProperties": false,
      "required": [ "meshimPorts", "annotation" ]
    },
    "NoCIP": {
      "type": "object",
      "properties": {
        "type": { "type": "string", "enum": [ "stream", "mem" ] },
        "vlnvName": { "type": "string" },
        "annotation": { "type": "string" },
        "hw_annotation": { "type": "string" },
        "nocPorts": {
          "type": "object",
          "properties": { "$ref": "#/definitions/LogicalPort" }
        }
      },
      "additionalProperties": false,
      "required": [ "nocPorts", "annotation" ]
    },
    "PLIP": {
      "type": "object",
      "properties": {
        "ckernelName": { "type": "string" },
        "sdfinstName": { "type": "string" },
        "vlnvName": { "type": "string" },
        "annotation": { "type": "string" },
        "hw_annotation": { "type": "string" },
        "plPorts": {
          "type": "object",
          "properties": { "$ref": "#/definitions/LogicalPort" }
        }
      },
      "additionalProperties": false,
      "required": [ "plPorts", "annotation" ]
    }
  },
  "type": "object",
  "properties": {
    "appId": { "type": "string" },
    "schema": { "type": "string" },
    "device": { "type": "string" },
    "platform": { "type": "string" },
    "connections": {
      "type": "object",
      "properties": { "$ref": "#/definitions/LogicalConnection" },
      "minProperties": 0
    },
    "DPE": {
      "type": "object",
      "properties": { "$ref": "#/definitions/DPEIP" },
      "minProperties": 0
    },
    "PL": {
      "type": "object",
      "properties": { "$ref": "#/definitions/PLIP" },
      "minProperties": 0
    },
    "NoC": {
      "type": "object",
      "properties": { "$ref": "#/definitions/NoCIP" },
      "minProperties": 0
    }
  },
  "required": [ "appId" ]
}

[00397] Example 2 illustrates an example schema for a SoC interface block solution for an application to be implemented in the DPE array 202.

Example 2

{
  "$schema": "http://json-schema.org/draft-3/schema#",
  "description": "DPE Solution schema",
  "id": "DPESolutionSpecification",
  "definitions": {},
  "type": "object",
  "properties": {
    "version": { "type": "string" },
    "Placement": {
      "type": "array",
      "items": {
        "properties": {
          "LogicalInstance": {
            "type": "object",
            "properties": {
              "InstanceName": { "type": "string" },
              "PortName": { "type": "string" }
            }
          },
          "PhysicalInstance": {
            "type": "array",
            "items": { "type": "string" }
          },
          "IsSoft": { "type": "boolean" }
        }
      }
    }
  }
}

[00398] Example 3 illustrates an example schema for a NoC solution for an application to be implemented in the NoC 208.

Example 3

{
  "$schema": "http://json-schema.org/draft-3/schema#",
  "description": "NOC Solution schema",
  "id": "SolutionsSchema",
  "definitions": {},
  "type": "object",
  "properties": {
    "SolutionType": { "type": "string" },
    "Paths": {
      "type": "array",
      "items": {
        "properties": {
          "Phase": { "type": "integer" },
          "From": { "type": "string" },
          "FromLocked": { "type": "boolean" },
          "To": { "type": "string" },
          "ToLocked": { "type": "boolean" },
          "Port": { "type": "string" },
          "ReadTC": { "type": "string", "enum": [ "LL", "BE", "ISOC" ] },
          "WriteTC": { "type": "string", "enum": [ "LL", "BE", "ISOC" ] },
          "ReadBW": { "type": "integer", "minimum": 0, "maximum": 19200 },
          "WriteBW": { "type": "integer", "minimum": 0, "maximum": 19200 },
          "ReadAchievedBW": { "type": "integer" },
          "WriteAchievedBW": { "type": "integer" },
          "ReadLatency": { "type": "integer", "minimum": 4 },
          "WriteLatency": { "type": "integer", "minimum": 4 },
          "ReadBestPossibleLatency": { "type": "integer", "minimum": 4 },
          "WriteBestPossibleLatency": { "type": "integer", "minimum": 4 },
          "PathLocked": { "type": "boolean" },
          "Nets": {
            "type": "array",
            "items": {
              "properties": {
                "PhyInstanceStart": { "type": "string" },
                "PhyInstanceEnd": { "type": "string" },
                "VC": { "type": "integer", "minimum": 0, "maximum": 7 },
                "Connections": { "type": "array", "items": { "type": "string" } },
                "RequiredBW": { "type": "integer" },
                "AchievedBW": { "type": "integer" },
                "AchievedLatency": { "type": "integer" },
                "CommType": { "type": "string", "enum": [ "READ", "WRITE", "READ_REQ", "WRITE_RESP" ] }
              }
            }
          }
        }
      }
    },
    "Components": {
      "type": "array",
      "items": {
        "properties": {
          "Name": { "type": "string" },
          "TrafficLInst": { "type": "string" },
          "PortIndex": { "type": "integer" },
          "DestId": { "type": "integer" },
          "required": [ "Name", "DestId" ],
          "additionalProperties": false
        }
      }
    }
  }
}

[00399] Example 4 illustrates an example schema for specifying SoC interface block constraints and/or NoC constraints.

Example 4

{
  "$schema": "http://json-schema.org/draft-3/schema#",
  "description": "NOC Constraints schema",
  "id": "ConstraintsSpecification",
  "definitions": {},
  "type": "object",
  "properties": {
    "version": { "type": "string" },
    "Placement": {
      "type": "array",
      "items": {
        "properties": {
          "LogicalInstance": { "type": "string" },
          "PhysicalInstance": { "type": "array", "items": { "type": "string" } },
          "IsSoft": { "type": "boolean" }
        }
      }
    }
  }
}

[00400] Example 5 illustrates an example schema for specifying the NoC traffic.

Example 5

{
  "$schema": "http://json-schema.org/draft-7/schema#",
  "description": "NOC Traffic Specification Schema",
  "id": "TrafficSpecification",
  "type": "object",
  "definitions": {},
  "additionalProperties": false,
  "properties": {
    "LogicalInstances": {
      "type": "array",
      "items": {
        "type": "object",
        "additionalProperties": false,
        "properties": {
          "Name": { "type": "string" },
          "IsMaster": { "type": "boolean" },
          "CompType": { "type": "string" },
          "Ports": { "type": "array", "items": { "type": "string" } },
          "Protocol": { "type": "string", "enum": [ "AXI_MM", "AXI_STRM" ] },
          "SysAddress": { "type": "integer" },
          "SysAddressSize": { "type": "integer" },
          "SysAddresses": {
            "type": "array",
            "items": {
              "type": "object",
              "additionalProperties": false,
              "properties": {
                "Base": { "type": "integer" },
                "Size": { "type": "integer" }
              },
              "required": [ "Base", "Size" ]
            }
          },
          "AxiDataWidth": { "type": "integer" },
          "NumReadOutstanding": { "type": "integer", "minimum": 0, "maximum": 64 },
          "NumWriteOutstanding": { "type": "integer", "minimum": 0, "maximum": 64 },
          "ReadRateLimiter": { "type": "integer" },
          "WriteRateLimiter": { "type": "integer" },
          "InterleaveSize": { "type": "integer" },
          "ExternalConn": { "type": "string" },
          "IsVirtual": { "type": "boolean", "default": false }
        },
        "required": [ "Name", "CompType", "Protocol" ]
      }
    },
    "Paths": {
      "type": "array",
      "items": {
        "type": "object",
        "additionalProperties": false,
        "properties": {
          "Phase": { "type": "integer" },
          "From": { "type": "string" },
          "To": { "type": "string" },
          "Port": { "type": "string" },
          "CommType": { "type": "string", "enum": [ "MM_ReadWrite", "STRM", "MM_ReadOnly", "MM_WriteOnly" ] },
          "ReadTC": { "type": "string", "enum": [ "LL", "BE", "ISOC" ] },
          "WriteTC": { "type": "string", "enum": [ "LL", "BE", "ISOC" ] },
          "WriteBurstSize": { "type": "integer", "minimum": 1, "maximum": 256 },
          "ReadBurstSize": { "type": "integer", "minimum": 1, "maximum": 256 },
          "ReadBW": { "type": "integer", "minimum": 0, "maximum": 19200 },
          "WriteBW": { "type": "integer", "minimum": 0, "maximum": 19200 },
          "ReadLatency": { "type": "integer", "minimum": 0 },
          "WriteLatency": { "type": "integer", "minimum": 0 },
          "ReadAvgBurst": { "type": "integer", "minimum": 0 },
          "WriteAvgBurst": { "type": "integer", "minimum": 0 },
          "ExclusiveGroup": { "type": "string" }
        }
      }
    }
  }
}
PROBLEM TO BE SOLVED: To provide an inductor having a patterned ground plane.

SOLUTION: An inductor 300 includes a conductor 310 formed on a first layer and a patterned ground plane 320 formed on a second layer under the conductor. The patterned ground plane has an open center area and a shape matching the shape of the conductor. The patterned ground plane includes multiple shields, e.g., eight shields 330 for the eight sides of an octagonal conductor. Each shield has multiple slots 350 formed perpendicular to the conductor. Partitioning the patterned ground plane into separate shields and forming slots in each shield helps prevent the flow of eddy currents on the patterned ground plane, which may improve the Q of the inductor. Multiple interconnects 332 couple the multiple shields to circuit ground, which is located at the center of the conductor.
1. An apparatus comprising: a substrate; a conductor formed on a first layer; and a patterned ground plane formed on a second layer below the conductor, between the conductor and the substrate, wherein the patterned ground plane has an open central area and comprises a plurality of shields, each shield having a plurality of slots, the plurality of slots of each shield forming a comb-like pattern having slot openings on an outer edge of the shield and a common connection on a closed inner edge, the shields being electrically isolated from one another by cuts located near the eight corners of the patterned ground plane, the slots being formed along the conductor on the plurality of shields, including at the corners of the conductor, and the patterned ground plane allowing a magnetic field to pass through the patterned ground plane to the substrate, and wherein a plurality of interconnects couple the plurality of shields to a circuit ground located at the center of the patterned ground plane, each interconnect being coupled to the closed inner edge of a respective shield and to the circuit ground.
2. The apparatus of claim 1, wherein the patterned ground plane has a shape that matches the shape of the conductor.
3. The apparatus of claim 1, wherein the patterned ground plane is formed using a low loss metal.
4. The apparatus of claim 1, wherein the patterned ground plane is symmetrical about the center of the conductor.
5. The apparatus of claim 1, wherein the plurality of slots for each shield are perpendicular to the conductor.
6. The apparatus of claim 1, wherein the plurality of slots for each shield run from the outer edge of the shield toward the closed inner edge and stop prior to the closed inner edge.
7. The apparatus of claim 1, wherein the conductor has an octagonal shape having eight sides, and the patterned ground plane comprises eight shields for the eight sides of the conductor.
8. The apparatus of claim 1, wherein the conductor comprises a single turn.
9. The apparatus of claim 1, further comprising a guard ring formed around the conductor.
10. The apparatus of claim 1, wherein the conductor and the patterned ground plane are for an inductor.
11. The apparatus of claim 1, wherein the conductor and the patterned ground plane are for a transformer or a balun.
12. An integrated circuit comprising: a substrate; a conductor formed on a first layer; and a patterned ground plane formed on a second layer below the conductor, between the conductor and the substrate, wherein the patterned ground plane has an open central area and comprises a plurality of shields, each shield having a plurality of slots, the plurality of slots of each shield forming a comb-like pattern having slot openings on an outer edge of the shield and a common connection on a closed inner edge, the shields being electrically isolated from one another by cuts located near the eight corners of the patterned ground plane, the slots being formed along the conductor on the plurality of shields, including at the corners of the conductor, and the patterned ground plane allowing a magnetic field to pass through the patterned ground plane to the substrate, and wherein a plurality of interconnects couple the plurality of shields to a circuit ground located at the center of the patterned ground plane, each interconnect being coupled to the closed inner edge of a respective shield and to the circuit ground.
13. The integrated circuit of claim 12, wherein the patterned ground plane has a shape that matches the shape of the conductor.
14. The integrated circuit of claim 12, wherein the plurality of slots for each shield are perpendicular to the conductor.
15. A method comprising: forming a conductor on a first layer; and forming a patterned ground plane between the conductor and a substrate on a second layer below the conductor, wherein the patterned ground plane has an open central area and comprises a plurality of shields, each shield having a plurality of slots, the plurality of slots of each shield forming a comb-like pattern having slot openings on an outer edge of the shield and a common connection on a closed inner edge, the shields being electrically isolated from one another by cuts located near the eight corners of the patterned ground plane, the slots being formed along the conductor on the plurality of shields, including at the corners of the conductor, and the patterned ground plane allowing a magnetic field to pass through the patterned ground plane to the substrate, and wherein a plurality of interconnects couple the plurality of shields to a circuit ground located at the center of the patterned ground plane, each interconnect being coupled to the closed inner edge of a respective shield and to the circuit ground.
16. The method of claim 15, wherein forming the patterned ground plane comprises forming the patterned ground plane having a shape that matches the shape of the conductor.
17. The method of claim 15, further comprising forming the plurality of slots for each shield perpendicular to the conductor.
18. An apparatus comprising: means for forming a conductor on a first layer; and means for forming a patterned ground plane between the conductor and a substrate on a second layer below the conductor, wherein the patterned ground plane has an open central area and comprises a plurality of shields, each shield having a plurality of slots, the plurality of slots of each shield forming a comb-like pattern having slot openings on an outer edge of the shield and a common connection on a closed inner edge, the shields being electrically isolated from one another by cuts located near the eight corners of the patterned ground plane, the slots being formed along the conductor on the plurality of shields, including at the corners of the conductor, and the patterned ground plane allowing a magnetic field to pass through the patterned ground plane to the substrate, and wherein a plurality of interconnects couple the plurality of shields to a circuit ground located at the center of the patterned ground plane, each interconnect being coupled to the closed inner edge of a respective shield and to the circuit ground.
19. The apparatus of claim 18, wherein the means for forming the patterned ground plane comprises means for forming the patterned ground plane having a shape matched to the shape of the conductor.
20. The apparatus of claim 18, further comprising means for forming the plurality of slots for each shield perpendicular to the conductor.
21. An apparatus comprising: an inductor comprising a conductor formed on a first layer and a patterned ground plane formed on a second layer below the conductor, between the conductor and a substrate, wherein the patterned ground plane has an open central area and comprises a plurality of shields, each shield having a plurality of slots, the plurality of slots of each shield forming a comb-like pattern having slot openings on an outer edge of the shield and a common connection on a closed inner edge, the shields being electrically isolated from one another by cuts located near the eight corners of the patterned ground plane, the slots being formed along the conductor on the plurality of shields, including at the corners of the conductor, the patterned ground plane allowing a magnetic field to pass through the patterned ground plane to the substrate, and a plurality of interconnects coupling the plurality of shields to a circuit ground located at the center of the patterned ground plane, each interconnect being coupled to the closed inner edge of a respective shield and to the circuit ground; and an amplifier coupled to the inductor.
22. The apparatus of claim 21, wherein the inductor and the amplifier form an oscillator, and the inductor is part of a resonator tank circuit for the oscillator.
23. The apparatus of claim 21, wherein the inductor and the amplifier form a low noise amplifier (LNA), and the inductor is a degeneration inductor or a load inductor for the LNA.
24. The apparatus of claim 1, wherein the slots are formed even at the corners of the patterned ground plane.
Inductor with patterned ground plane

Background

I. FIELD

[0002] The present disclosure relates generally to electronics, and more particularly to inductors for integrated circuits (ICs) or printed circuit boards (PCBs).

II. BACKGROUND

With recent advances in IC process technology, it is possible to manufacture radio frequency ICs (RFICs) for various applications such as wireless communication, networking, and computing. These RFICs can include analog circuit blocks previously implemented with bulky discrete circuit components. By implementing analog circuit blocks on an RFIC, certain advantages can be realized, such as smaller size, lower cost, and improved reliability.

Many analog circuit blocks utilize inductors to perform desired functions and/or achieve desired performance. For example, filters, resonator tank circuits, and impedance matching networks can include inductors to obtain a desired circuit response. In some applications, such as resonator tank circuits for voltage controlled oscillators (VCOs), inductors with a high quality factor (Q) are desirable in order to obtain good performance for the VCO. However, obtaining high Q can be difficult due to various types of losses, as described below. This can be especially true at the high frequencies used by many wireless communication systems.

Inductors with patterned ground planes, which can achieve higher Q and good performance at high frequencies, are described herein. A patterned ground plane is a ground plane having a pattern of etched-out portions, in contrast to a solid ground plane without any etched-out portions.

In one design, the inductor includes a conductor formed on a first layer and a patterned ground plane formed on a second layer below the conductor. The patterned ground plane can have an open central area and a shape that matches the shape of the conductor. The patterned ground plane can include multiple shields.
In one design, the conductor has an octagonal shape with eight sides, and the patterned ground plane has eight shields for the eight sides of the conductor. Each shield has a plurality of slots perpendicular to the conductor. Dividing the patterned ground plane into separate shields and forming slots on each shield helps to prevent the flow of eddy currents on the patterned ground plane, which can improve the Q of the inductor. A plurality of interconnects couple the plurality of shields to circuit ground, which can be located at the center of the patterned ground plane. Inductors with patterned ground planes can be used for various circuit blocks such as a VCO or a low noise amplifier (LNA). Various aspects and features of the disclosure are described in further detail below.

FIG. 1 shows a schematic of a VCO. FIG. 2 shows a plan view of an inductor without any shielding. FIG. 3 shows a plan view of an inductor with a patterned ground plane. FIG. 4 shows a more detailed plan view of a portion of the patterned ground plane. FIG. 5 shows a side view of an inductor with a patterned ground plane. FIG. 6 shows the improvement in inductor Q with a patterned ground plane. FIG. 7 shows the use of a guard ring and a patterned ground plane for isolation. FIG. 8 shows plots of isolation with different isolation mechanisms. FIG. 9 shows a process for forming an inductor having a patterned ground plane. FIG. 10 shows a block diagram of a wireless device.

DETAILED DESCRIPTION

FIG. 1 shows a schematic of one design of VCO 100. In this design, VCO 100 includes an amplifier (AMP) 110 and a resonator tank circuit 120, which comprises an inductor 130 and a variable capacitor (varactor) 140. Amplifier 110 provides the signal gain needed for oscillation. Amplifier 110 and resonator tank circuit 120 together provide the 360° phase shift needed for oscillation. VCO 100 provides an oscillator signal (OSC) having a frequency of f_osc.
The oscillation frequency f_osc is mostly determined by the inductance of inductor 130 and the capacitance of varactor 140. All of the components of VCO 100, including inductor 130, can be fabricated on an RFIC to obtain various benefits, such as smaller size, lower cost, and improved reliability.

FIG. 2 shows a plan view of an on-chip inductor 200 that can be implemented on an RFIC. Inductor 200 can be used for inductor 130 in FIG. 1. Inductor 200 includes a single-turn conductor 210 having an octagonal shape. In general, an inductor can have any number of turns and any shape, for example, square, rectangular, hexagonal, octagonal, or circular. The octagonal shape can provide good Q and ease of implementation. The width of conductor 210, the number of turns, and the spacing between turns may be selected based on various factors, such as the desired inductance and Q for inductor 200. Conductor 210 can be fabricated using various types of conductive material, such as (i) a low-loss metal (e.g., copper) on a metal layer, (ii) a lossy metal (e.g., aluminum) on a layer under the metal layer, or (iii) any other material. Higher Q can be achieved for inductor 200 if conductor 210 is fabricated using a low-loss metal. A smaller-sized inductor 200 can be fabricated on a lossy metal layer because different IC design rules can be applied.

On-chip inductor 200 may have a lower Q due to silicon substrate losses, which may be due to the resistance of the silicon. Silicon substrate losses can include magnetic losses and electrical losses. Magnetic losses can be attributed to eddy currents induced in the silicon. Electrical losses can be attributed to the resistance of the silicon. Silicon substrate losses can be worse at high frequencies and can be a major Q-limiting contributor in VCOs operating in the 4 to 12 GHz range. On-chip inductor 200 may also have a relatively large size and may be more vulnerable to substrate noise.
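Returning briefly to the tank resonance noted above: for an ideal LC tank, f_osc = 1/(2π·sqrt(L·C)). The sketch below illustrates this relation numerically; the component values are hypothetical examples chosen to land in the 4 to 12 GHz range discussed here, not values from the disclosure.

```python
import math

def lc_resonant_frequency(inductance_h: float, capacitance_f: float) -> float:
    """Ideal LC tank resonant frequency: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical on-chip values (illustrative only): a 1 nH tank inductor
# with the varactor tuned to 0.5 pF.
f_osc = lc_resonant_frequency(1e-9, 0.5e-12)
print(f"{f_osc / 1e9:.2f} GHz")  # about 7.12 GHz, within the 4-12 GHz range
```

Sweeping the varactor capacitance in such a model shows why the varactor, rather than the inductor, is used for fine frequency tuning of the VCO.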
Noise on the substrate can couple to conductor 210 and degrade the quality of the signal in the conductor. A guard ring can be formed around conductor 210 to reduce substrate coupling. However, a guard ring may not be able to provide sufficient substrate isolation.

A solid ground plane can be formed under conductor 210 to reduce silicon substrate losses and improve substrate isolation. The solid ground plane can reduce electrical losses by terminating the electric field from conductor 210 at the ground plane instead of the substrate. A solid ground plane can also improve substrate isolation and reduce substrate noise coupling. However, if a solid ground plane is formed using a low-loss metal, then the magnetic field from conductor 210 can be blocked, which increases the magnetic loss and may adversely affect the performance of inductor 200. Conversely, if a solid ground plane is formed using a lossy material, such as polysilicon, then the magnetic field can more easily pass through the solid ground plane, which can reduce the magnetic loss. However, a lossy ground plane may not be effective in preventing the electric field from terminating at the substrate.

FIG. 3 shows a plan view of one design of an inductor 300 having a patterned ground plane 320. Inductor 300 can be used for inductor 130 in FIG. 1. In this design, inductor 300 has an octagonal shape and includes a single turn of conductor 310, shown by the thick dashed line in FIG. 3. The size of the octagonal shape and the width of conductor 310 can be selected to obtain the desired inductance and Q for inductor 300. Patterned ground plane 320 is designed to achieve the following functions: terminate the electric field from conductor 310, and allow the magnetic field to pass through patterned ground plane 320. Patterned ground plane 320 includes various features to accomplish these functions. In the design shown in FIG.
3, patterned ground plane 320 is formed substantially under conductor 310 and can thus shield the electric field from traveling to the substrate. This reduces the electrical loss and can also provide isolation from substrate noise. Patterned ground plane 320 can have a slightly larger shape than conductor 310 to capture fringing fields at the edge of the conductor. Patterned ground plane 320 does not cover the central area of conductor 310. This allows the magnetic field to pass freely through the central area and hence reduces the magnetic losses. The magnetic field for inductor 300 can be similar to the magnetic field for inductor 200 in FIG. 2 without any ground plane, so a good magnetic field distribution can be maintained even with patterned ground plane 320. As a result, the inductance and series resistance of inductor 300 may change little, even with the presence of patterned ground plane 320. A ground plane is not required at the central area to terminate the electric field, which propagates substantially from conductor 310 down to the patterned ground plane below.

The magnetic field from the current flowing on conductor 310 can induce eddy currents on patterned ground plane 320. Eddy currents on patterned ground plane 320 can reduce the inductance and reduce the Q of inductor 300. It is therefore desirable to prevent or reduce the flow of eddy currents on patterned ground plane 320. In the design shown in FIG. 3, patterned ground plane 320 is divided into eight separate shields 330a-330h for the eight sides of conductor 310. The eight shields 330a-330h are electrically isolated from one another by eight cuts located near the eight corners of patterned ground plane 320.
Dividing patterned ground plane 320 into separate shields 330a through 330h helps to prevent the flow of eddy currents through the patterned ground plane. The eight shields 330a-330h are coupled to a central ground point 340 located at the center of patterned ground plane 320 via eight interconnects 332a-332h, respectively. Interconnects 332a-332h can be formed on the same layer as shields 330a-330h and using the same material.

In one design, patterned ground plane 320 is formed using a low-loss metal in a metal layer below conductor 310. A low-loss metal can have higher conductivity and lower sheet resistance as compared to polysilicon. A low-loss patterned ground plane can be more effective at terminating the electric field from conductor 310, which can then improve the Q and substrate isolation for inductor 300. However, a low-loss metal can make the magnetic field more difficult to pass.

In the design shown in FIG. 3, slots 350 are cut along patterned ground plane 320 and perform several functions. First, slots 350 allow the magnetic field to pass, which can reduce the magnetic losses. Second, slots 350 help cut the flow of eddy currents in each shield 330, which can improve Q. If no slots were present, then the magnetic field from the current on conductor 310 would induce eddy currents on patterned ground plane 320 in the direction opposite to the current on conductor 310. Slots 350 are perpendicular to the normal flow of eddy currents in patterned ground plane 320 and can therefore cut the flow of eddy currents. The size and spacing of slots 350 can be selected to (i) allow the magnetic field to pass and (ii) terminate most of the electric field on patterned ground plane 320. In one design, slots 350 can be as narrow as 0.1 micrometer (μm), although other sizes can also be used. In the design shown in FIG.
3, slots 350 are cut from the outer edges of patterned ground plane 320 toward the inner edges but stop short of the inner edges. Slots 350 are also perpendicular to conductor 310 (i.e., at a 90-degree angle with respect to the conductor) along all eight sides of the conductor. The perpendicular direction of slots 350 helps to cut the flow of eddy currents. Interconnects 332a-332h are coupled to the inner edges of shields 330a-330h, respectively, of patterned ground plane 320. Computer simulation shows that slots cut in this manner, with interconnects 332 connected to the inner edges of shields 330, can provide good performance. In the design shown in FIG. 3, patterned ground plane 320 is symmetrical between the left and right halves and also between the upper and lower halves. This symmetry can allow for cancellation of eddy currents in interconnects 332a-332h.

FIG. 4 shows a plan view of a portion of patterned ground plane 320 in more detail. Shields 330b and 330c of patterned ground plane 320 are electrically isolated by cuts 334 to avoid eddy currents. Slots 350 are formed on each shield 330 and are perpendicular to conductor 310 over the shields. Slots 350 can be present even at the corners to prevent eddy current flow. The slots 350 for each shield 330 form a comb-like pattern with slot openings on the outer edge of the shield and a common connection on the inner edge. Each shield 330 is coupled to the central ground point via a respective interconnect 332.

FIG. 5 shows a side view of a portion of inductor 300. Conductor 310 can be formed on one layer of an RFIC. Shields 330 of patterned ground plane 320 and interconnects 332 can be formed on a second layer of the RFIC. The second layer can be any layer of the RFIC on substrate 360. An electric field (E-field) can run from conductor 310 to shields 330. Most of the electric field can be terminated by the shields 330.
Shields 330 also provide substrate isolation and prevent noise on the substrate from coupling to conductor 310. The magnetic field (H-field) may pass freely through the center of patterned ground plane 320 via the open central area. The magnetic field may also pass through certain portions of shields 330 via slots 350 (not shown in FIG. 5). Slots 350 are perpendicular to conductor 310 and parallel to the magnetic field.

For clarity, various details of the patterned ground plane have been described above for a single-turn inductor having an octagonal shape. In general, the conductor for an inductor can have any shape and any number of turns. The patterned ground plane for the conductor can have a shape that matches the shape of the conductor. The patterned ground plane can be split into any number of separate shields to cut the flow of eddy currents through the patterned ground plane. The patterned ground plane can be split such that there is one shield for each side of the conductor, as shown in FIG. 3. The patterned ground plane can also be split into more or fewer shields. The shields can be coupled via interconnects to a common ground point, as shown in FIG. 3. Alternatively, the shields may be coupled to circuit ground in other ways; for example, each shield may be directly coupled to circuit ground. Each shield can have any number of slots, and these slots can have any suitable size and spacing. The slots can be perpendicular to the conductor and can be formed in a comb-like pattern to cut the flow of eddy currents. The slots can also have other patterns.

A patterned ground plane can be formed with a low-loss metal to provide good termination for the electric field and to improve substrate isolation. A patterned ground plane can also be formed using lossy materials for other considerations.

FIG. 6 illustrates the improvement in inductor Q achieved using the patterned ground plane described herein.
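The Q improvement can be understood with a simple series-RL inductor model, where Q = 2πfL/R and reducing the effective series (substrate) loss resistance raises Q. The sketch below is an illustrative model only; the inductance and resistance values are assumptions chosen for the 4 to 12 GHz range, not the disclosure's measured data.

```python
import math

def inductor_q(frequency_hz: float, inductance_h: float, series_r_ohm: float) -> float:
    """Q of a simple series-RL inductor model: Q = 2*pi*f*L / R."""
    return 2.0 * math.pi * frequency_hz * inductance_h / series_r_ohm

# Hypothetical 1 nH inductor at 8 GHz; lowering the effective series
# resistance (e.g., by suppressing substrate losses) raises Q.
q_lossy = inductor_q(8e9, 1e-9, 1.8)     # higher effective loss resistance
q_shielded = inductor_q(8e9, 1e-9, 1.3)  # lower effective loss resistance
```

In this toy model, dropping the effective loss resistance from 1.8 Ω to 1.3 Ω moves Q from the high twenties to the high thirties, the same order of improvement the patterned ground plane is described as providing.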
Plot 610 shows the Q of inductor 200 in FIG. 2, which does not have a patterned ground plane. Inductor 200 has a maximum Q of about 28 at a frequency of about 6 GHz. Plot 620 shows the Q of inductor 300 in FIG. 3, which has patterned ground plane 320. Inductor 300 has a maximum Q of about 38 at a frequency of about 8 GHz. As shown in FIG. 6, the Q of inductor 300 is significantly better than the Q of inductor 200 over the 4 to 12 GHz frequency range, which covers the VCO operating frequencies for a number of communication systems.

An inductor may be fabricated in close proximity to circuitry that may generate interference to the inductor. The interference may couple to the inductor through the substrate and/or through other mechanisms. It may be desirable to reduce the amount of interference from the circuitry. FIG. 7 illustrates some mechanisms for achieving isolation from neighboring interference. Inductor 700 may be formed near circuit 714, which may generate interference. Inductor 700 can have a patterned ground plane implemented as described above. The patterned ground plane can provide isolation from interference coupled via the substrate. Additionally, to improve isolation, a guard ring 712 can be formed around inductor 700 and can be coupled to circuit ground. Guard ring 712 can provide isolation from interference coupled through the silicon substrate. The distance D2 between guard ring 712 and inductor 700 and the distance D1 between guard ring 712 and circuit 714 can be selected based on the desired amount of isolation.

FIG. 8 shows plots of the isolation between inductor 700 and circuit 714 in FIG. 7 with different isolation mechanisms. Plot 810 shows the isolation between inductor 700 and circuit 714 with only guard ring 712 but no patterned ground plane. Plot 812 shows the isolation between inductor 700 and circuit 714 with only a patterned ground plane but no guard ring.
Plots 810 and 812 show that the patterned ground plane can provide better isolation than the guard ring. This is because the patterned ground plane may be much closer (e.g., a few microns) to inductor 700 and can block the inductor's view of the silicon substrate. The guard ring may be farther from the inductor (e.g., several tens of microns) and may only be able to collect a portion of the noise already coupled onto the substrate. Plot 814 shows the isolation between inductor 700 and circuit 714 with both a patterned ground plane and guard ring 712. Plots 810, 812, and 814 show that the combination of the patterned ground plane and the guard ring can provide more isolation than either alone. The guard ring can therefore be used with the inductor when more isolation is desired for sensitive circuits such as, for example, VCOs.

The patterned ground plane described herein can be used for single-ended inductors and also for differential inductors. The patterned ground plane can also be used for transformers, baluns used for differential-to-single-ended conversion, and the like. For example, a transformer or balun can have N turns that can be numbered sequentially from 1 to N. The odd-numbered turns (e.g., turns 1 and 3) can be used for the primary conductor, and the even-numbered turns (e.g., turns 2 and 4) can be used for the secondary conductor.

FIG. 9 shows one design of a process 900 for forming a circuit component having a patterned ground plane. A conductor (e.g., for an inductor, transformer, or balun) can be formed on a first layer of an IC or printed circuit board (block 912). The conductor can have any shape and number of turns. A patterned ground plane can be formed on a second layer below the conductor, for example, using a low-loss metal (block 914). The patterned ground plane can have an open central area and a shape that matches the shape of the conductor.
The patterned ground plane can comprise a plurality of shields. In one design, the conductor has an octagonal shape with eight sides, and the patterned ground plane has eight shields for eight sides of the conductor. The patterned ground plane can be symmetrical about the center of the conductor.Each shield can have a plurality of slots formed on the shield. The slots can be perpendicular to the conductor, and can be formed along the conductor as well as at the corners of the conductor. The slots for each shield can run from the outer edge of the shield towards the inner edge and can stop prior to the inner edge.A plurality of interconnects may be formed to couple the plurality of shields to circuit ground, which may be located at the center of the patterned ground plane (block 916). Each interconnect can be coupled between the inner edge of the respective shield and circuit ground. If more separation is desired, a guard ring may also be formed around the conductor (block 918).Inductors with patterned ground planes as described herein can be used for various systems and applications such as communication, networking, computing, and the like. The use of inductors with patterned ground planes in wireless communication devices is described below.FIG. 10 shows a block diagram of a wireless device 1000 that can be used for wireless communication. The wireless device 1000 can be a cellular telephone, a personal digital assistant (PDA), a terminal, a handset, a wireless modem, a laptop computer, and the like. The wireless device 1000 can provide bi-directional communication via the transmit and receive paths.On the transmit path, digital processor 1010 can process the data to be transmitted and provide a stream of chips to transceiver unit 1020. Within transceiver unit 1020, one or more digital-to-analog converters (DACs) 1022 may convert the stream of chips into one or more analog signals. 
The analog signal(s) can be filtered by a filter 1024, amplified by a variable gain amplifier (VGA) 1026, and frequency upconverted from baseband to RF by a mixer 1028 to produce an upconverted signal. The frequency upconversion may be performed based on a transmit local oscillator (LO) signal from a VCO 1030. The upconverted signal can be filtered by a filter 1032, amplified by a power amplifier (PA) 1034, routed through a duplexer (D) 1036, and transmitted via an antenna 1040.

On the receive path, an RF signal is received by antenna 1040, routed through duplexer 1036, amplified by an LNA 1044, filtered by a filter 1046, and frequency downconverted from RF to baseband by a mixer 1048 using a receive LO signal from a VCO 1050. The downconverted signal from mixer 1048 can be buffered by a buffer (BUF) 1052, filtered by a filter 1054, and digitized by one or more analog-to-digital converters (ADCs) 1056 to obtain one or more streams of samples. The sample stream(s) can be provided to digital processor 1010 for processing.

FIG. 10 shows a specific transceiver design. In general, the signal conditioning for each path can be performed using one or more stages of amplifiers, filters, and mixers. FIG. 10 shows several circuit blocks that can be used for signal conditioning on the transmit and receive paths. In the design shown in FIG. 10, transceiver unit 1020 includes two VCOs 1030 and 1050 for the transmit and receive paths, respectively. Digital processor 1010 includes a high-speed VCO 1012 that can generate clocks for the various units within processor 1010. VCOs 1012, 1030, and 1050 can be implemented with various VCO designs, such as the design shown in FIG. 1. Each VCO can be designed to operate at a particular frequency or range of frequencies.
For example, VCOs 1030 and 1050 can be designed to operate at one or more of the following frequency bands, or at integer multiples (e.g., 1x, 2x, or 4x) thereof: the 1850 to 1990 MHz Personal Communication System (PCS) band, the 824 to 894 MHz cellular band, the 1710 to 1880 MHz Digital Cellular System (DCS) band, the 890 to 960 MHz GSM® 900 band, the 1920 to 2170 MHz International Mobile Telecommunications-2000 (IMT-2000) band, and the 1574.4 to 1576.4 MHz Global Positioning System (GPS) band. A phase-locked loop (PLL) 1060 can receive control information from digital processor 1010 and provide controls for VCOs 1030 and 1050 to generate the appropriate transmit and receive LO signals, respectively.

An inductor with a patterned ground plane (shown as "Ind" in FIG. 10) can be used for various circuit blocks in the wireless device 1000. For example, an inductor with a patterned ground plane can be used in a resonator tank circuit for VCO 1012, 1030, and/or 1050. An inductor with a patterned ground plane can also be used as a load inductor and/or a degeneration inductor for LNA 1044. An inductor with a patterned ground plane can also be used for any of the filters in transceiver unit 1020. Inductors with patterned ground planes may also be used elsewhere, such as before and/or after mixer 1028 or 1048, or after a driver amplifier (not shown in FIG. 10) prior to PA 1034.

Inductors with patterned ground planes as described herein can be implemented on an IC, an analog IC, an RFIC, a mixed-signal IC, an application specific integrated circuit (ASIC), a printed circuit board (PCB), an electronic device, and the like. Inductors having a patterned ground plane may be fabricated in complementary metal oxide semiconductor (CMOS), N-channel MOS (NMOS), P-channel MOS (PMOS), bipolar junction transistor (BJT), bipolar-CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), etc., and
can also be manufactured using various IC process technologies. An apparatus implementing an inductor having a patterned ground plane as described herein may be a stand-alone device or may be part of a larger device. The device may be (i) a stand-alone IC, (ii) a set of one or more ICs that may include a memory IC for storing data and/or instructions, (iii) an RFIC such as an RF receiver (RFR) or an RF transmitter/receiver (RTR), (iv) an ASIC such as a mobile station modem (MSM), (v) a module that can be embedded within other devices, (vi) a receiver, cellular phone, wireless device, handset, or mobile unit, or (vii) some other device.

The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Hereinafter, the invention at the time of filing of the present application is additionally stated.

[Appendix 1] An apparatus comprising: a conductor formed on a first layer; and a patterned ground plane formed on a second layer under the conductor; the patterned ground plane having an open central area and comprising a plurality of shields, each shield having a plurality of slots.
[Appendix 2] The apparatus according to Appendix 1, wherein the patterned ground plane has a shape matched to the shape of the conductor.
[Appendix 3] The apparatus according to Appendix 1, wherein the patterned ground plane is formed using a low-loss metal.
[Appendix 4] The apparatus according to Appendix 1, wherein the patterned ground plane is symmetrical about the center of the conductor.
[Appendix 5]
The apparatus according to Appendix 1, wherein the plurality of slots for each shield are perpendicular to the conductor.
[Appendix 6] The apparatus according to Appendix 1, wherein the plurality of slots for each shield run from the outer edge of the shield toward the inner edge and stop prior to the inner edge.
[Appendix 7] The apparatus according to Appendix 1, wherein slots are formed on the plurality of shields along the conductor and also at corners of the conductor.
[Appendix 8] The apparatus according to Appendix 1, further comprising: a plurality of interconnects coupling the plurality of shields to a circuit ground located at the center of the patterned ground plane.
[Appendix 9] The apparatus according to Appendix 8, wherein each interconnect is coupled between the inner edge of the respective shield and the circuit ground.
[Appendix 10] The apparatus according to Appendix 1, wherein the conductor has an octagonal shape with eight sides, and the patterned ground plane comprises eight shields for the eight sides of the conductor.
[Appendix 11] The apparatus according to Appendix 1, wherein the conductor comprises a single turn.
[Appendix 12] The apparatus according to Appendix 1, further comprising: a guard ring formed around the conductor.
[Appendix 13] The apparatus according to Appendix 1, wherein the conductor and the patterned ground plane are for an inductor.
[Appendix 14] The apparatus according to Appendix 1, wherein the conductor and the patterned ground plane are for a transformer or a balun.
[Appendix 15] An integrated circuit comprising: a conductor formed on a first layer; and a patterned ground plane formed on a second layer under the conductor; the patterned ground plane having an open central area and comprising a plurality of shields, each shield having a plurality of slots.
[Appendix 16] The integrated circuit according to Appendix 15, wherein the patterned ground plane has a shape matched to the shape of the conductor.
[Appendix 17] The integrated circuit according to Appendix 15, wherein the plurality of slots for each shield are perpendicular to the conductor.
[Appendix 18] The integrated circuit according to Appendix 15, further comprising: a plurality of interconnects coupling the plurality of shields to a circuit ground located at the center of the patterned ground plane.
[Appendix 19] A method comprising: forming a conductor on a first layer; and forming a patterned ground plane on a second layer under the conductor; the patterned ground plane having an open central area and comprising a plurality of shields, each shield having a plurality of slots.
[Appendix 20] The method according to Appendix 19, wherein forming the patterned ground plane comprises forming the patterned ground plane having a shape matched to the shape of the conductor.
[Appendix 21] The method according to Appendix 19, further comprising: forming the plurality of slots for each shield perpendicular to the conductor.
[Appendix 22] The method according to Appendix 19, further comprising: forming a plurality of interconnects for coupling the plurality of shields to a circuit ground located at the center of the patterned ground plane.
[Appendix 23] An apparatus comprising: means for forming a conductor on a first layer; and means for forming a patterned ground plane on a second layer under the conductor; the patterned ground plane having an open central area and comprising a plurality of shields, each shield having a plurality of slots.
[Appendix 24] The apparatus according to Appendix 23, wherein the means for forming the patterned ground plane comprises means for forming the patterned ground plane having a shape matched to the shape of the conductor.
[Appendix 25]
The apparatus according to Appendix 23, further comprising: means for forming the plurality of slots for each shield perpendicular to the conductor.
[Appendix 26] The apparatus according to Appendix 23, further comprising: means for forming a plurality of interconnects for coupling the plurality of shields to a circuit ground located at the center of the patterned ground plane.
[Appendix 27] An apparatus comprising: an inductor comprising a conductor formed on a first layer and a patterned ground plane formed on a second layer under the conductor, the patterned ground plane having an open central area and comprising a plurality of shields, each having a plurality of slots; and an amplifier coupled to the inductor.
[Appendix 28] The apparatus according to Appendix 27, wherein the inductor and the amplifier form an oscillator, and the inductor is part of a resonator tank circuit for the oscillator.
[Appendix 29] The apparatus according to Appendix 27, wherein the inductor and the amplifier form a low noise amplifier (LNA), and the inductor is a degeneration inductor or a load inductor for the LNA. |
An instruction can be received at a sequencer from a controller. The sequencer can be in a package including the sequencer and one or more memory components. The sequencer is operatively coupled to a controller that is separate from the package. A processing device of the sequencer can perform an operation based on the instruction on at least one of the one or more memory components in the package. |
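The flow in this abstract — a sequencer packaged with memory components receiving an instruction from an off-package controller and performing the corresponding operation on an in-package component — can be sketched as below. All class and field names here are illustrative assumptions, not the disclosure's interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryComponent:
    """Stand-in for an in-package memory die; stores values by address."""
    cells: dict = field(default_factory=dict)

    def write(self, address: int, value: bytes) -> None:
        self.cells[address] = value

    def read(self, address: int) -> bytes:
        return self.cells[address]

class Sequencer:
    """Receives instructions from an off-package controller and applies
    them to the memory components that share its package."""

    def __init__(self, components):
        self.components = components

    def execute(self, instruction: dict):
        # Route the instruction to the targeted in-package component.
        target = self.components[instruction["component"]]
        if instruction["op"] == "write":
            target.write(instruction["address"], instruction["data"])
            return None
        return target.read(instruction["address"])

# Controller-side usage: issue a write, then read the value back.
seq = Sequencer([MemoryComponent()])
seq.execute({"op": "write", "component": 0, "address": 0x10, "data": b"\x2a"})
print(seq.execute({"op": "read", "component": 0, "address": 0x10}))  # b'*'
```

A real sequencer would additionally enforce per-component timing requirements and could reorder queued operations subject to data-coherence rules, as the claims below describe.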
1. A method comprising:
receiving an instruction at a sequencer from a controller, wherein the sequencer is in a package that includes the sequencer and one or more memory components, and wherein the sequencer is operatively coupled to the controller, which is separate from the package; and
performing, by a processing device of the sequencer, an operation based on the instruction on at least one of the one or more memory components in the package.
2. The method of claim 1, wherein the operation includes one or more of the following: interfacing with the one or more memory components via a protocol; enforcing operating timing requirements for the one or more memory components; or reordering operations based on rules related to data coherence.
3. The method of claim 1, wherein the sequencer and the controller are operatively coupled via a serializer/deserializer (SerDes) interface.
4. The method of claim 1, wherein the one or more memory components include a first memory type, and the sequencer is to interface with the first memory type via a protocol based on the first memory type.
5. The method of claim 4, further comprising: receiving a second instruction at a second sequencer, wherein the second sequencer is positioned in another package, the second sequencer is operatively coupled to one or more second memory components in the other package, and the second sequencer is operatively coupled to the controller.
6. The method of claim 5, wherein the one or more second memory components include a second memory type different from the first memory type, and the second sequencer is to interface with the second memory type via another protocol based on the second memory type.
7. The method of claim 1, wherein a trace between the sequencer and the one or more memory components in the package is shorter than a trace between the sequencer and the controller.
8. The method of claim 1, wherein the sequencer corresponds to independent silicon, the one or more memory components correspond to
independent die, and the independent silicon and the independent die are included in the package in.9.A system including:One or more memory components; andThe sequencer component, wherein the sequencer component is in a package that includes the sequencer component and the one or more memory components, and wherein the sequencer component is operatively coupled to the package With a separate controller, the sequencer component performs the following operations:Receive instructions from the controller; andAn operation is performed on at least one of the one or more memory components in the package based on the instruction.10.The system of claim 9, wherein the operation includes one or more of the following:Interface with the one or more memory components via the first protocol;Implement the operating timing requirements for the one or more memory components; orInterface with the controller via a second protocol.11.The system of claim 9, wherein the sequencer assembly and the controller are operatively coupled via a serializer/deserializer SerDes interface.12.The system according to claim 9, wherein the one or more memory components include a first memory type, and the sequencer component will communicate with the first memory type via a protocol based on the first memory type. 
Pick up.13.The system of claim 12, wherein the system further comprises:A second sequencer assembly, wherein the second sequencer assembly is positioned in another package, and the second sequencer assembly is operably coupled to one or more second A memory assembly and the second sequencer assembly is operatively coupled to the controller, the second sequencer assembly performs the following operations:Receiving a second instruction from the controller; andA second operation is performed on at least one of the one or more second memory components in the another package based on the second instruction.14.The system according to claim 13, wherein the one or more second memory components include a second memory type different from the first memory type, and the second sequencer component will be based on the first The protocol of the second memory type interfaces with the second memory type.15.The system of claim 9, wherein the traces between the sequencer component and the one or more memory components in the package are shorter than between the sequencer component and the controller Of the trace.16.The system of claim 9, wherein the sequencer component corresponds to independent silicon, the one or more memory components correspond to independent die, and the independent silicon and the independent die are included in the In the package.17.A system including:Memory components; andThe sequencer assembly is operatively coupled with the storage assembly to perform the following operations:Receiving instructions from a controller outside the system;Determine the operation performed on the memory component based on the instruction; andThe operation is performed on the memory component.18.The system of claim 17, wherein the instruction includes a codeword, and the operation includes dividing the codeword into parts and issuing a command to store the part at the memory component.19.18. 
The system of claim 17, wherein the sequencer component further determines the timing of when to perform the operation based on the timing requirements for the memory type of the memory component.20.The system according to claim 18, wherein the sequencer component further performs the following operations:Receiving a second instruction from the controller outside the system;Determining a second operation performed on the memory component based on the second instruction;It is determined based on the rule that the second operation will be performed before the operation; andThe second operation is performed on the memory component before the operation. |
Memory subsystem containing a sequencer in a package separate from the controller

Technical field

Embodiments of the present disclosure generally relate to memory subsystems and, more specifically, to memory subsystems that include an in-package sequencer separate from the controller.

Background

The memory subsystem may be a storage system, such as a solid-state drive (SSD) or a hard disk drive (HDD). The memory subsystem may be a memory module, such as a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile dual in-line memory module (NVDIMM). The memory subsystem may include one or more memory components that store data. The memory components may be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize the memory subsystem to store data at, and retrieve data from, the memory components.

Brief description of the drawings

The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the present disclosure.

Figure 1 illustrates an example computing environment including a memory subsystem according to some embodiments of the present disclosure.

Figure 2 illustrates an example package containing multiple sequencers operably coupled to different memory components having different memory types according to some embodiments of the present disclosure.

Figure 3 is a flowchart of an example method for executing instructions according to some embodiments of the present disclosure.

Figure 4 is a flowchart of an example method for operating on a memory component according to some embodiments of the present disclosure.

Figure 5 illustrates a controller including a reduced number of pins and a reduced external size according to some embodiments of the present disclosure.

Figure 6 is a flowchart of an example method for determining configuration parameters to be used for error correction code operations according to some embodiments of the present disclosure.

Figure 7 is a flowchart of an example method for determining configuration parameters to be used for memory management operations according to some embodiments of the present disclosure.

Figure 8 is a flowchart of an example method for determining configuration parameters to be used for memory mapping operations according to some embodiments of the present disclosure.

Figure 9 is a flowchart of an example method for determining configuration parameters for operation of a sequencer and sending the configuration parameters to the sequencer according to an embodiment of the present disclosure.

Figure 10 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

Detailed description

An aspect of the present disclosure is directed to a memory subsystem that includes a sequencer in a package separate from the controller. The memory subsystem is also referred to hereinafter as a "memory device". An example of a memory subsystem is a storage device coupled to a central processing unit (CPU) via a peripheral interconnect (e.g., an input/output bus, a storage area network). Examples of storage devices include solid-state drives (SSDs), flash drives, universal serial bus (USB) flash drives, and hard disk drives (HDDs). Another example of a memory subsystem is a memory module coupled to the CPU via a memory bus. Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), non-volatile dual in-line memory modules (NVDIMMs), and the like. In some embodiments, the memory subsystem may be a hybrid memory/storage subsystem. Generally speaking, a host system can utilize a memory subsystem that includes one or more memory components.
The host system can provide data for storage at the memory subsystem and can request data to be retrieved from the memory subsystem.

The memory subsystem may include multiple memory components that can store data from the host system. Each memory component can contain different types of media. Examples of media include, but are not limited to, cross-point arrays of non-volatile memory and flash-based memory such as single-level cell (SLC) memory, triple-level cell (TLC) memory, and quad-level cell (QLC) memory. The characteristics of different types of media can differ from one media type to another. One example of a characteristic associated with a memory component is data density, which corresponds to the amount of data (for example, bits of data) that can be stored per memory cell of the memory component. Using the example of flash-based memory, a quad-level cell (QLC) can store four bits of data, while a single-level cell (SLC) can store one bit of data. Another example of a characteristic of a memory component is access speed, which corresponds to the amount of time the memory component takes to access data stored at the memory component.

The memory subsystem may also include a controller operably coupled to the memory components. The controller can operate as a "bridge" between the host system and the memory components of the memory subsystem for data transmission and/or management. In some cases, the controller and the associated memory components may be manufactured by different vendors, and each of the controller and/or memory components may have a corresponding package. To increase the capacity of the memory subsystem, memory components can be added to the memory subsystem, and the controller must then interface with multiple memory components. To interface with the memory components, in conventional systems, the controller contains a large number of pins.
Including a large number of pins increases the package size of the controller, which can in turn increase the form factor of the system.

In some conventional systems, the controller interfaces with the host system using a serializer/deserializer (SerDes) connection (for example, Serial Advanced Technology Attachment (SATA), Universal Serial Bus (USB), Peripheral Component Interconnect Express (PCIe), Universal Flash Storage (UFS), etc.) to minimize the pin count. A conventional controller may include a sequencer component that interfaces with the memory component and instructs the memory component using a protocol and timing requirements specific to the memory type of the memory component (e.g., read/write latency, etc.). The controller can interface with the memory components via a parallel interface using double data rate (DDR) signaling to obtain a specific bandwidth and capacity. Increasing the number of memory components directly interfaced with the controller can use more space and cause difficulties when routing the parallel interfaces. As a result, the wiring paths (e.g., traces) between the controller and the memory components can be long, compromising signal integrity. In addition, longer wiring paths to the memory components via the parallel interface present a larger electrical load, consuming undesirable amounts of power.

Aspects of the present disclosure address the above and other deficiencies by separating the sequencer component from the controller and including the sequencer component with one or more memory components in a separate package. Sequencer components can be manufactured on independent silicon, memory components can be manufactured on independent dies, and the independent silicon and independent dies can be contained in the same package. The package may refer to a housing that supports connection of the package to the electrical contacts of an application board and protects against physical damage and corrosion.
The application board may refer to the printed circuit board on which the controller, package, and/or memory components reside. Each sequencer component operates with a certain type of memory component (for example, cross-point array, NAND flash, etc.) and can operate with multiple memory components of that memory type. The sequencer component can interface with the memory components via a protocol specific to the memory type. Each package may include multiple sequencer components that interface with corresponding memory components of different types. In addition, the memory subsystem may include multiple packages that each include one or more sequencer components interfacing with one or more memory components.

The sequencer component can interface with the controller via a SerDes connection that provides higher bandwidth than a parallel interface. In addition, a SerDes connection uses fewer pins than a parallel connection. Therefore, the disclosed techniques can be used to reduce the pin count of the controller while still accommodating the same number of memory components, or more, in the packages coupled to the controller. Reducing the pin count of the controller can result in a reduced physical size for a memory subsystem that has the same capacity (e.g., the same number of memory components) as previous conventional systems that include more pins.

In addition, signal integrity can be improved because the distance between the sequencer component and the memory components in an independent package is shorter than the corresponding distance in conventional systems, in which the sequencer component is inside the controller.
That is, the package is smaller than the application board, and therefore the traces between the sequencer component and the memory components within the package are shorter than in conventional systems in which the traces run across the application board. Shorter traces can improve signal integrity, reduce the load presented by the package, and consume less power than conventional systems with longer wiring paths.

In some embodiments, the sequencer component may try to maximize the interface bandwidth between the memory components and the sequencer component by enforcing the timing requirements of the memory type of the memory components. Timing requirements may involve the latency of read/write operations for the memory type. The sequencer component can time when it issues a read/write command based on the latency of the type of memory component. In addition, the sequencer component can reorder commands based on certain rules related to the commands and the addresses involved in the commands. That is, the sequencer component can reorder read/write requests while honoring the rules to ensure data coherence. For example, if there is a write request followed by a read request to the same address, the rules may indicate that the read request cannot move ahead of the write request, because the read request would otherwise return stale data. Therefore, the sequencer component can reorder operations based on the bandwidth of the memory components and enforce the timing of when operations are transferred to the memory components.

In some embodiments, a controller lacking a sequencer component can perform one or more operations related to memory management, memory mapping, and/or error correction. The data received by the controller and/or the results of the operations may be stored in a memory buffer of the controller. The controller can transmit the data and/or results to the sequencer component via several output pins.
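As an illustration of the coherence rule described above, the following sketch (not from the disclosure; the class names are hypothetical, and the priority field is an invented stand-in for whatever bandwidth heuristic the sequencer applies) lets an urgent command move ahead of earlier commands only when no hazard on the same address would be violated:

```python
from dataclasses import dataclass

@dataclass
class Command:
    op: str        # "read" or "write"
    addr: int
    priority: int  # lower value = more urgent (hypothetical heuristic)

def reorder(commands):
    """Reorder commands for bandwidth, preserving data coherence: a command
    may not pass an earlier command to the same address if either is a
    write (read-after-write, write-after-read, write-after-write)."""
    result = []
    for cmd in commands:
        i = len(result)
        # Walk backward past less urgent commands we are allowed to pass.
        while i > 0:
            prev = result[i - 1]
            conflict = prev.addr == cmd.addr and "write" in (prev.op, cmd.op)
            if conflict or prev.priority <= cmd.priority:
                break
            i -= 1
        result.insert(i, cmd)
    return result
```

In this toy ordering, an unrelated read can be hoisted ahead of a slow write, but a read of an address that was just written stays behind the write, exactly as the rule in the text requires.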
Each operation can be adapted, via the sequencer component, to the specific type of memory component contained in the package coupled to the controller.

The controller may determine one or more configuration parameters to be used for the different operations, and the one or more configuration parameters may be based on the memory type of the memory components associated with the controller and coupled to the sequencer component. The controller can determine the memory type of the memory components by receiving an indication of the memory type from the host system, by accessing a memory type previously stored in the local memory of the controller, or by querying the sequencer component for the memory type of the memory components coupled to the sequencer component. The controller can then operate based on configuration parameters specific to the memory type.

For example, a memory management operation may include performing wear leveling on the memory components in the package. Wear leveling may refer to alternating the memory components selected for read and/or write operations to ensure that each memory component wears evenly. The wear leveling scheme may be based on the type of memory component (for example, cross-point array, flash, etc.) because of the differing attributes of the memory types. Therefore, the controller can determine first configuration parameters of a first wear leveling scheme for a first memory component having a first memory type, and second configuration parameters of a second wear leveling scheme for a second memory component having a second memory type.

In another example, the operations related to error correction may include error correction code operations, which can be used to improve the reliability of data stored in the memory subsystem. An error correction code operation may refer to a technique for representing a sequence of data so that errors introduced into the data can be detected and corrected based on the other, remaining data.
The sequence of data can be referred to as a codeword. Types of error correction codes include block codes (for example, Hamming codes, Reed-Solomon codes, etc.). Generally, an encoder encodes the data to be written with additional data bits to form a codeword, and parts of the codeword may be distributed (e.g., partitioned) across the memory components of the memory subsystem. When the data is to be read, a decoder decodes the codeword by removing the extra data bits and providing the requested original data.

The configuration parameters used for the error correction code operation may include the error correction code parameters (e.g., encoding/decoding) for the memory type of the memory component. The controller may receive data from the host system and generate codewords for the data, based on the configuration parameters, by using an error correction code operation. Subsequently, the codewords can be sent to the sequencer component outside the controller, and the sequencer component can distribute the codewords according to the timing requirements and rules described above.

In another example, a memory mapping operation may include address translation. The host system can utilize an address space that is different from the actual physical address space of the memory components. Therefore, the controller can determine the configuration parameters of the memory mapping operation to be used for the memory type of the memory component. The controller may perform logical-to-physical address mapping based on the configuration parameters for the type of memory component involved in the operation. The controller can send the physical address in a command to the sequencer.

In another embodiment, the controller may determine configuration parameters for operations performed by the sequencer component and send the configuration parameters to the sequencer component.
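The logical-to-physical address translation in the memory mapping example above can be sketched as follows (an illustrative table only; the page size, first-touch allocation policy, and class name are assumptions, not details from the disclosure):

```python
class MemoryMapper:
    """Minimal logical-to-physical translation table, as a host address
    space differing from the physical address space requires."""
    def __init__(self, page_size=4096):
        self.page_size = page_size
        self.l2p = {}        # logical page number -> physical page number
        self.next_free = 0   # naive allocator: hand out physical pages in order

    def translate(self, logical_addr):
        """Map a logical byte address to a physical byte address,
        allocating a physical page on first touch."""
        lpage, offset = divmod(logical_addr, self.page_size)
        if lpage not in self.l2p:
            self.l2p[lpage] = self.next_free
            self.next_free += 1
        return self.l2p[lpage] * self.page_size + offset
```

The resulting physical address is what would be placed in the command sent to the sequencer; a production mapper would also handle remapping on rewrite, which is omitted here.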
The configuration parameters may include the timing requirements for the memory type of the memory components coupled to the sequencer component. As described above, the sequencer component can time when to issue a command (e.g., a read/write operation) to the memory component based on the timing requirements for the memory type of the memory component.

Figure 1 illustrates an example computing environment 100 including a memory subsystem 110 according to some embodiments of the present disclosure. The memory subsystem 110 may include media, such as the memory components 112. The memory components 112 may be volatile memory components, non-volatile memory components, or a combination of such components. In some embodiments, the memory subsystem is a storage system. An example of a storage system is an SSD. In some embodiments, the memory subsystem 110 is a hybrid memory/storage subsystem.

In some embodiments, the memory components 112 may be contained in separate corresponding packages 130A to 130N. As depicted, the memory components 112A(1) to 112N(1) are coupled to a first sequencer component 140A in a first package 130A, and the memory components 112A(2) to 112N(2) are coupled to another sequencer component 140N in another package 130N. Each of the sequencer components 140A to 140N may be manufactured on independent silicon, and each of the memory components 112 may be manufactured on an independent die. In a conventional memory subsystem, the sequencer component 140 is generally located within the memory system controller 115 (hereinafter, the "controller"). Here, the sequencer component 140 and the corresponding memory components 112 may be contained in a single package and coupled via short traces 160 to improve the performance of issuing commands from the sequencer component 140 to the memory components 112.
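The latency-aware command timing described above can be sketched as follows; the latency table and names are hypothetical placeholders, since real timing requirements would come from the specification of the particular memory type:

```python
# Hypothetical per-media-type latencies in microseconds (illustrative only).
LATENCY_US = {
    "nand_flash": {"read": 50, "write": 500},
    "cross_point": {"read": 10, "write": 30},
}

def schedule_issue_times(memory_type, ops):
    """Return (operation, issue_time_us) pairs so that each command is
    issued only after the component's busy window from the previous
    command has elapsed."""
    lat = LATENCY_US[memory_type]
    t, out = 0, []
    for op in ops:
        out.append((op, t))
        t += lat[op]  # component stays busy until the operation completes
    return out
```

A real sequencer would track per-die busy windows and overlap commands across dies; this single-component version only shows how the timing requirement constrains issue times.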
By using shorter traces between the sequencer component 140 and the memory components, power consumption can be reduced and data signal integrity enhanced compared to conventional arrangements. In addition, as discussed herein, moving the sequencer component 140 into the package 130 separate from the controller 115 can provide many other benefits, such as reducing the size of the memory subsystem 110 and increasing the bandwidth between the controller 115 and the memory components 112.

For example, the sequencer component 140 and the memory components 112 in the package 130 may be coupled to the controller 115 via the SerDes interface 150 rather than a parallel interface. The SerDes interface 150 provides higher bandwidth than a parallel interface and also uses fewer outgoing pins, thereby reducing the number of pins required by the controller 115 to provide a memory subsystem 110 of the same capacity (for example, the same number of memory components 112). For example, a SerDes interface can use six pins (e.g., two for the clock, two for transmission, and two for reception), whereas a parallel interface can use more than twenty pins for operation. Reducing the outgoing pin count of the controller 115 can improve the overall size of the memory subsystem 110 by reducing the size of the controller 115. In addition, removing the sequencer component 140 from the controller 115 can itself reduce the size of the controller 115.

The sequencer components 140A to 140N can perform one or more operations and can be configured based on the type of memory components 112 to which the corresponding sequencer component is coupled. For example, the sequencer component 140A may receive various data from the controller 115 and schedule when to issue read/write commands to the attached memory components 112A(1) to 112N(1) based on the timing requirements of the type of the attached memory components 112A(1) to 112N(1) and on certain rules for ensuring data coherence.
In some embodiments, one sequencer component 140 is coupled to memory components 112 having a single memory type. There may be many sequencer components 140 included in each package 130, and therefore a single package 130 may include different types of memory components 112 coupled to different corresponding sequencer components 140 within the package 130. In additional embodiments, each package 130 may include memory components 112 having a single memory type, and therefore each package 130 may be dedicated to providing the operating characteristics associated with the type of memory component 112 being used.

Generally speaking, the computing environment 100 may include a host system 120 that uses the memory subsystem 110. For example, the host system 120 may write data to the memory subsystem 110 and read data from the memory subsystem 110. The host system 120 may be a computing device, such as a desktop computer, a laptop computer, a web server, a mobile device, or any such computing device that includes a memory and a processing device. The host system 120 may include or be coupled to the memory subsystem 110 such that the host system 120 can read data from, or write data to, the memory subsystem 110. The host system 120 may be coupled to the memory subsystem 110 via a physical host interface. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communication connection or a direct communication connection (for example, without intervening components), whether wired or wireless, including electrical, optical, magnetic, and other connections. Examples of a physical host interface include, but are not limited to, a serializer/deserializer (SerDes) interface, a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), and so forth.
The physical host interface can be used to transfer data between the host system 120 and the memory subsystem 110. When the memory subsystem 110 is coupled to the host system 120 through a PCIe interface, the host system 120 may further utilize an NVM Express (NVMe) interface to access the memory components 112A to 112N. The physical host interface may provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120.

The memory components 112 may include any combination of different types of non-volatile memory components and/or volatile memory components. An example of a non-volatile memory component is NAND flash memory. Each of the memory components 112 may include one or more arrays of memory cells, such as single-level cells (SLCs) or multi-level cells (MLCs) (e.g., triple-level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a particular memory component may include both an SLC portion and an MLC portion of memory cells. Each of the memory cells may store one or more bits of data (e.g., data blocks) used by the host system 120. Although non-volatile memory components such as NAND-type flash memory are described, the memory components 112 may be based on any other type of memory, such as volatile memory. In some embodiments, the memory components 112 may be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magnetoresistive random access memory (MRAM), NOR flash memory, electrically erasable programmable read-only memory (EEPROM), and cross-point arrays of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on changes in bulk resistance, in conjunction with a stackable cross-gridded data access array.
In addition, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Furthermore, the memory cells of the memory components 112 may be grouped as memory pages or data blocks, which may refer to units of the memory component used to store data.

The controller 115 can communicate with the memory components 112, via the sequencer component 140, to perform operations such as reading data, writing data, or erasing data at the memory components 112, and other such operations. In one example, as discussed further below, the controller 115 may include an error component 116. Error correction codes can be used to improve the reliability of the data stored in the memory subsystem 110. An error correction code may refer to a technique for representing a sequence of data so that errors introduced into the data can be detected and corrected based on the other, remaining data. The sequence of data can be referred to as a codeword. Types of error correction codes include block codes (for example, Hamming codes, Reed-Solomon codes, etc.).

The error component 116 can perform an error correction code encoding operation that encodes data received from the host system 120 with additional data bits (for example, parity bits) to form a codeword to be written to the memory components 112 via the sequencer component 140. The error component 116 may also perform an error correction code decoding operation, which decodes the codeword by removing the extra data bits. The encoding/decoding operations may use certain configuration parameters based on the type of memory component 112 on which the data is to be stored. The controller 115 may send one or more codewords to the sequencer component 140A.
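The codeword formation and distribution just described can be sketched as follows. The single XOR parity byte is a toy stand-in for the Hamming or Reed-Solomon block codes the description mentions, and the function names are hypothetical:

```python
def encode_codeword(data: bytes) -> bytes:
    """Toy block code: append one XOR parity byte to the data.
    (A real error component would use a Hamming or Reed-Solomon code.)"""
    parity = 0
    for b in data:
        parity ^= b
    return data + bytes([parity])

def split_codeword(codeword: bytes, num_components: int):
    """Partition a codeword into roughly equal parts, one per memory
    component, the way the sequencer distributes codeword parts."""
    size, rem = divmod(len(codeword), num_components)
    parts, start = [], 0
    for i in range(num_components):
        end = start + size + (1 if i < rem else 0)
        parts.append(codeword[start:end])
        start = end
    return parts
```

Concatenating the parts in order recovers the codeword, so the decoder can strip the parity byte and return the original data on a read.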
The sequencer component 140A can consider the bandwidth and availability of the memory components 112A(1) to 112N(1), the timing requirements (for example, read/write latency) of the memory components 112A(1) to 112N(1), and the rules regarding the ordering of read/write operations to determine which parts of the codeword are stored on which of the memory components 112A(1) to 112N(1). One purpose of the sequencer component 140A may be to maximize the bandwidth of the interface between the sequencer component 140A and the memory components 112A(1) to 112N(1).

The controller 115 may include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 115 may be a microcontroller, dedicated logic circuitry (for example, a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.), or another suitable processor. The controller 115 may include a processor (processing device) 117 configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communication between the memory subsystem 110 and the host system 120. In some embodiments, the local memory 119 may include memory registers storing memory pointers, fetched data, and the like. The local memory 119 may also include read-only memory (ROM) for storing microcode. Although the example memory subsystem 110 in FIG. 1 has been illustrated as including the controller 115, in another embodiment of the present disclosure, a memory subsystem 110 may not include a controller 115 and may instead rely upon external control (for example, provided by an external host, or by a processor or controller separate from the memory subsystem).

Generally speaking, the controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112. The controller 115 may include an error component 116 that performs error correction code operations, a memory mapping component 118 that performs address translation between logical block addresses and physical block addresses associated with the memory components 112, and a memory management component 121 that performs wear leveling operations. The processing device 117 can execute the various components 116, 118, and 121. In addition, the various components 116, 118, and 121 may use configuration parameters specific to the type of memory components 112 included in the memory subsystem 110. The configuration parameters can be received from the host system 120, can be pre-stored in the local memory 119 during the manufacturing process, and/or can be extracted from the package 130 by querying, via the sequencer component 140, what type of memory components 112 are included in the package 130. In some cases, the sequencer component 140 may provide a notification indicating the type of memory components 112 with which it is associated. Further details regarding the operation of the error component 116, the memory mapping component 118, and the memory management component 121 are described below.

The controller 115 may also be responsible for other operations, such as garbage collection operations, encryption operations, and/or caching operations.
The controller 115 may further include host interface circuitry to communicate with the host system 120 via a physical host interface. The host interface circuitry can convert commands received from the host system into command instructions to access the memory component 112 via the sequencer components 140A to 140N, and convert responses associated with the memory component 112 into information for the host system 120. The memory subsystem 110 may also include additional circuits or components that are not illustrated. In some embodiments, the memory subsystem 110 may include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder), which can receive an address from the controller 115 and decode the address to access the memory component 112. Figure 2 illustrates an example package 130A that includes a plurality of sequencer components 140 operably coupled to different memory components 112 having different memory types according to some embodiments of the present disclosure. As depicted, the first sequencer component 140A(1) is coupled to first memory components 112A(1.1) to 112N(1.2) having a first memory type (e.g., NAND flash), and the second sequencer component 140N(1) is coupled to second memory components 112A(2.1) to 112N(2.2) having a second memory type (e.g., cross-point array). It should be understood that any number of sequencer components coupled to corresponding memory components having corresponding memory types may be included in the package 130 to meet the desired performance attributes of the package 130. Figure 3 is a flowchart of an example method 300 for performing instructions according to some embodiments of the present disclosure.
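The Figure 2 arrangement — one sequencer per memory type inside a package — can be sketched as a small data structure. This is a hypothetical model, not an implementation from the disclosure; the class names and routing method are invented.

```python
class Sequencer:
    """One sequencer bound to memory components of a single type."""
    def __init__(self, memory_type, components):
        self.memory_type = memory_type   # e.g. "nand_flash" or "cross_point"
        self.components = components     # dies managed by this sequencer

class Package:
    """A package holding any number of sequencers, each with its own type."""
    def __init__(self):
        self.sequencers = []

    def add_sequencer(self, seq):
        self.sequencers.append(seq)

    def sequencer_for(self, memory_type):
        # Route to the sequencer that owns components of this memory type.
        for seq in self.sequencers:
            if seq.memory_type == memory_type:
                return seq
        raise KeyError(memory_type)

pkg = Package()
pkg.add_sequencer(Sequencer("nand_flash", ["112A(1.1)", "112N(1.2)"]))
pkg.add_sequencer(Sequencer("cross_point", ["112A(2.1)", "112N(2.2)"]))
```

A controller outside the package would address the package as a whole and let this routing pick the type-appropriate sequencer.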
The method 300 may be performed by processing logic, which may include hardware (e.g., a processing device, circuit, dedicated logic, programmable logic, microcode, device hardware, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 300 is performed by the sequencer component 140A of FIG. 1. Although shown in a specific sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, the illustrated embodiments should be understood as examples only, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible. At block 310, the processing device receives an instruction at the sequencer component 140A. Instructions can be received from the controller 115. The sequencer component 140A may be positioned in a package 130A that includes the sequencer component 140A coupled to one or more memory components 112A(1) through 112N(1). The sequencer component 140A may be manufactured on its own independent silicon, the memory components 112A(1) to 112N(1) may be manufactured on their own independent dies, and the independent silicon and independent dies may be contained in the package 130A. The sequencer component 140A may be coupled to the controller 115 separate from the package 130A. The sequencer component 140A may be coupled to the controller 115 via the SerDes interface.
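Blocks 310 and 320 of method 300 can be sketched as a minimal receive/operate skeleton. This is purely illustrative; the class, dictionary-shaped instruction, and logging are assumptions, not the disclosed implementation.

```python
class SequencerComponent:
    """Toy model of method 300: receive an instruction (block 310),
    then operate on a memory component (block 320)."""
    def __init__(self, components):
        self.components = components  # e.g. ["112A(1)", ..., "112N(1)"]
        self.log = []                 # record of operations performed

    def receive(self, instruction):       # block 310
        return self.operate(instruction)  # block 320

    def operate(self, instruction):
        target = self.components[instruction["component"]]
        self.log.append((instruction["op"], target))
        return target

seq = SequencerComponent(["112A(1)", "112N(1)"])
result = seq.receive({"op": "read", "component": 1})
```

In the disclosure the instruction arrives over the SerDes link from the controller; here it is just a function argument.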
The traces between the sequencer component 140A and the memory components 112A(1) to 112N(1) may be shorter than the traces between the sequencer component 140A and the controller 115. At block 320, the processing device of the sequencer component 140A operates on at least one of the one or more memory components 112A(1) to 112N(1) based on the instruction. Operations may include interfacing with the one or more memory components 112A(1) through 112N(1) via a protocol specific to the type of the memory components 112A(1) through 112N(1), enforcing operation timing requirements for the one or more memory components 112A(1) to 112N(1) based on the type of the memory components 112A(1) through 112N(1), and reordering operations based on rules related to data coherence. In some embodiments, the processing device may enforce timing requirements for when to issue commands based on the read/write delays of the various memory components 112A(1) to 112N(1). For example, once it is determined how long an operation takes the memory components 112A(1) to 112N(1) to complete, the processing device can schedule when to issue subsequent commands to the memory components 112A(1) to 112N(1). In some cases, the delay can be determined based on configuration parameters. In another example, the processing device may dynamically determine the delay. In addition, if the delay changes during the use of a memory component, the processing device may take the change into account when issuing other commands. The processing device may enforce timing requirements to maximize the bandwidth between the sequencer component 140A and the memory components 112A(1) to 112N(1). In addition, the processing device may reorder operations based on rules related to the commands and addresses involved in the instructions received from the controller 115.
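The delay-based command scheduling described above can be sketched as follows. The per-type delays are made-up illustrative numbers; the disclosure only says delays may come from configuration parameters or be determined dynamically.

```python
# Hypothetical per-type read/write delays, in microseconds.
READ_WRITE_DELAY_US = {"nand_flash": 50, "cross_point": 10}

def schedule(commands, memory_type, start_us=0):
    """Assign an issue time to each command so that each command is only
    issued once the previous command's delay has elapsed."""
    delay = READ_WRITE_DELAY_US[memory_type]
    issue_times = []
    t = start_us
    for _ in commands:
        issue_times.append(t)
        t += delay
    return issue_times

times = schedule(["read", "write", "read"], "cross_point")
```

A real sequencer would interleave commands to different components to keep the interface busy; this sketch only shows the back-to-back timing constraint for one component.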
In general, the processing device can reorder read and write operations to maximize the bandwidth between the sequencer component 140A and the memory components 112A(1) to 112N(1). For example, if a read operation is received for a first address but the memory component 112A(1) containing that address is busy, the read operation can be moved behind another operation that can be performed earlier, to improve performance. The reordering can be performed if it satisfies the rules. For example, an instruction may specify a write and a read at the same address of the memory component 112A(1). In such cases, the rule may specify that the operations cannot be reordered, because if the read operation were reordered first, the read operation would return the old data before the write operation updates it. In some embodiments, the second sequencer component 140N may receive a second instruction. The second sequencer component 140N can be positioned in another package 130N, and the second sequencer component 140N can be operably coupled to one or more second memory components 112A(2) to 112N(2) within the second package 130N. The second sequencer component 140N can be operably coupled to the controller 115. The memory components 112A(2) to 112N(2) in the second package 130N may be of a memory type different from the memory components 112A(1) to 112N(1) in the package 130A. The second sequencer component 140N may interface with the second memory type via a protocol specific to the second memory type. Figure 4 is a flowchart of an example method for operating on a memory component according to some embodiments of the present disclosure. The method 400 may be performed by processing logic, which may include hardware (e.g., a processing device, circuit, dedicated logic, programmable logic, microcode, device hardware, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
In some embodiments, the method 400 is performed by the sequencer component 140A of FIG. 1. Although shown in a specific sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, the illustrated embodiments should be understood as examples only, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible. At block 410, the processing device of the sequencer component 140A receives an instruction from the controller 115 located outside the system containing the sequencer component 140A. In some embodiments, the system may be the package 130A. The package 130A may include the sequencer component 140A operably coupled to the memory component 112A(1). The sequencer component 140A can be operably coupled to the controller 115 external to the package 130A. In some embodiments, the traces between the sequencer component 140A and the memory component 112A(1) may be shorter than the traces between the controller 115 and the sequencer component 140A. The sequencer component 140A and the controller 115 may be coupled via a SerDes interface. At block 420, the processing device determines an operation to be performed on the memory component 112A(1) based on the instruction. The instruction may be to write data to a physical address of the memory component 112A(1) or to read data from a physical address of the memory component 112A(1). For example, an instruction may include a codeword, and the operation may include dividing the codeword into parts and issuing commands to store the parts on one or more data blocks of the memory component 112A(1). The codeword may be encoded by the controller 115 based on configuration parameters specific to the type of memory component 112A(1) contained in the package 130A.
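The codeword division described at block 420 amounts to slicing the codeword into fixed-size parts for separate data blocks. A minimal sketch, assuming a caller-chosen part size (the disclosure does not specify one):

```python
def split_codeword(codeword: bytes, part_size: int) -> list:
    """Divide a codeword into parts to be stored on one or more
    data blocks; the final part may be shorter."""
    return [codeword[i:i + part_size]
            for i in range(0, len(codeword), part_size)]

parts = split_codeword(b"\x01\x02\x03\x04\x05\x06", 4)
```

The sequencer would then issue one store command per part, choosing the destination blocks based on bandwidth, availability, and timing as described earlier.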
The processing device may determine the timing of when to operate based on the timing requirements for the memory type of the memory component 112A(1). At block 430, the processing device operates on the memory component 112A(1). For example, the processing device may cause the memory component 112A(1) to write the parts of the codeword to one or more data blocks of the memory component 112A(1). In some cases, before performing the operation, the processing device may also receive a second instruction from the controller 115 outside the system. The processing device may determine a second operation to be performed on the memory component 112A(1) based on the second instruction. The processing device may determine, based on a rule, that the second operation is to be performed before the operation. For example, the first instruction may be associated with a read operation at an address, and the second instruction may be associated with a write operation at that address. The rule can specify that the write operation is performed before the read operation so that the read operation returns the current data. The processing device may then perform the second operation on the memory component 112A(1) before the operation, and then perform the operation on the memory component 112A(1). FIG. 5 illustrates a controller 115 including a reduced number of pins 500 and a reduced external size according to some embodiments of the present disclosure. The controller is coupled to the package 130A via a connection 150, which in some embodiments may be a SerDes interface. As described above, the SerDes interface can use roughly six outgoing pins of the controller 115 to communicate with the sequencer component 140A. The six outgoing pins can include two pins for the clock, two pins for transmission, and two pins for reception. It should be understood that in conventional systems, a parallel interface with twenty or more pins is generally used to connect the controller 115 to the memory component 112.
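The coherence rule discussed above — a read and a write to the same address must not be swapped — can be sketched as a small predicate. This is an illustrative model, not the disclosed rule set; the operation representation is invented.

```python
def can_reorder(earlier_op: dict, later_op: dict) -> bool:
    """May `later_op` be moved in front of `earlier_op`?
    Disallowed when both touch the same address and at least one
    is a write, since the read would then see stale data."""
    same_address = earlier_op["address"] == later_op["address"]
    touches_write = "write" in (earlier_op["op"], later_op["op"])
    return not (same_address and touches_write)

write_a = {"op": "write", "address": 0xA0}
read_a = {"op": "read", "address": 0xA0}
read_b = {"op": "read", "address": 0xB0}
```

Under this rule the sequencer is free to move `read_b` around `write_a` to fill idle cycles, but the `write_a`/`read_a` pair keeps its order.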
However, the embodiments of the present disclosure may use the SerDes interface by moving the sequencer component 140A into the package 130A having the memory components 112A to 112N and indirectly connecting the controller 115 to the memory components 112A to 112N through the sequencer component 140A. Therefore, among other benefits, the bandwidth between the controller 115 and the memory components 112A to 112N can be increased using the SerDes interface 150, the size of the controller 115 can be reduced due to the reduced number of pins 500, and the external size of the memory subsystem 110 can be reduced. As depicted, the controller 115 includes an error component 116, a memory mapping component 118, and a memory management component 121. The various components 116, 118, and 121 may perform various operations based on configuration parameters specific to the type of the memory components 112A to 112N included in the package 130. Figures 6 to 8 generally relate to the controller 115 using type-specific configuration parameters for the memory components 112A to 112N to perform different operations. In addition, the controller 115 may determine the type of memory component included in the package 130A and may provide configuration parameters related to the timing requirements of the specific memory type to the sequencer component 140A. Figure 9 generally relates to the controller determining the configuration parameters and transmitting the configuration parameters to the sequencer component 140A. Figure 6 is a flowchart of an example method 600 for determining configuration parameters to be used for error correction code operations according to some embodiments of the present disclosure.
The method 600 may be performed by processing logic, which may include hardware (e.g., a processing device, circuit, dedicated logic, programmable logic, microcode, device hardware, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 600 is performed by the error component 116 of FIG. 1. Although shown in a specific sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, the illustrated embodiments should be understood as examples only, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible. At block 610, the processing device determines configuration parameters to be used for error correction code (ECC) operations. The configuration parameters are based on the memory type of the memory component 112A(1) associated with the controller 115. The memory component 112A(1) may be included in the package 130A along with the sequencer component 140A. The sequencer component 140A and the memory component 112A(1) may be communicatively coupled. The controller 115 may be coupled with the sequencer component 140A, external to the controller 115, via a SerDes interface. The controller 115 can issue instructions to the sequencer component 140A, and the sequencer component 140A can determine the various operations performed on the memory component 112A(1) associated with the controller 115. The processing device can determine the configuration parameters in several ways. For example, at block 612, the processing device may receive from the host system 120 a first data structure (e.g., a table) that includes configuration parameters for one or more types of memory components 112.
In some embodiments, the processing device may be notified by the sequencer component 140A about the types of the memory components 112A to 112N contained in the package 130A. In another embodiment, the processing device may request the sequencer component 140A to provide the types of memory components 112A to 112N contained in the package 130A. The processing device may use the types of memory components 112A to 112N to search the first data structure to determine the configuration parameters to be used for error correction code operations. Specifically, the configuration parameters may relate to the encoding/decoding scheme used, which may vary based on the type of memory component used. Another way to determine the configuration parameters is shown in block 614, in which the processing device can access a second data structure containing the configuration parameters in the local memory 119. The second data structure may be stored in the local memory 119 after the controller 115 is manufactured, when the initial settings and data are loaded into the controller 115. In some embodiments, the second data structure may be stored in the local memory 119 during an update of software, firmware, or the like. Similar to the above, the processing device may search the second data structure for the type of the memory components 112A to 112N used and determine the configuration parameters to be used for the error correction code operations. Yet another way of determining the configuration parameters is shown in block 616, where the processing device can query the sequencer component 140A to obtain the configuration parameters. For example, the sequencer component 140A may receive a request from the controller 115 and determine the configuration parameters by searching the local memory of the package 130A or based on the attributes of the memory components 112A to 112N known to the sequencer component 140A.
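The three lookup paths at blocks 612, 614, and 616 form a natural fallback chain: a host-supplied table, a locally stored table, then a query to the sequencer. A hedged sketch — the function signature, table shapes, and the order of preference are assumptions:

```python
def lookup_config(memory_type, host_table=None, local_table=None,
                  query_sequencer=None):
    """Resolve configuration parameters for a memory type by trying,
    in order: host-supplied table (block 612), local-memory table
    (block 614), and a query to the sequencer component (block 616)."""
    for source in (host_table, local_table):
        if source and memory_type in source:
            return source[memory_type]
    if query_sequencer is not None:
        return query_sequencer(memory_type)
    raise KeyError(memory_type)

local = {"nand_flash": {"ecc": "bch"}}
cfg = lookup_config("nand_flash", host_table=None, local_table=local)
```

The disclosure presents these as alternative ways rather than a strict priority order; the chaining here is just one plausible arrangement.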
The sequencer component 140A may provide the configuration parameters to be used for error correction code operations to the controller 115. At block 620, the processing device receives data from the host system 120. The data may include data that the host system 120 has requested to be stored in the memory subsystem 110. In one example, the data may be user data. At block 630, the processing device generates a codeword for the data by using an ECC operation based on the configuration parameters. As described above, the configuration parameters may include ECC parameters for the memory types of the memory components 112A to 112N in the package 130A. The ECC parameters may specify the encoding/decoding scheme applied to the data during the ECC operation. It should be understood that the controller 115 may be associated with more than one memory component, and the other memory components may be of different types. Using the disclosed techniques, the controller 115 can determine the configuration parameters to be used for the ECC operation of each type of associated memory component and can use the corresponding configuration parameters to perform the ECC operation. At block 640, the processing device sends the codeword to the sequencer component 140A external to the controller 115. In some cases, the codeword to be written may be stored in the local memory 119 (for example, a storage buffer), and the processing device may use the output pins of the controller 115 to transfer the codeword stored at the storage buffer to the sequencer component 140A via the SerDes interface. In some embodiments, the controller 115 may request to read a codeword from the sequencer component 140A. The sequencer component 140A can provide the codeword, and the controller 115 can decode the codeword based on the determined configuration parameters.
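The encode/decode round trip of blocks 620 to 640 can be illustrated with a toy parity code. This deliberately stands in for the real ECC scheme (e.g., BCH or LDPC, as the configuration parameters would specify); a single parity byte only detects errors and corrects nothing, so treat it as a shape-of-the-flow sketch, not an ECC implementation.

```python
def encode(data: bytes) -> bytes:
    """Toy 'ECC': append an XOR parity byte to form the codeword."""
    parity = 0
    for b in data:
        parity ^= b
    return data + bytes([parity])

def decode(codeword: bytes) -> bytes:
    """Check the parity byte and return the payload."""
    data, parity = codeword[:-1], codeword[-1]
    check = 0
    for b in data:
        check ^= b
    if check != parity:
        raise ValueError("parity mismatch")  # toy code only detects errors
    return data

cw = encode(b"\x10\x20")
```

In the disclosed flow, `cw` is what the controller stores in its buffer and transfers to the sequencer over the SerDes interface; on a read, the controller runs the decode side with the same type-specific parameters.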
In some cases, the decoded data may be transmitted by the controller 115 to the host system 120. Figure 7 is a flowchart of an example method 700 for determining configuration parameters to be used for memory management operations according to some embodiments of the present disclosure. The method 700 may be performed by processing logic, which may include hardware (e.g., a processing device, circuit, dedicated logic, programmable logic, microcode, device hardware, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 700 is performed by the memory management component 121 of FIG. 1. Although shown in a specific sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, the illustrated embodiments should be understood as examples only, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible. At block 710, the processing device determines configuration parameters to be used for memory management operations. The configuration parameters are based on the memory type of the memory component 112A(1) associated with the controller 115. In one example, the configuration parameters to be used for memory management operations may involve a wear leveling scheme for a specific type of memory component 112A(1). The memory component 112A(1) may be included in the package 130A along with the sequencer component 140A. The sequencer component 140A and the memory component 112A(1) may be communicatively coupled. The controller 115 may be coupled with the sequencer component 140A, external to the controller 115, via a SerDes interface.
The controller 115 can issue instructions to the sequencer component 140A, and the sequencer component 140A can determine the various operations performed on the memory component 112A(1) associated with the controller 115. Similar to the determination of the configuration parameters to be used for the ECC operations with reference to FIG. 6, the processing device may determine the configuration parameters to be used for memory management operations in several ways. For example, at block 712, the processing device may receive from the host system 120 a first data structure (e.g., a table) containing configuration parameters to be used for memory management operations of the particular type of memory component 112A(1). In another example, at block 714, the processing device may access in the local memory 119 a second data structure containing the configuration parameters to be used for memory management operations. In yet another example, at block 716, the processing device may query the sequencer component 140A to obtain the configuration parameters to be used for memory management operations. Additionally or alternatively, in some embodiments, the processing device may query the sequencer component 140A for the type of the memory component 112A(1) and use the received response with any of the techniques described above. At block 720, the processing device determines, based on the configuration parameters, a wear leveling scheme for the sequencer component 140A to apply to operations on the memory component 112A(1). Different memory types (e.g., cross-point array, NAND flash, etc.) may have different attributes, such as the degradation rate of the physical media during operation.
Using the configuration parameters for the particular type of memory component 112A(1), the processing device can determine a scheme that distributes read/write operations evenly, distributes read operations or write operations disproportionately, or some combination thereof, across different data blocks of the memory component 112A(1) and/or across the memory components 112A(1) to 112N(1), to ensure that the wear from the operations is distributed so as to increase the service life of the memory components 112A(1) to 112N(1). At block 730, the processing device sends the wear leveling scheme and/or data to the sequencer component 140A. The wear leveling scheme may refer to a schedule of which memory components are used for operations, or the actual instructions used to perform operations on certain memory components to implement wear leveling. In some cases, the wear leveling scheme and/or any data to be written may be stored in the local memory 119 (for example, a storage buffer), and the processing device may transmit the wear leveling scheme and/or data stored at the storage buffer to the sequencer component 140A through the output pins of the controller 115. The sequencer component 140A can use the wear leveling scheme when scheduling which memory components 112A(1) to 112N(1) to use for certain operations and when to perform wear leveling on the memory components 112A(1) to 112N(1). A wear leveling scheme adapted to the type of memory component 112 provides a flexible architecture in which different types of memory components 112 can be used based on their desired performance characteristics, while still maximizing the durability of the memory components 112. Figure 8 is a flowchart of an example method 800 for determining configuration parameters to be used for memory mapping operations according to some embodiments of the present disclosure.
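One common wear leveling policy consistent with block 720 — though the disclosure does not commit to any specific algorithm — is to direct each write to the least-worn component. A minimal sketch with invented write counts:

```python
def pick_component(write_counts: dict) -> str:
    """Choose the memory component with the fewest writes so far,
    so wear is spread across components over time."""
    return min(write_counts, key=write_counts.get)

# Hypothetical accumulated write counts per component.
counts = {"112A(1)": 120, "112B(1)": 95, "112N(1)": 130}
target = pick_component(counts)
```

A type-aware scheme would additionally weight the counts by the media's degradation rate (e.g., treating cross-point and NAND flash endurance differently), which is exactly what the type-specific configuration parameters would supply.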
The method 800 may be performed by processing logic, which may include hardware (e.g., a processing device, circuit, dedicated logic, programmable logic, microcode, device hardware, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 800 is performed by the memory mapping component 118 of FIG. 1. Although shown in a specific sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, the illustrated embodiments should be understood as examples only, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible. At block 810, the processing device determines the configuration parameters to be used for memory mapping operations. The configuration parameters are based on the memory type of the memory component 112A(1) associated with the controller 115. In one example, the configuration parameters to be used for memory mapping operations may include a memory map to physical addresses for the specific type of memory component 112A(1). The memory component 112A(1) may be included in the package 130A along with the sequencer component 140A. The sequencer component 140A and the memory component 112A(1) may be communicatively coupled. The controller 115 may be coupled with the sequencer component 140A, external to the controller 115, via a SerDes interface. The controller 115 can issue instructions to the sequencer component 140A, and the sequencer component 140A can determine the various operations performed on the memory component 112A(1) associated with the controller 115. Similar to the determination of the configuration parameters to be used for the ECC operations with reference to FIG.
6, the processing device may determine the configuration parameters to be used for memory mapping operations in several ways. For example, at block 812, the processing device may receive from the host system 120 a first data structure (e.g., a table) containing configuration parameters to be used for memory mapping operations of the particular type of memory component 112A(1). In another example, at block 814, the processing device may access in the local memory 119 a second data structure containing the configuration parameters to be used for memory mapping operations. In yet another example, at block 816, the processing device may query the sequencer component 140A to obtain the configuration parameters to be used for memory mapping operations. Additionally or alternatively, in some embodiments, the processing device may query the sequencer component 140A for the type of the memory component 112A(1) and use the received response with any of the techniques described above. At block 820, the processing device uses the memory map, based on the configuration parameters, to translate the logical address at which the data is read or written into a physical address on the memory component 112A(1). In some embodiments, the host system 120 may send data to the controller 115, and the data may include the logical address where the data is stored at the host system 120. Using the memory map, the processing device can translate the logical address into a physical address in the memory component 112A(1). At block 830, the processing device sends the physical address and/or data to the sequencer component 140A. In some cases, the physical address and/or data may be stored in the local memory 119 (for example, a storage buffer), and the processing device may transmit the physical address and/or data stored at the storage buffer to the sequencer component 140A through the output pins of the controller 115.
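The logical-to-physical translation at block 820 can be sketched with a simple geometry-based mapping. The block/page geometry here is an invented example; the disclosure only says the memory map is specific to the memory type, which is why the geometry would come from the configuration parameters.

```python
PAGES_PER_BLOCK = 64  # hypothetical geometry for one memory type

def logical_to_physical(lba: int) -> dict:
    """Translate a logical block address into a (block, page)
    physical address for this memory type's geometry."""
    block, page = divmod(lba, PAGES_PER_BLOCK)
    return {"block": block, "page": page}

phys = logical_to_physical(130)
```

A different memory type would carry a different geometry (or an entirely different mapping function) in its configuration parameters, which is what lets the same memory mapping component 118 serve mixed memory types.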
The sequencer component 140A can use the physical address to write the data to the memory component 112A(1). As can be appreciated, different types of memory components 112 may have different physical addressing. Therefore, enabling the memory mapping component 118 to translate a logical address into a physical address specific to the target memory component 112 may provide the benefit of using different types of memory components 112 in the memory subsystem 110 based on the desired performance of the memory subsystem 110. Figure 9 is a flowchart of an example method 900 for determining configuration parameters for operations of a sequencer component and sending the configuration parameters to the sequencer component according to an embodiment of the present disclosure. The method 900 may be performed by processing logic, which may include hardware (e.g., a processing device, circuit, dedicated logic, programmable logic, microcode, device hardware, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 900 is performed by the controller 115 of FIG. 1. Although shown in a specific sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, the illustrated embodiments should be understood as examples only, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible. At block 910, the processing device determines configuration parameters for one or more operations performed by the sequencer component 140A. The configuration parameters are based on the memory type of the memory component 112A(1) associated with the controller 115.
The operations may involve imposing timing requirements specific to the type of the memory component 112A(1). As such, in some embodiments, the configuration parameters may include timing parameters that change based on the type of the memory component 112A(1), different generations of the memory component 112A(1), and the like. The configuration parameters can also contain rules for reordering operations. The memory component 112A(1) may be included in the package 130A along with the sequencer component 140A. The sequencer component 140A and the memory component 112A(1) may be communicatively coupled. The controller 115 may be coupled with the sequencer component 140A, external to the controller 115, via a SerDes interface. Similar to the determination of the configuration parameters to be used for the ECC operations with reference to FIG. 6, the processing device may determine the configuration parameters to be used for the operations performed by the sequencer component 140A in several ways. For example, at block 912, the processing device may receive from the host system 120 a first data structure (e.g., a table) containing configuration parameters for the operations performed by the sequencer component 140A based on the specific type of the memory component 112A(1). In another example, at block 914, the processing device may access in the local memory 119 a second data structure containing the configuration parameters for the operations performed by the sequencer component 140A. In yet another example, at block 916, the processing device may query the sequencer component 140A to obtain the configuration parameters to be used for the operations. Additionally or alternatively, in some embodiments, the processing device may query the sequencer component 140A for the type of the memory component 112A(1) and use the received response with any of the techniques described above. At block 920, the processing device sends the configuration parameters and/or any data to be written to the sequencer component 140A.
In some cases, the configuration parameters and/or data may be stored in the local memory 119 (for example, a storage buffer), and the processing device may transmit the configuration parameters and/or data stored at the storage buffer to the sequencer component 140A through the output pins of the controller 115. The sequencer component 140A can operate using the configuration parameters. For example, the configuration parameters may include timing requirements for the type of the memory component 112A(1), and the processing device may sequence the order of operations performed on the memory component 112A(1) based on the timing requirements. In addition, the configuration parameters may include rules for reordering the order of operations based on the commands and addresses included in the instructions. As described above, the sequencer component 140A can maximize the bandwidth between the sequencer component 140A and the memory component 112A(1) by imposing timing requirements and using rules to reorder the order of operations. Figure 10 illustrates an example machine of a computer system 1000 within which a set of instructions for causing the machine to perform any one or more of the methods discussed herein can be executed. In some embodiments, the computer system 1000 may correspond to a host system (for example, the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (for example, the memory subsystem 110 of FIG. 1), or may be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the error component 116, the memory mapping component 118, and/or the memory management component 121 of FIG. 1) or of the sequencer components 140A to 140N of FIG. 1. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet.
The machine can operate as a peer machine in a peer-to-peer (or distributed) network environment, as a server or client machine in a cloud computing infrastructure or environment, or in the capacity of a server or client machine in a client-server network environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular phone, a network appliance, a server, a network router, a switch or bridge, or any machine capable of executing (sequentially or otherwise) a set of instructions that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" should also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein.

The example computer system 1000 includes a processing device 1002, a main memory 1004 (for example, read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1006 (for example, flash memory, static random access memory (SRAM), etc.), and a data storage system 1018, which communicate with each other via a bus 1030.

The processing device 1002 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets.
The processing device 1002 can also be one or more special-purpose processing devices, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 1002 is configured to execute instructions 1026 for performing the operations and steps discussed herein. The computer system 1000 can further include a network interface device 1008 to communicate over the network 1020.

The data storage system 1018 can include a machine-readable storage medium 1024 (also known as a computer-readable medium) on which is stored one or more sets of instructions 1026 or software embodying any one or more of the methods or functions described herein. The instructions 1026 can also reside, completely or at least partially, within the main memory 1004 and/or within the processing device 1002 during execution thereof by the computer system 1000, the main memory 1004 and the processing device 1002 also constituting machine-readable storage media. The machine-readable storage medium 1024, the data storage system 1018, and/or the main memory 1004 can correspond to the memory subsystem 110 of FIG. 1.

In one embodiment, the instructions 1026 include instructions to implement functionality corresponding to the error component 116, the memory mapping component 118, the memory management component 121, and/or the sequencer components 140A to 140N of FIG. 1. While the machine-readable storage medium 1024 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
Such a computer program can be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the described methods. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.

The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium, such as a read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc.

In the foregoing specification, embodiments of the present disclosure have been described with reference to specific exemplary embodiments thereof.
It will be apparent that various modifications can be made to the present disclosure without departing from the broader spirit and scope of the embodiments of the present disclosure as set forth in the appended claims. Therefore, the description and drawings should be viewed in an illustrative sense rather than a restrictive sense. |
Systems, apparatuses, and methods to respond to detected attacks in an autonomous system based on a context of the autonomous system are described. In particular, the disclosure provides an intrusion detection system receiving contexts, and contracts dictating particular response guide rails, from higher-level components or a stack on the autonomous system. The intrusion detection system is arranged to respond to attacks according to the contract without intervention by the higher-level components or stack. |
An apparatus for intrusion detection, comprising: means for receiving a context associated with an intrusion detection system of an autonomous system; means for receiving a contract based on the context, the contract comprising an indication of acceptable actions for the autonomous system; means for detecting an attack on the autonomous system; means for generating, responsive to the attack, at least one command according to the contract; and means for sending the command to a subsystem of the autonomous system.

The apparatus of claim 1, the autonomous system an autonomous vehicle, the context to comprise an indication of a roadway on which the autonomous vehicle is traveling.

The apparatus of claim 2, the context comprising an indication of a plurality of segments of the roadway.

The apparatus of any one of claims 2 to 3, the context to comprise an indication of a geometry of the roadway and a characteristic of a shoulder of the roadway.

The apparatus of any one of claims 1 to 4, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, and one or more nominal mitigation actions, the apparatus further comprising means for generating the at least one command based on the one or more nominal mitigation actions.

The apparatus of any one of claims 1 to 5, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, one or more nominal mitigation actions, and one or more emergency mitigation actions, the apparatus comprising means for generating the at least one command based on the one or more emergency mitigation actions.

The apparatus of any one of claims 1 to 6, the autonomous vehicle comprising a plurality of electronic control units (ECUs) coupled to an in-vehicle network (IVN), the attack comprising a masquerading attack or a bus-off attack initiated by a one of the plurality of ECUs.

The apparatus of claim 7, the command comprising disconnecting the one of the plurality of ECUs from the
IVN and sending a control signal to at least one other of the plurality of ECUs.

A computing implemented method, comprising: receiving, at an intrusion detection system (IDS) of an autonomous system, a context associated with the autonomous system; receiving, at the IDS, a contract based on the context, the contract comprising an indication of acceptable actions for the autonomous system; detecting, by the IDS, an attack on the autonomous system; generating, by the IDS responsive to the attack, at least one command according to the contract; and sending, from the IDS, the command to a subsystem of the autonomous system.

The computing implemented method of claim 9, the autonomous system an autonomous vehicle, the context to comprise an indication of a roadway on which the autonomous vehicle is traveling.

The computing implemented method of claim 10, the context comprising an indication of a plurality of segments of the roadway.

The computing implemented method of any one of claims 10 to 11, the context to comprise an indication of a geometry of the roadway and a characteristic of a shoulder of the roadway.

The computing implemented method of any one of claims 9 to 12, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, and one or more nominal mitigation actions, the method comprising generating the at least one command based on the one or more nominal mitigation actions.

The computing implemented method of any one of claims 9 to 13, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, one or more nominal mitigation actions, and one or more emergency mitigation actions, the method comprising generating the at least one command based on the one or more emergency mitigation actions.

A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to carry out the method of any one of
claims 9 to 14. |
BACKGROUND

Modern computing increasingly includes automation, and many systems today are referred to as autonomous. For example, automotive vehicles have a number of autonomous features, from automated braking and lane keeping assist to fully automated driving features. Such systems often have a security component to protect the system from unauthorized or malicious tampering. These security components are often located in the lower levels of the system's computing structure. However, as the security features are often located at lower levels of the computing structure, these security features often do not have sufficient information to respond to attacks. Likewise, where the security components are relocated to higher levels of the computing structure, the security components often do not have sufficient control over the system during an attack.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.

FIG. 1 illustrates an autonomous system 100 in accordance with non-limiting example(s) of the present disclosure.
FIG. 2A illustrates an autonomous vehicle system 200 in accordance with non-limiting example(s) of the present disclosure.
FIG. 2B illustrates the autonomous vehicle system 200 in accordance with non-limiting example(s) of the present disclosure.
FIG. 2C illustrates the autonomous vehicle system 200 in accordance with non-limiting example(s) of the present disclosure.
FIG. 3 illustrates a routine 300 to receive context and contracts to respond to an attack on an autonomous system in accordance with non-limiting example(s) of the present disclosure.
FIG. 4A illustrates an image 400a associated with a context of an autonomous system in accordance with non-limiting example(s) of the present disclosure.
FIG. 4B illustrates an image 400b associated with a context of an autonomous system in accordance with non-limiting example(s) of the present disclosure.
FIG. 4C illustrates an image 400c associated with a context of an autonomous system in accordance with non-limiting example(s) of the present disclosure.
FIG. 4D illustrates an image 400d associated with a context of an autonomous system in accordance with non-limiting example(s) of the present disclosure.
FIG. 4E illustrates an image 400e associated with a context of an autonomous system in accordance with non-limiting example(s) of the present disclosure.
FIG. 4F illustrates an image 400f associated with a context of an autonomous system in accordance with non-limiting example(s) of the present disclosure.
FIG. 4G illustrates an image 400g associated with a context of an autonomous system in accordance with non-limiting example(s) of the present disclosure.
FIG. 5 illustrates a routine 500 to respond to an attack on an autonomous system based on context in accordance with non-limiting example(s) of the present disclosure.
FIG. 6 illustrates a storage device 600 in accordance with non-limiting example(s) of the present disclosure.
FIG. 7 illustrates a system 700, in accordance with non-limiting example(s) of the present disclosure.
FIG. 8 illustrates an in-vehicle communication architecture 800 in accordance with non-limiting example(s) of the present disclosure.

DETAILED DESCRIPTION

Various embodiments of the present disclosure provide for an autonomous system where context-based responses are repeatedly generated. These context-based responses can be carried out in the event that the autonomous system is attacked. For example, in the case of an autonomous vehicle (AV), acceptable steering and braking actions can repeatedly be generated based on the context of the vehicle (e.g., the road trajectory in the next 500 feet, or the like).
These acceptable steering and braking actions can be executed by low-level actuation and control components of the vehicle in the event that the vehicle is attacked or compromised. As such, safety of the vehicle is not entirely dependent upon the higher-level autonomous control systems remaining functional in the event of an attack.

FIG. 1 illustrates an autonomous system 100 in accordance with non-limiting example(s) of the present disclosure. In general, as used herein, an autonomous system is a system where certain actions or behaviors are automated. Said differently, an autonomous system is a system where control and actuation signals are generated based on a policy and output from sensors. The policy can be a decision tree, a model, or other decision policy. For example, autonomous system 100 includes a high-level sensing and control stack 102 and a low-level sensing and control stack 104 separated by security components 106. In general, security components 106 can be an intrusion detection system (IDS) arranged to monitor autonomous system 100 for unauthorized and/or malicious activity. Responsive to detecting malicious activity, security components 106 can be arranged to take action(s) to mitigate effects of the detected unauthorized and/or malicious activity.

High-level sensing and control stack 102 includes planner 108 and high-level controllers 110, while low-level sensing and control stack 104 includes sensing components 112 and actuation and control components 114. Although not shown, high-level sensing and control stack 102 can further include a processor and memory comprising instructions executable by the processor. For example, the memory can include planner 108 instructions, which, when executed by the processor, cause the autonomous system 100 to receive information related to a perception 116 and to generate a trajectory plan 118 from the perception 116 information and/or outputs from sensing components 112.
Further, the memory can include high-level controllers 110 instructions, which, when executed by the processor, cause the autonomous system 100 to generate trajectory control 120 from trajectory plan 118. The actuation and control components 114 can use the trajectory control 120 as input to cause actuation and/or control behaviors in the autonomous system 100.

FIG. 2A illustrates an autonomous vehicle system 200 in accordance with non-limiting example(s) of the present disclosure. Autonomous vehicle system 200 includes components similar to those described above with respect to autonomous system 100 and FIG. 1. However, autonomous vehicle system 200 is provided as an example of an autonomous automotive system. It is noted that the present disclosure is applicable to any of a variety of different types of autonomous systems, including autonomous vehicles. However, an autonomous vehicle example is reused throughout this disclosure for ease of explanation and consistency. This is not intended to be limiting.

Autonomous vehicle system 200 includes autonomous vehicle stack 202 and vehicle stack 204, separated by intrusion detection system 206. Like security components 106, intrusion detection system 206 can be arranged to monitor and take action to mitigate unauthorized and/or malicious activity. In particular, intrusion detection system 206 can include a processor 212 and memory 214. Memory 214 includes instructions 216, which, when executed by the processor 212, cause the processor to implement the functions described herein. Autonomous vehicle stack 202 includes planner 108 and high-level controllers 110, while vehicle stack 204 includes sensing components 112, actuation and control components 114, and ECUs 208.

Processor 212 can include any of a variety of processing circuitry and/or processors, such as, for example, commercial central processing units, application specific integrated circuits, microcontrollers, or the like.
Processor 212 can be a microprocessor or a commercial processor and can include one or multiple processing core(s) and can also include cache memory.

Memory 214 can be based on any of a wide variety of information storage technologies. For example, memory 214 can be based on volatile technologies requiring the uninterrupted provision of electric power, or on non-volatile technologies that do not require the uninterrupted provision of electric power, possibly including technologies entailing the use of machine-readable storage media that may or may not be removable. Thus, each of these storages may include any of a wide variety of types (or combination of types) of storage devices, including without limitation, read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDR-DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory (e.g., ferroelectric polymer memory), ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, one or more individual ferromagnetic disk drives, or a plurality of storage devices organized into one or more arrays (e.g., multiple ferromagnetic disk drives organized into a Redundant Array of Independent Disks array, or RAID array). Additionally, memory 214 can include memory storage devices.

In general, each of ECUs 208 includes circuitry arranged to generate messages and transmit the messages onto in-vehicle network 210 and/or consume messages from in-vehicle network 210. The ECUs 208 can be any of a variety of devices, such as, for example, sensor devices, actuator devices, microprocessor control devices, or the like, and will often be different types of ECU devices.
ECUs 208 include circuitry arranged to manipulate voltage levels on in-vehicle network 210 to communicate messages via the in-vehicle network 210.

It is to be appreciated that modern vehicles have many (often hundreds) of ECUs (e.g., ECUs 208). These ECUs are communicatively coupled via an in-vehicle network (IVN) 210, such as a CAN bus. There are multiple ECUs for engine control, transmission, airbags, antilock brakes, cruise control, electric power steering, audio systems, power windows, power doors, power mirror adjustment, battery, recharging systems for hybrid/electric cars, environmental control systems, auto start stop systems, blind spot monitoring, lane keeping assist systems, collision avoidance systems, and more.

Additionally, many modern vehicles can include auxiliary control systems that couple to the ECUs and in-vehicle network via a gateway. Attackers can attempt to force the auxiliary control system or the gateway off the bus in a similar manner as described above. The present disclosure is directed towards responding to attacks in a context-based manner, as described herein. Although the examples and communication networks described herein use vehicles as an example, the present disclosure can be implemented in a variety of contexts, such as, for example, industrial networks, vehicular networks, manufacturing networks, retail operation networks, warehousing networks, or the like. Further, although vehicular networks are often used in this description as an example, the claims are not limited to in-vehicle networks.

Like high-level sensing and control stack 102, autonomous vehicle stack 202 can further include a processor and memory comprising instructions executable by the processor.
For example, the memory can include planner 108 instructions, which, when executed by the processor, cause the autonomous vehicle system 200 to receive information related to a perception 116 (e.g., images of a road on which the vehicle incorporating autonomous vehicle system 200 is driving, or the like) and to generate a trajectory plan 118 from the perception 116 information and/or outputs from sensing components 112. Further, the memory can include high-level controllers 110 instructions, which, when executed by the processor, cause the autonomous vehicle system 200 to generate trajectory control 120 from trajectory plan 118. The actuation and control components 114 can use the trajectory control 120 as input to cause actuation and/or control behaviors in the autonomous vehicle system 200.

In a real-world setting, autonomous vehicle system 200 is susceptible to a number of different attack vectors. For example, an attacker can masquerade as a sensor sending false or inaccurate signals (e.g., a steering angle signal, an engine RPM signal, or the like) or can masquerade as a command signal and send false commands (e.g., an acceleration command, a steering angle adjustment command, or the like). Additionally, an attacker can attempt to force components off the IVN 210, such as ones of the ECUs 208 or even the entire autonomous vehicle stack 202. This is referred to as a bus-off attack.

Intrusion detection system 206 can be arranged to detect masquerading attacks and bus-off attacks and respond accordingly. For example, in conventional autonomous vehicles, a malicious ECU may be disconnected from the IVN 210. As another example, the autonomous vehicle stack 202 can be configured with redundant signals to tolerate a number of attacks. However, where the autonomous vehicle stack 202 itself is disconnected from the IVN 210, it is unable to respond to detection of malicious activity by the intrusion detection system 206.
Furthermore, where critical ones of ECUs 208 are disconnected from the IVN 210, these ECUs cannot perform their function. For example, if a steering or braking control ECU is disconnected from the IVN 210, steering or braking control will no longer be available in autonomous vehicle system 200.

As such, the present disclosure provides for repeatedly generating context-based responses that are acceptable in the event of an attack. These context-based responses can be carried out by components of the vehicle stack 204, for example, in the event that autonomous vehicle stack 202 is disconnected or compromised. As such, limited but safe control of the vehicle in which autonomous vehicle system 200 is implemented can be achieved in the event of attacks on the vehicle.

For example, FIG. 2B illustrates autonomous vehicle system 200 providing contexts 218 and contexts 220 to intrusion detection system 206. This is described in greater detail with respect to FIG. 3.

FIG. 3 depicts a routine 300, in accordance with non-limiting example(s) of the present disclosure. The routines and logic flows described herein, including routine 300, are representative of exemplary methodologies for performing novel aspects of the disclosed architecture. While, for purposes of simplicity of explanation, the one or more methodologies shown herein, for example, in the form of a flow chart or flow diagram, are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance therewith, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram.
Moreover, not all acts illustrated in a methodology may be required for a novel implementation.

Routine 300 can be implemented by an intrusion detection system (IDS), such as by intrusion detection system 206 of autonomous vehicle system 200. Typically, routine 300 can be implemented prior to intrusion detection system 206 detecting an attack. Routine 300 can begin at block 302 "receive context from AV stack" where an IDS can receive context information from an AV stack. For example, processor 212 can execute instructions 216 to receive information elements comprising indications of contexts 218 and 220 from planner 108 and high-level controllers 110. In some examples, contexts can be indications of environmental, traffic, and road condition contexts. For example, FIG. 4A illustrates an image 400a. During operation of autonomous vehicle system 200, image 400a can be received by autonomous vehicle stack 202 (e.g., via perception 116, or the like) and contexts associated with image 400a generated and provided to intrusion detection system 206. As a specific example, contexts associated with image 400a can be (1) straight road for miles, (2) excellent road conditions, (3) inappropriate shoulder to pull over, (4) no traffic in front of the vehicle, and (5) no oncoming traffic. Said differently, intrusion detection system 206 can receive contexts 218 and/or contexts 220 from autonomous vehicle stack 202 comprising an indication of the above-listed contexts for image 400a. As another example, FIG. 4B illustrates image 400b, which could be received (e.g., via perception 116, or the like) by autonomous vehicle stack 202. Contexts associated with image 400b can be (1) curved road by a cliff, (2) cliff on the right-hand side, and (3) an upcoming left-hand curve. As a third example, FIG. 4C illustrates image 400c, which could also be received (e.g., via perception 116, or the like) by autonomous vehicle stack 202.
Contexts associated with image 400c can be (1) straight road for 50 meters, (2) 30-degree left-hand curve, and (3) straight road for 70 meters.

Continuing to block 304 "receive contracts based on contexts" where an IDS can receive contracts based on the contexts received at block 302. For example, processor 212 can execute instructions 216 to receive contracts 222 based on contexts 218 and contexts 220. It is noted that in some examples, the contracts 222 are generated by "higher-level" components of the autonomous vehicle system 200, such as the autonomous vehicle stack 202, while in other examples, the intrusion detection system 206 receives the contexts 218 and 220 and then generates the contracts 222 from the contexts as detailed herein. Examples are not limited in this respect.

In some examples, contracts 222 can include both an indication of an expectation relative to the contexts as well as an acceptable action relative to the contexts. The contracts 222 (e.g., expectation and acceptable action) can be used to both detect an attack and also respond when an attack is detected. This is explained in greater detail below. However, returning to FIG. 4A, FIG. 4B, and FIG. 4C, examples of contracts associated with the contexts for each image are provided herein. For example, processor 212 can execute instructions 216 to generate contracts 222 from contexts derived based on image 400a to include an expectation that (1) the vehicle will drive straight, (2) only slight steering is anticipated, and (3) acceleration and braking are acceptable. Furthermore, contracts 222 can include an indication that acceptable actions in the event of an attack are (1) longitudinal actions are prioritized over lateral actions, (2) steering is not acceptable, (3) deviation to the oncoming lane is unlikely to cause a crash, and (4) actuate braking.
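One way the expectation/acceptable-action pairing of a contract could be represented is as a simple record the IDS consults before forwarding a command. This is a sketch only; the record fields and the command vocabulary are invented for illustration and are not taken from the disclosure.

```python
# Hypothetical sketch of a contract record (cf. the contract derived
# from image 400a) and a command filter. All names are illustrative.
CONTRACT_400A = {
    "expected": ["drive_straight", "slight_steering", "accelerate", "brake"],
    "permitted_on_attack": ["brake", "maintain_speed", "decelerate"],
    "forbidden_on_attack": ["steer_left", "steer_right"],
}

def filter_command(contract, command, under_attack):
    """Return True if the command may be forwarded to the actuators."""
    if not under_attack:
        # Outside an attack, commands matching the expectation pass.
        return command in contract["expected"]
    # Under attack, the forbidden list wins over the permitted list.
    if command in contract["forbidden_on_attack"]:
        return False
    return command in contract["permitted_on_attack"]

print(filter_command(CONTRACT_400A, "brake", under_attack=True))       # True
print(filter_command(CONTRACT_400A, "steer_left", under_attack=True))  # False
```

Because the record is plain data, the higher-level stack can hand it down before any attack occurs, which matches the motivation above: the filter keeps working even if the stack is later disconnected.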
As another example, processor 212 can execute instructions 216 to generate contracts 222 from contexts derived based on image 400b to include an expectation that (1) the vehicle will drive straight until the left turn, (2) no abrupt steering to the right at any time, (3) small fluctuations in speed, and (4) no major accelerations. Furthermore, contracts 222 can include an indication that acceptable actions in the event of an attack are (1) steering is not acceptable for 80 meters, (2) steering to the left at 80 meters, (3) no accelerations, and (4) braking allowed. As a third example, processor 212 can execute instructions 216 to generate contracts 222 from contexts derived based on image 400c to include an expectation that (1) the vehicle will drive straight for 50 meters, (2) soft steering (small angle) to the left at 50 meters, (3) soft braking at 50 meters, (4) no acceleration, (5) steering to center after the left hand turn at 50 meters, and (6) no steering after the left hand turn at 50 meters. Furthermore, contracts 222 can include an indication that acceptable actions in the event of an attack are (1) no steering to the right, (2) small accelerations before and after the left hand turn at 50 meters, and (3) braking and steering to the shoulder are acceptable, but there are obstacles.

With some examples, contexts 218 and 220 may just be an image (e.g., image 400a, or the like) while in other examples, contexts 218 and 220 may be a description of the context (e.g., as outlined above). Still, in some examples, the contracts 222, derived from contexts 218 and 220, can be more complex than described above. For example, the following table details a number of possible indications that can be generated for contracts 222 from contexts 218 and 220, where the image 400a is indicated by contexts 218 and 220.

Context: Longitudinal
  Possible Attack: 1. Brake controller unavailable. 2. Attacker applying soft brake. 3. Powertrain controller unavailable.
  Nominal Contract: Permitted: (1) Apply soft braking; (2) Reduce gearing; (3) Increase speed. Forbidden: (1) Sudden braking; (2) Exceed speed limit.
  Nominal Response: 1. Issue soft braking control command. 2. If attacker is applying soft braking, allow progression without immediate response. 3. Issue command(s) to either (1) decelerate or (2) maintain speed.
  Emergency Contract: Permitted: (1) Increase speed; (2) Decrease speed. Forbidden: None.
  Emergency Response: 1. Issue command to brake on behalf of the brake controller. If situation requires, can apply maximum force. 2. Issue additional brake commands to counteract attacker. 3. Issue commands to maintain or reduce speed.

Context: Lateral
  Possible Attack: 1. AV stack or lateral controller unavailable. 2. Attacker steering to the left.
  Nominal Contract: Permitted: Soft steering within the lane. Forbidden: (1) Abrupt steering; (2) Steering towards unpaved shoulder; (3) Steering towards opposing lane.
  Nominal Response: 1. Correct trajectory if needed. 2. Disconnect steering ECU and issue commands to softly steer to the right.
  Emergency Contract: Permitted: (1) Steer to the right shoulder preferred; (2) Steer to the opposing lane if needed.
  Emergency Response: 1. Correct trajectory if needed. 2. Steer to the right, even if the vehicle enters the unpaved shoulder.

In some examples, the contexts 218 and 220 can be just an image (e.g., a copy of image 400a, or the like) while in other examples, contexts 218 and 220 can include both the image and a description of the context as outlined herein. Furthermore, multiple different acceptable actions can be provided. For example, as illustrated in the above table, actions for a nominal response (e.g., actions to keep the current trajectory) as well as an emergency response (e.g., actions to minimize impacts on safety) are provided.

With further examples, the contexts 218 and 220 can be split into segments (represented as distance, time, or the like). For example, FIG. 4D illustrates image 400d depicting a context split into segments 402a, 402b, and 402c.
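The two response tiers above suggest a simple dispatch: apply the nominal plan when the situation permits, and escalate to the emergency plan otherwise. A hedged sketch follows; the dictionary contents are abridged from the table, and the `safety_critical` selection criterion and all names are assumptions:

```python
# Response plans abridged from the table above; keys are illustrative only.
RESPONSES = {
    "longitudinal": {
        "nominal": ["issue soft braking command",
                    "decelerate or maintain speed"],
        "emergency": ["brake on behalf of the brake controller",
                      "issue additional brake commands to counteract attacker",
                      "maintain or reduce speed"],
    },
    "lateral": {
        "nominal": ["correct trajectory if needed",
                    "disconnect steering ECU and softly steer right"],
        "emergency": ["correct trajectory if needed",
                      "steer right, even onto the unpaved shoulder"],
    },
}


def select_response(attack_axis: str, safety_critical: bool) -> list:
    """Return the emergency plan for safety-critical attacks, else the nominal plan."""
    mode = "emergency" if safety_critical else "nominal"
    return RESPONSES[attack_axis][mode]
```

For example, a non-critical lateral attack yields the nominal trajectory-correction plan, while a critical longitudinal attack yields the full emergency braking plan.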
During operation, intrusion detection system 206 can repeatedly receive context (e.g., contexts 218 and 220) from autonomous vehicle stack 202 and regenerate contracts 222 from the newly or recently received contexts. Said differently, routine 300 can be repeated (e.g., on a fixed period, at set points such as distance traveled, segment end reached, or the like). FIG. 4E illustrates image 400e depicting an update, or progression, to the context depicted in FIG. 4D after a period of time 404 has elapsed. As can be seen, segment 402a is shaded, indicative that the segment 402a has been traversed, while new segment 402d is depicted.

In some examples, segments are represented by periods of time (e.g., segments 402a, etc. of FIG. 4D and FIG. 4E) while in other examples, segments are represented by distance. For example, FIG. 4F illustrates image 400f depicting a context split into segments 406a and 406b, which are represented in distance as opposed to time to traverse the segment. FIG. 4G illustrates image 400g depicting an update, or progression, to the context depicted in FIG. 4F after a period of time 408 has elapsed. As can be seen, segment 406a is partially shaded, indicative that a part of segment 406a has been traversed. Furthermore, an update to the remaining distance of segment 406a is shown.

It is to be appreciated that routine 300 will be repeated during operation. For example, updated contexts 218 and 220 as well as contracts 222 are generated (e.g., at set intervals, after a set distance traveled, upon reaching landmarks, or the like). For example, contexts 218 and 220 can be updated after a set distance is traveled or upon reaching various roadway marks (e.g., stop sign, yield sign, speed limit sign, or the like).

FIG. 5 illustrates a routine 500 that can be implemented by an IDS (e.g., intrusion detection system 206, or the like) to mitigate effects of an attack. Routine 500 can begin at block 502 "detect an attack" where an attack can be detected.
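The distance-based segment bookkeeping of FIG. 4F and FIG. 4G can be sketched as consuming traveled distance from an ordered segment list. The segment names below reuse the figure labels; the function itself is an illustrative stand-in, not the patented routine:

```python
def advance_segments(segments, traveled_m):
    """Consume traveled distance from the front of a list of (name, length_m)
    segments, returning the segments still ahead with remaining distance."""
    remaining = []
    for name, length in segments:
        if traveled_m >= length:
            traveled_m -= length  # segment fully traversed; drop it
        else:
            remaining.append((name, length - traveled_m))
            traveled_m = 0.0  # partial progress only reduces the first segment
    return remaining


# Context split into two distance segments, as in image 400f.
segs = [("406a", 120.0), ("406b", 200.0)]
```

After 50 meters of travel, segment 406a would show 70 meters remaining, mirroring the remaining-distance update shown in image 400g.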
For example, processor 212 can execute instructions 216 to detect an attack on the autonomous vehicle system 200. In some examples, an attack can be detected based on detecting a masquerading attack, a bus-off attack, attacks against sensing components, or the like. In other examples, processor 212 can execute instructions 216 to detect a sudden change in the received contexts 218 and 220 to detect an attack. For example, a sudden change in the context (e.g., a snapshot of the road different from the prior snapshot) can indicate an attack on the autonomous vehicle stack 202.

Continuing to block 504 "report attack to the AV stack" where the IDS reports the attack to the AV stack. For example, processor 212 can execute instructions 216 to report the attack to the autonomous vehicle stack 202. In particular, processor 212 can execute instructions 216 to send an information element to autonomous vehicle stack 202 including indications of the attack (e.g., malicious ECU, affected ECU, type of attack, etc.).

Continuing to block 506 "issue command according to contract to mitigate attack effects" where the IDS can issue commands according to the contract (e.g., the most recent contract, or the like) to mitigate the effects of the attack detected at block 502. For example, processor 212 can execute instructions 216 to cause commands to be issued according to the contracts 222. For example, if the contracts 222 state to issue commands to the brake controller, the processor 212 can execute instructions 216 to issue commands to the brake controller as required by the contracts 222.

FIG. 6 illustrates an example of a storage device 600. Storage device 600 may comprise an article of manufacture, such as any non-transitory computer readable medium or machine readable medium, such as an optical, magnetic or semiconductor storage. Storage device 600 may store various types of computer executable instructions 602, such as instructions to implement routine 300 and/or routine 500.
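Blocks 502 through 506 can be sketched as a small detect-report-mitigate sequence. The context-comparison metric below (overlap between sets of descriptive tags) is an assumed stand-in for the "sudden change" detection described above, and the callback names are hypothetical:

```python
def context_changed_abruptly(prev_ctx, new_ctx, threshold=0.5):
    """Flag a possible attack when successive road snapshots disagree too much.

    Contexts are modeled as collections of descriptive tags; the Jaccard-style
    overlap metric is illustrative only.
    """
    a, b = set(prev_ctx), set(new_ctx)
    overlap = len(a & b) / max(len(a | b), 1)
    return overlap < threshold


def routine_500(prev_ctx, new_ctx, contract, report, issue):
    """Sketch of blocks 502-506: detect an attack, report it to the AV stack,
    then issue commands according to the most recent contract."""
    if not context_changed_abruptly(prev_ctx, new_ctx):  # block 502
        return False
    report({"type": "context-mismatch"})                 # block 504
    for action in contract["acceptable_actions"]:        # block 506
        issue(action)
    return True
```

In a real system, `report` and `issue` would forward to the AV stack and vehicle controllers; here they are plain callbacks so the control flow can be exercised in isolation.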
Examples of a computer readable or machine readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or rewriteable memory, and so forth. Examples of computer executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The examples are not limited in this context.

FIG. 7 illustrates an embodiment of a system 700. System 700 is a computer system with multiple processor cores such as a distributed computing system, supercomputer, high-performance computing system, computing cluster, mainframe computer, mini-computer, client-server system, personal computer (PC), workstation, server, portable computer, laptop computer, tablet computer, handheld device such as a personal digital assistant (PDA), or other device for processing, displaying, or transmitting information. Similar embodiments may comprise, e.g., entertainment devices such as a portable music player or a portable video player, a smart phone or other cellular phone, a telephone, a digital video camera, a digital still camera, an external storage device, or the like. Further embodiments implement larger scale server configurations. In other embodiments, the system 700 may have a single processor with one core or more than one processor. Note that the term "processor" refers to a processor with a single core or a processor package with multiple processor cores. In at least one embodiment, the computing system 700 is representative of the components of the autonomous vehicle system 200. More generally, the computing system 700 is configured to implement all logic, systems, logic flows, methods, apparatuses, and functionality described herein.
As a specific example, system 700 can be implemented as part of intrusion detection system 206 and arranged to implement the IDS feature of receiving contexts and contracts and responding to attacks according to the contracts as described herein.

As used in this application, the terms "system" and "component" and "module" are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary system 700. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. Further, components may be communicatively coupled to each other by various types of communications media to coordinate operations. The coordination may involve the uni-directional or bi-directional exchange of information. For instance, the components may communicate information in the form of signals communicated over the communications media. The information can be implemented as signals allocated to various signal lines. In such allocations, each message is a signal. Further embodiments, however, may alternatively employ data messages. Such data messages may be sent across various connections. Exemplary connections include parallel interfaces, serial interfaces, and bus interfaces.

As shown in this figure, system 700 comprises a motherboard or system-on-chip (SoC) 702 for mounting platform components.
Motherboard or system-on-chip (SoC) 702 is a point-to-point (P2P) interconnect platform that includes a first processor 704 and a second processor 706 coupled via a point-to-point interconnect 768 such as an Ultra Path Interconnect (UPI). In other embodiments, the system 700 may be of another bus architecture, such as a multi-drop bus. Furthermore, each of processor 704 and processor 706 may be processor packages with multiple processor cores including core(s) 708 and core(s) 710, respectively. While the system 700 is an example of a two-socket (2S) platform, other embodiments may include more than two sockets or one socket. For example, some embodiments may include a four-socket (4S) platform or an eight-socket (8S) platform. Each socket is a mount for a processor and may have a socket identifier. Note that the term platform refers to the motherboard with certain components mounted, such as the processor 704 and chipset 732. Some platforms may include additional components and some platforms may include sockets to mount the processors and/or the chipset. Furthermore, some platforms may not have sockets (e.g., SoC, or the like).

The processor 704 and processor 706 can be any of various commercially available processors, including without limitation an Intel® Celeron®, Core®, Core (2) Duo®, Itanium®, Pentium®, Xeon®, and XScale® processors; AMD® Athlon®, Duron® and Opteron® processors; ARM® application, embedded and secure processors; IBM® and Motorola® DragonBall® and PowerPC® processors; IBM and Sony® Cell processors; and similar processors. Dual microprocessors, multi-core processors, and other multiprocessor architectures may also be employed as the processor 704 and/or processor 706. Additionally, the processor 704 need not be identical to processor 706.

Processor 704 includes registers 712, integrated memory controller (IMC) 720 and point-to-point (P2P) interface 724 and P2P interface 728.
Similarly, the processor 706 includes registers 714, IMC 722 as well as P2P interface 726 and P2P interface 730. IMC 720 and IMC 722 couple processor 704 and processor 706, respectively, to respective memories (e.g., memory 716 and memory 718). Memory 716 and memory 718 may be portions of the main memory (e.g., a dynamic random-access memory (DRAM)) for the platform, such as double data rate type 3 (DDR3) or type 4 (DDR4) synchronous DRAM (SDRAM). In the present embodiment, memory 716 and memory 718 locally attach to the respective processors (i.e., processor 704 and processor 706). In other embodiments, the main memory may couple with the processors via a bus and shared memory hub.

System 700 includes chipset 732 coupled to processor 704 and processor 706. Furthermore, chipset 732 can be coupled to storage device 750, for example, via an interface (I/F) 738. The I/F 738 may be, for example, a Peripheral Component Interconnect Express (PCI-e) interface.

Processor 704 couples to chipset 732 via P2P interface 728 and P2P 734 while processor 706 couples to chipset 732 via P2P interface 730 and P2P 736. Direct media interface (DMI) 774 and DMI 776 may couple the P2P interface 728 and the P2P 734 and the P2P interface 730 and P2P 736, respectively. DMI 774 and DMI 776 may be a high-speed interconnect that facilitates, e.g., eight giga-transfers per second (GT/s), such as DMI 3.0. In other embodiments, the processor 704 and processor 706 may interconnect via a bus.

The chipset 732 may comprise a controller hub such as a platform controller hub (PCH). The chipset 732 may include a system clock to perform clocking functions and include interfaces for an I/O bus such as a universal serial bus (USB), peripheral component interconnects (PCIs), serial peripheral interconnects (SPIs), inter-integrated circuit (I2C) interconnects, and the like, to facilitate connection of peripheral devices on the platform.
In other embodiments, the chipset 732 may comprise more than one controller hub, such as a chipset with a memory controller hub, a graphics controller hub, and an input/output (I/O) controller hub.

In the depicted example, chipset 732 couples with a trusted platform module (TPM) 744 and UEFI, BIOS, FLASH circuitry 746 via I/F 742. The TPM 744 is a dedicated microcontroller designed to secure hardware by integrating cryptographic keys into devices. The UEFI, BIOS, FLASH circuitry 746 may provide pre-boot code.

Furthermore, chipset 732 includes the I/F 738 to couple chipset 732 with a high-performance graphics engine, such as graphics processing circuitry or a graphics processing unit (GPU) 748. In other embodiments, the system 700 may include a flexible display interface (FDI) (not shown) between the processor 704 and/or the processor 706 and the chipset 732. The FDI interconnects a graphics processor core in one or more of processor 704 and/or processor 706 with the chipset 732. Additionally, ML accelerator 754 is coupled to chipset 732 via I/F 738. ML accelerator 754 can be circuitry arranged to execute ML related operations (e.g., training, inference, etc.) for ML models. In particular, ML accelerator 754 can be arranged to execute mathematical operations and/or operands useful for machine learning.

Various I/O devices 758 and display 752 couple to the bus 770, along with a bus bridge 756 which couples the bus 770 to a second bus 772 and an I/F 740 that connects the bus 770 with the chipset 732. In one embodiment, the second bus 772 may be a low pin count (LPC) bus. Various devices may couple to the second bus 772 including, for example, a keyboard 760, a mouse 762 and communication devices 764. Furthermore, an audio I/O 766 may couple to second bus 772. Many of the I/O devices 758 and communication devices 764 may reside on the motherboard or system-on-chip (SoC) 702 while the keyboard 760 and the mouse 762 may be add-on peripherals.
In other embodiments, some or all of the I/O devices 758 and communication devices 764 are add-on peripherals and do not reside on the motherboard or system-on-chip (SoC) 702.

FIG. 8 illustrates an in-vehicle communication architecture 800 according to one or more embodiments of the disclosure. For example, one or more vehicular devices, components, or circuits, such as circuitry 802 and/or circuitry 804, may communicate with each other via a communication framework 806, which may be an in-vehicle network, such as a CAN bus, implemented to facilitate the context-based attack mitigation techniques described herein.

The in-vehicle communication architecture 800 includes various common communications elements, such as a transmitter, receiver, transceiver, and so forth. The embodiments, however, are not limited to implementation by the in-vehicle communication architecture 800. As shown in this figure, the vehicular circuitry 802 and circuitry 804 may each be operatively connected to one or more respective data devices, such as data device 808 and/or data device 810, that can be employed to store information local to the respective circuitry 802 and/or circuitry 804, such as fingerprints, distributions, densities, voltage signals, or the like. It may be understood that the circuitry 802 and circuitry 804 may be any suitable vehicular component, such as a sensor, an ECU, microcontroller, microprocessor, processor, ASIC, field programmable gate array (FPGA), any electronic device, computing device, or the like. Moreover, it may be understood that one or more computing devices (containing at least a processor, memory, interfaces, etc.) may be connected to the communication framework 806 in a vehicle.

Further, the communication framework 806 may implement any well-known communications techniques and protocols. As described above, the communication framework 806 may be implemented as a CAN bus protocol or any other suitable in-vehicle communication protocol.
The communication framework 806 may also implement various network interfaces arranged to accept, communicate, and connect to one or more external communications networks (e.g., Internet). A network interface may be regarded as a specialized form of an input/output (I/O) interface. Network interfaces may employ connection protocols including without limitation direct connect, Ethernet (e.g., thick, thin, twisted pair 10/100/1000 Base T, and the like), token ring, wireless network interfaces, cellular network interfaces, IEEE 802.11a-x network interfaces, IEEE 802.16 network interfaces, IEEE 802.20 network interfaces, and the like. Further, multiple network interfaces may be used to engage with various communications network types. The communication framework 806 may employ both wired and wireless connections.

The components and features of the devices described above may be implemented using any combination of: processing circuitry, discrete circuitry, application specific integrated circuits (ASICs), logic gates and/or single chip architectures, etc. Further, the features of the devices may be implemented using microcontrollers, programmable logic arrays and/or microprocessors or any combination of the foregoing where suitably appropriate. It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as "logic" or "circuit."

Some embodiments may be described using the expression "one embodiment" or "an embodiment" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Further, some embodiments may be described using the expression "coupled" and "connected" along with their derivatives.
These terms are not necessarily intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.

It is emphasized that the Abstract of the Disclosure is provided to allow a reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments have more features than are expressly recited in each claim. Rather, as the following claims reflect, the described subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein," respectively. Moreover, the terms "first," "second," "third," and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects.

What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methodology, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible.
Accordingly, the novel architecture is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.

The following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent.

Example 1. A computing apparatus comprising: a processor at an intrusion detection system of an autonomous system; and memory storing instructions, which when executed by the processor configure the apparatus to: receive a context associated with the autonomous system; receive a contract based on the context, the contract comprising an indication of acceptable actions for the autonomous system; detect an attack on the autonomous system; generate, responsive to the attack, at least one command according to the contract; and send the command to a subsystem of the autonomous system.

Example 2. The computing apparatus of claim 1, the autonomous system an autonomous vehicle, the context to comprise an indication of a roadway on which the autonomous vehicle is traveling.

Example 3. The computing apparatus of claim 2, the context comprising an indication of a plurality of segments of the roadway.

Example 4. The computing apparatus of claim 2, the context to comprise an indication of a geometry of the roadway and a characteristic of a shoulder of the roadway.

Example 5. The computing apparatus of claim 4, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, and one or more nominal mitigation actions, the instructions when executed by the processor configure the apparatus to generate the at least one command based on the one or more nominal mitigation actions.

Example 6.
The computing apparatus of claim 4, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, one or more nominal mitigation actions, and one or more emergency mitigation actions, the instructions when executed by the processor configure the apparatus to generate the at least one command based on the one or more emergency mitigation actions.

Example 7. The computing apparatus of claim 2, the autonomous vehicle comprising a plurality of electronic control units (ECUs) coupled to an in-vehicle network (IVN), the attack comprising a masquerading attack or a bus-off attack initiated by a one of the plurality of ECUs.

Example 8. The computing apparatus of claim 7, the command comprising disconnecting the one of the plurality of ECUs from the IVN and sending a control signal to at least one other of the plurality of ECUs.

Example 9. A method, comprising: receiving, at an intrusion detection system (IDS) of an autonomous system, a context associated with the autonomous system; receiving, at the IDS, a contract based on the context, the contract comprising an indication of acceptable actions for the autonomous system; detecting, by the IDS, an attack on the autonomous system; generating, by the IDS responsive to the attack, at least one command according to the contract; and sending, from the IDS, the command to a subsystem of the autonomous system.

Example 10. The method of claim 9, the autonomous system an autonomous vehicle, the context to comprise an indication of a roadway on which the autonomous vehicle is traveling.

Example 11. The method of claim 10, the context comprising an indication of a plurality of segments of the roadway.

Example 12. The method of claim 10, the context to comprise an indication of a geometry of the roadway and a characteristic of a shoulder of the roadway.

Example 13.
The method of claim 12, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, and one or more nominal mitigation actions, the method comprising generating the at least one command based on the one or more nominal mitigation actions.

Example 14. The method of claim 12, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, one or more nominal mitigation actions, and one or more emergency mitigation actions, the method comprising generating the at least one command based on the one or more emergency mitigation actions.

Example 15. The method of claim 10, the autonomous vehicle comprising a plurality of electronic control units (ECUs) coupled to an in-vehicle network (IVN), the attack comprising a masquerading attack or a bus-off attack initiated by a one of the plurality of ECUs.

Example 16. The method of claim 15, the command comprising disconnecting the one of the plurality of ECUs from the IVN and sending a control signal to at least one other of the plurality of ECUs.

Example 17. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by an intrusion detection system (IDS) of an autonomous system, cause the IDS to: receive a context associated with the autonomous system; receive a contract based on the context, the contract comprising an indication of acceptable actions for the autonomous system; detect an attack on the autonomous system; generate, responsive to the attack, at least one command according to the contract; and send the command to a subsystem of the autonomous system.

Example 18. The computer-readable storage medium of claim 17, the autonomous system an autonomous vehicle, the context to comprise an indication of a roadway on which the autonomous vehicle is traveling.

Example 19.
The computer-readable storage medium of claim 18, the context comprising an indication of a plurality of segments of the roadway.

Example 20. The computer-readable storage medium of claim 18, the context to comprise an indication of a geometry of the roadway and a characteristic of a shoulder of the roadway.

Example 21. The computer-readable storage medium of claim 20, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, and one or more nominal mitigation actions, the computer-readable storage medium including instructions that when executed by the IDS, cause the IDS to generate the at least one command based on the one or more nominal mitigation actions.

Example 22. The computer-readable storage medium of claim 20, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, one or more nominal mitigation actions, and one or more emergency mitigation actions, the computer-readable storage medium including instructions that when executed by the IDS, cause the IDS to generate the at least one command based on the one or more emergency mitigation actions.

Example 23. The computer-readable storage medium of claim 18, the autonomous vehicle comprising a plurality of electronic control units (ECUs) coupled to an in-vehicle network (IVN), the attack comprising a masquerading attack or a bus-off attack initiated by a one of the plurality of ECUs.

Example 24. The computer-readable storage medium of claim 23, the command comprising disconnecting the one of the plurality of ECUs from the IVN and sending a control signal to at least one other of the plurality of ECUs.

Example 25.
A computing apparatus comprising: a processor at an intrusion detection system, comprising: means for receiving, at an intrusion detection system (IDS) of an autonomous system, a context associated with the autonomous system; means for receiving, at the IDS, a contract based on the context, the contract comprising an indication of acceptable actions for the autonomous system; means for detecting, by the IDS, an attack on the autonomous system; means for generating, by the IDS responsive to the attack, at least one command according to the contract; and means for sending, from the IDS, the command to a subsystem of the autonomous system.

Example 26. The apparatus of claim 25, the autonomous system an autonomous vehicle, the context to comprise an indication of a roadway on which the autonomous vehicle is traveling.

Example 27. The apparatus of claim 26, the context comprising an indication of a plurality of segments of the roadway.

Example 28. The apparatus of claim 26, the context to comprise an indication of a geometry of the roadway and a characteristic of a shoulder of the roadway.

Example 29. The apparatus of claim 28, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, and one or more nominal mitigation actions, the apparatus comprising means for generating the at least one command based on the one or more nominal mitigation actions.

Example 30. The apparatus of claim 28, the contract comprising an indication of one or more acceptable behaviors for the autonomous vehicle, one or more nominal mitigation actions, and one or more emergency mitigation actions, the apparatus comprising means for generating the at least one command based on the one or more emergency mitigation actions.

Example 31.
The apparatus of claim 26, the autonomous vehicle comprising a plurality of electronic control units (ECUs) coupled to an in-vehicle network (IVN), the attack comprising a masquerading attack or a bus-off attack initiated by a one of the plurality of ECUs.

Example 32. The apparatus of claim 31, the command comprising disconnecting the one of the plurality of ECUs from the IVN and sending a control signal to at least one other of the plurality of ECUs.
The invention relates to an integrated circuit system including a memory array comprising strings of memory cells, and to a method for forming the memory array. The method includes forming vertically extending strings of channel material into a stack including vertically alternating first and second layers. Pads are formed laterally outward of individual ones of the strings of channel material in one of the first layers and in one of the second layers. The pads are isotropically etched to form void spaces in the one second layer above the one first layer. Individual ones of the void spaces are laterally between the individual strings of channel material and the second layer material in the one second layer. A conductively doped semiconductive material is formed against sidewalls of the channel material of the strings of channel material in the one first layer and extends upwardly into the void spaces in the one second layer.
1. A method for forming a memory array including strings of memory cells, comprising:

forming vertically extending strings of channel material into a stack including vertically alternating first and second layers, the material of the first layers and the material of the second layers having different compositions;

forming pads laterally outside of respective ones of the strings of channel material in one of the first layers and in one of the second layers;

isotropically etching the pads to form void spaces in the one second layer above the one first layer, individual ones of the void spaces being laterally between the individual strings of channel material and the second layer material in said one second layer;

forming conductively doped semiconductive material against the sidewalls of the channel material of the strings of channel material in the one first layer and extending upwardly into the void spaces in the one second layer; and

heating the conductively doped semiconductive material to cause the conductivity-increasing dopant therein to diffuse laterally from the void spaces into the laterally adjacent channel material and upwardly in the channel material above the void spaces.

2. The method of claim 1, wherein the pads are insulative.

3. The method of claim 1, wherein the pads are electrically conductive.

4. The method of claim 1, wherein the pads are semiconductive.

5. The method of claim 1, wherein the pads comprise a nitride.

6. The method of claim 5, wherein the pads consist essentially of or consist of the nitride.

7. The method of claim 1, wherein the pads comprise an oxide.

8. The method of claim 7, wherein the pads consist essentially of or consist of the oxide.

9. The method of claim 1 including forming the pads to individually extend directly below the individual strings of channel material.

10.
The method of claim 9 wherein, in a finished-circuitry construction, the liners individually extend directly below the individual channel material strings.
11. A method for forming a memory array comprising strings of memory cells, comprising: forming a conductor layer comprising conductor material on a substrate; forming a lower portion of a stack that will comprise vertically alternating first and second layers above the conductor layer, the stack comprising laterally spaced memory block regions, material of the first layers and material of the second layers being of different compositions, a lowermost of the first layers in the lower portion comprising sacrificial material; forming the vertically alternating first and second layers of an upper portion of the stack above the lower portion and forming channel openings through the upper portion into the sacrificial material in the lower portion; forming liners in individual ones of the channel openings laterally aside the sacrificial material, the liners extending upwardly above the sacrificial material; forming channel material strings in the channel openings, the channel material strings extending through the first and second layers in the upper portion to the lowermost first layer in the lower portion, individual ones of the channel material strings being laterally inward of individual ones of the liners; forming horizontally elongated trenches into the stack, the trenches being individually between laterally immediately adjacent of the memory block regions and extending to the lowermost first layer; isotropically etching the sacrificial material of the lowermost first layer through the trenches to expose the liners; isotropically etching the exposed liners to form void spaces above the lowermost first layer, the void spaces being individually laterally between the individual channel material strings and second-layer material in the second layer that is immediately below a lowermost of the first layers in the upper portion; forming a conductively doped semiconductive material against sidewalls of the channel material of the channel material strings, the conductively doped semiconductive material directly electrically coupling together the channel material of the individual channel material strings and the conductor material of the conductor layer, the conductively doped semiconductive material extending upwardly into the void spaces; and heating the conductively doped semiconductive material to cause conductivity-increasing dopant therein to diffuse laterally from the void spaces into laterally adjacent channel material and upwardly into channel material above the void spaces.
12. The method of claim 11 wherein the liners have respective tops in the lower portion.
13. The method of claim 11 comprising, prior to forming the conductively doped semiconductive material, removing all material of the liners that is above the conductor layer.
14. The method of claim 11 comprising leaving material of the liners in the conductor layer and forming the conductively doped semiconductive material directly above it.
15. The method of claim 11 comprising: prior to forming the conductively doped semiconductive material, removing all material of the liners that is above the conductor layer; and leaving material of the liners in the conductor layer and forming the conductively doped semiconductive material directly above it.
16.
The method of claim 11 wherein the liners are formed before forming the upper portion of the stack.
17. A method for forming a memory array comprising strings of memory cells, comprising: forming a conductor layer comprising conductor material on a substrate; forming a lower portion of a stack that will comprise vertically alternating first and second layers above the conductor layer, the stack comprising laterally spaced memory block regions, material of the first layers and material of the second layers being of different compositions, a lowermost of the first layers in the lower portion comprising sacrificial material; forming pillars in the lower portion, the pillars being individually located horizontally where individual channel material strings will be formed, individual ones of the pillars comprising a laterally inner material and a liner laterally outward of the laterally inner material, the liner extending upwardly above the sacrificial material; forming the vertically alternating first and second layers of an upper portion of the stack above the lower portion and the pillars; forming channel openings into the stack, the channel openings individually extending to individual ones of the pillars; removing the laterally inner material of the pillars through the channel openings to extend the channel openings deeper into the stack; forming individual ones of the channel material strings in individual ones of the extended channel openings, in voids therein resulting from the removing, and laterally inward of individual ones of the liners; forming horizontally elongated trenches into the stack, the trenches being individually between laterally immediately adjacent of the memory block regions and extending to the lowermost first layer; isotropically etching the sacrificial material of the lowermost first layer through the trenches to expose the liners; isotropically etching the exposed liners to form void spaces above the lowermost first layer, the void spaces being individually laterally between the individual channel material strings and second-layer material in the second layer that is immediately below a lowermost of the first layers in the upper portion; forming a conductively doped semiconductive material against sidewalls of the channel material of the channel material strings, the conductively doped semiconductive material directly electrically coupling together the channel material of the individual channel material strings and the conductor material of the conductor layer, the conductively doped semiconductive material extending upwardly into the void spaces; and heating the conductively doped semiconductive material to cause conductivity-increasing dopant therein to diffuse laterally from the void spaces into laterally adjacent channel material and upwardly into channel material above the void spaces.
18. The method of claim 17 wherein the liners have respective tops in the lower portion.
19. The method of claim 17 comprising forming the liners to individually extend directly below the laterally inner material.
20. The method of claim 17 comprising, prior to forming the conductively doped semiconductive material, removing all material of the liners that is above the conductor layer.
21. The method of claim 17 comprising leaving liner material in the conductor layer and forming the conductively doped semiconductive material directly above it.
22.
The method of claim 21 wherein the remaining liner material is in the shape of an upwardly open container in vertical cross-section.
23. The method of claim 17 comprising: prior to forming the conductively doped semiconductive material, removing all material of the liners that is above the conductor layer; and leaving material of the liners in the conductor layer and forming the conductively doped semiconductive material directly above it.
24. An integrated circuit system comprising a memory array comprising strings of memory cells, comprising: laterally spaced memory blocks individually comprising a first vertical stack comprising alternating insulative and conductive layers, strings of memory cells comprising channel material strings extending through the insulative and conductive layers, the conductive layers individually comprising horizontally elongated conductive lines; and a second vertical stack laterally beside the first vertical stack, the second vertical stack comprising an upper portion and a lower portion, the upper portion comprising alternating first and second insulative layers, the lower portion comprising: a lowermost insulator layer directly above conductor material of a conductor layer; a first material comprising polysilicon directly above the lowermost insulator layer; an insulator material directly above the first material comprising polysilicon; and a second material comprising polysilicon directly above the insulator material.
25. The integrated circuit system of claim 24 wherein the first material comprising polysilicon and the second material comprising polysilicon are of the same composition relative one another.
26. The integrated circuit system of claim 24 wherein the first material comprising polysilicon consists essentially of, or consists of, undoped polysilicon.
27.
The integrated circuit system of claim 24 wherein the first material comprising polysilicon consists essentially of, or consists of, conductively doped polysilicon.
28. The integrated circuit system of claim 24 wherein the second material comprising polysilicon consists essentially of, or consists of, undoped polysilicon.
29. The integrated circuit system of claim 24 wherein the second material comprising polysilicon consists essentially of, or consists of, conductively doped polysilicon.
30. The integrated circuit system of claim 24 wherein the insulator material and material of the lowermost insulator layer are of the same composition relative one another.
31. The integrated circuit system of claim 30 wherein the same composition comprises silicon dioxide.
32. The integrated circuit system of claim 31 wherein the same composition consists essentially of, or consists of, undoped silicon dioxide.
33. The integrated circuit system of claim 24 wherein: the first material comprising polysilicon and the second material comprising polysilicon are of the same composition relative one another; and the insulator material and material of the lowermost insulator layer are of the same composition relative one another, that composition being different from the composition of the first and second materials comprising polysilicon.
34. The integrated circuit system of claim 24 wherein the first vertical stack comprises insulative material immediately below the horizontally elongated conductive line of a lowermost of the conductive layers, the insulative material comprising, in vertical cross-section, a curved surface on each side of individual ones of the channel material strings.
35. The integrated circuit system of claim 34 wherein the curved surface comprises a portion that is horizontal.
36.
The integrated circuit system of claim 35 wherein the portion is completely horizontal.
37. An integrated circuit system comprising a memory array comprising strings of memory cells, comprising: laterally spaced memory blocks individually comprising a vertical stack comprising alternating insulative and conductive layers, strings of memory cells comprising channel material strings extending through the insulative and conductive layers, the conductive layers individually comprising horizontally elongated conductive lines; and insulative material immediately below the horizontally elongated conductive line of a lowermost of the conductive layers, the insulative material comprising, in vertical cross-section, a curved surface on each side of individual ones of the channel material strings.
38. The integrated circuit system of claim 37 wherein the curved surface comprises a portion that is horizontal.
39. The integrated circuit system of claim 38 wherein the portion is completely horizontal. |
Integrated Circuit System Including a Memory Array Comprising Strings of Memory Cells, and Method of Forming a Memory Array

Technical Field

Embodiments disclosed herein relate to integrated circuit systems including memory arrays comprising strings of memory cells, and to methods for forming memory arrays comprising strings of memory cells.

Background

Memory is one type of integrated circuitry and is used in computer systems for storing data. Memory may be fabricated as one or more arrays of individual memory cells. Memory cells may be written to, or read from, using digit lines (which may also be referred to as bit lines, data lines, or sense lines) and access lines (which may also be referred to as word lines). The sense lines may conductively interconnect memory cells along columns of the array, and the access lines may conductively interconnect memory cells along rows of the array. Each memory cell may be uniquely addressed through the combination of a sense line and an access line.

Memory cells may be volatile, semi-volatile, or non-volatile. Non-volatile memory cells can store data for extended periods of time in the absence of power. Non-volatile memory is conventionally specified to be memory having a retention time of at least about 10 years. Volatile memory dissipates and is therefore refreshed/rewritten to maintain data storage. Volatile memory may have a retention time of milliseconds or less. Regardless, memory cells are configured to retain or store memory in at least two different selectable states. In a binary system, the states are considered as either a "0" or a "1". In other systems, at least some individual memory cells may be configured to store more than two levels or states of information.

A field effect transistor is one type of electronic component that may be used in a memory cell. These transistors comprise a pair of conductive source/drain regions having a semiconductive channel region therebetween. A conductive gate is adjacent the channel region and separated therefrom by a thin gate insulator.
Application of a suitable voltage to the gate allows current to flow from one of the source/drain regions through the channel region to the other. When the voltage is removed from the gate, current is largely prevented from flowing through the channel region. Field effect transistors may also include additional structure, for example a reversibly programmable charge-storage region as part of the gate construction between the gate insulator and the conductive gate.

Flash memory is one type of memory and has numerous uses in modern computers and devices. For instance, modern personal computers may have a BIOS stored on a flash memory chip. As another example, it is becoming increasingly common for computers and other devices to utilize flash memory in solid state drives in place of conventional hard drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized, and provides the ability to remotely upgrade the devices for enhanced features.

NAND may be a basic architecture of integrated flash memory. A NAND cell unit comprises at least one selecting device coupled in series to a serial combination of memory cells (with the serial combination commonly being referred to as a NAND string). NAND architecture may be configured in a three-dimensional arrangement comprising vertically stacked memory cells individually comprising a reversibly programmable vertical transistor. Control or other circuitry may be formed below the vertically stacked memory cells. Other volatile or non-volatile memory array architectures may also comprise vertically stacked memory cells that individually comprise a transistor.

Memory arrays may be arranged in memory pages, memory blocks and partial blocks (e.g., sub-blocks), and memory planes, for example as shown and described in any of US Patent Application Publication Nos. 2015/0228651, 2016/0267984, and 2017/0140833.
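As a reading aid only (not part of the disclosure), the row-and-column addressing described in the Background can be sketched in a few lines of code; the array dimensions and the `address` helper below are hypothetical and purely illustrative:

```python
# Illustrative sketch: each memory cell in a rectangular array is uniquely
# addressed by the combination of one access line (word line / row) and one
# sense line (bit line / column).

def address(word_line: int, bit_line: int, num_bit_lines: int) -> int:
    """Map a (word line, bit line) pair to a unique flat cell index."""
    return word_line * num_bit_lines + bit_line

# A hypothetical 4-row x 8-column array has 32 uniquely addressable cells.
NUM_WORD_LINES, NUM_BIT_LINES = 4, 8
addresses = {
    address(wl, bl, NUM_BIT_LINES)
    for wl in range(NUM_WORD_LINES)
    for bl in range(NUM_BIT_LINES)
}
assert len(addresses) == NUM_WORD_LINES * NUM_BIT_LINES  # every pair is unique
```

The uniqueness assertion reflects the statement above that each memory cell may be uniquely addressed through the combination of a sense line and an access line.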
Memory blocks may at least in part define longitudinal outlines of individual word lines in individual word line tiers of vertically stacked memory cells. Connections to these word lines may occur in a so-called "stair-step structure" at an end or edge of an array of the vertically stacked memory cells. The stair-step structure includes individual "stairs" (alternately termed "steps" or "stair-steps") that define contact regions of individual word lines, upon which vertically extending conductive vias contact to provide electrical access to the word lines.

SUMMARY OF THE INVENTION

In one aspect, the present disclosure relates to a method for forming a memory array comprising strings of memory cells, comprising: forming strings of vertically extending channel material into a stack comprising vertically alternating first and second layers, material of the first layers and material of the second layers being of different compositions; forming liners laterally outward of individual ones of the channel material strings in one of the first layers and in one of the second layers; isotropically etching the liners to form void spaces in the one second layer above the one first layer, individual ones of the void spaces being laterally between the individual channel material strings and second-layer material in the one second layer; forming a conductively doped semiconductive material against sidewalls of the channel material of the channel material strings in the one first layer and extending upwardly into the void spaces in the one second layer; and heating the conductively doped semiconductive material so that conductivity-increasing dopant therein diffuses laterally from the void spaces into laterally adjacent channel material and diffuses upwardly into channel material above the void spaces.

In another aspect, the present disclosure relates to a method for forming a memory array comprising strings of memory cells, comprising:
forming a conductor layer comprising conductor material on a substrate; forming a lower portion of a stack that will comprise vertically alternating first and second layers above the conductor layer, the stack comprising laterally spaced memory block regions, material of the first layers and material of the second layers being of different compositions, a lowermost of the first layers in the lower portion comprising sacrificial material; forming the vertically alternating first and second layers of an upper portion of the stack above the lower portion and forming channel openings through the upper portion into the sacrificial material in the lower portion; forming liners in individual ones of the channel openings laterally aside the sacrificial material, the liners extending upwardly above the sacrificial material; forming channel material strings in the channel openings, the channel material strings extending through the first and second layers in the upper portion to the lowermost first layer in the lower portion, individual ones of the channel material strings being laterally inward of individual ones of the liners; forming horizontally elongated trenches into the stack, the trenches being individually between laterally immediately adjacent of the memory block regions and extending to the lowermost first layer; isotropically etching the sacrificial material of the lowermost first layer through the trenches to expose the liners; isotropically etching the exposed liners to form void spaces above the lowermost first layer, the void spaces being individually laterally between the individual channel material strings and second-layer material in the second layer that is immediately below a lowermost of the first layers in the upper portion; forming a conductively doped semiconductive material against sidewalls of the channel material of the channel material strings, the conductively doped semiconductive material directly electrically coupling together the channel material of the individual channel material strings and the conductor material of the conductor layer, the conductively doped semiconductive material extending upwardly into the void spaces; and heating the conductively doped semiconductive material so that conductivity-increasing dopant therein diffuses laterally from the void spaces into laterally adjacent channel material and diffuses upwardly into channel material above the void spaces.

In another aspect, the present disclosure relates to a method for forming a memory array comprising strings of memory cells, comprising: forming a conductor layer comprising conductor material on a substrate; forming a lower portion of a stack that will comprise vertically alternating first and second layers above the conductor layer, the stack comprising laterally spaced memory block regions, material of the first layers and material of the second layers being of different compositions, a lowermost of the first layers in the lower portion comprising sacrificial material; forming pillars in the lower portion, the pillars being individually located horizontally where individual channel material strings will be formed, individual ones of the pillars comprising a laterally inner material and a liner laterally outward of the laterally inner material, the liner extending upwardly above the sacrificial material; forming the vertically alternating first and second layers of an upper portion of the stack above the lower portion and the pillars; forming channel openings into the stack, the channel openings individually extending to individual ones of the pillars; removing the laterally inner material of the pillars through the channel openings to extend the channel openings deeper into the stack; forming individual ones of the channel material strings in individual ones of the extended channel openings, in voids therein resulting from the removing, and laterally inward of individual ones of the liners; forming horizontally elongated trenches into the stack, the trenches being individually between laterally immediately adjacent of the memory block regions and extending to the lowermost first layer; isotropically etching the sacrificial material of the lowermost first layer through the trenches to expose the liners; isotropically etching the exposed liners to form void spaces above the lowermost first layer, the void spaces being individually laterally between the individual channel material strings and second-layer material in the second layer that is immediately below a lowermost of the first layers in the upper portion; forming a conductively doped semiconductive material against sidewalls of the channel material of the channel material strings, the conductively doped semiconductive material directly electrically coupling together the channel material of the individual channel material strings and the conductor material of the conductor layer, the conductively doped semiconductive material extending upwardly into the void spaces; and heating the conductively doped semiconductive material so that conductivity-increasing dopant therein diffuses laterally from the void spaces into laterally adjacent channel material and diffuses upwardly into channel material above the void spaces.

In another aspect, the present disclosure relates to an integrated circuit system comprising a memory array comprising strings of memory cells, comprising: laterally spaced memory blocks individually comprising a first vertical stack comprising alternating insulative and conductive layers, strings of memory cells comprising channel material strings extending through the insulative and conductive layers, the conductive layers individually comprising horizontally elongated conductive lines; and a second vertical stack laterally beside the first vertical stack, the second vertical stack comprising an upper portion and a lower portion, the upper portion comprising alternating first and second insulative layers, the lower portion comprising: a lowermost insulator layer directly above conductor material of a conductor layer; a first material comprising polysilicon directly above the lowermost insulator layer; an insulator material directly above the first material comprising polysilicon; and a second material comprising polysilicon directly above the insulator material.

In another aspect, the present disclosure relates to an integrated circuit system comprising a memory array comprising strings of memory cells, comprising: laterally spaced memory blocks individually comprising a vertical stack comprising alternating insulative and conductive layers, strings of memory cells comprising channel material strings extending through the insulative and conductive layers, the conductive layers individually comprising horizontally elongated conductive lines; and insulative material immediately below the horizontally elongated conductive line of a lowermost of the conductive layers, the insulative material comprising, in vertical cross-section, a curved surface on each side of individual ones of the channel material strings.

Brief Description of the Drawings

FIG. 1 is a diagrammatic cross-sectional view of a portion of a substrate in process in accordance with an embodiment of the invention, and is taken through line 1-1 in FIG. 2.

FIG. 2 is a diagrammatic cross-sectional view taken through line 2-2 in FIG. 1.

FIGS. 3-27 are diagrammatic sequential sectional, expanded, enlarged, and/or partial views of the construction of FIGS. 1 and 2, or portions thereof or alternate embodiments, in process in accordance with some embodiments of the invention.

Detailed Description

Embodiments of the invention encompass methods for forming memory arrays comprising strings of memory cells, for example arrays of NAND or other memory cells that may have at least some peripheral control circuitry under the array (e.g., CMOS-under-array).
Embodiments of the invention encompass so-called "gate-last" or "replacement-gate" processing, so-called "gate-first" processing, and other existing or future-developed processing independent of when transistor gates are formed. Embodiments of the invention also encompass existing or future-developed integrated circuit systems comprising memory arrays comprising strings of memory cells independent of method of manufacture, for example comprising NAND architecture. A first example method embodiment, which may be considered "gate-last" or "replacement-gate", is described with reference to FIGS. 1-27, beginning with FIGS. 1 and 2.

FIGS. 1 and 2 show a construction 10 having an array or array area 12 in which vertically extending strings of transistors and/or memory cells will be formed. Construction 10 comprises a base substrate 11 having any one or more of conductive/conductor/conducting, semiconductive/semiconductor/semiconducting, or insulative/insulator/insulating (i.e., electrically herein) materials. Various materials have been formed vertically above base substrate 11. Materials may be aside, vertically inward, or vertically outward of the materials depicted in FIGS. 1 and 2. For example, other partially or wholly fabricated components of integrated circuitry may be provided somewhere above, about, or within base substrate 11. Control and/or other peripheral circuitry for operating components within an array (e.g., array 12) of vertically extending strings of memory cells may also be fabricated and may or may not be wholly or partially within an array or sub-array. Further, multiple sub-arrays may also be fabricated and operated independently, in tandem, or otherwise relative one another. As used in this document, a "sub-array" may also be considered as an array.

A conductor layer 16 comprising conductor material 17 has been formed above substrate 11.
Conductor material 17 comprises upper conductor material 43 directly above and electrically coupled to (e.g., directly against) lower conductor material 44, which is of different composition from upper conductor material 43. In one embodiment, upper conductor material 43 comprises conductively doped semiconductive material (e.g., n-type or p-type doped polysilicon). In one embodiment, lower conductor material 44 comprises metal material (e.g., a metal silicide such as WSix). Conductor layer 16 may comprise part of control circuitry (e.g., peripheral-under-array circuitry and/or a common source line or plate) used to control read and write access to transistors and/or memory cells that will be formed within array 12.

In one embodiment, a lower portion 18L of a stack 18* has been formed above substrate 11 and conductor layer 16 (an asterisk is used as a suffix to be inclusive of all such same-numbered components that may or may not have other suffixes). Stack 18* will comprise vertically alternating conductive layers 22* and insulative layers 20*, with material of layers 22* being of different composition from material of layers 20*. Stack 18* comprises laterally spaced memory block regions 58 that will comprise laterally spaced memory blocks 58 in a finished-circuitry construction. In this document, "block" is generic and is inclusive of "sub-block". Memory block regions 58 and the resultant memory blocks 58 (not yet shown) may be considered as being longitudinally elongated and oriented, for example along a direction 55. Memory blocks 58 may not be discernible at this point of processing.

Conductive layers 22* (alternately referred to as first layers) may not comprise conducting material, and insulative layers 20* (alternately referred to as second layers) may not comprise insulative material, or may be insulative, at this point of processing in conjunction with the "gate-last" or "replacement-gate" example method embodiments initially described herein.
In one embodiment, lower portion 18L comprises a lowermost layer 20z of second layers 20* directly above (e.g., directly against) conductor material 17. Lowermost second layer 20z is insulative (e.g., comprising material 24 that comprises silicon dioxide) and may be sacrificial. A lowermost layer 22z of first layers 22* is directly above (e.g., directly against) lowermost second layer 20z. Lowermost first layer 22z comprises sacrificial material 77 (e.g., silicon nitride or polysilicon). In one embodiment, a next-lowermost layer 20x of second layers 20* is directly above lowermost first layer 22z (e.g., also comprising material 24). In one embodiment, a conductive layer 21 comprising conductive material 47 (e.g., conductively doped polysilicon) is directly above next-lowermost second layer 20x, and a next-lowermost second layer 20w is directly above conductive layer 21. Alternately, and by way of example only, lower portion 18L may have an uppermost first layer 22* or 21 (not shown) as a top thereof, regardless of whether layer 20w is present.

In one embodiment, sacrificial pillars 60 have been formed in lower portion 18L and, in one embodiment, into conductor layer 16. Sacrificial pillars 60 are located horizontally (i.e., in x-y coordinates) where individual channel material strings will be formed. By way of example and for brevity only, sacrificial pillars 60 are shown as being arranged in groups or columns of staggered rows of four and five pillars 60 per row. Sacrificial pillars 60 comprise a laterally inner material 15 (e.g., polysilicon, or a thin TiN liner with elemental tungsten radially inward thereof) and a liner 90 laterally outward of laterally inner material 15, with liner 90 extending upwardly above sacrificial material 77 (e.g., at least into material 24 of second layer 20w). Pillars 60 may taper radially inward (not shown) moving deeper into lower stack portion 18L.
In one embodiment, and as shown, liners 90 are formed to individually extend directly below laterally inner material 15. Liner 90 in one embodiment is insulative, in one embodiment is conductive, and in one embodiment is semiconductive. In one embodiment, liner 90 comprises a nitride (e.g., silicon nitride, a refractory metal nitride, a non-refractory metal nitride, etc.) and in one embodiment comprises an oxide (e.g., silicon dioxide, a metal oxide, etc.).

Referring to FIGS. 3 and 4, vertically alternating first layers 22U and second layers 20U of an upper portion 18U of stack 18* have been formed above lower portion 18L. First layers 22U and second layers 20U comprise different composition materials 26 and 24 (e.g., silicon nitride and silicon dioxide, respectively). Example upper portion 18U is shown as starting above lower portion 18L with a first layer 22, although it could alternately start with a second layer 20 (not shown). Further, and by way of example, lower portion 18L may be formed to have one or more first and/or second layers as a top thereof. Regardless, only a small number of layers 20* and 22* is shown, with more likely upper portion 18U (and thus stack 18*) comprising dozens, a hundred or more, etc. of layers 20 and 22. Further, other circuitry that may or may not be part of peripheral and/or control circuitry may be between conductor layer 16 and stack 18*. For example, multiple vertically alternating layers of conductive material and insulative material of such circuitry may be below a lowermost of the conductive layers 22* and/or above an uppermost of the conductive layers 22*. For example, one or more select gate layers (not shown) may be between conductor layer 16 and the lowermost conductive layer 22*, and one or more select gate layers may be above the uppermost of the conductive layers 22*. Alternately or additionally, at least one of the depicted uppermost and lowermost conductive layers 22* may be a select gate layer.
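As a reading aid only (not part of the disclosure), the vertically alternating arrangement of first and second layers in the stack's upper portion can be sketched as a simple list model; the tier count and layer names below are hypothetical:

```python
# Hypothetical model of a stack's upper portion: tiers alternate, bottom to
# top, between first layers (e.g., silicon-nitride material 26) and second
# layers (e.g., silicon-dioxide material 24).

def build_upper_portion(num_tier_pairs: int) -> list[str]:
    layers = []
    for _ in range(num_tier_pairs):
        layers.append("first/nitride")  # first layer 22U placeholder
        layers.append("second/oxide")   # second layer 20U placeholder
    return layers

stack = build_upper_portion(3)
# Vertically alternating: immediately adjacent layers differ in composition.
assert all(a != b for a, b in zip(stack, stack[1:]))
```

In practice the text notes that a real stack would more likely comprise dozens or hundreds of such layers rather than the handful shown in the figures.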
Channel openings 25 have been formed (e.g., by etching) through second layers 20 and first layers 22 in upper portion 18U to sacrificial pillars 60. Openings 25 may taper radially inward moving deeper in stack 18 (not shown). FIG. 5 shows lateral-inner material 15 (not shown) of pillars 60 (not designated with numerals) having been removed through openings 25 (e.g., using a mixture of ammonia and hydrogen peroxide or a mixture of sulfuric acid and hydrogen peroxide where material 15 is W), thereby extending channel openings 25 deeper into stack 18*.

Transistor channel material may be formed vertically in the individual channel openings along the insulating and conductive layers, thus comprising individual strings of channel material that are directly electrically coupled with the conductive material in the conductor layer. Individual memory cells of the example memory array being formed may comprise a gate region (e.g., a control-gate region) and a memory structure laterally between the gate region and the channel material. In one such embodiment, the memory structure is formed to comprise a charge-blocking region, storage material (e.g., charge-storage material), and an insulating charge-passage material. The storage material of the individual memory cells (e.g., floating-gate material such as doped or undoped silicon, or charge-trapping material such as silicon nitride, metal dots, etc.) is vertically along individual of the charge-blocking regions. The insulating charge-passage material (e.g., a bandgap-engineered structure having a nitrogen-containing material [e.g., silicon nitride] sandwiched between two insulator oxides [e.g., silicon dioxide]) is laterally between the channel material and the storage material.

FIGS. 6-9 show one embodiment in which charge-blocking material 30, storage material 32, and charge-passage material 34 have been formed in individual channel openings 25 vertically along insulating layers 20 and conductive layers 22.
Transistor materials 30, 32, and 34 (e.g., memory-cell materials) may be formed by, for example, depositing respective thin layers thereof over stack 18* and within individual channel openings 25, and then planarizing such back at least to a top surface of stack 18*.

Channel material 36 of operable strings of channel material 53 has also been formed in individual extended channel openings 25 vertically along insulating layers 20 and conductive layers 22. Strings of channel material 53 are also in void spaces (not numerically designated) that resulted from removal of lateral-inner material 15 (not shown) in the extended channel openings 25 and that are laterally inside individual liners 90. In FIGS. 6 and 7, materials 30, 32, 34, and 36 are collectively shown as, and only designated as, material 37 for reasons of scale. Example channel materials 36 include appropriately doped crystalline semiconductor material, such as one or more of silicon, germanium, and so-called III/V semiconductor materials (e.g., GaAs, InP, GaP, and GaN). An example thickness for each of materials 30, 32, 34, and 36 is 25 to 100 Angstroms. A punch etch may be conducted to remove materials 30, 32, and 34 from the bases of channel openings 25 (not shown) to expose conductor layer 16 such that channel material 36 is directly against conductor material 17 of conductor layer 16 (not shown). Such a punch etch may occur separately with respect to each of materials 30, 32, and 34, or may occur with respect to only some of such materials. Alternatively, and by way of example only and as shown, no punch etch may be conducted, and channel material 36 may be directly electrically coupled to conductor material 17 of conductor layer 16 only by a separate conductive interconnect (not shown). A radially central solid dielectric material 38 (e.g., spin-on dielectric, silicon dioxide, and/or silicon nitride) is shown in the extended channel openings 25.
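Given the example per-layer thickness of 25 to 100 Angstroms noted above, the radial budget of a channel opening can be sketched as follows. The function and the example numbers are illustrative assumptions only; opening diameters are not specified in the text:

```python
def remaining_diameter_angstroms(opening_diameter_a, film_thicknesses_a):
    """Diameter left at the radial center of a channel opening after
    conformally depositing the listed films on its sidewall; each film
    consumes twice its thickness across the diameter."""
    return opening_diameter_a - 2 * sum(film_thicknesses_a)

# Hypothetical 1000-Angstrom opening lined with materials 30, 32, 34, and 36,
# each at 50 Angstroms (within the 25-100 Angstrom example range above).
inner = remaining_diameter_angstroms(1000, [50, 50, 50, 50])
```

With these assumed numbers, 400 Angstroms of the diameter are consumed by the four films, leaving the radially central region for dielectric material 38 or void space.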
Alternatively, and by way of example only, the radially central portion within extended channel openings 25 may include void space (not shown) and/or be devoid of solid material (not shown). Regardless, and in one embodiment, liners 90 have been formed to individually extend directly under individual of the strings of channel material 53 and, in one such embodiment, will remain in the finished construction, as will be apparent from the continuing discussion.

In some embodiments, construction 10 may be considered as comprising a first region (e.g., that shown by FIGS. 6 and 7) and a second region 70 that is aside the first region (e.g., that shown by FIG. 10). Second region 70 may laterally contact the first region (not shown) or may be laterally spaced from the first region (e.g., laterally adjacent but not touching, or laterally distant and not touching). Second region 70 may be within one or more of the memory-block regions (not shown). In some embodiments, construction 10 may be considered as comprising a first vertical stack (e.g., stack 18* in FIG. 7) and a second vertical stack (e.g., stack 18* in second region 70), wherein the second stack comprises an upper portion 18U and a lower portion 18L.

Referring to FIGS. 11 and 12, horizontally elongated trenches 40 have been formed into stack 18* (e.g., by anisotropic etching) individually between laterally immediately adjacent memory-block regions 58, and extending at least to lowermost first layer 22z. Sacrificial etch-stop lines (not shown) having the same general horizontal outline as trenches 40 may individually be formed in conductive layer 21 (when present) before forming upper portion 18U. Trenches 40 may then be formed by etching materials 24 and 26 to stop on or within the material of the individual sacrificial lines, followed by exhuming remaining material of such lines, analogously to the processing described above for using pillars 60 as etch stops (regardless of whether a liner 90 is formed in such etch-stop lines).
Trenches 40 are optionally lined with a liner material 78 (e.g., hafnium oxide, aluminum oxide, silicon dioxide, silicon nitride, etc., and not shown). Liner material 78 may be partially or wholly sacrificial and is desirably of different composition from that of materials 24 and 26. After deposition, liner material 78 may be substantially removed from being over horizontal surfaces, for example by subjecting it to a maskless anisotropic spacer-like etch.

Referring to FIGS. 10, 13, and 14, sacrificial material 77 (not shown) has been isotropically etched from lowermost first layer 22z through trenches 40 (e.g., using liquid or vapor H3PO4 as a primary etchant where material 77 is silicon nitride, or using tetramethylammonium hydroxide [TMAH] where material 77 is polysilicon) to expose liners 90 that are about strings of channel material 53. In one embodiment, such isotropic etching occurs in the first region (e.g., FIGS. 13 and 14) but not in second region 70 (FIG. 10), for example where trenches 40 are not in second region 70 or where sacrificial material 77 in second region 70 is otherwise not etched.

Referring to FIGS. 15 and 16, the exposed liners 90 have been isotropically etched to form void spaces 75 above lowermost first layer 22z that are individually laterally between individual strings of channel material 53 and material 24 of the second layers, and in one embodiment are in the second layer 20w immediately below the lowermost first layer 22* of upper portion 18U. After such isotropic etching, some material of liner 90 may remain above void spaces 75 (as shown), or all of it may be removed by such isotropic etching (not shown).

A conductively doped semiconducting material is formed against sidewalls of the channel material of the strings of channel material and into the void spaces.
For example, and referring to FIGS. 17-19, these show example subsequent processing wherein material 30 (e.g., silicon dioxide), material 32 (e.g., silicon nitride), and material 34 (e.g., silicon dioxide or a combination of silicon dioxide and silicon nitride) have been etched to expose sidewalls 41 of channel material 36 of strings of channel material 53 in lowermost first layer 22z and in void spaces 75. In one embodiment, any remaining material of liner 90 above void spaces 75 may also be removed by such etching or otherwise (not shown) or, in another embodiment, may remain (as shown). Any of materials 30, 32, and 34 in layer 22z may be regarded as sacrificial material therein. As an example, consider an embodiment wherein liner material 78 is one or more insulating oxides (other than silicon dioxide) and memory-cell materials 30, 32, and 34 are individually one or more of silicon dioxide and silicon nitride layers. In such example, the depicted construction can result by using modified or different chemistries to sequentially etch silicon dioxide and silicon nitride selectively relative to the other. As examples, a 100:1 (by volume) solution of water and HF will etch silicon dioxide selectively relative to silicon nitride, whereas a 1000:1 (by volume) solution of water and HF will etch silicon nitride selectively relative to silicon dioxide. Accordingly, and in such example, such etching chemistries may be used in an alternating manner where it is desired to achieve the example construction shown by FIGS. 17 and 18. The artisan is capable of selecting other chemistries for etching other different materials where a construction as shown by FIGS. 17 and 18 is desired. Some or all of the insulating material (e.g., 24, and not shown as having been removed in FIGS. 17 and 18) of layers 20x and 20z (when present) may be removed when the other materials are removed, may be removed separately, or may partially or wholly remain (not shown). Additionally, the uppermost portions of void spaces 75 in second layer 20w may be widened by such etchings (not shown). In one embodiment and as shown, removal of lowermost second layer 20z and next-lowermost second layer 20x has occurred in the first region (e.g., FIG. 17) and has not occurred in second region 70 (FIG. 19).

Referring to FIGS. 20 and 21, a conductively doped semiconducting material 42 (e.g., conductively doped polysilicon) has been formed in lowermost first layer 22z and extends upwardly (e.g., and downwardly) into void spaces 75. Conductively doped semiconducting material 42 thereby directly electrically couples together channel material 36 of individual strings of channel material 53 and conductor material 17 of conductor layer 16. Subsequently, and by way of example, conductive material 42 has been removed from trenches 40, as has sacrificial liner material 78 (not shown). Sacrificial liner material 78 (not shown) may alternatively be removed before forming conductive material 42. Regardless, at some point conductively doped semiconducting material 42 is heated to diffuse conductivity-increasing dopant therein from void spaces 75 (e.g., at least from the upper void spaces 75) laterally into the laterally adjacent channel material 36 and upwardly into channel material 36 that is above void spaces 75. Such heating may occur in a dedicated annealing step and/or inherently during subsequent processing, and may at least in part comprise the very act of forming conductively doped semiconducting material 42.
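The alternating etch sequence described earlier for removing silicon dioxide and silicon nitride layers can be sketched as a simple lookup. The mapping follows the example chemistries in the text (100:1 water:HF favoring silicon dioxide, 1000:1 favoring silicon nitride); the dictionary and function are hypothetical illustrations, not a process recipe:

```python
# Example etch chemistries per the text; illustrative only, not a recipe.
SELECTIVE_ETCH = {
    "silicon dioxide": "100:1 (by volume) water:HF",
    "silicon nitride": "1000:1 (by volume) water:HF",
}

def alternating_etch_plan(layer_sequence):
    """For each layer material to be removed in turn, pick the chemistry
    that etches it selectively relative to the other material."""
    return [SELECTIVE_ETCH[material] for material in layer_sequence]
```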
The artisan is capable of selecting suitable processing conditions to induce such diffusion (e.g., a substrate temperature of about 400°C to about 1,110°C for about 15 seconds to 1 hour).

In one embodiment, all material of liner 90 (not shown) that is above conductor layer 16 may be removed before forming conductively doped semiconducting material 42. In one embodiment and as shown, material of liner 90 is left in conductor layer 16 and conductively doped semiconducting material 42 is formed directly thereabove and, in one such embodiment, the remaining liner material is of an upwardly open container shape in vertical cross-section (e.g., the liner material of FIGS. 19 and 20).

The embodiment depicted by FIGS. 1-21 has the tops of liners 90 in lower portion 18L, in one such embodiment in second layer 20w, and regardless, wherein the liners are formed before forming upper portion 18U. Alternatively, and by way of examples, liners 90 may be formed after forming upper portion 18U and/or have their tops above lower portion 18L (neither being shown). Specifically, and again by way of example only, material 15 of sacrificial pillars 60 (FIG. 2) may not be formed. Rather, upper portion 18U may be formed to have channel openings 25 that initially extend to lowermost first layer 22z. The material of liners 90 may then be deposited. Such material may then be vertically recessed such that its tops are positioned as shown in FIG. 2, vertically recessed such that its tops are positioned in upper portion 18U (not shown), or not vertically recessed at all (not shown).

Referring to FIGS. 22-27, material 26 (not shown) of layers 22 has been removed, for example by isotropically etching it away through trenches 40, ideally selectively relative to the other exposed materials (e.g., using liquid or vapor H3PO4 as a primary etchant where material 26 is silicon nitride and the other materials comprise one or more oxides or polysilicon). Material 26 (not shown) in conductive layers 22 in the example embodiment is sacrificial and has been replaced with conductive material 48, which has subsequently been removed from trenches 40, thus forming individual conductive lines 29 (e.g., wordlines) and vertically extending strings 49 of individual transistors and/or memory cells 56.

A thin insulating liner (e.g., Al2O3, and not shown) may be formed before forming conductive material 48. Approximate locations of transistors and/or memory cells 56 are indicated with brackets in FIG. 25, and some are indicated with dashed outlines in FIGS. 22-24 and 26, with transistors and/or memory cells 56 being essentially ring-like or annular in the depicted example. Alternatively, transistors and/or memory cells 56 may not be completely encircling relative to individual channel openings 25, such that each channel opening 25 may have two or more vertically extending strings 49 (e.g., multiple transistors and/or memory cells about individual channel openings in individual conductive layers, with perhaps multiple wordlines per channel opening in individual conductive layers; not shown). Conductive material 48 may be considered as having terminations 50 corresponding to control-gate regions 52 of individual transistors and/or memory cells 56 (FIG. 25). Control-gate regions 52 in the depicted embodiment comprise individual portions of individual conductive lines 29.
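The dopant drive-in heating described earlier (e.g., about 400°C to about 1,110°C for about 15 seconds to 1 hour) can be bracketed with a rough diffusion-length estimate. The Arrhenius parameters below are assumed, generic round numbers for illustration only; they are not taken from this disclosure, and real values depend on the dopant and the host material:

```python
import math

D0_CM2_PER_S = 10.5      # assumed pre-exponential factor, cm^2/s (illustrative)
EA_EV = 3.7              # assumed activation energy, eV (illustrative)
K_B_EV_PER_K = 8.617e-5  # Boltzmann constant, eV/K

def diffusion_length_nm(temp_c, time_s):
    """Approximate characteristic diffusion length 2*sqrt(D*t), in nm,
    for an Arrhenius diffusivity D = D0 * exp(-Ea / (kB * T))."""
    d = D0_CM2_PER_S * math.exp(-EA_EV / (K_B_EV_PER_K * (temp_c + 273.15)))
    return 2.0 * math.sqrt(d * time_s) * 1e7  # cm to nm
```

As expected, hotter and longer anneals drive dopant farther; the sketch is for order-of-magnitude intuition only.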
Materials 30, 32, and 34 may be regarded as a memory structure 65 laterally between control-gate region 52 and channel material 36. In one embodiment and as shown with respect to the example "gate last" processing, conductive material 48 of conductive layers 22* is formed after forming openings 25/27 and/or trenches 40. Alternatively, the conductive material of the conductive layers may be formed before forming channel openings 25 and/or trenches 40 (not shown), for example with respect to "gate first" processing.

A charge-blocking region (e.g., charge-blocking material 30) is between storage material 32 and individual control-gate regions 52. A charge block may have the following function in a memory cell: in a program mode, the charge block may prevent charge carriers from passing out of the storage material (e.g., floating-gate material, charge-trapping material, etc.) toward the control gate, and in an erase mode the charge block may prevent charge carriers from flowing into the storage material from the control gate. Accordingly, a charge block may function to block charge migration between the control-gate region and the storage material of individual memory cells. An example charge-blocking region as shown comprises insulator material 30. By way of further example, a charge-blocking region may comprise a laterally (e.g., radially) outer portion of the storage material (e.g., material 32), where such storage material is insulative (e.g., in the absence of any different-composition material between insulative storage material 32 and conductive material 48). Regardless, as an additional example, an interface of a storage material and conductive material of a control gate may be sufficient to function as a charge-blocking region in the absence of any separate-composition insulator material 30.
Further, an interface of conductive material 48 with material 30 (when present) in combination with insulator material 30 may together function as a charge-blocking region, as alternatively or additionally may a lateral-outer region of an insulative storage material (e.g., a silicon nitride material 32). An example material 30 is one or more of hafnium silicon oxide and silicon dioxide.

In one embodiment and as shown, the lowermost surface of channel material 36 of operable strings of channel material 53 is never directly against any of conductor material 17 of conductor layer 16. In one embodiment and as shown, conductive material 42 is directly against sidewalls 41 of strings of channel material 53.

Intervening material 57 has been formed in trenches 40, and is thereby laterally between and longitudinally along immediately laterally adjacent memory blocks 58. Intervening material 57 may provide lateral electrical isolation (insulation) between immediately laterally adjacent memory blocks. Such may include one or more of insulating, semiconducting, and conducting materials and, regardless, may facilitate keeping conductive layers 22 from shorting relative to one another in a finished circuitry construction. Example insulating materials are one or more of SiO2, Si3N4, Al2O3, and undoped polysilicon. In this document, "undoped" is a material having from 0 atoms/cm3 to 1×10^12 atoms/cm3 of atoms of a conductivity-increasing impurity therein. In this document, "doped" is a material having more than 1×10^12 atoms/cm3 of atoms of a conductivity-increasing impurity therein, and "conductively doped" is a material having at least 1×10^18 atoms/cm3 of atoms of a conductivity-increasing impurity therein. Intervening material 57 may include through-array vias (not shown).

In one embodiment and as shown, the forming of conductive material 48 occurs in the first region (FIGS. 22 and 23) but not with respect to the second vertical stack 18* in second region 70 (FIG. 27). Accordingly, in one embodiment, the resultant second vertical stack 18* in second region 70 comprises an upper portion 18U comprising alternating first insulating layers 20 and second insulating layers 22 (e.g., layers 22 in FIG. 27 being insulating). Lower portion 18L of the second vertical stack 18* comprises:

a lowermost insulator layer (e.g., 20z) directly above conductor material (e.g., 17) of a conductor layer (e.g., 16);

a first material (e.g., 77) comprising polysilicon directly above the lowermost insulator layer;

an insulator material (e.g., 24 of layer 20x) directly above the first material comprising polysilicon; and

a second material (e.g., 47) comprising polysilicon directly above the insulator material.

Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used in the embodiments shown and described with reference to the above embodiments.

In one embodiment, a method used in forming a memory array (e.g., 12) comprising strings (e.g., 49) of memory cells (e.g., 56) comprises forming vertically extending strings (e.g., 53) of channel material in a stack (e.g., 18*) comprising vertically alternating first layers (e.g., 22*) and second layers (e.g., 20*), regardless of whether comprising conductor layer 16, upper portion 18U, lower portion 18L, and/or sacrificial pillars 60. The material of the first layers (e.g., 26 or 48) is of different composition from the material of the second layers (e.g., 24). A liner (e.g., 90) is formed laterally outside individual of the strings of channel material in one of the first layers and in one of the second layers. The liner is isotropically etched to form a void space (e.g., 75) in the one second layer above the one first layer. Individual of the void spaces are laterally between the individual strings of channel material and the material of the one second layer.
A conductively doped semiconducting material (e.g., 42) is formed directly against sidewalls of the channel material of the strings of channel material in the one first layer and extending upwardly into the void space in the one second layer. The conductively doped semiconducting material is heated to diffuse conductivity-increasing dopant therein from the void space laterally into the laterally adjacent channel material and upwardly into the channel material that is above the void space. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.

Alternative embodiment constructions may result from the method embodiments described above or otherwise. Regardless, embodiments of the invention encompass memory arrays independent of method of manufacture. Nevertheless, such memory arrays may have any of the attributes as described herein in method embodiments. Likewise, the above-described method embodiments may incorporate, form, and/or have any of the attributes described with respect to device embodiments.

In one embodiment, an integrated circuit system (e.g., 10) comprising a memory array (e.g., 12) comprising strings (e.g., 49) of memory cells (e.g., 56) comprises laterally spaced memory blocks (e.g., 58) individually comprising a first vertical stack (e.g., the first vertical stack 18* of FIGS. 22 and 23) comprising alternating insulating layers (e.g., 20*) and conductive layers (e.g., 22*), with strings (e.g., 49) of memory cells (e.g., 56) comprising strings of channel material (e.g., 53) extending through the insulating and conductive layers. The conductive layers individually comprise horizontally elongated conductive lines (e.g., 29). A second vertical stack (e.g., 18* in second region 70) is aside the first vertical stack. The second vertical stack comprises an upper portion (e.g., 18U) and a lower portion (e.g., 18L).
The upper portion comprises alternating first insulating layers 20 and second insulating layers 22 (e.g., layers 22 in FIG. 27 being insulating). The lower portion comprises:

a lowermost insulator layer (e.g., 20z) directly above conductor material (e.g., 17) of a conductor layer (e.g., 16);

a first material (e.g., 77) comprising polysilicon directly above the lowermost insulator layer;

an insulator material (e.g., 24 of layer 20x) directly above the first material comprising polysilicon; and

a second material (e.g., 47) comprising polysilicon directly above the insulator material.

In one embodiment, the first material comprising polysilicon and the second material comprising polysilicon are of the same composition relative to one another. In one embodiment, the first material comprising polysilicon consists essentially of, or consists of, undoped polysilicon. In one embodiment, the first material comprising polysilicon consists essentially of, or consists of, conductively doped polysilicon. In one embodiment, the second material comprising polysilicon consists essentially of, or consists of, undoped polysilicon. In one embodiment, the second material comprising polysilicon consists essentially of, or consists of, conductively doped polysilicon. In one embodiment, the insulator material and the lowermost insulator layer are of the same composition relative to one another. In one embodiment, that same composition comprises, consists essentially of, or consists of silicon dioxide.
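The undoped/doped/conductively doped distinctions defined earlier in this document (0 to 1×10^12, more than 1×10^12, and at least 1×10^18 conductivity-increasing impurity atoms/cm^3, respectively) can be expressed as a small classifier; this function is an illustrative sketch only, not part of the disclosure:

```python
def doping_category(impurity_atoms_per_cm3):
    """Classify a material per the document's definitions. A material at
    or above 1e18 atoms/cm^3 also satisfies 'doped', but the most
    specific label, 'conductively doped', is returned."""
    if impurity_atoms_per_cm3 >= 1e18:
        return "conductively doped"
    if impurity_atoms_per_cm3 > 1e12:
        return "doped"
    return "undoped"
```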
Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.

In one embodiment, an integrated circuit system (e.g., 10) comprising a memory array (e.g., 12) comprising strings (e.g., 49) of memory cells (e.g., 56) comprises laterally spaced memory blocks (e.g., 58) individually comprising a vertical stack (e.g., 18*) comprising alternating insulating layers (e.g., 20*) and conductive layers (e.g., 22*), with strings (e.g., 49) of memory cells (e.g., 56) comprising strings of channel material (e.g., 53) extending through the insulating and conductive layers. The conductive layers individually comprise horizontally elongated conductive lines (e.g., 29). Insulating material (e.g., 24) immediately below the horizontally elongated conductive line 29 in a lowermost of the conductive layers comprises, in vertical cross-section (e.g., FIGS. 23, 26), a jog surface (e.g., 95 in FIG. 26) on each side of individual of the strings of channel material. In this document, a "jog surface" is characterized and defined by an abrupt change in direction [at least 15°] compared to the surfaces immediately above and below the jog surface. In one embodiment, the jog surface comprises a portion that is horizontal (e.g., 97 in FIG. 26), and in one such embodiment is completely horizontal. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.

Method embodiments of the invention may result in greater conductivity-increasing doping in channel material 36 due to upwardly extending material 42 in void spaces 75.

The above processing(s) or construction(s) may be considered as being relative to an array of components formed as or within a single stack or single platform of such components above or as part of an underlying base substrate (albeit the single stack/platform may have multiple layers).
Control and/or other peripheral circuitry for operating or accessing such components within the array may also be formed anywhere as part of the finished construction and, in some embodiments, may be under the array (e.g., CMOS-under-array). Regardless, one or more additional such stacks/platforms may be provided or fabricated above and/or below those shown in the figures or described above. Further, the arrays of components may be the same or different relative one another in different stacks/platforms, and different stacks/platforms may be of the same thickness or of different thicknesses relative one another. Interposers (e.g., additional circuitry and/or dielectric layers) may be provided between vertically adjacent stacks/platforms. Further, different stacks/platforms may be electrically coupled relative one another. Multiple stacks/platforms may be fabricated separately and sequentially (e.g., one atop another), or two or more stacks/platforms may be fabricated substantially simultaneously.

The assemblies and structures discussed above may be used in integrated circuits/circuitry and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chipsets, set-top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.

In this document, unless otherwise indicated, "vertical", "higher", "upper", "lower", "top", "atop", "bottom", "above", "below", "under", "beneath", "up", and "down" generally refer to the vertical direction.
"Horizontal" refers to a general direction (i.e., within 10 degrees) along a primary substrate surface and may be relative to which the substrate is processed during fabrication, and "vertical" is a direction generally orthogonal thereto. Reference to "exactly horizontal" is the direction along the primary substrate surface (i.e., at no angle thereto) and may be relative to which the substrate is processed during fabrication. Further, "vertical" and "horizontal" as used herein are generally perpendicular directions relative one another and are independent of orientation of the substrate in three-dimensional space. Additionally, "vertically extending" and "extending vertically" refer to a direction that is angled away by at least 45° from exactly horizontal. Further, "extend(ing) vertically", "vertically extending", "extend(ing) horizontally", "horizontally extending", and the like with respect to a field-effect transistor are with reference to the orientation of the transistor's channel length along which current flows in operation between the source/drain regions. For bipolar junction transistors, "extend(ing) vertically", "vertically extending", "extend(ing) horizontally", "horizontally extending", and the like are with reference to the orientation of the base length along which current flows in operation between the emitter and collector. In some embodiments, any component, feature, and/or region that extends vertically extends vertically or within 10° of vertical.

Further, "directly above", "directly below", and "directly under" require at least some lateral (i.e., horizontal) overlap of the two stated regions/materials/components relative one another. Also, use of "above" not preceded by "directly" only requires that some portion of the stated region/material/component that is above another be vertically outward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components). Analogously, use of "below" and "under" not preceded by "directly" only requires that some portion of the stated region/material/component that is below/under the other be vertically inward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components).

Any of the materials, regions, and structures described herein may be homogenous or non-homogenous and, regardless, may be continuous or discontinuous over any material that such overlie. Where one or more example composition(s) is/are provided for any material, that material may comprise, consist essentially of, or consist of such composition(s). Further, unless otherwise stated, each material may be formed using any suitable existing or future-developed technique, with atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implanting being examples.

Additionally, "thickness" by itself (no preceding directional adjective) is defined as the mean straight-line distance through a given material or region perpendicularly from a closest surface of an immediately adjacent material or region of different composition. Additionally, the various materials or regions described herein may be of substantially constant thickness or of variable thicknesses. If of variable thickness, thickness refers to average thickness unless otherwise indicated, and such material or region will have some minimum thickness and some maximum thickness due to the thickness being variable. As used herein, "different composition" only requires those portions of two stated materials or regions that may be directly against one another to be chemically and/or physically different (for example, if such materials or regions are not homogenous).
If the two stated materials or regions are not directly against one another, "different composition" only requires that those portions of the two stated materials or regions that are closest to one another be chemically and/or physically different (if such materials or regions are not homogenous). In this document, a material, region, or structure is "directly against" another when there is at least some physical touching contact of the stated materials, regions, or structures relative one another. In contrast, "over", "on", "adjacent", "along", and "against" not preceded by "directly" encompass "directly against" as well as constructions where intervening material(s), region(s), or structure(s) result in the stated materials, regions, or structures not having any physical touching contact with one another.

In this document, regions-materials-components are "electrically coupled" relative one another if, in normal operation, electric current is capable of continuously flowing from one to the other and does so predominately by movement of subatomic positive and/or negative charges when such are sufficiently generated. Another electronic component may be between and electrically coupled to the regions-materials-components. In contrast, when regions-materials-components are referred to as being "directly electrically coupled", no intervening electronic component (e.g., no diode, transistor, resistor, transducer, switch, fuse, etc.) is between the directly electrically coupled regions-materials-components.

Any use of "row" and "column" in this document is for convenience in distinguishing one series or orientation of features from another series or orientation of features, and along which components have been or may be formed. "Row" and "column" are used synonymously with respect to any series of regions, components, and/or features independent of function.
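The orientation definitions given earlier in this document ("horizontal" meaning within 10° of the primary substrate surface; "vertically extending" meaning angled at least 45° away from exactly horizontal) can be sketched as a small classifier. The function is an illustrative aid only; angles are measured from exactly horizontal:

```python
def orientation_labels(angle_from_horizontal_deg):
    """Report which of the document's orientation definitions an angle
    satisfies. Angles between 10 and 45 degrees satisfy neither."""
    labels = []
    if abs(angle_from_horizontal_deg) <= 10:
        labels.append("horizontal")
    if abs(angle_from_horizontal_deg) >= 45:
        labels.append("vertically extending")
    return labels
```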
Regardless, the rows may be straight and/or curved and/or parallel and/or non-parallel relative to one another, as may the columns. Furthermore, the rows and columns may intersect relative to one another at 90° or at one or more other angles (i.e., other than the straight angle). The composition of any of the conductive/conductor/conducting materials herein may be metal material and/or conductively doped semiconductive/semiconductor/semiconducting material. "Metal material" is any one or combination of an elemental metal, any mixture or alloy of two or more elemental metals, and any one or more conductive metal compounds. As used herein, any use of "selective" as to etching, removing, depositing, and/or forming is such an act of one stated material relative to another stated material so acted upon at a rate of at least 2:1 by volume. Furthermore, any use of selectively depositing, selectively growing, or selectively forming is depositing, growing, or forming one material relative to another stated material at a rate of at least 2:1 by volume for at least the first 75 angstroms of the depositing, growing, or forming. Unless otherwise indicated, the use of "or" herein encompasses either and both.

CONCLUSION

In some embodiments, a method used in forming a memory array comprising strings of memory cells comprises forming channel-material strings extending vertically through a stack comprising vertically alternating first layers and second layers. The material of the first layers is of different composition from the material of the second layers. A liner is formed laterally outside of individual ones of the channel-material strings in one of the first layers and in one of the second layers. The liner is isotropically etched to form void spaces in the one second layer above the one first layer. Individual ones of the void spaces are laterally between the individual channel-material strings and material of the one second layer. 
Conductively doped semiconductive material is formed against sidewalls of the channel material of the channel-material strings in the one first layer and extends upwardly into the void spaces in the one second layer. The conductively doped semiconductive material is heated to cause conductivity-increasing dopant therein to diffuse laterally from the void spaces into laterally adjacent channel material and upwardly in the channel material above the void spaces. In some embodiments, a method used in forming a memory array comprising strings of memory cells comprises forming a conductor layer comprising conductor material on a substrate. A lower portion of a stack is formed that will comprise vertically alternating first and second layers above the conductor layer. The stack comprises laterally spaced memory-block regions. The material of the first layers is of different composition from the material of the second layers. The lowermost of the first layers in the lower portion comprises sacrificial material. The vertically alternating first and second layers of an upper portion of the stack are formed above the lower portion, and channel openings are formed through the upper portion into the sacrificial material in the lower portion. Liners are formed in individual ones of the channel openings on lateral sides of the sacrificial material. The liners extend upwardly above the sacrificial material. Channel-material strings are formed in the channel openings, the channel-material strings extending through the first and second layers in the upper portion to the lowermost first layer in the lower portion. Individual ones of the channel-material strings are laterally inward of individual ones of the liners. Horizontally elongated trenches are formed into the stack, the trenches being individually between laterally immediately adjacent memory-block regions and extending to the lowermost first layer. 
The sacrificial material is isotropically etched from the lowermost first layer through the trenches to expose the liners. The exposed liners are isotropically etched to form void spaces above the lowermost first layer, the void spaces being individually laterally between the individual channel-material strings and material of the second layer that is immediately below the lowermost first layer of the upper portion. Conductively doped semiconductive material is formed against sidewalls of the channel material of the channel-material strings, the conductively doped semiconductive material directly electrically coupling the channel material of the individual channel-material strings to the conductor material of the conductor layer. The conductively doped semiconductive material extends upwardly into the void spaces. The conductively doped semiconductive material is heated to cause conductivity-increasing dopant therein to diffuse laterally from the void spaces into laterally adjacent channel material and upwardly in the channel material above the void spaces. In some embodiments, a method used in forming a memory array comprising strings of memory cells comprises forming a conductor layer comprising conductor material on a substrate. A lower portion of a stack is formed that will comprise vertically alternating first and second layers above the conductor layer. The stack comprises laterally spaced memory-block regions. The material of the first layers is of different composition from the material of the second layers. The lowermost of the first layers in the lower portion comprises sacrificial material. Pillars are formed in the lower portion, the pillars being individually located horizontally where individual channel-material strings will be formed. Individual ones of the pillars comprise laterally inner material and a liner laterally outside of the laterally inner material. 
The liners extend upwardly above the sacrificial material. Vertically alternating first and second layers of an upper portion of the stack are formed above the lower portion and the pillars. Channel openings are formed into the stack, the channel openings individually extending to individual ones of the pillars. The laterally inner material of the pillars is removed through the channel openings to extend the channel openings deeper into the stack. Individual channel-material strings are formed in individual ones of the extended channel openings, in the voids therein resulting from the removing, and laterally inward of individual ones of the liners. Horizontally elongated trenches are formed into the stack, the trenches being individually between laterally immediately adjacent memory-block regions and extending to the lowermost first layer. The sacrificial material is isotropically etched from the lowermost first layer through the trenches to expose the liners. The exposed liners are isotropically etched to form void spaces above the lowermost first layer, the void spaces being individually laterally between the individual channel-material strings and material of the second layer that is immediately below the lowermost first layer of the upper portion. Conductively doped semiconductive material is formed against sidewalls of the channel material of the channel-material strings, the conductively doped semiconductive material directly electrically coupling the channel material of the individual channel-material strings to the conductor material of the conductor layer. The conductively doped semiconductive material extends upwardly into the void spaces. 
The conductively doped semiconductive material is heated to cause conductivity-increasing dopant therein to diffuse laterally from the void spaces into laterally adjacent channel material and upwardly in the channel material above the void spaces. In some embodiments, an integrated circuit comprising a memory array comprising strings of memory cells comprises laterally spaced memory blocks that individually comprise a first vertical stack comprising alternating insulative layers and conductive layers. The strings of memory cells comprise channel-material strings extending through the insulative layers and the conductive layers. The conductive layers individually comprise a horizontally elongated conductive line. A second vertical stack is laterally adjacent the first vertical stack. The second vertical stack comprises an upper portion and a lower portion. The upper portion comprises alternating first and second insulative layers. The lower portion comprises a lowermost insulator layer directly above conductor material of a conductor layer. A first material comprising polysilicon is directly above the lowermost insulator layer. An insulator material is directly above the first material comprising polysilicon. A second material comprising polysilicon is directly above the insulator material. In some embodiments, an integrated circuit comprising a memory array comprising strings of memory cells comprises laterally spaced memory blocks that individually comprise a vertical stack comprising alternating insulative layers and conductive layers. The strings of memory cells comprise channel-material strings extending through the insulative layers and the conductive layers. The conductive layers individually comprise a horizontally elongated conductive line. Insulative material immediately below the horizontally elongated conductive line in the lowermost of the conductive layers comprises, in vertical cross-section, a folded curved surface on each side of individual ones of the channel-material strings.

In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means disclosed herein comprise example embodiments. The claims are thus to be afforded full scope as literally worded and to be appropriately interpreted in accordance with the doctrine of equivalents. |
A method of forming an integrated circuit with a semiconductor substrate is provided. A gate dielectric is formed on the semiconductor substrate, and a gate is formed on the gate dielectric. A super-saturated doped source silicide metallic layer is formed on the semiconductor substrate adjacent the gate and the gate dielectric. The silicide metallic layer incorporates a substantially uniformly distributed dopant therein in a substantially uniform super-saturated concentration. The silicide metallic layer is reacted with the semiconductor substrate therebeneath to form a salicide layer and outdiffuse the dopant from the salicide layer into the semiconductor substrate therebeneath. The outdiffused dopant in the semiconductor substrate is then activated to form a shallow source/drain junction beneath the salicide layer. An interlayer dielectric is then deposited above the semiconductor substrate, and contacts are formed in the interlayer dielectric to the salicide layer. |
The invention claimed is:1. A method of forming an integrated circuit comprising:providing a semiconductor substrate;forming a gate dielectric on the semiconductor substrate;forming a gate on the gate dielectric;forming at least one super-saturated doped source silicide metallic layer on the semiconductor substrate adjacent the gate and the gate dielectric, the silicide metallic layer incorporating a substantially uniformly distributed dopant therein in a substantially uniform super-saturated concentration;reacting the silicide metallic layer with the semiconductor substrate therebeneath to form a salicide layer and outdiffuse the dopant from the salicide layer into the semiconductor substrate therebeneath;activating the outdiffused dopant in the semiconductor substrate to form a shallow source/drain junction beneath the salicide layer;depositing an interlayer dielectric above the semiconductor substrate; andforming a contact in the interlayer dielectric to the salicide layer.2. The method as claimed in claim 1 wherein forming the super-saturated doped source silicide metallic layer on the semiconductor substrate further comprises sputtering to form the silicide metallic layer super-saturated doped source having the substantially uniform distribution and concentration of the dopant throughout the silicide metallic layer.3. The method as claimed in claim 1 further comprising forming a metallic cap layer on the silicide metallic layer to form an alloyed metallic bi-layer that caps and deters outdiffusion of the super-saturated dopant through the top surface of the silicide metallic layer.4. The method as claimed in claim 1 further comprising forming an amorphous layer in the surface of the semiconductor substrate adjacent the gate and the gate dielectric prior to outdiffusing the dopant from the salicide layer into the semiconductor substrate.5. 
The method as claimed in claim 1 wherein forming the contact to the salicide layer uses at least one material selected from tantalum, titanium, tungsten, copper, gold, silver, an alloy thereof, a compound thereof, and a combination thereof.6. A method of forming an integrated circuit comprising:providing a semiconductor substrate;forming a gate dielectric on the semiconductor substrate;forming a gate on the gate dielectric;forming at least one super-saturated doped source silicide metallic layer on the semiconductor substrate adjacent the gate and the gate dielectric, the silicide metallic layer incorporating a dopant therein in a super-saturated concentration;forming a metallic cap layer on the silicide metallic layer to form an alloyed metallic bi-layer that caps and deters outdiffusion of the super-saturated dopant through the top surface of the silicide metallic layer;reacting the silicide metallic layer with the semiconductor substrate therebeneath to form a salicide layer and outdiffuse the dopant from the salicide layer into the semiconductor substrate therebeneath;activating the outdiffused dopant in the semiconductor substrate to form a shallow source/drain junction beneath the salicide layer;depositing an interlayer dielectric above the semiconductor substrate; andforming a contact in the interlayer dielectric to the salicide layer.7. The method as claimed in claim 6 wherein forming the super-saturated doped source silicide metallic layer on the semiconductor substrate further comprises sputtering to form the silicide metallic layer super-saturated doped source having a substantially uniform distribution and concentration of the dopant throughout the silicide metallic layer.8. The method as claimed in claim 6 further comprising forming an amorphous layer in the surface of the semiconductor substrate adjacent the gate and the gate dielectric prior to outdiffusing the dopant from the salicide layer into the semiconductor substrate.9. 
A method of forming an integrated circuit comprising:providing a semiconductor substrate;forming a gate dielectric on the semiconductor substrate;forming a gate on the gate dielectric;forming an amorphous layer in the surface of the semiconductor substrate adjacent the gate and the gate dielectric;forming at least one super-saturated doped source silicide metallic layer on the semiconductor substrate amorphous layer adjacent the gate and the gate dielectric, the silicide metallic layer incorporating a dopant therein in a super-saturated concentration;reacting the silicide metallic layer with the semiconductor substrate amorphous layer therebeneath to form a salicide layer and outdiffuse the dopant from the salicide layer into the semiconductor substrate amorphous layer therebeneath;activating the outdiffused dopant in the semiconductor substrate amorphous layer to form a shallow source/drain junction beneath the salicide layer;depositing an interlayer dielectric above the semiconductor substrate; andforming a contact in the interlayer dielectric to the salicide layer.10. The method as claimed in claim 9 wherein forming the super-saturated doped source silicide metallic layer on the semiconductor substrate further comprises sputtering to form the silicide metallic layer super-saturated doped source having a substantially uniform distribution and concentration of the dopant throughout the silicide metallic layer.11. The method as claimed in claim 9 further comprising forming a metallic cap layer on the silicide metallic layer to form an alloyed metallic bi-layer that caps and deters outdiffusion of the super-saturated dopant through the top surface of the silicide metallic layer.12. The method as claimed in claim 9 wherein forming the contact to the salicide layer uses at least one material selected from tantalum, titanium, tungsten, copper, gold, silver, an alloy thereof, a compound thereof, and a combination thereof.13. 
A method of forming an integrated circuit comprising:providing a semiconductor substrate;forming a gate dielectric on the semiconductor substrate;forming a gate on the gate dielectric;forming an amorphous layer in the surface of the semiconductor substrate adjacent the gate and the gate dielectric;sputtering to form super-saturated doped source silicide metallic layers on the semiconductor substrate amorphous layer adjacent the gate, the silicide metallic layers incorporating a substantially uniformly distributed dopant therein in a substantially uniform super-saturated concentration throughout the silicide metallic layers;forming metallic cap layers on the silicide metallic layers to form alloyed metallic bi-layers that cap and deter outdiffusion of the super-saturated dopant through the top surfaces of the silicide metallic layers;reacting the silicide metallic layers with the semiconductor substrate amorphous layer therebeneath to form salicide layers and outdiffuse the dopant from the salicide layers into the semiconductor substrate amorphous layer therebeneath;activating the outdiffused dopant in the semiconductor substrate amorphous layer to form shallow source/drain junctions beneath the salicide layers;depositing an interlayer dielectric above the semiconductor substrate; andforming contacts in the interlayer dielectric to the salicide layers.14. The method as claimed in claim 13 wherein forming the contacts to the salicide layers uses at least one material selected from tantalum, titanium, tungsten, copper, gold, silver, an alloy thereof, a compound thereof, and a combination thereof. |
BACKGROUND

1. Technical Field

The present invention relates generally to semiconductor technology, and more specifically to shallow junction formation in semiconductor integrated circuit devices.

2. Background Art

At the present time, electronic products are used in almost every aspect of life, and the heart of these electronic products is the integrated circuit. Integrated circuits are used in everything from airplanes and televisions to wristwatches.

Integrated circuits are made in and on silicon wafers by extremely complex systems that require the coordination of hundreds or even thousands of precisely controlled processes to produce a finished semiconductor wafer. Each finished semiconductor wafer has hundreds to tens of thousands of integrated circuits, each wafer worth hundreds or thousands of dollars.

Integrated circuits are made up of hundreds to millions of individual components. One common component is the semiconductor transistor. The most common and important semiconductor technology presently used is silicon-based, and the most preferred silicon-based semiconductor device is a Complementary Metal Oxide Semiconductor ("CMOS") transistor.

The principal elements of a CMOS transistor generally consist of a silicon substrate having shallow trench oxide isolation regions cordoning off transistor areas. The transistor areas contain polysilicon gates on silicon oxide gates, or gate oxides, over the silicon substrate. The silicon substrate on both sides of the polysilicon gate is slightly doped to become conductive. These lightly doped regions of the silicon substrate are referred to as "shallow source/drain junctions", which are separated by a channel region beneath the polysilicon gate. A curved silicon oxide or silicon nitride spacer, referred to as a "sidewall spacer", on the sides of the polysilicon gate allows deposition of additional doping to form more heavily doped regions of the shallow source/drain ("S/D") junctions, which are called "deep S/D junctions". 
The shallow and deep S/D junctions together are collectively referred to as "S/D junctions".

To complete the transistor, a silicon oxide dielectric layer is deposited to cover the polysilicon gate, the curved spacer, and the silicon substrate. To provide electrical connections for the transistor, openings are etched in the silicon oxide dielectric layer to the polysilicon gate and the S/D junctions. The openings are filled with metal to form electrical contacts. To complete the integrated circuits, the contacts are connected to additional levels of wiring in additional levels of dielectric material to the outside of the dielectric material.

In operation, an input signal to the gate contact to the polysilicon gate controls the flow of electric current from one S/D contact through one S/D junction through the channel to the other S/D junction and to the other S/D contact.

Transistors are fabricated by thermally growing a gate oxide layer on the silicon substrate of a semiconductor wafer and forming a polysilicon layer over the gate oxide layer. The oxide layer and polysilicon layer are patterned and etched to form the gate oxides and polysilicon gates, respectively. The gate oxides and polysilicon gates in turn are used as masks to form the shallow S/D regions by ion implantation of boron or phosphorus impurity atoms into the surface of the silicon substrate. The ion implantation is followed by a high-temperature anneal above 700[deg.] C. to activate the implanted impurity atoms to form the shallow S/D junctions.

A silicon nitride layer is deposited and etched to form sidewall spacers around the side surfaces of the gate oxides and polysilicon gates. The sidewall spacers, the gate oxides, and the polysilicon gates are used as masks for forming the conventional S/D regions by ion implantation of boron or phosphorus impurity atoms into the surface of the silicon substrate into and through the shallow S/D junctions. 
The ion implantation is again followed by a high-temperature anneal above 700[deg.] C. to activate the implanted impurity atoms to form the S/D junctions.

After formation of the transistors, a silicon oxide dielectric layer is deposited over the transistors and contact openings are etched down to the S/D junctions and to the polysilicon gates. The contact openings are then filled with a conductive metal and interconnected by formation of conductive wires in other interlayer dielectric ("ILD") layers.

As transistors have decreased in size, it has been found that the electrical resistance between the metal contacts and the silicon substrate or the polysilicon has increased to the level where it negatively impacts the performance of the transistors. To lower the electrical resistance, a transition material is formed between the metal contacts and the silicon substrate or the polysilicon. The best transition materials have been found to be cobalt silicide (CoSi2) and titanium silicide (TiSi2).

The silicides are formed by first applying a thin layer of the cobalt or titanium on the silicon substrate above the S/D junctions and the polysilicon gates. The semiconductor wafer is subjected to one or more annealing steps at temperatures above 800[deg.] C., and this causes the cobalt or titanium to selectively react with the silicon and the polysilicon to form the metal silicide. The process is generally referred to as "siliciding". Since the shallow trench oxide and the sidewall spacers will not react to form a silicide, the silicides are aligned over the S/D junctions and the polysilicon gates, so the process is also referred to as "self-aligned siliciding", or "saliciding".

Salicidation technology is vital for improving the operating speed of modern semiconductor devices with sub-micron feature sizes. The salicide technology is widely used to increase the packing density of integrated circuits and to reduce the circuit interconnect resistance for high-speed operation. 
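The junction-depth pressure described in this section is ultimately set by dopant diffusion during the high-temperature anneals mentioned above. As a hedged back-of-the-envelope sketch (not taken from the patent), a characteristic diffusion length can be estimated as L ≈ 2√(Dt), with D following an Arrhenius law D(T) = D0·exp(−Ea/kT); the D0 and Ea values below are illustrative textbook-order numbers for boron in silicon, assumed here only to show scale:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def diffusion_length_nm(d0_cm2_s, ea_ev, temp_c, time_s):
    """Characteristic dopant diffusion length L = 2*sqrt(D*t), in nm.

    D follows an Arrhenius law D(T) = D0 * exp(-Ea / (k*T)).
    All parameter values passed in are illustrative assumptions.
    """
    temp_k = temp_c + 273.15
    d = d0_cm2_s * math.exp(-ea_ev / (K_BOLTZMANN_EV * temp_k))  # cm^2/s
    return 2.0 * math.sqrt(d * time_s) * 1e7  # convert cm to nm

# Illustrative boron-like parameters: D0 ~ 0.76 cm^2/s, Ea ~ 3.46 eV,
# for a 1000 C anneal lasting 10 s (all assumed values).
L = diffusion_length_nm(d0_cm2_s=0.76, ea_ev=3.46, temp_c=1000, time_s=10)
print(round(L, 1), "nm")
```

With these assumed numbers the estimate lands in the single-digit-nanometer range, which is why anneal temperature and time budgets dominate how shallow a junction can be kept.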
With the continuous decrease in device sizes (transistors becoming narrower and thinner and transistor channels becoming shorter), salicidation problems like junction punchthrough, current leakage, and contact resistance continue to reduce product yields and reliability.

In general, salicidation results in high junction leakage due to metal penetration into the silicon substrate. The penetration of the metal "spikes" the junction, causing the current leakage.

Residual metal from the salicidation process can also cause leakage. The silicide across the sidewall spacers may not be totally removed after the salicidation. The residual metal can cause a bridge between adjacent circuit features, like the gate and the S/D regions, causing current leakage.

Nevertheless, as device dimensions continue to be scaled to smaller and smaller dimensions, it is necessary to scale down extension junction depths as well. Furthermore, shallow junctions are increasingly needed to control adverse charge-sharing effects (two-dimensional short channel effects) in advanced devices such as metal oxide field effect transistors. Extended ultra-shallow S/D junctions can improve such negative effects, can suppress the short channel effect, and can improve device operating speeds.

However, existing shallow S/D junction fabrication technologies, such as ion implantation followed by rapid thermal annealing, have not succeeded in solving all the problems related to fabricating increasingly shallow S/D junctions, and to connecting metal contacts to them.

Solutions to these problems have been long sought, but prior developments have not taught or suggested any solutions and, thus, solutions to these problems have long eluded those skilled in the art.

DISCLOSURE OF THE INVENTION

The present invention provides a method of forming an integrated circuit. A gate dielectric is formed on a semiconductor substrate, and a gate is formed on the gate dielectric. 
A super-saturated doped source silicide metallic layer is formed on the semiconductor substrate adjacent the gate and the gate dielectric. The silicide metallic layer incorporates a substantially uniformly distributed dopant therein in a substantially uniform super-saturated concentration. The silicide metallic layer is reacted with the semiconductor substrate therebeneath to form a salicide layer and outdiffuse the dopant from the salicide layer into the semiconductor substrate therebeneath. The outdiffused dopant in the semiconductor substrate is then activated to form a shallow source/drain junction beneath the salicide layer. An interlayer dielectric is then deposited above the semiconductor substrate, and contacts are formed in the interlayer dielectric to the salicide layer. This method significantly improves the formation of very shallow source/drain junctions for integrated circuits.

Certain embodiments of the invention have other advantages in addition to or in place of those mentioned above. The advantages will become apparent to those skilled in the art from a reading of the following detailed description when taken with reference to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a view of an integrated circuit in an intermediate stage of fabrication in accordance with the present invention; FIG. 2 is the structure of FIG. 1 with an insulating layer and silicide metallic layers formed thereon; FIG. 3 is the structure of FIG. 2 following formation of salicide layers and shallow source/drain junctions; FIG. 4 is the structure of FIG. 3 after formation of a sidewall spacer; FIG. 5 is the structure of FIG. 4 during ion implantation to form deep source/drain junctions; FIG. 6 is the structure of FIG. 5 after deposition of a dielectric layer over the silicide, the sidewall spacer, and shallow trench isolation; FIG. 7 is the structure of FIG. 6 after formation of metal contacts; FIG. 
8 illustrates a variation on the method for forming the shallow source/drain junctions; FIGS. 9-11 illustrate another variation on the method for forming the shallow source/drain junctions; FIG. 12 shows a structure combining the variations illustrated in FIGS. 8-11; FIG. 13 illustrates still another variation on the method for forming the shallow source/drain junctions; and FIG. 14 is a simplified flow chart of the method of manufacturing the integrated circuit in accordance with the present invention.

DETAILED DESCRIPTION OF THE INVENTION

In the following description, numerous specific details are given to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the invention may be practiced without these specific details. In order to avoid obscuring the present invention, some well-known configurations and process steps are not disclosed in detail. In addition, the drawings showing embodiments of the device are semi-diagrammatic and not to scale and, particularly, some of the dimensions are for the clarity of presentation and may be exaggerated in the drawing FIGS. The same numbers will be used in all the drawing FIGS. to relate to the same elements.

The term "horizontal" as used herein is defined as a plane parallel to a substrate or wafer. The term "vertical" refers to a direction perpendicular to the horizontal as just defined. Terms, such as "on", "above", "below", "bottom", "top", "side" (as in "sidewall"), "higher", "lower", "over", and "under", are defined with respect to the horizontal plane.

Referring now to FIG. 1, therein is shown a semiconductor integrated circuit, and in particular a transistor 100, in an intermediate stage of fabrication in accordance with the present invention.

To form the intermediate stage, a gate dielectric layer, such as silicon oxide, and a conductive gate layer, such as polysilicon, have been deposited on a semiconductor substrate 102 of a material such as silicon. 
The layers are patterned and etched to form a gate dielectric 104 and a gate 106. The semiconductor substrate 102 has been further patterned, etched, and filled with a silicon oxide material to form a shallow trench isolation ("STI") 108.

Referring now to FIG. 2, therein is shown the structure of FIG. 1 having an insulating layer 202 formed on the sides of the gate 106. The insulating layer 202 is formed by depositing an insulating film over the structure of FIG. 1 and anisotropically etching the horizontal surfaces. A deposition process 204 is then used to form silicide metallic layers 206, 208, and 210 in accordance with the present invention. The silicide metallic layers 206 and 210 are formed on the surface of the semiconductor substrate 102, and the silicide metallic layer 208 is formed on the gate 106. Advantageously, the silicide metallic layers 206 and 210 can be formed adjacent the gate 106.

The silicide layer deposition process forms the silicide metallic layers 206, 208, and 210 by depositing silicide metal that is doped with a dopant species to be used for subsequently forming very shallow source/drain ("S/D") junctions. For example, the silicide metallic layers may be cobalt (for forming cobalt silicide, CoSi2), nickel (for forming nickel silicide, NiSi2), or platinum (for forming platinum silicide, PtSi). The dopant impurity atoms that are incorporated in the silicide metallic layers 206, 208, and 210 as they are formed may be, for example, arsenic (As), boron (B), or phosphorus (P).

The doped silicide metallic layers 206, 208, and 210 may be formed, for example, by a sputtering process in which both the silicide metal and the dopant are incorporated into the sputter target. 
In one embodiment, the dopant concentration in the sputter target may be sufficiently high that the sputtered dopant is incorporated into the deposited silicide metallic layers 206, 208, and 210 in a super-saturated concentration (i.e., the dopant concentration is higher than its solid solubility). Such a super-saturated dopant concentration in the deposited silicide metallic layer provides a super-saturated doped source. The super-saturated doped source facilitates subsequent solid-source outdiffusion of the dopant to form very shallow junctions, as further described herein. The use of sputtering to form the super-saturated doped source results in a uniform distribution and concentration of the dopant throughout the silicide metallic layers.The doped silicide metallic layers 206, 208, and 210 may alternatively be formed by other suitable means, such as, for example, deposition of undoped silicide metallic layers followed by ion implantation of the dopant in a concentration similarly sufficient to create a super-saturated dopant concentration in the silicide metallic layers.Referring now to FIG. 3, therein is shown the structure of FIG. 2 following a heating and annealing step that reacts the deposited silicide metallic layers 206, 208, and 210 (FIG. 2) with the semiconductor substrate 102 and the gate 106 therebeneath. The reaction forms salicide layers 302, 304, and 306 and causes the dopant to outdiffuse from the salicide layers 302 and 306 into the semiconductor substrate 102 therebeneath as it incorporates the silicide metallic layers into the salicide layers. Since the solid solubility of dopants in silicide is very low, the dopants readily eject out of the silicide and go into the silicon substrate therebeneath. 
The dopant that remains behind in the salicide layers 302 and 306 will also be uniformly distributed and concentrated therein as a residual result from the prior uniform distribution and concentration of the dopant throughout their predecessor silicide metallic layers.The thermal outdiffusion of the dopant into the semiconductor substrate 102 activates the outdiffused dopant atoms in the semiconductor substrate 102 to form very shallow S/D junctions immediately beneath the salicide layers 302 and 306. The process forms shallow S/D junctions 308 and 310 that are directly beneath and follow the contour of the salicide layers 302 and 306. This advantageously reduces junction leakage.Referring now to FIG. 4, therein is shown the structure of FIG. 3 after formation of a sidewall spacer 402. The sidewall spacer 402, generally of silicon nitride, is a deposited layer that is etched in conventional manner to form a conventional curved shape as shown.Referring now to FIG. 5, therein is shown the structure of FIG. 4 during an ion implantation 502 to form deep S/D junctions 504 and 506.The sidewall spacer 402, the gate 106, and the STI 108 act as masks for the formation of the deep S/D junctions 504 and 506 by the ion implantation 502 of arsenic, boron, or phosphorus impurity atoms into the surface of the semiconductor substrate 102 and into and through the shallow S/D junctions 308 and 310, respectively. The ion implantation 502 is followed by a high-temperature anneal above 700[deg.] C. to activate the implanted impurity atoms to form the deep S/D junctions 504 and 506.Referring now to FIG. 6, therein is shown the structure of FIG. 
5 after deposition of a dielectric layer 602 over the salicide layers 302, 304, and 306, the sidewall spacer 402, and the STI 108.In various embodiments, the dielectric layer 602 is of dielectric materials such as silicon oxide (SiOx), tetraethylorthosilicate (TEOS), borophosphosilicate (BPSG) glass, etc., with dielectric constants from 4.2 to 3.9, or low dielectric constant dielectric materials such as fluorinated tetraethylorthosilicate (FTEOS), hydrogen silsesquioxane (HSQ), bis-benzocyclobutene (BCB), tetramethylorthosilicate (TMOS), octamethylcyclotetrasiloxane (OMCTS), hexamethyldisiloxane (HMDS), trimethylsilyl borate (SOB), diacetoxyditertiarybutoxysilane (DADBS), trimethylsilyl phosphate (SOP), etc., with dielectric constants below 3.9 to 2.5. Available ultra-low dielectric constant dielectric materials, having dielectric constants below 2.5, include commercially available Teflon-AF, Teflon microemulsion, polyimide nanofoams, silica aerogels, silica xerogels, and mesoporous silica. Stop layers and capping layers (where used) are of materials such as silicon nitride (SixNx) or silicon oxynitride (SiON).Referring now to FIG. 7, therein is shown the structure of FIG. 6 after formation of metal contacts 702, 704, and 706. The metal contacts 702, 704, and 706 are respectively electrically connected to the salicide layers 302, 304, and 306, and respectively to the deep S/D junction 504, the gate 106, and the deep S/D junction 506.In various embodiments, the metal contacts 702, 704, and 706 are of metals such as tantalum (Ta), titanium (Ti), tungsten (W), alloys thereof, and compounds thereof. In other embodiments, the metal contacts 702, 704, and 706 are of metals such as copper (Cu), gold (Au), silver (Ag), alloys thereof, and compounds thereof with one or more of the above elements with diffusion barriers around them.Referring now to FIG. 8, therein is shown a variation on the method for forming the shallow S/D junctions 308 and 310.
The variation adds an additional step between the steps illustrated above for FIGS. 2-3.Thus, as shown in FIG. 8, following formation of the silicide metallic layers 206, 208, and 210 as illustrated in FIG. 2, a metallic cap layer 802 is formed on the tops thereof by a deposition process 804. The metallic cap layer 802 and the silicide metallic layers 206, 208, and 210 combine to form an alloyed metallic bi-layer that caps and deters outdiffusion of the super-saturated dopant through the top surfaces of the silicide metallic layers 206, 208, and 210.For example, the silicide metallic layers 206, 208, and 210 formed by sputtering NixAsy, NixBy, or NixPy would be capped by a layer of Ni. The silicide metallic layers 206, 208, and 210 formed by sputtering CoxAsy, CoxBy, or CoxPy would be capped by a layer of Co. The silicide metallic layers 206, 208, and 210 formed by sputtering PtxAsy, PtxBy, or PtxPy would be capped by a layer of Pt. Of course, other combinations of materials, including combinations of dissimilar silicide and capping metals, may be used as appropriate.The method of FIG. 8 then continues with the steps illustrated for FIGS. 3-7, beginning with the heating and annealing step that reacts the deposited silicide metallic layers 206, 208, and 210 to form the salicide layers 302, 304, and 306, and outdiffuses the dopant from the salicide layers 302 and 306 into the semiconductor substrate 102. The thermal outdiffusion of the dopant into the semiconductor substrate 102 again activates the outdiffused dopant atoms to form the shallow S/D junctions 308 and 310. The metallic cap layer 802 deters outdiffusion of the super-saturated dopant through the top surfaces of the silicide metallic layers 206, 208, and 210 during the heating and annealing process.Referring now to FIGS. 9-11, therein is shown another variation on the method for forming the shallow S/D junctions 308 and 310. The variation adds an additional step between the steps illustrated above for FIGS.
1-2.Thus, as shown in FIG. 9, following formation of the gate dielectric 104 and the gate 106 as illustrated in FIG. 1, a self-aligned amorphous layer 902 is formed in the surface of the semiconductor substrate 102 on each side of and adjacent the gate 106 and the gate dielectric 104. The amorphous layer 902 may be formed, for example, by a damaging ion implantation 904 of an inert species such as germanium (Ge), silicon (Si), or argon (Ar). The implantation damage creates a disordered (amorphous) layer that presents much greater resistance to diffusion of dopants, thereby effectively limiting subsequent dopant diffusion to a very narrow penetration depth into the amorphous layer 902.FIG. 10 corresponds to FIG. 2, showing formation of the insulating layer 202 and deposition of the doped silicide metallic layers 206, 208, and 210.Alternatively, the doped silicide metallic layers 206, 208, and 210 may be deposited prior to the formation of the amorphous layer 902 by the damaging ion implantation 904 (FIG. 9).FIG. 11 corresponds to FIG. 3, showing formation of the salicide layers 302, 304, and 306 and outdiffusion of the dopant from the salicide layers 302 and 306 into the amorphous layer 902 therebeneath. As before, the thermal outdiffusion of the dopant into the amorphous layer 902 activates the outdiffused dopant atoms in the amorphous layer 902 to form very shallow S/D junctions 308' and 310'.The method of FIGS. 9-11 then continues with the steps illustrated for FIGS. 4-7, beginning with formation of the sidewall spacer 402 (FIG. 4).Referring now to FIG. 12, therein is shown a structure combining the metallic cap layer 802, as taught in connection with the method and structure described in conjunction with FIG. 8, with the dopant diffusion depth-limiting properties of the amorphous layer 902, as taught in connection with the method and structure described in conjunction with FIGS. 9-11.The combination of FIG.
12 may be fabricated, for example, by forming the metallic cap layer 802 on the tops of the silicide metallic layers 206, 208, and 210 subsequent to the process steps illustrated in conjunction with FIG. 10. The metallic cap layer 802 and the silicide metallic layers 206, 208, and 210 then combine to form an alloyed metallic bi-layer that caps and deters outdiffusion of the super-saturated dopant through the top surfaces of the silicide metallic layers 206, 208, and 210. This is followed, as before, with a heating and annealing step that reacts the deposited silicide metallic layers 206, 208, and 210 (cf. FIGS. 10-11). The heating and annealing reaction forms the salicide layers 302, 304, and 306 (FIG. 12), causes the dopant to outdiffuse from the salicide layers 302 and 306 into the amorphous layer 902 therebeneath, and activates the outdiffused dopant atoms in the amorphous layer 902 to form the very shallow S/D junctions 308' and 310'. The metallic cap layer 802 deters outdiffusion of the super-saturated dopant through the top surfaces of the silicide metallic layers 206, 208, and 210 during the heating and annealing process.The fabrication of the transistor 100 according to the method described in conjunction with FIG. 12 is then completed in the same manner as described previously in conjunction with FIGS. 4-7, beginning with formation of the sidewall spacer 402 (FIG. 4).Referring now to FIG. 13, therein is shown a variation on the method for forming the shallow S/D junctions 308 and 310 (FIG. 5). The variation shown in (FIG. 13) replaces the steps illustrated above for FIGS. 3-4, eliminating the heating and annealing step that formed the shallow S/D junctions 308 and 310 (FIG. 3) at that stage of the process. By eliminating this heating and annealing step, heat cycles are preserved or saved, thereby reducing costs and improving operational efficiencies.Thus, as shown in FIG. 
13, following formation of the silicide metallic layers 206, 208, and 210 as illustrated in FIG. 2, a sidewall spacer 402 is formed in the same manner as described with respect to FIG. 4. The sidewall spacer 402, generally of silicon nitride, is thus a deposited layer that is etched in conventional manner to form the conventional curved shape as shown.The method of FIG. 13 then continues with the steps illustrated for FIGS. 5-7. In this case, the high temperature anneal that follows the ion implantation 502 (FIG. 5) not only activates the implanted impurity atoms to form the deep S/D junctions 504 and 506, but it also reacts the deposited silicide metallic layers 206, 208, and 210 (FIG. 13) with the semiconductor substrate 102 and the gate 106 therebeneath. The reaction forms the salicide layers 302, 304, and 306 (FIG. 5) and outdiffuses the dopant from the salicide layers 302 and 306 into the semiconductor substrate 102. The thermal outdiffusion of the dopant from the salicide layers 302 and 306 into the semiconductor substrate 102 provides activated outdiffused dopant atoms that form the shallow S/D junctions 308 and 310.The method of FIG. 13 thus saves energy and heat cycles by utilizing the single annealing step described in connection with FIG. 5 to form the salicide layers 302, 304, and 306, the shallow S/D junctions 308 and 310, and activate and form the deep S/D junctions 504 and 506.It will be understood that the dopant can be expected to be activated immediately in the semiconductor substrate during the step of reacting the silicide metallic layer with the semiconductor substrate therebeneath to form the salicide layer and outdiffuse the dopant from the salicide layer into the semiconductor substrate therebeneath. This is due to the lack of damage to the semiconductor substrate crystalline structure that would have resulted from a dopant ion implantation (for which subsequent annealing is then conducted). 
Thus, the step of activating the outdiffused dopant in the semiconductor substrate to form shallow source/drain junctions beneath the salicide layer may be accomplished by the salicide formation process itself without the additional thermal annealing process, as well as by thermal annealing steps such as described above.Referring now to FIG. 14, therein is shown a simplified flow chart of a method 1400 in accordance with the present invention. The method 1400 includes: providing a semiconductor substrate in a step 1402; forming a gate dielectric on the semiconductor substrate in a step 1404; forming a gate on the gate dielectric in a step 1406; forming at least one super-saturated doped source silicide metallic layer on the semiconductor substrate adjacent the gate and the gate dielectric, the silicide metallic layer incorporating a substantially uniformly distributed dopant therein in a substantially uniform super-saturated concentration, in a step 1408; reacting the silicide metallic layer with the semiconductor substrate therebeneath to form a salicide layer and outdiffuse the dopant from the salicide layer into the semiconductor substrate therebeneath, in a step 1410; activating the outdiffused dopant in the semiconductor substrate to form a shallow source/drain junction beneath the salicide layer in a step 1412; depositing an interlayer dielectric above the semiconductor substrate in a step 1414; and forming a contact in the interlayer dielectric to the salicide layer in a step 1416.While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the aforegoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and scope of the included claims. 
All matters hithertofore set forth or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense. |
Some embodiments provide a method including determining one or more areas of a display to remain active responsive to received user input, determining one or more areas to be dimmed responsive to the received user input, and dimming the one or more areas of the display to be dimmed to reduce power consumption of the display, to help users focus on task execution, and to dynamically preserve privacy by limiting the content displayed on the screen. The user input may include mouse cursor, keyboard, touch, eye position or movement information, voice commands, or a power policy of an electronic device including the display. The dimming may include dimming pixels of the display by applying a mask with pixel blending to the display image before the image goes to a hardware controller. |
A method, comprising:identifying one or more areas of a display to be focus areas in response to received user input;identifying one or more areas of the display to be dimmed in response to the received user input; andapplying one or more mask layers to the input graphics of the display, wherein the one or more mask layers correspond to the focus areas and the areas to be dimmed, wherein the one or more mask layers include a transparency value for each pixel wherein the transparency value is used to dim the pixels on the display, to reduce power consumption of the display.The method of claim 1, wherein the one or more mask layers are applied as an input to a graphics driver driving the display.The method of claim 1, wherein the transparency level for each pixel of the one or more mask layers ranges from fully opaque to fully transparent.The method of any one of claims 1-3, wherein applying the one or more mask layer further includes: blending the transparency level of the one or more mask layers with a display pixel.The method of claim 4, wherein the received user input includes a selected one of: cursor information received from a pointing device; keystroke information received from a keyboard; touch information received from a touch screen; a position of an eye of the user indicating a location on the display; a voice command from the user; manual input received from the user; a power policy-setting of an electronic device that includes the display; and software setting of the electronic device that includes the display.The method of claim 4, wherein the display includes multiple displays; and wherein the focus areas are distributed across the multiple displays.A machine readable medium storing a set of instructions to be executed by at least one processing unit of an electronic device associated with a display, the set of instructions, when executed, to:identify one or more areas of the display to be focus areas in response to received user input;identify one or 
more areas of the display to be dimmed in response to the received user input; andapply one or more mask layers to the input graphics of the display, wherein the one or more mask layers correspond to the focus areas and the areas to be dimmed, wherein the one or more mask layers include a transparency value for each pixel wherein the transparency value is used to dim the pixels on the display, to reduce power consumption of the display.The machine readable medium of claim 7, wherein the one or more mask layers is applied as an input to a graphics driver driving the display.The machine readable medium of claim 7, wherein the transparency level for each pixel of the one or more mask layers ranges from fully opaque to fully transparent.The machine readable medium of claim 7, wherein to apply the one or more mask layers further includes: to blend the transparency level of the one or more mask layers with the display pixel.The machine readable medium of claim 7, wherein the received user input includes a selected one of: cursor information received from a pointing device; keystroke information received from a keyboard; touch information received from a touch screen; a position of an eye of the user indicating a location on the display; a voice command from the user; manual input received from the user; a power policy-setting of an electronic device that includes the display; and software setting of the electronic device that includes the display.The machine readable medium of any one of claim 7-11, wherein the display includes multiple displays; and wherein the focus areas are distributed across the multiple displays.A system, comprising:a display;a partial panel screen dimming module coupled with the display, the module to:identify one or more areas of the display to be focus areas in response to received user input;identify one or more areas of the display to be dimmed in response to the received user input; andapply one or more mask layers to the input graphics of 
the display, wherein the one or more mask layers correspond to the focus areas and the areas to be dimmed, wherein the one or more mask layers include a transparency value for each pixel wherein the transparency value is used to dim the pixels on the display, to reduce power consumption of the display.The system of claim 13, wherein the one or more mask layers is applied as an input to a graphics driver driving the display.The system of claim 13, wherein the transparency level for each pixel of the one or more mask layers ranges from fully opaque to fully transparent.The system of claim 13, wherein to apply the one or more mask layers further includes: to blend the transparency level of the one or more mask layers with a display pixel.The system of any one of claims 13-16, wherein the received user input includes a selected one of: cursor information received from a pointing device; keystroke information received from a keyboard; touch information received from a touch screen; a position of an eye of the user indicating a location on the display; a voice command from the user; manual input received from the user; a power policy-setting of an electronic device that includes the display; and software setting of the electronic device that includes the display.The system of claim 17, wherein the display includes multiple displays; and wherein the focus areas are distributed across the multiple displays. |
PARTIAL PANEL SCREEN DIMMINGRELATED APPLICATIONThis application (more specifically, the common portion) claims priority to U.S. Application number 16/800,944 filed February 25, 2020, entitled SOFTWARE BASED PARTIAL DISPLAY DIMMING.BACKGROUNDThe present disclosure relates to the reduction of power consumption in electronic devices, and more specifically to the reduction of electrical power consumed by a display of an electronic device.In many electronic devices, such as laptop and notebook computers and mobile devices such as smart phones, a display of the electronic device is one of the highest power consuming components of the electronic device. These types of electronic devices are typically powered by battery power during use at least some of the time. Thus, this relatively high-power consumption of the display in such electronic devices reduces the battery life when the electronic device is being operated on battery power, where the battery life is the time for which the battery can power the electronic device.BRIEF DESCRIPTION OF THE DRAWINGSFigure 1 is a functional diagram illustrating a display power-reduction system and process according to one embodiment of the present disclosure;Figure 2 illustrates multiple displays in which the process of Figure 1 may change characteristics of multiple windows being presented on each display to reduce power consumption of the displays according to one embodiment;Figure 3 is a flowchart illustrating a desktop composition process according to one embodiment;Figure 4 is a flowchart illustrating a graphics driver process that performs partial display dimming when called by the desktop composition process of Figure 3 according to an embodiment;Figure 5 is a flowchart illustrating a dimming shader process called by the graphics driver process of Figure 4 when partial display dimming is enabled;Figure 6 is a flowchart illustrating a query plugin process utilized by the dimming shader process of Figure 5 to process inputs 
identifying regions of the display to be dimmed;Figure 7 is a sequence diagram illustrating operation of the various software components that implement a display power-reduction process according to the embodiments of Figures 3-6;Figure 8 is a functional block diagram of an example computer system illustrating a sample environment in which embodiments of the present disclosure may be implemented.Figure 9 shows three examples of a display with various levels of shading or dimming on the display implemented by a mask, in accordance with embodiments.Figure 10 shows an example of a mask applied to dim areas of the display, with various levels of user interaction based on the transparency of the mask, in accordance with embodiments.Figure 11 shows various examples of masks applied to dim areas of the display, in accordance with embodiments.Figure 12 shows examples of displays with masks having different levels of transparency to dim areas of the display, in accordance with embodiments.Figure 13 shows an example of a computer with two displays where masks are used to dim areas of the two displays, in accordance with embodiments.Figure 14 shows another example of a computer with two displays where masks are used to dim areas of the two displays, in accordance with embodiments.Figure 15 shows an example process flow to dim a display using programmatic hardware commands, in accordance with embodiments.Figure 16 shows an example process flow for applying a mask layer to implement dimming on a display, in accordance with embodiments.Figure 17 shows a detailed process flow for dimming and area selection for multiple focus windows and a single focus window, in accordance with embodiments.Figure 18 shows an example of power savings expectations for non-focus areas that are partially or fully dimmed, in accordance with embodiments.Figure 19 shows a process for partial panel screen dimming, in accordance with embodiments.Figure 20 shows a non-transitory computer readable storage
medium that includes instructions to implement one or more processes to cause partial panel screen dimming.DETAILED DESCRIPTIONIn the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of the present disclosure. Such examples and details are not to be construed as unduly limiting the elements of the claims or the claimed subject matter as a whole. It will be evident to one skilled in the art, based on the language of the different claims, that the claimed subject matter may include some or all of the features in these examples, alone or in combination, and may further include modifications and equivalents of the features and techniques described herein.Embodiments described herein may be directed to methods, apparatus, and techniques to implement Partial Panel Screen Dimming to save backlight power and extend system battery life by adding a mask and identifying one or more regions with different transparency levels on the mask for dimming, to be subsequently sent to a graphics composition system for display on one or more displays. These techniques may be applied with a software-based approach.The display is one of the highest power consuming components in a notebook, laptop, phone, or other portable computing system having one or more displays. In legacy devices, display power may consume approximately 44% of the system power consumption, where backlight power may consume 50% or more of that. Different display technologies may consume higher power, for example on organic light-emitting diode (OLED) or high dynamic range (HDR) types of panels.
Minimizing the panel backlight power will extend an end-user's use of laptops, notebooks, or other portable computing systems that use displays, and will also help extend operational use and battery life.Embodiments described herein may be accomplished using a software approach at an application level, which works on existing hardware and graphics driver stacks. These embodiments may manipulate the display in the input stage, in contrast to output-stage approaches that require specialized hardware components. Embodiments implemented through software provide more flexibility for adapting to different operating systems and to different display panels, such as 6-bit/8-bit LCD and OLED panels. Embodiments implemented through software will also support different display form factors such as single, dual (physical and virtual, such as foldable), and secondary displays. Note: as used herein, display and screen may be used interchangeably.The advantages of implementing embodiments in software include decreased cost by not requiring specialized hardware. These embodiments may only use a basic display driver for support, for example a GFX driver. These embodiments may not be deeply coupled with a graphics driver or composition layer, and may have no system hardware dependency, such that architectural changes to system configurations will not affect the implementation of the embodiments. In addition, embodiments may require only a minimal set of operating system (OS) support, such as pixel blending and mouse event handling. For example, on a Windows platform, embodiments may not require any extra changes to the OS or display drivers. Embodiments may function as a simple background application.In contrast with solutions that work on the hardware side or the graphics driver side, embodiments in software may just add an input layer to the composition system. The graphics input is naturally supported.
This is in contrast to a hardware or driver-specific solution, where output is usually protected and less accessible due to security concerns. For example, the output buffer for displaying passwords should prevent access from third-party drivers or software.Embodiments may include the ability to manipulate display content, plus the flexibility to take advantage of a feature set built into the user interface (UI) that handles software presets and/or prompted user interaction. For example, instantly enabling or disabling the partial dimming feature, defining focus/non-focus areas, and defining a dimming level, which may also be referred to as a transparency level. In addition, embodiments may also prevent or restrict unintended user actions.Embodiments using a software approach for partial panel screen dimming may also provide an end-user with additional privacy control. For example, when the user is sharing a screen, the user may want to completely dim everything other than a particular area of the display, for example over a videoconference. In addition, during a presentation to others, the end user may want to dim various portions of the screen to highlight areas to focus on during the presentation.Figure 1 is a functional diagram illustrating a display power-reduction system and process 100 according to one embodiment of the present disclosure. In operation, the display power-reduction process 100 receives user inputs 102 and, based on these user inputs, controls the rendering or display of content on a display 104 in one or more focus areas 106 on the display, and also controls dimming of the display in one or more non-focus areas 108 of the display to thereby reduce power consumption of the display, as will be explained in more detail below. In this way, the process 100 maintains active the one or more focus areas 106 of the display 104, which are the areas being viewed or most likely to be viewed by the user, at standard brightness for these areas.
The process 100 determines these focus areas 106 based on the user input 102. The process 100 also reduces, or dims, the brightness of the inactive or non-focus areas 108 of the display 104, which are the area or areas not being viewed or less likely to be viewed by the user. The process 100 also determines these non-focus areas 108 based on the user inputs 102. This maintaining of the intensity or brightness of the focus areas 106 while dimming the non-focus areas 108 on the display is referred to as “partial dimming” in the present application.The user inputs 102 utilized in the display power-reduction process 100 may include a wide variety of different types of inputs provided by or received from a user, or through settings or from software running in the environment in which the process 100 is being implemented. The process 100 would typically be implemented in a portable electronic device such as, for example, a smart phone, tablet computer, or laptop computer, but is not limited to being implemented in these types of electronic devices. The display power-reduction process 100 may be implemented in any other suitable type of electronic device including a display and which may benefit from reducing the power consumption of the display. In such an environment, the user inputs 102 received in the display power-reduction process 100 may include cursor information received from a mouse, keystroke information received from a keyboard, touch information received from a touch screen of the display 104, a position or movement of the eyes of the user indicating a location on the display where the user is looking, voice commands from the user, a power policy setting of the electronic device including the display, software running on the electronic device, or manual input from the user.
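As a concrete illustration of mapping one such input to a focus area, the sketch below derives a fixed-size focus rectangle from mouse cursor coordinates, clamped to the display bounds. It is a minimal, hypothetical helper: the function name, rectangle size, and display dimensions are illustrative assumptions, not details from the disclosure.

```python
def focus_rect(cursor_x, cursor_y, disp_w, disp_h, half_w=320, half_h=240):
    """Return a (left, top, right, bottom) focus rectangle centered on the
    cursor, shifted as needed so it stays fully inside the display.
    Rectangle size (2*half_w x 2*half_h) is an arbitrary illustrative choice.
    """
    # Clamp the top-left corner so the full rectangle fits on the display.
    left = max(0, min(cursor_x - half_w, disp_w - 2 * half_w))
    top = max(0, min(cursor_y - half_h, disp_h - 2 * half_h))
    return (left, top, left + 2 * half_w, top + 2 * half_h)
```

Everything outside the returned rectangle would then be treated as a non-focus area 108 and dimmed; the same clamping idea applies to focus areas derived from touch points or eye-gaze coordinates.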
In some embodiments, where the display 104 includes a touch screen, the process 100 may identify the focus area or areas 106 based on locations on the display 104 that are touched by the user. Alternatively, in some embodiments the process 100 determines the focus area 106 based on where a cursor is positioned on the display 104. These user inputs 102 are provided by way of example, and the display power-reduction process 100 is not limited to utilizing only some or all of these user inputs, but may utilize other inputs in addition to or in place of these example user inputs.

In some embodiments, the user inputs 102 also include an input that enables and disables execution of the display power-reduction process 100. For example, where the user inputs 102 include a power policy setting, the process 100 may be activated or enabled once a charge level of a battery of the electronic device including the display 104 drops below a selected charge percentage. Similarly, once the charge level of the battery reaches a selected threshold after being charged, the process 100 may then be deactivated or disabled. In some embodiments, the user inputs 102 may include an ON/OFF parameter that is manually selectable or input by the user to thereby enable the user to manually enable and disable execution of the display power-reduction process 100. This allows the user to manually select execution of the process 100 independent of the other user inputs 102. For example, where the user is almost done with a task being performed on the electronic device and the battery reaches a level that causes the process 100 to be executed, the user may, through the ON/OFF parameter, disable the process and finish the task under normal operating conditions of the electronic device.

In the display power-reduction process 100, once the user inputs 102 are collected or received, these inputs are processed by a desktop composition module (DCM) 110 to control partial dimming of the display 104.
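The enable/disable policy described above can be sketched as a small hysteresis function. This is an assumption-laden illustration, not the disclosed implementation: the threshold values and the function name are invented, and a manual ON/OFF override is modeled as taking priority over the battery policy.

```python
# Hypothetical sketch of the enable/disable policy: partial dimming turns
# on below a low-charge threshold and off again once the battery has
# recharged past a higher threshold, with a manual ON/OFF override taking
# priority. The threshold values are illustrative only.

ENABLE_BELOW_PERCENT = 20   # enable partial dimming under this charge level
DISABLE_ABOVE_PERCENT = 80  # disable it once recharged past this level


def partial_dimming_enabled(charge_percent, currently_enabled, manual_override=None):
    """Return whether partial dimming should be active.

    manual_override: True/False forces the feature on/off; None defers to
    the battery-based power policy with hysteresis.
    """
    if manual_override is not None:
        return manual_override
    if charge_percent < ENABLE_BELOW_PERCENT:
        return True
    if charge_percent > DISABLE_ABOVE_PERCENT:
        return False
    return currently_enabled  # between thresholds: keep current state
```

The hysteresis band between the two thresholds prevents the feature from rapidly toggling while the battery hovers near a single cut-off value.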
The DCM 110 is a software component that executes as part of an operating system (OS) of the electronic device including the display 104, as part of a graphics driver of the electronic device, or as part of both the OS and the graphics driver. The DCM 110 implements the partial dimming of the display 104, and part of this overall process includes compositing windows manager functionality that composites contents or images of multiple applications executing on the electronic device into a desktop screen or image to be displayed on the display 104. Where the electronic device includes more than one display 104, as will be described in more detail below with reference to Figure 2, the DCM 110 composites images from the running applications into a desktop image that is displayed on these multiple displays.

The operation of a compositing windows manager, such as the desktop windows manager (DWM) in the Windows operating system, and a graphics driver will be understood by those skilled in the art, and thus these software components will not be described in detail herein. Aspects of the operation of the graphics driver and compositing windows manager that are part of the overall operation of the DCM 110 will, however, now be briefly described to enable a better understanding of aspects of the partial dimming of the display 104 implemented through the DCM in the process 100. As seen in Figure 1, the electronic device in which the process 100 is implemented includes graphics hardware 112, which includes a graphics processing unit (GPU) (not shown) of the device. The graphics driver is a software component that allows the OS, as well as programs or applications executing on the electronic device, to control the graphics hardware 112 to display desired images on the display 104.

Each application executing on the electronic device is displayed in a corresponding window on the desktop displayed on the display 104.
An image to be displayed by each executing application is stored in a corresponding off-screen buffer associated with each window on the display 104. During execution of the applications, the images stored in the corresponding off-screen buffers are occasionally updated, and the compositing windows manager thereafter processes each of the updated images as part of generating a corresponding composite image to be displayed as the desktop on the display 104. The processing of these respective images in the off-screen buffers may include applying 2D and 3D effects, and may include operations such as blending, fading, scaling, rotation, duplication, bending and contortion, shuffling, blurring, redirecting applications, translating windows into one of a number of displays and virtual desktops, and other graphics-related operations, as will be understood by those skilled in the art. The graphics hardware 112 generates the composite image that is then stored in a display framebuffer 114 as seen in Figure 1, with this stored composite image being stored in either dedicated memory or system memory (not shown) and thereafter being displayed as the desktop on the display 104.

Returning to the description of the DCM 110, the DCM includes either a modified compositing windows manager, a modified graphics driver, or both a modified compositing windows manager and a modified graphics driver, to implement partial dimming on the display 104. Each of the compositing windows manager and the graphics driver is a software component, and thus modification of these components includes programming instructions added to one or both of these components to implement the partial dimming functionality. In operation, the DCM 110 receives the user inputs 102 and from these user inputs determines one or more focus areas 106 on the display 104 that are to remain active (i.e., the intensity or brightness in these focus areas is maintained).
The DCM also determines, based on the user inputs 102, one or more non-focus areas 108 of the display 104 which are to be dimmed (i.e., the intensity or brightness in these non-focus areas is to be reduced or dimmed). The DCM 110 thereafter, through execution of the modified compositing windows manager, modified graphics driver, or modified compositing windows manager and graphics driver, dims the one or more non-focus areas 108 of the display to reduce a power consumption of the display 104.

The specific way the DCM 110 controls the dimming of the non-focus areas 108 on the display 104 will depend on the specific type of the display. For example, where the display 104 is an organic LED (OLED) display, the DCM may dim (i.e., reduce the intensity or brightness of) at least some of the pixels of the display 104 in the one or more non-focus areas 108 of the display to be dimmed. This dimming of the non-focus areas 108 may include changing a color of at least some of the pixels of the display 104 in the one or more non-focus areas 108. The color of these pixels may, for example, be changed to a darker color, such as blue or black. Where the display 104 includes segmented LED backlighting, the dimming may include turning OFF one or more segments of the backlighting of the display. For example, the display 104 may be an LCD with mini LED backlighting, where dimming is performed by controlling groups of the mini LEDs.

Figure 2 illustrates multiple displays 200 and 202 in which the process 100 of Figure 1 may change characteristics of multiple windows W1, W2, W3, W4 being presented on the multiple displays to reduce overall power consumption of the displays according to one embodiment. The windows W1-W3 are presented on the display 200 and window W4 on display 202. In such a multiple-display electronic device, the process 100 may implement partial dimming on each of the displays 200, 202.
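The pixel-level dimming described for an OLED-style display can be sketched as follows. This is a minimal illustration assuming RGB pixels stored as tuples in a 2D framebuffer list; the function name and parameters are invented for the example and do not reflect an actual driver API.

```python
# Minimal sketch, assuming RGB pixels as (r, g, b) tuples: dim a rectangular
# non-focus region by scaling intensity, or force it to black, as described
# for OLED-style dimming above. All names are illustrative.

def dim_region(framebuffer, rect, factor=0.3, to_black=False):
    """Dim pixels inside rect = (x, y, w, h) of a 2D framebuffer in place."""
    x, y, w, h = rect
    for row in range(y, y + h):
        for col in range(x, x + w):
            if to_black:
                framebuffer[row][col] = (0, 0, 0)  # darkest color: zero power on OLED
            else:
                r, g, b = framebuffer[row][col]
                framebuffer[row][col] = (int(r * factor),
                                         int(g * factor),
                                         int(b * factor))


fb = [[(200, 100, 50) for _ in range(4)] for _ in range(4)]
dim_region(fb, (0, 0, 2, 2), factor=0.5)  # dim the top-left 2x2 block by half
```

On a segmented-backlight LCD the analogous operation would instead switch off backlight segments covering the rectangle, since per-pixel emission cannot be controlled there.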
Furthermore, in such a multiple-display device one of the displays 200, 202 may not be utilized by a user at certain times. For example, assume the window W4 is not being displayed on the display 202 such that no windows are presented on this display. In this situation, the dimming performed by the process 100 may include dimming the entire display 202. The partial dimming implemented by the process 100 may include dimming the entire display for one or more of the displays 200, 202 in a multiple-display device.

Figure 2 also illustrates that the dimming performed by the process 100 in each of the windows W1-W4 may vary in different embodiments. In the example to be discussed, assume the window W4 is not displayed on the display 202 such that no windows are present on this display. In this situation, the windows W1-W3 are present on the display 200 and the window W2 is the active window (i.e., is the focus area on the display 200). The windows W1 and W3 are inactive or non-focus areas on the display 200 in this example. The process 100 will accordingly dim the windows W1, W3, and Figure 2 shows two examples of how this dimming within a given inactive window (i.e., in non-focus areas) may be performed. In the window W3, the entire window is dimmed. Thus, each of the pixels in the window W3 is set to black or changed to some other darker color to reduce the power consumption of the display 200 due to displaying the window W3. Where the display 200 includes segmented LED backlighting, dimming window W3 may include turning off one or more segments of the backlighting of the display. The window W1 shows another possible way of dimming an inactive window corresponding to a non-focus area of the display.
The window W1 includes a border around the perimeter of the window that is not dimmed but remains illuminated by the DCM 110 (Figure 1), while an interior of the window W1 inside this border is dimmed.

Other embodiments include other ways of dimming inactive windows (i.e., non-focus areas) on a display. For example, dimming inactive windows or non-focus areas occurs in different ways in further embodiments, such as by changing colors in the inactive windows or non-focus areas, through gradient dimming within the inactive windows or non-focus areas, or through gradient dimming at edges between the one or more focus areas and the non-focus areas. The inactive windows or non-focus areas may be defined through eye tracking to identify a moving focus area (active window or windows) and non-focus areas (inactive window or windows) in the other areas of the display. In other embodiments, the size of the entire screen being displayed can be shrunk to a smaller area (focus area) on the display, with the remaining area (non-focus area) on the screen being dimmed or turned OFF. In other embodiments, a window or windows associated with a given app are defined as the active window or windows and thereby as the focus area that is not dimmed, or is dimmed in a particular manner, while the windows of other apps are defined as non-focus areas and are accordingly dimmed. In another embodiment, portions of each active window of a given app may also be dimmed, such as by dimming an edge portion of each active window for the given app, which is illustrated for the window W4 in Figure 2. Thus, where the window W4 is an active window of a particular app running on an electronic device, this active window W4 may be dimmed around the edges of the window as shown. Content being presented by the app is displayed on the interior portion of the active window W4 in this embodiment, which is represented by the interior white portion of the window W4.
The dimming around the edge of the active window W4 could alternatively be a gradient dimming, or this dimming could be done through displaying a particular color in the edge portion of the window, or through other suitable dimming techniques that reduce power consumed by the display 202 in displaying the window W4.

In another embodiment, a user may provide manual input, such as through touch input, voice input, or keystrokes, to instantly enable the display power-reduction process 100 on the corresponding electronic device. The user could similarly disable the process 100 through manual input in this embodiment. Also, in this embodiment, the user could provide other manual input after enabling the process 100 to thereby provide various inputs that control the operation of the process 100, such as providing levels of dimming to be applied. In another embodiment, the user may also manually define focus and non-focus areas, or active and non-active windows, through suitable manual input such as touch input, voice input, or keystrokes. For example, the user could, through a first type of touch stroke on the display, define a focus area or areas, and through a second type of touch stroke define non-focus areas on the display.

Figure 3 is a flowchart illustrating a desktop composition process 300 that is part of the display power-reduction process 100 according to one embodiment. The process 300 is an example of a process executed by the compositing windows manager, which in the example of Figure 3 is the DWM in the Windows OS. Figures 3-7 illustrate an example embodiment of the DCM 110 implemented in the Windows OS such that the compositing windows manager is the DWM and the partial dimming is implemented through a modified graphics driver of the electronic device. The desktop composition process 300 starts at 302 and proceeds immediately to 304, where the DWM makes a Present call, where Present is a function of the DWM that calls the graphics driver.
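The gradient edge dimming mentioned above can be illustrated with a small brightness-ramp function: pixels at a window border start at a low brightness multiplier and ramp up to full brightness over a fixed depth into the window. The function name, ramp depth, and minimum factor are assumptions for this sketch, not values from the disclosure.

```python
# Illustrative sketch of gradient dimming at a window edge: brightness ramps
# linearly from min_factor at the border up to full brightness over `ramp`
# pixels into the window. Parameter values are illustrative only.

def edge_dim_factor(distance_from_edge, ramp=16, min_factor=0.2):
    """Return a brightness multiplier in [min_factor, 1.0] for a pixel
    `distance_from_edge` pixels inside the window border."""
    if distance_from_edge >= ramp:
        return 1.0  # deep enough inside the window: full brightness
    t = distance_from_edge / ramp  # 0.0 at the border, 1.0 at ramp depth
    return min_factor + (1.0 - min_factor) * t
```

A renderer would multiply each edge pixel's color by this factor, giving a smooth fade instead of a hard boundary between the dimmed edge and the bright interior.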
Next, the process 300 at 306 receives from the graphics driver the partial dimming modified image data of each of the windows being displayed on the desktop. At 308, the process 300 provides the composite image as modified by the partial dimming modified image data to the display framebuffer 114 (Figure 1) for display on the display 104.

Figure 4 is a flowchart illustrating a graphics driver process 400 executed by the graphics driver in response to the Present call from the DWM executing the desktop composition process 300 of Figure 3. The process 400, at 402, starts and then proceeds to 404, in which the graphics driver generates commands for programming the graphics hardware 112 (Figure 1). Next, at 406, the process 400 determines whether partial dimming of the display 104 is enabled. If the determination at 406 is negative, the process 400 proceeds to 408 and the programmed hardware commands are submitted to the graphics hardware 112. Next, the process 400 at 410 terminates. Where the determination at 406 is positive, the process at 412 executes a dimming shader program or process to perform partial dimming of the desktop image, as will be described in more detail below with reference to Figure 5. The process 400 thereafter terminates at 410.

Figure 5 is a flowchart illustrating a dimming shader process 500 called by the graphics driver process 400 of Figure 4 when partial display dimming is enabled as determined at 406 of the process 400. The process 500 starts at 502 and proceeds to 504, where a query function is executed in the form of a query plugin in the example embodiment of Figure 5. The query plugin obtains user inputs 102 from the OS and utilizes these inputs to determine which areas on the display 104 are focus areas 106 (i.e., are not to be dimmed) and which areas are non-focus areas 108 (i.e., are to be dimmed).
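One way to picture what the query step hands to the dimming shader is a per-pixel map built from the focus rectangles reported by the OS. The following sketch is hypothetical (the function name and data layout are assumptions); a real shader would perform the equivalent lookup per fragment on the GPU rather than in Python.

```python
# Hypothetical sketch of the query step's output: given the display size and
# the focus rectangles reported by the OS, build a per-pixel map of which
# pixels the dimming shader should dim and which it should leave bright.

def build_dim_map(width, height, focus_rects):
    """Return a 2D boolean map: True where the pixel should be dimmed."""
    dim = [[True] * width for _ in range(height)]
    for (x, y, w, h) in focus_rects:
        for row in range(y, min(y + h, height)):
            for col in range(x, min(x + w, width)):
                dim[row][col] = False  # inside a focus area: keep bright
    return dim


# An 8x8 display with one 3x3 focus rectangle at (2, 2).
dim_map = build_dim_map(8, 8, [(2, 2, 3, 3)])
```

The shader then scales the brightness of every pixel flagged `True` while passing focus-area pixels through unchanged.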
Next, the process 500 at 506 maps input and output surfaces using data from the query plugin executed at 504, and these mapped input and output surfaces are utilized to modify the composited desktop image to perform partial dimming on this image. At 508, the process 500 programs the graphics hardware 112 (Figure 1) to perform the determined partial dimming. The process 500 then terminates at 510.

Figure 6 is a flowchart illustrating a query plugin process 600 executed by the query plugin called by the process 500 at 504. The process 600 starts at 602 and proceeds to 604, at which the process receives user inputs 102 in the form of notifications from the OS of the electronic device. The OS maintains information on the size and location of opened windows on the display 104, and the process 600 at 604 retrieves this information as well for use by the graphics driver in programming the graphics hardware 112 to perform the desired partial dimming. Next, at 606 the process 600 provides the retrieved user inputs 102 and the information from the OS to the dimming shader process 500 for use in partial dimming of the display 104.

Figure 7 is a sequence diagram illustrating operation of the various software components of Figures 1-6 that implement the desktop composition process 300 of Figure 3, including partial dimming implemented by the graphics driver in this embodiment. In the embodiment of Figure 7, the white boxes illustrate existing components and operation while the gray shaded boxes illustrate new components included to perform the desired partial dimming. Along the top of the sequence diagram of Figure 7 are shown the pertinent software components, namely desktop composition module 700, graphics driver 702, plugin 704 and graphics hardware 706.
Each of these components 700-706 corresponds to components previously described with reference to Figures 1-6.

As shown in Figure 7, the desktop composition module 700 loads the graphics driver 702 at 708, and at 710 the graphics driver initializes the plugin 704. At this point, the partial dimming is not enabled, since the partial dimming is only utilized in the electronic device when necessary. As a result, at 712, when the desktop composition module 700 initially makes a Present call to the graphics driver 702, the Present call at 714 from the graphics driver to the graphics hardware 706 results in programming of the graphics hardware in a conventional manner to display the composite desktop image on the display 104 (Figure 1).

At 716, the plugin 704 determines that partial dimming is to be performed and provides a notification to the graphics driver 702 indicating partial dimming is now enabled. As a result, at 718, when the desktop composition module 700 makes a Present call to the graphics driver 702, a call to the plugin 704, which is indicated as a Present Callback at 720, is made, and the plugin 704 returns at 722 dimming inputs to the graphics driver 702. These dimming inputs include the notifications retrieved from the OS as discussed above with reference to Figure 6. Next, at 724 the graphics driver 702 makes a Present call to program the graphics hardware 706 to perform the required partial dimming and display the composite desktop image on the display 104 (Figure 1) including this partial dimming. At 726, the plugin 704 provides a notification that partial dimming is to be disabled, such as would typically occur when the battery of the electronic device has been recharged and, because of this or for some other reason, the partial dimming is no longer required. For example, the user may manually disable partial dimming, as discussed above.
After partial dimming has been disabled at 726, the desktop composition module 700 makes another Present call at 728 to the graphics driver 702, and the graphics driver makes a Present call at 730 that results in programming of the graphics hardware 706 in a conventional manner to display the composite desktop image on the display 104 (Figure 1).

Figure 8 is a functional block diagram illustrating an example of a computing system 800 to implement the display power-reduction techniques discussed herein with reference to the embodiments of Figures 1-7. The computing system 800 may be, for example, a mobile device such as a smart phone, laptop computer, ultrabook, or tablet computer, a desktop computer, or a server or other type of computer system that would benefit from the display power-reduction techniques of the present application. The computing system 800 would typically be a mobile device running on battery power, which would then utilize the display power-reduction techniques of the present application to extend the life of the battery for a given charge by lowering the power consumption of the system. The computing system 800 need not be a mobile device, however, where there is a need to reduce the power consumption of the system even though the device is not being powered through battery power. Finally, the computing system 800 of Figure 8 illustrates an example of a suitable computing system environment in which embodiments of the present disclosure may be implemented. The computing system 800 is an example of one suitable computing environment and should not be considered to suggest any limitation as to the implementations of embodiments of the present disclosure.

In the example embodiment of Figure 8, the computing system 800 includes a processor 802, such as a central processing unit, which is configured to execute stored instructions.
A memory device 804 stores instructions that are executable by the processor 802, and may be any suitable type of memory such as read-only memory (ROM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory (FLASH), or a combination of these and other different types of memory. The memory device 804 stores instructions executed by the processor 802, including instructions of the OS and graphics driver GD loaded into memory, and instructions executed by the processor to implement the display power-reduction processes of Figures 1-7. The processor 802 is coupled to the memory device 804 through a bus 806 of the computing system 800. The processor 802 may be a single-core processor, a multi-core processor, a computing cluster, or any number of other configurations. Furthermore, the computing system 800 may include more than one processor 802 and more than one memory device 804.

The computing system 800 further includes a graphics processing unit (GPU) 808, and the processor 802 is coupled through the bus 806 to the GPU 808. The GPU 808 performs any number of graphics functions and actions within the computing system 800, such as rendering or manipulating graphics images, graphics frames, videos, or the like, to be displayed to a user of the computing system 800. As described above with reference to Figure 1, the desktop composition module in some embodiments may be implemented as part of the graphics driver GD of the computing system 800, and this graphics driver controls programming and operation of the GPU 808.

An image capture device 810, such as a camera, scanner, infrared sensor, or other type of suitable device, is also coupled to the bus 806 to communicate with the processor 802 and memory device 804. The processor 802 is coupled through the bus 806 to one or more displays 812, which may include displays that are internal to or "built-in" components of the computing system 800.
The displays 812 may also include display screens that are external to the computing system 800. Examples of such a computing system 800 include mobile computing systems, such as cell or smart phones, tablets, 2-in-1 computers, notebook computers and the like. The display devices 812 may include a computer monitor, television, or projector, among others, that is externally connected to the computing system 800. In some examples of the computing system 800, the display devices 812 may be head-mounted display devices having a display capacity via projection, digital display, filtering incoming light, and the like.

The processor 802 is also connected through the bus 806 to an input/output (I/O) interface 814 configured to connect the computing system 800 to one or more I/O devices 816. The I/O devices 816 may include, for example, a keyboard, a pointing device such as a touchpad or a touchscreen, a storage device, and other types of electronic devices. The I/O devices 816 may include built-in components of the computing system 800 or may be devices that are externally connected to the computing system. In some cases, the I/O devices 816 are touchscreen devices integrated within a display device, such as one or more of the display devices 812.

The computing system 800 may also include another storage device or devices 818, which may include a physical memory such as a hard drive, an optical drive, a thumb drive, an array of drives, or any combinations thereof. The storage device 818 may also include remote storage drives. A network interface controller (NIC) 820 connects the computing system 800 to a network 822, which may be a wide area network (WAN), local area network (LAN), the Internet, or the like. The computing system 800 is powered through a power supply unit (PSU) 824 that communicates with the processor 802 through the bus 806 to communicate control signals or status signals to the PSU.
The PSU 824 includes a rechargeable power source such as a battery in some embodiments, and is coupled to a power source 826 external to the computing system 800 to receive electrical power, charge the rechargeable power source when present, and supply electrical power to the other components in the computing system 800. The block diagram of Figure 8 is not intended to indicate that the computing system 800 must include all the components shown. Furthermore, the computing system 800 may include any number of additional components not shown in Figure 8 based on the specific implementation or utilization of the computing system.

Embodiments Using Software Implementation with Mask Layer

Figure 9 shows three examples of a display with various levels of shading or dimming on the display by implementing a mask, in accordance with embodiments. Diagram 900a shows an example of an input mask layer 902 that is applied over a display image 904. The transparent mask 902 includes an area 912 that is not masked. The display image 904 may be an example of a Windows user interface, with areas 906, 908 that show windows controlled by two different applications.

The display image 904 is dimmed by adding an input mask layer 902 with different transparency levels, with a combination of identifying region(s), such as areas 906, 908, to go under the mask layer for dimming, and region(s), such as area 912, to go above the mask layer for full visibility to the graphics composition system. The blended result is shown in composition 920, with areas 906a, 908a dimmed but still accessible by the user, for example if the user were to mouse-click in the areas, and an area 912a undimmed. In other embodiments, the areas 906a, 908a may not be available to the user, for example if the user were to mouse-click in those areas. In embodiments, an undimmed area may be referred to as a focus area.
The composition system and the underlying graphics system of the computer system and the display do not need any changes.

Modifying the final composited desktop surface is one way to dim the display. Mathematically, applying a mask works as the dim function below. However, there is an alternative algorithm based on the blend formula:

dim(pixel) = pixel * α = blend(0, pixel, 1 - α), where blend(a, b, x) = a * x + b * (1 - x)

Pixel blending is a common graphics operation in modern graphics systems. By adding a mask layer 902 as input with alpha (α) transparency, any level of dimming effect may be achieved at the final composition stage 920. Note that in embodiments the different transparency levels may be represented by the alpha (α) of each pixel, and the alpha value of different pixels could be different.

Diagram 900b shows an example of an opaque mask 932, with cut-out area 942. In embodiments, if the mask layer 932 is fully opaque, then all layers 936, 938 below it do not need a second rendering pass. Thus, not only is the content of the masked area invisible, but the actual rendering operation could be skipped. As a result, in embodiments, adding an input mask layer may bring extra graphics computation power saving which cannot be achieved in the final composition stage 950. In this example, only the area 942a would need to be rendered and updated. Other areas only need one-time rendering, and no update is needed because they are kept as a darker color or black.

These and other embodiments allow many ways to define undimmed and dimmed regions by user inputs or software presets. The undimmed regions, such as an active application window(s) or predefined fixed or moving areas on the display, will be defined as voids of the mask layer to allow full visibility, and the rest of the areas are to be dimmed, either partially or completely. The input determination from the user may include touch devices, mouse, keyboard, voice control, eye tracking, system power policies, etc.
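The dim/blend identity in the formula above can be checked numerically. This short sketch implements both sides of the identity exactly as written, with 0 standing for black; the function names are taken from the formula, not from any actual graphics API.

```python
# Numerical check of the identity above: dimming a pixel by alpha equals
# blending it with black (0) using blend factor (1 - alpha), where
# blend(a, b, x) = a*x + b*(1 - x) as given in the text.

def blend(a, b, x):
    """blend(a, b, x) = a*x + b*(1 - x)."""
    return a * x + b * (1 - x)


def dim(pixel, alpha):
    """dim(pixel) = pixel * alpha."""
    return pixel * alpha


pixel, alpha = 180.0, 0.25
dimmed = dim(pixel, alpha)              # direct dimming
blended = blend(0.0, pixel, 1 - alpha)  # same result via the blend path
```

Because blending is already a standard operation in the composition pipeline, the identity means dimming can be obtained for free by blending the desktop against a black mask layer, rather than by a dedicated dimming pass.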
as described with respect to Figure 1.

The output stage dimming described with respect to Figures 1-8 tracks the focus area and/or dimmed area. In contrast, this partial panel screen dimming, which may also be referred to as input stage dimming, does not need to maintain this information (focus region and/or dimmed region) explicitly. The defined dimmed region(s) under the mask layer are automatically dimmed, and the defined undimmed region(s) above the mask layer are automatically undimmed. Furthermore, the depth (layer) of the windows is managed by existing algorithms built into the OS. For example, the dimming area selection can be achieved by normal application window activation and deactivation. Because embodiments may only define a mask and voids to separate the undimmed and dimmed areas, without capturing any display content, concerns for privacy protection may be lessened. In addition, on systems where the output buffer is protected, output stage dimming may not be easily achieved unless low-level driver or hardware changes are involved.

In embodiments, implementations of partial panel screen dimming work at the application level. User interactions within a dimmed display area can naturally be received and further processed. In contrast with output stage dimming, neither the low-level graphics driver nor the desktop composition module would need to take care of these interactions.

With respect to Figure 9, embodiments that implement partial panel screen dimming include two primary functionalities: first, to define an input mask layer and manage its transparency and depth; second, to monitor and manage user interactions within the dimming area. The input mask can be within a single layer or distributed to multiple layers. The shape of the mask can be arbitrary, e.g., it does not need to be a square area.

Figure 10 shows an example of a mask applied to dim areas of the display, with various levels of user interaction based on the transparency of the mask, in accordance with embodiments.
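The input-mask compositing described above can be sketched as a per-pixel alpha mask blended over the desktop image. This is an assumption-laden illustration using grayscale values and invented names; a real implementation would supply the mask as an extra layer to the existing composition system rather than blending in Python.

```python
# Illustrative sketch of the input-mask approach: a black mask layer with
# per-pixel alpha is composited over the desktop image. Alpha 0.0 marks a
# void (focus area, fully visible); higher alphas dim toward black.

def apply_mask(image, mask_alpha):
    """Composite a black mask with per-pixel alpha over a grayscale image.

    image and mask_alpha are 2D lists of the same shape; alpha 0 leaves a
    pixel untouched, alpha 1 turns it black."""
    return [
        [pixel * (1.0 - a) for pixel, a in zip(img_row, a_row)]
        for img_row, a_row in zip(image, mask_alpha)
    ]


image = [[100.0, 200.0], [150.0, 250.0]]
mask = [[0.0, 0.5], [1.0, 0.0]]  # void, half-dim, fully masked, void
out = apply_mask(image, mask)
```

Note that the mask only defines where dimming happens; it never reads the pixel values it covers, which is the privacy advantage discussed in the text.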
User interface 1050 shows a computer screen with multiple applications running and includes an application window 1052 on top of the background application windows. User interface 1054 shows a computer screen similar to interface 1050; however, a transparent mask 1056 has been applied, where the mask 1056 has an open area 1052 to allow the top application to be viewed without dimming. In embodiments, both the top application and background applications may be selected by a user, for example by using a keyboard or a mouse.

User interface 1058 shows a computer screen similar to interface 1050; however, an opaque mask 1060 has been applied, where the mask 1060 has an open area 1052 to allow the top application to be viewed without dimming. In embodiments, only the top application is able to be viewed through the open area 1052, and may be selected and interacted with by a user.

Note that in embodiments, a region of the user interface 1050/1054/1058 may include areas, for example the upper right-hand corner, for a user to double-click to exit the application of the mask. In other embodiments, there may be features, such as an auto-hide slider bar, that may be used to adjust dimming of the non-focus areas.

Figure 11 shows various examples of masks applied to dim areas of the display, in accordance with embodiments. Screen 1102 shows a selected or active application 1104 as the focus display only, with the rest of the display dimmed. Screen 1106 shows a defined box area 1108 as the focus display only, with the rest of the display dimmed. Screen 1110 shows an application window 1112 partially dimmed, where the white/brighter areas are also dimmed. The application window icon bar and other non-user-interactive regions can be dimmed to a darker color or black.

Screen 1114 shows a window 1116 where the window size has shrunk and is displayed only, with the other areas being dimmed.
The shrunk display position can be anywhere on the panel.

Figure 12 shows examples of displays with masks having different levels of transparency to dim areas of the display, in accordance with embodiments. Screen 1202 is shown not dimmed, with application 1204 running in a top window. Screen 1206 shows a mask 1208 applied at a 50% transparency, leaving application 1204 with focus and normal brightness. Screen 1210 shows a mask 1212 applied at a 100% level, or opaque, leaving application 1204 with focus and normal brightness. The dimming areas can be one or multiple regions, and do not have to follow an application window's focus; they could be anywhere on the screen. For example, they can also be on the edges.

Figure 13 shows an example of a computer with two displays where masks are used to dim areas of the two displays, in accordance with embodiments. Computer 1302 includes two displays, an upper display 1304 and a lower display 1306. Computer 1308 shows the upper display 1304 with a mask 1310 applied to completely dim the upper display 1304. Computer 1312 shows a mask 1314 applied to the upper display 1304 to dim the upper display 1304 except for application 1316. Computer 1318 shows a mask 1314 applied to the upper display 1304, and a second mask 1320 applied to the lower display 1306 to cause only the application in the window 1322 to be visible. The display remaining on can also be partially dimmed.

Figure 14 shows another example of a computer with two displays where masks are used to dim areas of the two displays, in accordance with embodiments. Diagram 1400a shows a computer 1402 that includes two displays, an upper display 1404 and a lower display 1406. Diagram 1400b shows an opaque mask 1408 applied to the lower display 1406 so that only the upper display 1404 can be seen.
Diagram 1400c shows an opaque mask 1410 applied to a portion of the upper display 1404, so that only a top portion of the upper display 1404 may be viewed.

Figure 15 shows an example process flow to dim a display using programmatic hardware commands, in accordance with embodiments. Process 1500 shows an example of a process that requires specialized hardware, as described with respect to Figures 1-8 above. After program hardware commands are received, an inquiry is made whether partial dimming is enabled. If it is enabled, a dimming shader is applied and the resulting images are submitted to hardware.

In Figures 16-17, green boxes represent actions added to portions of embodiments described herein. Figure 16 shows an example process flow for applying a mask layer to implement dimming on a display, in accordance with embodiments. In contrast to process 1500, process 1600 is a high-level overview process for implementing one or more embodiments of partial panel screen dimming using software. After the process 1600 starts, a determination is made whether partial dimming is enabled. If it is enabled, then an input mask layer is added, and the resulting composition moves to the hardware composition block. The input mask can be within a single layer or distributed across multiple layers.

Figure 17 shows a detailed process flow for dimming and area selection for a multiple focus window and a single focus window, in accordance with embodiments. Process 1700a represents a common application workflow, with which processes 1700b and 1700c may interact. Process 1700b includes a set of actions to be taken in order to make a dimming area selection for a multiple focus window mode, for example where multiple areas of the display are active and accessible by the user. Process 1700c includes a set of actions to be taken in order to make a dimming area selection for a single focus window mode. Process 1700a may begin with a normal application window.
The process may listen for user input, then receive and process that input. The input may be received after the results of process 1700b or 1700c. In embodiments, the switch between multiple window mode and single window mode may be done through a user preference setting, for example a design switch button on a user interface. The application window may subsequently be deactivated by losing focus, and subsequently enter an idle state. Subsequent to the idle state, the application window may be activated by capturing focus again, or the application may exit.

Process 1700b is an embodiment of a dimming area selection for multiple focus window mode. The process may start a full-screen and transparent window mask, which may be similar to mask 902 or 932 of Figure 9. Subsequently, the process may deactivate the mask. Subsequently, the process may listen for mouse clicks. The process may subsequently capture a mouse down event and open a mouse tunnel. Subsequently, the process may forward the mouse down event to bottom layers. Subsequently, the process may activate an application below the mask which got focus. In embodiments, this application is identified based on mouse location. Subsequently, the application is brought above the mask, activated, and made visible to the user. Subsequently, the process may close the mouse tunnel upon a mouse up event, and then go back to the action of listening for mouse clicks, ending the dimming area selection process for the above application. Subsequently, the event handling process for the activated application is restored to normal.

Process 1700c is an embodiment of a dimming area selection for a single focus window mode. The process may start a full-screen and transparent window mask, which may be similar to mask 902 or 932 of Figure 9. Subsequently, the process may deactivate the mask. Subsequently, the process may listen for mouse clicks. Subsequently, the process may capture a mouse down event.
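The mouse-tunnel activation flow of process 1700b can be sketched in a compact form. Here the window-manager operations (forwarding an event below the mask, raising a window above it) are modeled as plain operations on a z-ordered stack; the class and method names are illustrative assumptions, not an API from the embodiments.

```python
class DimmingMask:
    """Toy model of a full-screen mask and the mouse-tunnel selection flow."""

    def __init__(self, windows):
        # z-order: index 0 is the bottom; the mask starts on top of all windows
        self.stack = list(windows) + ["mask"]
        self.tunnel_open = False

    def on_mouse_down(self, window_under_cursor):
        # open a "mouse tunnel" and forward the event below the mask;
        # the application under the cursor is identified by mouse location
        self.tunnel_open = True
        # activate that application and bring it above the mask (undimmed)
        self.stack.remove(window_under_cursor)
        self.stack.append(window_under_cursor)

    def on_mouse_up(self):
        # close the tunnel and go back to listening for mouse clicks
        self.tunnel_open = False

m = DimmingMask(["editor", "browser"])
m.on_mouse_down("editor")
m.on_mouse_up()
# "editor" now sits above the mask and is visible to the user
```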
Subsequently, the process may bring the mask itself to the topmost position, so that open windows are masked. Subsequently, the process may open a mouse tunnel. Subsequently, the process may forward the mouse down event to bottom layers. Subsequently, the application below the mask which got focus is activated. Subsequently, the application may be brought above the mask, so that only a single window is undimmed after this stage, which is also the input-focused window and the activated window. Subsequently, the process may close the mouse tunnel on a mouse up event. Subsequently, the process returns to the action of listening for mouse clicks, and the dimming area selection is finished for the above application. Subsequently, the event handling of the activated application is restored to normal.

Figure 18 shows an example of power savings expectations for non-focus areas that are partially or fully dimmed, in accordance with embodiments. Diagram 1800a shows a display 1802 that has an active area 1804 that is not dimmed, but where other non-focus areas 1806 on the display 1802 are dimmed at a 50% level, or where the mask is at a 50% transparency. In this example, the base brightness is between 105 nits and 395 nits, with power savings of 1.86 watts (W) to 6.8W. Diagram 1800b shows a display 1808 that has an active area 1804 that is not dimmed, but where other non-focus areas 1810 on the display 1808 are dimmed at a 100% level, where the mask is opaque. In this example, the base brightness is between 105 nits and 395 nits, with power savings of 2.37W to 8.64W. This example included a test configuration of an OLED 4K 15.6” single display. The power savings included approximately 50% or more of the backlight power (1.86W-8.64W), and an approximately 30% system battery life extension. A different panel may give different savings values. With respect to expected or estimated power saving examples, the following may apply.
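One rough estimate of the kind behind Figure 18 can be computed with a simple proportional model. The linear relationship (savings proportional to dimmed-area fraction and dim level) and the 10 W full-brightness backlight figure are illustrative assumptions; the wattages quoted in the text came from an instrumented panel, not from this model.

```python
def estimated_savings_w(backlight_w, dimmed_fraction, dim_level):
    """Crude linear estimate of backlight power saved.

    backlight_w:     backlight power at full brightness (assumed)
    dimmed_fraction: fraction of the screen area that is dimmed
    dim_level:       0.0 (no dimming) to 1.0 (opaque mask)
    """
    return backlight_w * dimmed_fraction * dim_level

full_backlight_w = 10.0   # assumed full-brightness backlight power
savings_50 = estimated_savings_w(full_backlight_w, 0.8, 0.5)    # 50% dim level
savings_100 = estimated_savings_w(full_backlight_w, 0.8, 1.0)   # opaque mask
```

On an OLED panel, where each pixel emits its own light, power scales roughly with pixel brightness, which is what makes even a first-order model like this directionally useful.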
When the partial dimming is applied in a browsing scenario where more than one Internet Explorer (IE) browser window is opened, the topmost IE window is in focus while the rest of the screen is not. The rest of the screen, including the out-of-focus IE windows, is dimmed. Experimental analysis compares the panel backlight power for a 15.6” 4K OLED panel set to 105 nits and 395 nits, respectively, for two sets of scenarios, with the backlight instrumented for power measurement. For each OLED panel brightness setting, three experiments were done. The first experiment does not apply partial dimming. The second experiment applies 100% dimming to the out-of-focus area. The third experiment applies 50% dimming to the out-of-focus area. Results show that 100% dimming gives >50% panel backlight power savings and 50% dimming gives >40% panel backlight power savings on this OLED panel, with an average >50% savings.

Figure 19 shows a process for partial panel screen dimming, in accordance with embodiments. Process 1900 may be performed using hardware, software, and techniques described herein with respect to Figures 1-18. At block 1902, the process may include identifying one or more areas of the display to be focus areas in response to received user input. At block 1904, the process may further include identifying one or more areas of the display to be dimmed in response to the received user input. At block 1906, the process may further include applying one or more mask layers to the input graphics of the display, wherein the one or more mask layers correspond to the focus areas and the areas to be dimmed, and wherein the one or more mask layers include a transparency value for each pixel, the transparency value being used to dim the pixels on the display to reduce power consumption of the display.

Other techniques may be used for dimming the backlight outside a region of focus.
These techniques may be implemented as a multistage operation, and may be related to the processes described with respect to Figure 17.

First stage. In the first stage, the pixel values outside the region of focus are changed. This is done during the desktop composition stage. One of the inputs to the dimming operation is the dim level, which the user can configure. The user can also specify whether foveated dimming is desired; in this case, the user can specify a dim gradient so that non-uniform dimming can be achieved. The algorithm can be extended to use other inputs beyond the above. The region of focus can be user-specified and fixed. It could also be determined by active windows, without explicit input, for a natural user experience. For every pixel in the desktop composited surface that is outside the region of focus, the shader or the algorithm uses the aforementioned inputs to change the <R, G, B> color components of the pixel so that they are darker than they were before this operation. This ends the first stage.

Second stage. In the second stage, the panel backlight is adjusted based on the frame that is displayed. On per-pixel backlight panels, such as OLED panels, backlight adjustment is done individually by the panel. Based on the pixel value, the backlight is adjusted in such a way that the user does not notice the change. For darker values, the backlight can be reduced individually to a greater extent. On global backlight panels, such as LCD panels, which do not support a per-pixel panel backlight adjustment, power saving features like Intel Display Power Saving Technology (DPST) or Content Adaptive Brightness Control (CABC) can be used.
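A compact, hedged sketch of the two stages: darken the out-of-focus pixels during composition, then derive a global backlight scale from the share of dark pixels in the frame. All names, the 0-255 pixel model, the dark cutoff, the threshold, and the reduced scale factor are illustrative assumptions, not details taken from DPST or CABC.

```python
# Stage 1: darken the <R, G, B> components of pixels outside the focus region.
def darken(rgb, dim):
    return tuple(round(c * (1.0 - dim)) for c in rgb)

# Stage 2 (global backlight panels): if the share of dark pixels in the
# displayed frame meets a set threshold, reduce the global backlight.
DARK_CUTOFF = 32            # assumed: component sum below this counts as dark
DARK_SHARE_THRESHOLD = 0.5  # assumed trigger threshold

def backlight_scale(frame, reduced=0.7):
    dark = sum(sum(px) < DARK_CUTOFF for px in frame)
    return reduced if dark / len(frame) >= DARK_SHARE_THRESHOLD else 1.0

focus = {0}                 # indices of in-focus pixels (assumed, for brevity)
frame = [(200, 200, 200), (40, 40, 40), (20, 20, 20), (10, 10, 10)]
composited = [px if i in focus else darken(px, 0.9)
              for i, px in enumerate(frame)]
# out-of-focus pixels are now dark, so the backlight can be reduced
```

The point of the sketch is the coupling between the stages: stage 1 alone produces the darker frame, and it is that frame which pushes the dark-pixel share over the stage-2 threshold.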
In DPST or CABC, depending on the percentage of dark pixels in the frame being displayed, and upon meeting a set threshold for that percentage, either the display hardware (in the case of DPST) or the timing controller (TCON) in the panel (in the case of CABC) changes the backlight settings of the panel in such a way that the user does not notice the change when the backlight is reduced. This ends the second stage. In either case, whether the panel is OLED or LCD, the first stage causes the pixels outside the region of focus to be darker, which triggers the backlight reduction in the panel, causing partial panel dimming and ultimately bringing the panel backlight power savings. The same techniques can be used for a system that has more than one display.

Figure 20 shows a non-transitory computer readable storage medium that includes instructions to implement one or more processes to cause partial panel screen dimming. Diagram 2000 shows a non-transitory computer readable storage medium 2002, which may be implemented in embodiments described herein. For example, the computer readable storage medium may be stored within computing system 800 of Figure 8, in particular the memory 804 or other storage 818.
The computer readable storage medium 2002 may contain programming instructions 2004 that may be executed by processor 802 of Figure 8.

ADDITIONAL EXAMPLES

Each of the following non-limiting examples may stand on its own, or may be combined in various permutations or combinations with one or more of the other examples.

Example 1 is a method, comprising: determining one or more areas of a display to remain active in response to received user input; determining one or more areas of the display to be dimmed in response to the received user input; and dimming the one or more areas of the display to be dimmed to reduce a power consumption of the display.

Example 2 is the subject matter of Example 1, wherein the received user input comprises at least one of: cursor information received from a mouse; keystroke information received from a keyboard; touch information received from a touch screen; a position of the eyes of the user indicating a location on the display where the user is looking; a voice command from the user; manual input received from a user; or a power policy setting of an electronic device including the display.

Example 3 is the subject matter of any one or more of Examples 1-2, wherein the display comprises a plurality of pixels, and wherein dimming the one or more areas of the display to be dimmed comprises dimming at least some of the pixels of the display in the one or more areas of the display to be dimmed.

Example 4 is the subject matter of any one or more of Examples 1-3, wherein dimming at least some of the pixels of the display in the one or more areas of the display to be dimmed comprises changing a color of at least some of the plurality of pixels of the display in the one or more areas of the display to be dimmed.

Example 5 is the subject matter of any one or more of Examples 1-4, wherein changing a color of at least some of the plurality of pixels of the display in the one or more areas of the display to be dimmed comprises changing the color to black
or a darker color.

Example 6 is the subject matter of any one or more of Examples 1-5, wherein the display includes a plurality of displays and wherein dimming one or more areas of the display to be dimmed comprises dimming one or more areas on each of the plurality of displays.

Example 7 is the subject matter of any one or more of Examples 1-6, wherein dimming one or more areas on each of the plurality of displays comprises turning off one or more of the plurality of displays.

Example 8 is the subject matter of any one or more of Examples 1-7, further comprising enabling and disabling dimming the one or more areas of the display to be dimmed in response to received user input.

Example 9 is a non-transitory machine-readable medium storing a program executable by at least one processing unit of an electronic device including a display, the program comprising sets of instructions for: determining one or more areas of the display to remain active in response to received user input; determining one or more areas of the display to be dimmed in response to the received user input; and dimming the one or more areas of the display to be dimmed to reduce a power consumption of the display.

Example 10 is the subject matter of Example 9, wherein the program comprises a set of instructions in a desktop composition module of the electronic device.

Example 11 is the subject matter of any one or more of Examples 9-10, wherein the electronic device executes the Windows operating system, and wherein the desktop composition module comprises the Desktop Window Manager (DWM) of the Windows operating system.

Example 12 is the subject matter of any one or more of Examples 9-11, wherein the program comprises a set of instructions in a graphics driver of the electronic device.

Example 13 is the subject matter of any one or more of Examples 9-12, wherein the program further comprises a set of instructions of a plugin of the graphics driver.

Example 14 is the subject matter of any one or more of Examples
9-13, wherein the plugin comprises a set of instructions for receiving, from an operating system of the electronic device, the received user input.

Example 15 is a system, comprising: one or more displays; a set of processors; and a non-transitory computer-readable medium storing a set of instructions that when executed by at least one processor in the set of processors cause the at least one processor to: determine one or more areas of the one or more displays that are to remain active in response to user input; determine one or more areas of the one or more displays that are to be dimmed in response to the user input; and dim the one or more areas of the one or more displays to be dimmed to reduce a power consumption of the one or more displays.

Example 16 is the subject matter of Example 15, wherein the set of instructions stored in the non-transitory computer-readable medium comprises instructions in a desktop composition module of the system.

Example 17 is the subject matter of any one or more of Examples 15-16, wherein the non-transitory computer-readable medium stores instructions of the Windows operating system, and wherein the desktop composition module comprises the Desktop Window Manager (DWM) of the Windows operating system.

Example 18 is the subject matter of any one or more of Examples 15-17, wherein the set of instructions stored in the non-transitory computer-readable medium further comprises a set of instructions of a graphics driver of the system.

Example 19 is the subject matter of any one or more of Examples 15-18, wherein the set of instructions stored in the non-transitory computer-readable medium includes a plugin of the graphics driver.

Example 20 is the subject matter of any one or more of Examples 15-19, wherein the graphics driver includes a dimming shader program and the dimming shader program includes the plugin comprising a set of instructions for receiving, from an operating system of the system, the user input.

The above description illustrates
various embodiments of the present disclosure along with examples of how aspects of the particular embodiments may be implemented. The above examples should not be deemed to be the only embodiments and are presented to illustrate the flexibility and advantages of the particular embodiments covered by the following claims. Based on the embodiments described in the present disclosure, other arrangements, embodiments, implementations and equivalents may be employed without departing from the scope of the present disclosure. |
A single chip active memory includes a plurality of memory stripes, each coupled to a full word interface and to one of a plurality of processing element (PE) sub-arrays. The large number of couplings between a PE sub-array and its associated memory stripe are managed by placing the PE sub-arrays so that their data paths run at a right angle to the data paths of the plurality of memory stripes. The data lines exiting the memory stripes are run across the PE sub-arrays on one metal layer. At the appropriate locations, the data lines are coupled to another, orthogonally oriented metal layer to complete the coupling between the memory stripe and its associated PE sub-array. The plurality of PE sub-arrays are mapped to form a large logical array, in which each PE is coupled to four other PEs. Physically distant PEs are coupled using current mode differential logic couplings and drivers to ensure good signal integrity at high operational speeds. Each PE contains a small DRAM register array.
1. A memory device comprising: a substrate, said substrate having integrated thereon, a full word interface; a plurality of memory stripes, each of said memory stripes coupled to the full word interface; a plurality of processing elements; said plurality of processing elements being physically organized into a plurality of arrays, the plurality of arrays having an array order, each of said arrays having a plurality of sub-arrays, said plurality of sub-arrays having a sub-array order and including at least a first sub-array and a second sub-array, wherein the processing elements contained in each of the sub-arrays are coupled to the same memory stripe; and wherein the processing elements of the sub-arrays are coupled to each other via a logical mapping to form a logical array of processing elements, in which each processing element of the logical array is coupled to four other processing elements of the logical array. 2. The memory device of claim 1, wherein the logical mapping further comprises, for each one of the plurality of sub-arrays, coupling the processing elements to form a line of processing elements. 3. The memory device of claim 2, wherein the mapping further comprises mapping each sub-array as a row of the logical array. 4. The memory device of claim 3, wherein the mapping further comprises mapping the sub-arrays in accordance with the sub-array order as rows of the logical array. 5. The memory device of claim 3, wherein the logical mapping further comprises: mapping, in accordance with the array order, a first set of sub-arrays taken from the first sub-arrays of the plurality of arrays as a first set of rows of the logical array; and mapping, in accordance with reverse array order, a second set of sub-arrays taken from the second sub-arrays of the plurality of arrays as a second set of rows of the logical array. 6. The memory device of claim 2, wherein the mapping further comprises mapping each sub-array as a column of the logical array. 7.
The memory device of claim 6, wherein the sub-arrays are divided, in array order, into a plurality of sections. 8. The memory device of claim 7, wherein the plurality of sections comprise a first quarter, a second quarter, a third quarter, and a fourth quarter; and wherein the mapping further comprises: mapping, in accordance with the sub-array order, the first quarter as a first set of columns of the logical array; mapping, in accordance with reverse sub-array order, the third quarter of sub-arrays as a second set of columns of the logical array; mapping, in accordance with reverse sub-array order, the second quarter of sub-arrays as a third set of columns of the logical array; and mapping, in accordance with the sub-array order, the fourth quarter of sub-arrays as a fourth set of columns of the logical array. 9. The memory device of claim 8, further comprising: electrically coupling processing elements of the first sub-array of the first quarter to processing elements of the last sub-array of the fourth quarter; electrically coupling processing elements of the last sub-array of the first quarter to processing elements of the last sub-array of the third quarter; electrically coupling processing elements of the first sub-array of the third quarter to processing elements of the last sub-array of the second quarter; and electrically coupling processing elements of the first sub-array of the second quarter to processing elements of the first sub-array of the fourth quarter. 10. The memory device of claim 1, wherein the logical mapping further comprises, for each one of the plurality of sub-arrays, coupling the processing elements to form a rectangular array. 11. The memory device of claim 10, wherein the mapping further comprises mapping each sub-array as a rectangular region of the logical array. 12.
The memory device of claim 11, wherein the mapping further comprises mapping the sub-arrays in accordance with the sub-array order to form columns of rectangular regions of the logical array. 13. The memory device of claim 1, wherein the plurality of processing elements each further comprise: an arithmetic logic unit; a register file; and an interconnect cell, said interconnect cell coupling the processing element to a memory stripe and to other processing elements. 14. The memory device of claim 13, wherein said register file is a dynamic random access memory (DRAM). 15. The memory device of claim 14, wherein the dynamic random access memory contains at least 64 bits of data storage. 16. The memory device of claim 13, wherein said interconnect cell further comprises: a pair of signal lines; and a differential driver, said differential driver coupled to the pair of signal lines. 17. The memory device of claim 16, wherein said differential driver is a current mode logic differential driver. 18. The memory device of claim 1, further comprising: a plurality of memory data paths, each of said plurality of memory data paths coupled to one of the plurality of memory stripes; a plurality of sub-array data paths, each of said plurality of sub-array data paths coupled to one of the plurality of sub-arrays; wherein the sub-arrays are oriented so that the sub-array data paths run at a right angle to the memory data paths. 19. The memory device of claim 18, wherein the plurality of memory data paths are formed on a first metal layer and the plurality of sub-array data paths are formed on a second metal layer, said first metal layer having an orthogonal orientation to said second metal layer. 20.
A computer system, comprising: a central processing unit; and a memory device, said memory device coupled to the central processing unit and being formed on a substrate, said substrate having integrated thereon, a full word interface; a plurality of memory stripes, each of said memory stripes coupled to the full word interface; a plurality of processing elements; said plurality of processing elements being physically organized into a plurality of arrays, the plurality of arrays having an array order, each of said arrays having a plurality of sub-arrays, said plurality of sub-arrays having a sub-array order and including at least a first sub-array and a second sub-array, wherein the processing elements contained in each of the sub-arrays are coupled to the same memory stripe; and wherein the processing elements of the sub-arrays are coupled to each other via a logical mapping to form a logical array of processing elements, in which each processing element of the logical array is coupled to four other processing elements of the logical array. 21. The computer system of claim 20, wherein the logical mapping further comprises, for each one of the plurality of sub-arrays, coupling the processing elements to form a line of processing elements. 22. The computer system of claim 21, wherein the mapping further comprises mapping each sub-array as a row of the logical array. 23. The computer system of claim 22, wherein the mapping further comprises mapping the sub-arrays in accordance with the sub-array order as rows of the logical array. 24. The computer system of claim 22, wherein the logical mapping further comprises: mapping, in accordance with the array order, a first set of sub-arrays taken from the first sub-arrays of the plurality of arrays as a first set of rows of the logical array; and mapping, in accordance with reverse array order, a second set of sub-arrays taken from the second sub-arrays of the plurality of arrays as a second set of rows of the logical array. 25.
The computer system of claim 21, wherein the mapping further comprises mapping each sub-array as a column of the logical array. 26. The computer system of claim 25, wherein the sub-arrays are divided into a plurality of sections. 27. The computer system of claim 26, wherein the plurality of sections comprise a first quarter, a second quarter, a third quarter, and a fourth quarter; and wherein the mapping further comprises: mapping, in accordance with the sub-array order, the first quarter as a first set of columns of the logical array; mapping, in accordance with reverse sub-array order, the third quarter as a second set of columns of the logical array; mapping, in accordance with reverse sub-array order, the second quarter as a third set of columns of the logical array; and mapping, in accordance with the sub-array order, the fourth quarter as a fourth set of columns of the logical array. 28. The computer system of claim 27, further comprising: electrically coupling processing elements of the first sub-array of the first quarter to processing elements of the last sub-array of the fourth quarter; electrically coupling processing elements of the last sub-array of the first quarter to processing elements of the last sub-array of the third quarter; electrically coupling processing elements of the first sub-array of the third quarter to processing elements of the last sub-array of the second quarter; and electrically coupling processing elements of the first sub-array of the second quarter to processing elements of the first sub-array of the fourth quarter. 29. The computer system of claim 20, wherein the logical mapping further comprises, for each one of the plurality of sub-arrays, coupling the processing elements to form a rectangular array. 30. The computer system of claim 29, wherein the mapping further comprises mapping each sub-array as a rectangular region of the logical array. 31.
The computer system of claim 30, wherein the mapping further comprises mapping the sub-arrays in accordance with the sub-array order to form columns of rectangular regions of the logical array. 32. The computer system of claim 20, wherein the plurality of processing elements each further comprise: an arithmetic logic unit; a register file; and an interconnect cell, said interconnect cell coupling the processing element to a memory stripe and to other processing elements. 33. The computer system of claim 32, wherein said register file is a dynamic random access memory (DRAM). 34. The computer system of claim 33, wherein the dynamic random access memory contains at least 64 bits of data storage. 35. The computer system of claim 32, wherein said interconnect cell further comprises: a pair of signal lines; and a differential driver, said differential driver coupled to the pair of signal lines. 36. The computer system of claim 35, wherein said differential driver is a current mode differential logic driver. 37. The computer system of claim 20, further comprising: a plurality of memory data paths, each of said plurality of memory data paths coupled to one of the plurality of memory stripes; a plurality of sub-array data paths, each of said plurality of sub-array data paths coupled to one of the plurality of sub-arrays; wherein the sub-arrays are oriented so that the sub-array data paths run at a right angle to the memory data paths. 38. The computer system of claim 37, wherein the plurality of memory data paths are formed on a first metal layer and the plurality of sub-array data paths are formed on a second metal layer, said first metal layer having an orthogonal orientation to said second metal layer. 39.
A memory device comprising: a substrate, said substrate having integrated thereon, an interface, said interface for communicating, at a same time, a plurality of bits from a same word of memory between the memory device and an external device; a plurality of memory stripes, each of said memory stripes coupled to the interface; a plurality of processing elements; said plurality of processing elements being physically organized into a plurality of arrays, the plurality of arrays having an array order, each of said arrays having a plurality of sub-arrays, said plurality of sub-arrays having a sub-array order and including at least a first sub-array and a second sub-array, wherein the processing elements contained in each of the sub-arrays are coupled to the same memory stripe; and wherein the processing elements of the sub-arrays are coupled to each other via a logical mapping to form a logical array of processing elements, in which each processing element of the logical array is coupled to four other processing elements of the logical array. 40. The memory device of claim 39, wherein the interface communicates at a same time every bit from a same word of memory.
FIELD OF THE INVENTION

The present invention relates to the field of massively parallel processing systems, and more particularly to the interconnection among processing elements and between processing elements and memory in a single chip massively parallel processor chip.

BACKGROUND OF THE INVENTION

The fundamental architecture used by all personal computers (PCs) and workstations is generally known as the von Neumann architecture, illustrated in block diagram form in FIG. 1. In the von Neumann architecture, a main central processing unit (CPU) 10 is coupled via a system bus 11 to a memory 12. The memory 12, referred to herein as "main memory", contains the data on which the CPU 10 operates. In modern computer systems, a hierarchy of cache memories is usually built into the system to reduce the amount of traffic between the CPU 10 and the main memory 12.

The von Neumann approach is adequate for low to medium performance applications, particularly when some system functions can be accelerated by special purpose hardware (e.g., 3D graphics accelerator, digital signal processor (DSP), video encoder or decoder, audio or music processor, etc.). However, the approach of adding accelerator hardware is limited by the bandwidth of the link from the CPU/memory part of the system to the accelerator. The approach may be further limited if the bandwidth is shared by more than one accelerator. Thus, the processing demands of large data sets, such as those commonly associated with large images, are not served well by the von Neumann architecture. Similarly, as the processing becomes more complex and the data larger, the processing demands will not be met even with the conventional accelerator approach.

It should be noted, however, that the von Neumann architecture has some advantages. For example, the architecture contains a homogeneous memory structure allowing large memories to be built from many smaller standard units. 
In addition, because the processing is centralized, it does not matter where the data (or program) resides in the memory. Finally, the linear execution model is easy to control and exploit. Today's operating systems control the allocation of system memory and other resources using these properties. The problem is how to improve processing performance in a conventional operating system environment where multiple applications share and partition the system resources, and in particular, the main memory.

One solution is to utilize active memory devices, as illustrated in FIG. 2, in the computer system. Put simply, active memory is memory that can do more than store data; it can process it too. To the CPU 10 the active memory 15 looks normal except that it can be told to do something with the data contents without the data being transferred to the CPU or another part of the system (via the system bus 11). This is achieved by distributing an array 14 of processing elements (PEs) 200 throughout the memory structure, which can all operate on their own local pieces of memory in parallel. The array 14 of PEs 200 is coupled to the memory 12 via a high speed connection network 13. In addition, PEs 200 of the array 14 can communicate with each other. Thus, active memory encourages a somewhat different view of the computer architecture, i.e., "memory centered" or viewed from the data rather than the processor.

In a computer system having active memory, such as illustrated in FIG. 2, the work of the CPU 10 is reduced to the operating system tasks, such as scheduling processes and allocating system resources and time. Most of the data processing is performed within the memory 15. By having a very large number of connections between the main memory 12 and the processing resources, i.e., the array 14 of PEs 200, the bandwidth for moving data in and out of memory 12 is greatly increased. 
A large number of parallel processors can be connected to the memory 12 and can operate on their own area of memory independently. Together these two features can provide very high performance.

There are several different topologies for parallel processors. One example topology is commonly referred to as SIMD (single instruction, multiple data). The SIMD topology contains many processors, all executing the same stream of instructions simultaneously, but on their own (locally stored) data. The active memory approach is typified by SIMD massively parallel processor (MPP) architectures. In the SIMD MPP, a very large number (for example, one thousand) of relatively simple PEs 200 are closely connected to a memory and organized so that each PE 200 has access to its own piece of memory. All of the PEs 200 execute the same instruction together, but on different data.

The SIMD MPP has the advantage that the control overheads of the system are kept to a minimum, while maximizing the processing and memory access bandwidths. SIMD MPPs, therefore, have the potential to provide very high performance very efficiently. Moreover, the hardware consists of many fairly simple repeating elements. Since the PEs 200 are quite small in comparison to a reduced instruction set computer (RISC), they are easy to implement into a system design and their benefit with respect to optimization is multiplied by the number of processing elements. In addition, because the PEs 200 are simple, it is possible to clock them fast without resorting to deep pipelines.

In a massively parallel processor array, the design of the interconnections among the processing elements and the interconnections between the PEs 200 and the memory 12 is an important feature. Traditional massively parallel processors utilize a plurality of semiconductor chips for the processor element array 14 and the memory 12. The chips are connected via a simple network of wires. However, as shown in FIG. 
3, advances in semiconductor technology now permit a SIMD massively parallel processor with a memory to be integrated onto a single active memory chip 100. Since signals which are routed within a semiconductor chip can travel significantly faster than inter-chip signals, the single chip active memory 100 has the potential of operating significantly faster than a prior art SIMD MPP. However, achieving high speed operation requires more than merely integrating the elements of a traditional prior art SIMD MPP into one active memory chip 100. For example, careful consideration must be given to the way the PEs 200 of the PE array 14 are wired together, since this affects the length of the interconnections between the PEs 200 (thereby affecting device speed), the mapping of the memory as seen by the PEs 200, the power consumed to drive the interconnection network, and the cost of the active memory chip 100. Accordingly, there is a desire and need for an affordable high speed SIMD MPP active memory chip with an optimized interconnection arrangement between the PEs.

SUMMARY OF THE INVENTION

In one aspect, the present invention is directed to a single chip active memory with a SIMD MPP. The active memory chip contains a full word interface, a memory in the form of a plurality of memory stripes, and a PE array in the form of a plurality of PE sub-arrays. The memory stripes are arranged between and coupled to both the plurality of PE sub-arrays and the full word interface. Each PE sub-array is coupled to the full word interface and a corresponding memory stripe. In order to route the numerous couplings between a memory stripe and its corresponding PE sub-array, the PE sub-array is placed so that its data path is orthogonal to the orientation of the memory stripes. 
The data lines of the PE sub-arrays are formed on one metal layer and coupled to the memory stripe data lines which are formed on a different metal layer having an orthogonal orientation.

In another aspect of the present invention, the PEs each contain a small register file constructed as a small DRAM array. Small DRAM arrays are sufficiently fast to serve as a register file and utilize less power and semiconductor real estate than traditional SRAM register files.

In another aspect of the invention, the PE array of the active memory chip is formed by coupling the plurality of PE sub-arrays into a single logical array in accordance with a mapping technique. The mapping techniques of the invention include mapping each PE sub-array into the logical array as a row (optionally with row interleaving), a rectangular region, or a column. Each PE of the logical array is coupled to four other PEs along its (logical) north, south, east, and west axis. PEs which are located at the corners or along the edges of the logical array have couplings along their exterior edges which wrap around the array to opposite corner and edge PEs, respectively. Depending on the mapping, some PEs may be coupled to other PEs which are (physically) distant, and the present invention uses current mode differential logic couplings and drivers for its long distance PE-to-PE couplings.

BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and other advantages and features of the invention will become more apparent from the detailed description of the preferred embodiments of the invention given below with reference to the accompanying drawings in which:

FIG. 1 illustrates in block diagram form a conventional von Neumann computer architecture;

FIG. 2 illustrates in block diagram form the architecture of a computer system with an active memory;

FIG. 3 illustrates in block diagram form the layout of a single chip active memory system;

FIG. 4 illustrates in block diagram form a processing element;

FIG. 
5 illustrates the logical array formed by mapping processing element sub-arrays;

FIGS. 6, 7, 8, and 9 illustrate different mapping techniques which can be used to form the logical array of FIG. 5; and

FIG. 10 illustrates how different metal layers can be used to couple the I/O lines of the memory stripes to the I/O lines of the processing element sub-arrays.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Now referring to the drawings, where like reference numerals designate like elements, there is shown in FIG. 3 a block diagram of a single chip active memory 100. The active memory chip 100 contains several components integrated onto a substrate 103, including a plurality of 8*8 PE arrays 15-0-15-15, a plurality of memory areas formed as stripes S00-S07, S10-S17, S20-S27, S30-S37, SA0-SA3, and a full word interface 101.

As shown in FIG. 4, the PEs 200 include an arithmetic logic unit (ALU) 201. In the exemplary embodiment, the ALU 201 is an 8-bit integer ALU, but ALUs of different types may also be used. Suitable ALUs may include, for example, 1-bit and 32-bit integer ALUs, or 32-bit floating point ALUs. The ALU 201 is coupled to a register file 202 and an interconnect cell 203.

The register file 202 needs to be small, fast, and low powered, since prior art register files typically occupy approximately one third of the total area of the PE 200 and account for approximately 75% of its power consumption. In the exemplary embodiment, a dynamic random access memory (DRAM) is used to form the register file. DRAM is not ordinarily used to form register files because it is normally considered to be too slow and requires periodic refreshing. However, in the present context, DRAM offers several advantages. The register file 202 is a very small memory. For example, register file 202 may have only 64 locations. A small DRAM array has very short word lines and can be operated at high speeds. 
Additionally, DRAM refreshes can be controlled by simple logic without adversely affecting the processing throughput of the PE. This is a consequence of the SIMD processing of the active memory chip 100, since every PE 200 of the active memory chip performs the same processing at any given time. Thus, whenever there is an opportunity to refresh the DRAM which makes up the register file 202 of any PE 200, every DRAM register file 202 can also be simultaneously refreshed. Since DRAM cells are smaller and use fewer transistors than SRAM cells, the use of a small DRAM for the register file 202 permits high speed operation with low power consumption, and occupies less space than a traditional SRAM register file.

The PE 200's ALU 201 is also coupled to an interconnect cell 203. The interconnect cell 203 is used to couple the PE 200 to four other PEs 200 via connections 205 and to a memory stripe S00-S07, S10-S17, S20-S27, S30-S37 of the active memory chip 100 via a connection 206. The connections 205, 206 are bidirectional communication links. Output data is driven onto the connections 205, 206 via drivers 204. The connections 205 and drivers 204 may be of differing types. The PEs 200 are ordinarily coupled to other PEs 200 which are physically close. Near distance couplings use single ended connections driven by CMOS drivers, in order to reduce power consumption. Thus, in most instances, the connection 205 is one single ended signal line and the driver 204 is a CMOS driver. However, some of the PE-to-PE connections 205 will need to traverse a significant distance. At high clock frequencies, CMOS drivers and single ended connections may not be capable of driving signals over a long distance without significant degradation in signal integrity. For these connections, the present invention uses a pair of signal lines coupled to a differential driver. 
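The choice between the two connection styles described above can be sketched as a simple rule on routing length. This is an illustrative sketch only; the threshold value and all names are assumptions, not taken from the patent:

```python
# Hypothetical sketch: pick a driver/connection style for a PE-to-PE link
# from its physical routing length. The threshold and names are assumed.

def select_driver(route_length_um, threshold_um=2000):
    """Short links use a single-ended line with a CMOS driver (low power);
    long links use a differential pair with a current mode logic driver
    (signal integrity at high clock rates)."""
    if route_length_um <= threshold_um:
        return ("cmos", "single-ended")               # one signal line
    return ("current-mode-differential", "pair")      # two signal lines

print(select_driver(500))    # a near-neighbor link
print(select_driver(15000))  # a link spanning much of the chip
```

In such a scheme most links stay cheap single-ended CMOS, and only the few chip-spanning links pay the area and power cost of a differential pair.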
In the exemplary embodiment, long distance PE-to-PE couplings are implemented with current mode differential logic drivers.

In the exemplary embodiment, the active memory chip 100 includes one thousand twenty-four PEs 200 which are physically distributed over the sixteen 8*8 PE arrays 15-0-15-15. Each of the sixteen 8*8 PE arrays contains sixty-four PEs 200, which are physically arranged in an 8*8 format, and can be further subdivided into two 8*4 sub-arrays 15-0a-15-15a, 15-0b-15-15b. Collectively, as shown in FIG. 5, the PEs 200 contained within the thirty-two sub-arrays 15-0a-15-15b are wired to form a single 32*32 logical array 14, in which each PE 200 is capable of communicating with four logically adjacent PEs in its north, south, east, and west directions. PEs 200 which are located on the periphery of the logical array 14 will have one (for PEs located along the edges) or two (for corner PEs) communication links which wrap around the logical array 14, thereby permitting each PE 200 to communicate with four other PEs 200. In addition to the interconnection between PEs 200, each PE 200 is also coupled to a portion of the memory of the active memory chip 100 via a plurality of buses 102. In the exemplary embodiment, each 8*4 sub-array of PEs 15-0a-15-15b is coupled via buses 102 to a memory stripe S00-S07, S10-S17, S20-S27, S30-S37 (described below) located near each sub-array.

The memory of the active memory chip 100 includes a plurality of memory stripes S00-S07, S10-S17, S20-S27, S30-S37, SA0-SA3. In the exemplary embodiment, the active memory chip 100 is a 144 Mbit chip which contains 128 Mbit of data storage and 16 Mbit of additional storage. The 128 Mbit of data storage is evenly distributed across thirty-two 4 Mbit memory stripes S00-S07, S10-S17, S20-S27, S30-S37. The thirty-two memory stripes may be organized into first S00-S07, second S10-S17, third S20-S27, and fourth S30-S37 groups. 
The 16 Mbit of additional storage is evenly distributed across four additional stripes SA0-SA3 and may be used to store parity or error correction codes. The use of additional storage for parity or error correction purposes is well known in the art and will not be further described or illustrated in order to avoid obscuring the invention.

The memory stripes S00-S07, S10-S17, S20-S27, S30-S37 are each coupled to one of the 8*4 sub-arrays 15-0a-15-15b and the full word interface 101. Since the 8*4 sub-arrays 15-0a-15-15b are located on the opposite side from the full word interface 101, the memory stripes S00-S07, S10-S17, S20-S27, S30-S37 have two sets of sense amplifiers and repair logic. One set of sense amplifiers and repair logic is located near the full word interface 101 and the other set is located near the 8*4 sub-arrays 15-0a-15-15b. The coupling of a memory stripe S00-S07, S10-S17, S20-S27, S30-S37 to an 8*4 sub-array 15-0a-15-15b is performed by a set of four 64-bit buses 102. Each of the four 64-bit wide buses is coupled to one column of the corresponding 8*4 PE sub-array 15-0a-15-15b. Each of the eight PEs 200 in a row of the 8*4 PE sub-array 15-0a-15-15b is associated with a respective 8-bits of that 64-bit bus. This mechanism of connecting the memory stripes to the 8*4 PE sub-arrays 15-0a-15-15b maintains the same distance between each 8*4 PE sub-array 15-0a-15-15b and its associated memory stripe S00-S07, S10-S17, S20-S27, S30-S37.

Physically wiring the memory stripes S00-S07, S10-S17, S20-S27, S30-S37 to their associated 8*4 PE sub-arrays 15-0a-15-15b requires a large number of connections. For example, the groups of four 64-bit buses 102 each require 256 data lines. Referring now to FIG. 10, the present invention wires the memory stripes S00-S07, S10-S17, S20-S27, S30-S37 to the PE sub-arrays 15-0a-15-15b by routing memory stripe I/O lines 10-1 to a first metal layer 10-2, running them towards the 8*4 PE sub-arrays 15-0a-15-15b. 
When the memory stripe I/O lines 10-1 approach an appropriate PE 200, vias 10-3 are used to couple the memory stripe I/O lines 10-1 to sub-array I/O lines 10-4. The sub-array I/O lines 10-4 are located on a second metal layer 10-5, which has the I/O lines 10-4 oriented orthogonally to the I/O lines 10-1. To facilitate this routing mechanism, the 8*4 PE sub-arrays 15-0a-15-15b are placed so that the sub-array I/O lines 10-4 run at right angles to the memory stripe I/O lines 10-1.

The active memory chip 100's interface 101 is a full word width interface. The use of a full word width interface, which permits a single chip to store a plurality of words, is important in an active memory system because the active memory system needs to efficiently satisfy the needs of both an external user such as CPU 10 and the logical array 14 of PEs 200. Memory chips which do not contain a full word interface are typically assembled onto a memory module wherein each memory chip stores a subset of bits corresponding to a word of memory. Such arrangements are unsuitable for efficient processing by the logical array 14 of PEs 200 because they would require the PEs 200 to perform off-chip communications in order to process data organized in the word order of the external CPU 10. In the exemplary embodiment, the active memory chip 100 utilizes a SLDRAM interface or a RAMBUS interface. Both the SLDRAM and RAMBUS memory devices use 16-bit interfaces and store data corresponding to bits 0-7 in the first S00-S07 and third S20-S27 groups of memory stripes, and data corresponding to bits 8-15 in the second S10-S17 and fourth S30-S37 groups of memory stripes. The requirement for efficiently satisfying the processing requirements of both the array 14 of PEs 200 and the external CPU 10 is a concern which affects the design of the interconnections among the PEs 200.

As shown in FIG. 5, the logical array 14 is a 32*32 lattice of PEs 200. 
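The wrap-around neighbor scheme of the 32*32 logical array of FIG. 5 amounts to a 2-D torus, which can be sketched as follows (a minimal illustration; the function name and coordinate convention are assumed):

```python
# Minimal sketch of the 32*32 logical array's neighbor scheme: every PE
# has north/south/east/west neighbors, with edge and corner links
# wrapping around to the opposite side of the array (a 2-D torus).
# Coordinates are (row, col); n is the array dimension.

def torus_neighbors(row, col, n=32):
    return {
        "north": ((row - 1) % n, col),
        "south": ((row + 1) % n, col),
        "west":  (row, (col - 1) % n),
        "east":  (row, (col + 1) % n),
    }

# An interior PE has ordinary neighbors; a corner PE wraps on two axes.
assert torus_neighbors(5, 5)["north"] == (4, 5)
assert torus_neighbors(0, 0)["north"] == (31, 0)  # wraps to bottom row
assert torus_neighbors(0, 0)["west"] == (0, 31)   # wraps to last column
```

Every PE thus sees exactly four neighbors, which is what makes the periphery links wrap around rather than terminate.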
Although the PEs 200 are physically located in a plurality of 8*4 sub-arrays 15-0a-15-15b, this physical grouping is designed to facilitate connection of individual PEs to corresponding memory stripes S00-S07, S10-S17, S20-S27, S30-S37. The wiring scheme used to connect the PEs 200 within each 8*4 sub-array 15-0a-15-15b to each other, and to the PEs of other 8*4 sub-arrays 15-0a-15-15b to form the 32*32 logical array 14, is a separate matter. The present invention contemplates several embodiments for wiring the PEs 200 in each 8*4 array 15-0a-15-15b to form the 32*32 array 14.

FIGS. 6 and 7 show two similar memory mappings for constructing the 32*32 logical array 14. As illustrated in FIGS. 6 and 7, the thirty-two PEs 200 in each 8*4 sub-array 15-0a-15-15b may be wired so that each 8*4 sub-array 15-0a-15-15b represents one row of the 32*32 logical array 14. FIGS. 6 and 7 show where each 8*4 sub-array 15-0a-15-15b is mapped within the 32*32 logical array 14. For example, FIG. 6 shows that the thirty-two PEs 200 located in the 8*4 sub-array 15-0a form the first row of the 32*32 logical array 14, while the thirty-two PEs 200 located in the 8*4 sub-array 15-15b form the last row of the 32*32 array 14. In order for each PE 200 to be able to communicate with its four neighbors of the 32*32 logical array 14 (as shown in FIG. 5), the PEs 200 of the 8*4 sub-arrays 15-0a-15-15b are wired to each other. Some connections are short, since, for example, in the mapping illustrated by FIG. 6, PEs 200 from the 8*4 sub-array 15-0a (corresponding to the first row in the 32*32 logical array 14) are wired to a physically adjacent 8*4 PE sub-array 15-0b. Short connections can be driven using standard single ended CMOS drivers. Other connections are long; for example, in the mapping shown in FIG. 6, PEs from the 8*4 sub-array 15-0a are also wired to PEs 200 from the 8*4 sub-array 15-15b, which is located on the opposite side of the chip. 
As previously discussed, the long connections may require the use of special drivers and connections to ensure signal integrity at high speeds. The difference between the memory mappings of FIG. 6 and FIG. 7 is that FIG. 7 shows an interleaved arrangement in which sub-arrays 15-0a-15-15a are mapped as a first set of rows while sub-arrays 15-0b-15-15b are mapped as a second set of rows (while reversing the sub-array ordering for 15-0b-15-15b). The interleaved arrangement of FIG. 7 has a smaller maximum PE-to-PE connection distance. This is important since the speed of the active memory chip 100 is limited by the speed of its slowest component. Thus, the length of the longest connection is a limiting factor on the speed of the active memory chip 100.

The memory of the active memory chip 100 must efficiently service the processing requirements of both an external CPU 10 and the internal logical array 14 of PEs 200. For the memory mappings shown in FIGS. 6 and 7, the group of thirty-two PEs mapped into each row of the logical array 14 is connected to each bit of the 16-bit word of the active memory chip. However, since each 8*4 sub-array 15-0a-15-15b is only connected to one corresponding memory stripe S00-S07, S10-S17, S20-S27, S30-S37, this means that each memory stripe must contain and drive all 16 bits of the memory word through the full word interface 101. This places another limitation on the speed of the active memory chip since each stripe is required to have connections which span the entire width of the active memory chip 100.

FIG. 8 shows another way of mapping the 8*4 sub-arrays 15-0a-15-15b to form the 32*32 logical array 14. The memory mapping illustrated by FIG. 8 requires that each 8*4 sub-array 15-0a-15-15b be wired as a block of 8*4. The 8*4 sub-arrays are then connected to each other as shown in the figure. The memory mapping illustrated in FIG. 8 requires that each 8*4 sub-array be connected to one byte of data (i.e., either bits 0-7 or bits 8-15 of the word.) 
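The benefit of the interleaved row mapping of FIG. 7 over the plain mapping of FIG. 6 can be illustrated numerically. The sketch below assumes, purely for illustration, that the thirty-two sub-arrays sit side by side at physical positions 0 through 31 in the order 15-0a, 15-0b, 15-1a, and so on through 15-15b, with each sub-array hosting one logical row; the positions and functions are assumptions, not layout data from the patent:

```python
# Assumed model: 32 sub-arrays at physical positions 0..31, each hosting
# one logical row of the 32*32 array. pos[r] is the physical position of
# logical row r.

N = 32

def plain_rows():
    # FIG. 6 style: logical row r is the r-th sub-array in physical order.
    return list(range(N))

def interleaved_rows():
    # FIG. 7 style: rows 0..15 use the "a" halves (even positions);
    # rows 16..31 use the "b" halves (odd positions) in reversed order.
    return [2 * r for r in range(16)] + [31 - 2 * (r - 16) for r in range(16, 32)]

def max_adjacent_distance(pos):
    # Largest physical span between logically adjacent rows, including
    # the wrap-around link from the last row back to the first.
    return max(abs(pos[r] - pos[(r + 1) % N]) for r in range(N))

print(max_adjacent_distance(plain_rows()))        # 31: wrap-around spans the chip
print(max_adjacent_distance(interleaved_rows()))  # 2: every hop stays short
```

Under this toy model the plain mapping's wrap-around link spans the whole chip, while interleaving keeps every logical-neighbor hop within two sub-array positions, consistent with the text's point that interleaving shrinks the longest connection.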
In comparison to the mapping shown in FIGS. 6-7, this has the advantage of requiring each stripe to drive data, via the full word interface 101, along only a portion of the chip. This reduces interconnection lengths between the memory stripes and the full word interface 101; however, it also requires a large number of long interconnects between the PEs 200 of different 8*4 blocks. For example, 8*4 sub-arrays 15-0a-15-3b are wired to 8*4 sub-arrays 15-8a-15-11b and 8*4 sub-arrays 15-4a-15-7b are wired to 8*4 sub-arrays 15-12a-15-15b, respectively. Each of these connections spans half the width of the active memory chip 100.

FIG. 9 shows yet another way of mapping the 8*4 sub-arrays 15-0a-15-15b to form the 32*32 logical array 14. The thirty-two PEs 200 in each 8*4 sub-array 15-0a-15-15b are wired so that each 8*4 sub-array represents one column of the 32*32 logical array 14. Additionally, the memory mapping shown in FIG. 9 reverses the connection order in the second 15-4a-15-7b and third 15-8a-15-11b groups of sub-arrays in order to reduce the amount of required long interconnects.

In summary, the present invention is directed to a single active memory chip 100 containing a plurality of PEs 200 and a memory 12. In the exemplary embodiment, there are 1024 PEs 200 which are logically organized as a 32*32 logical array 14. The 1024 PEs 200 are physically organized into sixteen 8*8 PE arrays 15-0-15-15. Each 8*8 PE array is organized as two 8*4 sub-arrays 15-0a-15-15a, 15-0b-15-15b. In the exemplary embodiment, the active memory chip 100 has 128 Mbit of data storage organized as thirty-two 4 Mbit memory stripes S00-S07, S10-S17, S20-S27, S30-S37. Each of the 8*4 sub-arrays 15-0a-15-15b is coupled to one of the memory stripes S00-S07, S10-S17, S20-S27, S30-S37.

The PEs 200 of the active memory chip 100 include an ALU 201, a register file 202, and an interconnect cell 203. In the exemplary embodiment, the register file 202 is implemented using a small DRAM array. 
Small DRAM arrays are suitable for use as a register file because they use less power and are sufficiently fast. The interconnect cell 203 is the PE 200's interface to a memory stripe S00-S07, S10-S17, S20-S27, S30-S37 and to four other PEs 200.

The PEs 200 of the plurality of sub-arrays 15-0a-15-15b can be wired differently, as described above, in order to form the 32*32 logical array 14. The wiring will require some PEs 200 to communicate with physically distant PEs 200. In order to maintain signal integrity for these long distance connections, the exemplary embodiment utilizes current mode differential logic drivers for long distance signaling.

While certain embodiments of the invention have been described and illustrated above, the invention is not limited to these specific embodiments as numerous modifications, changes and substitutions of equivalent elements can be made without departing from the spirit and scope of the invention. Accordingly, the scope of the present invention is not to be considered as limited by the specifics of the particular structures which have been described and illustrated, but is only limited by the scope of the appended claims.
Example devices and methods are presented for timer-based access for audio streaming and rendering. For example, a device configured to play one or more of a plurality of audio streams includes a memory configured to store timing information and the plurality of audio streams. The device also includes one or more processors coupled to the memory. The one or more processors are configured to control access to at least one of the plurality of audio streams based on the timing information. |
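A minimal sketch of the timer-based access control described in the abstract: a stream becomes selectable once the current time has reached its start time and remains selectable while its stated duration has not elapsed. The AudioStream fields, names, and the exact comparison rule are illustrative assumptions rather than the claimed implementation:

```python
# Hypothetical sketch of timer-based stream selection. The timing
# information here is a start time plus a duration per stream; the
# field names and the selection rule are assumed for illustration.

from dataclasses import dataclass

@dataclass
class AudioStream:
    name: str
    start_time: float  # when the stream first contains audio content
    duration: float    # how long the stream carries content

def select_streams(streams, current_time):
    selected = []
    for s in streams:
        elapsed = current_time - s.start_time
        if 0 <= elapsed < s.duration:  # started, and not yet expired
            selected.append(s.name)
    return selected

streams = [
    AudioStream("lobby", start_time=0.0, duration=60.0),
    AudioStream("stage", start_time=30.0, duration=60.0),
]
print(select_streams(streams, 10.0))  # only "lobby" has started
print(select_streams(streams, 45.0))  # both streams are active
```

Excluded streams (for example, those behind privacy zones or not yet started) would simply never appear in the selected subset, which matches the idea of controlling access rather than deleting streams.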
WHAT IS CLAIMED IS:1. A device configured to play one or more of a plurality of audio streams comprising:a memory configured to store timing information and the plurality of audio streams; and one or more processors coupled to the memory, and configured to control access to at least one of the plurality of audio streams based on the timing information.2. The device of claim 1, wherein the memory is further configured to store location information associated with coordinates of an acoustical space in which a corresponding one of the plurality of audio streams was captured or synthesized.3. The device of claim 1, wherein the one or more processors are configured to control access to the at least one of the plurality of audio streams by selecting a subset of the plurality of audio streams, the subset of the plurality of audio streams excluding at least one of the plurality of audio streams.4. The device of claim 3, wherein the excluded streams are associated with one or more privacy zones.5. The device of claim 4, wherein the one or more processors are further configured to:determine an authorization level for a user;compare the authorization level for the user to an authorization level of the one or more privacy zones; and select the subset of the plurality of audio streams based on the comparison.6. The device of claim 3, wherein the one or more processors are further configured to:obtain, from a user, an override request to add at least one excluded audio stream of the plurality of audio streams; and based upon the override request, add the at least one excluded audio stream for a limited time period.
7. The device of claim 1, wherein the one or more processors are configured to control access to the at least one of the plurality of audio streams by not downloading or receiving at least one of the plurality of audio streams based on the timing information.8. The device of claim 1, wherein the timing information comprises a start time of when at least one of the plurality of audio streams includes audio content.9. The device of claim 8, wherein the one or more processors are configured to: compare the start time to a current time; and select, when the start time is equal to or greater than the current time, a subset of the plurality of audio streams.10. The device of claim 1, wherein the timing information comprises a duration of at least one of the plurality of audio streams.11. The device of claim 10, wherein the one or more processors are configured to: compare the duration to a timer; and select, when the duration is equal to or greater than the timer, a subset of the plurality of audio streams.12. The device of claim 1, wherein the one or more processors are configured to: obtain from a user a request for one of a plurality of ambisonic soundfield types; and reproduce corresponding soundfields, based on the request for the one of a plurality of ambisonic soundfield types, and the plurality of audio streams or a subset of the plurality of audio streams, wherein the plurality of ambisonic soundfield types comprises at least two of first order ambisonic soundfield (FOA), higher order ambisonic soundfield (HOA), and mixed order ambisonic soundfield (MOA).13. The device of claim 1, wherein the timing information comprises a delay and wherein the one or more processors are further configured to:detect a trigger;compare the delay to a timer; and wait until the delay is equal to or greater than the timer to select a subset of the plurality of audio streams.14. 
The device of claim 1, wherein the one or more processors are further configured to combine at least two of the plurality of audio streams by at least one of mixing or interpolation or another variant of soundfield manipulation.15. The device of claim 1, wherein the one or more processors are further configured to change a gain of one or more of the plurality of audio streams.16. The device of claim 1, further comprising a display device.17. The device of claim 16, further comprising a microphone, wherein the one or more processors are further configured to receive a voice command from the microphone and control the display device based on the voice command.18. The device of claim 1, further comprising one or more speakers.19. The device of claim 1, wherein the device comprises an extended reality headset, and wherein an acoustical space comprises a scene represented by video data captured by a camera.20. The device of claim 1, wherein the device comprises an extended reality headset, and wherein an acoustical space comprises a virtual world.21. The device of claim 1, further comprising a head-mounted display configured to present an acoustical space.22. The device of claim 1, wherein the device comprises one of a mobile handset or a vehicle.23. The device of claim 1, further comprising a wireless transceiver, the wireless transceiver being coupled to the one or more processors and being configured to receive a wireless signal.24. A method of playing one or more of a plurality of audio streams comprising: storing, by a memory, timing information and the plurality of audio streams; and controlling access to at least one of the plurality of audio streams based on the timing information.25. The method of claim 24, further comprising storing location information associated with coordinates of an acoustical space in which a corresponding one of the plurality of audio streams was captured or synthesized.26. 
The method of claim 24, wherein the controlling access to the at least one of the plurality of audio streams comprises selecting a subset of the plurality of audio streams, the subset of the plurality of audio streams excluding at least one of the plurality of audio streams.

27. The method of claim 26, wherein the excluded streams are associated with one or more privacy zones.

28. The method of claim 27, further comprising: determining an authorization level for a user; comparing the authorization level for the user to an authorization level of the one or more privacy zones; and selecting the subset of the plurality of audio streams based on the comparison.

29. The method of claim 26, further comprising: obtaining, from a user, an override request to add at least one excluded audio stream of the plurality of audio streams; and based upon the override request, adding the at least one excluded audio stream for a limited time period.

30. The method of claim 24, wherein the controlling access to the at least one of the plurality of audio streams comprises not downloading or receiving at least one of the plurality of audio streams based on the timing information.

31. The method of claim 24, wherein the timing information comprises a start time of when at least one of the plurality of audio streams includes audio content.

32. The method of claim 31, further comprising: comparing the start time to a current time; and selecting, when the start time is equal to or greater than the current time, a subset of the plurality of audio streams.

33. The method of claim 24, wherein the timing information comprises a duration of at least one of the plurality of audio streams.

34. The method of claim 33, further comprising: comparing the duration to a timer; and selecting, when the duration is equal to or greater than the timer, a subset of the plurality of audio streams.

35.
The method of claim 24, further comprising: obtaining, from a user, a request for one of a plurality of ambisonic soundfield types; and reproducing corresponding soundfields, based on the request for the one of a plurality of ambisonic soundfield types, and the plurality of audio streams or a subset of the plurality of audio streams,
wherein the plurality of ambisonic soundfield types comprises at least two of first order ambisonic soundfield (FOA), higher order ambisonic soundfield (HOA), and mixed order ambisonic soundfield (MOA).

36. The method of claim 24, wherein the timing information comprises a delay, further comprising: detecting a trigger; comparing the delay to a timer; and waiting until the delay is equal to or greater than the timer before selecting a subset of the plurality of audio streams.

37. The method of claim 24, further comprising combining at least two of the plurality of audio streams by at least one of mixing or interpolation or another variant of soundfield manipulation.

38. The method of claim 24, further comprising changing a gain of one or more of the plurality of audio streams.

39. The method of claim 24, further comprising receiving, by a microphone, a voice command and controlling a display device based on the voice command.

40. The method of claim 24, further comprising outputting at least one of the plurality of audio streams to one or more speakers.

41. The method of claim 24, wherein an acoustical space comprises a scene represented by video data captured by a camera.

42. The method of claim 24, wherein an acoustical space comprises a virtual world.

43. The method of claim 24, further comprising presenting an acoustical space on a head-mounted device.

44. The method of claim 24, further comprising presenting an acoustical space on a mobile handset or in a vehicle.
45. The method of claim 24, further comprising receiving a wireless signal.

46. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause one or more processors to: store timing information and a plurality of audio streams; and control access to at least one of the plurality of audio streams based on the timing information.

47. A device configured to play one or more of a plurality of audio streams comprising: means for storing timing information and a plurality of audio streams; and means for controlling access to at least one of the plurality of audio streams based on the timing information.
TIMER-BASED ACCESS FOR AUDIO STREAMING AND RENDERING

[0001] This application claims priority to U.S. Patent Application No. 16/918,465, filed July 1, 2020, and U.S. Provisional Application No. 62/870,599, filed July 3, 2019, the entire contents of both of which are hereby incorporated by reference.

TECHNICAL FIELD

[0002] This disclosure relates to processing of media data, such as audio data.

BACKGROUND

[0003] Computer-mediated reality systems are being developed to allow computing devices to augment or add to, remove or subtract from, or generally modify existing reality experienced by a user. Computer-mediated reality systems (which may also be referred to as “extended reality systems” or “XR systems”) may include, as examples, virtual reality (VR) systems, augmented reality (AR) systems, and mixed reality (MR) systems. The perceived success of computer-mediated reality systems is generally related to the ability of such systems to provide a realistically immersive experience in terms of both the video and audio experience, where the video and audio experience align in ways expected by the user. Although the human visual system is more sensitive than the human auditory system (e.g., in terms of perceived localization of various objects within the scene), ensuring an adequate auditory experience is an increasingly important factor in ensuring a realistically immersive experience, particularly as the video experience improves to permit better localization of video objects that enable the user to better identify sources of audio content.

SUMMARY

[0004] This disclosure relates generally to auditory aspects of the user experience of computer-mediated reality systems, including virtual reality (VR), mixed reality (MR), augmented reality (AR), computer vision, and graphics systems. Various aspects of the techniques may provide for adaptive audio capture, synthesis, and rendering for extended reality systems.
As used herein, an acoustic environment is represented as either an indoor environment or an outdoor environment, or both an indoor environment and an outdoor environment. The acoustic environment may include one or more sub-acoustic
spaces that may include various acoustic elements. An example of an outdoor environment could include a car, buildings, walls, a forest, etc. An acoustical space may be an example of an acoustical environment and may be an indoor space or an outdoor space. As used herein, an audio element is either a sound captured by a microphone (e.g., directly captured from near-field sources or reflections from far-field sources whether real or synthetic), or a sound field previously synthesized, or a mono sound synthesized from text to speech, or a reflection of a virtual sound from an object in the acoustic environment.

[0005] In one example, various aspects of the techniques are directed to a device comprising a memory configured to store timing information and the plurality of audio streams; and one or more processors coupled to the memory and configured to control access to at least one of the plurality of audio streams based on the timing information.

[0006] In another example, various aspects of the techniques are directed to a method of playing one or more of a plurality of audio streams comprising: storing, by a memory, timing information and the plurality of audio streams; and controlling access to at least one of the plurality of audio streams based on the timing information.

[0007] In another example, various aspects of the techniques are directed to a device configured to play one or more of a plurality of audio streams, the device comprising: means for storing timing information and the plurality of audio streams; and means for controlling access to at least one of the plurality of audio streams based on the timing information.

[0008] In another example, various aspects of the techniques are directed to a non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause one or more processors to: store timing information and a plurality of audio streams; and control access to at least one of the plurality of audio streams based on the timing
information.

[0009] The details of one or more examples of this disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of various aspects of the techniques will be apparent from the description and drawings, and from the claims.

BRIEF DESCRIPTION OF DRAWINGS

[0010] FIGS. 1A-1C are diagrams illustrating systems that may perform various aspects of the techniques described in this disclosure.
[0011] FIG. 2 is a diagram illustrating an example of a VR device worn by a user.

[0012] FIGS. 3A-3E are diagrams illustrating, in more detail, example operation of the stream selection unit shown in the examples of FIGS. 1A-1C.

[0013] FIGS. 4A-4C are flowcharts illustrating example operation of the stream selection unit shown in the examples of FIGS. 1A-1C to control access to at least one of the plurality of audio streams based on timing information.

[0014] FIGS. 4D and 4E are diagrams further illustrating the use of timing information, such as timing metadata, in accordance with various aspects of the techniques described in this disclosure.

[0015] FIGS. 4F and 4G are diagrams illustrating the use of a temporary request for greater access in accordance with various aspects of the techniques described in this disclosure.

[0016] FIGS. 4H and 4I are diagrams illustrating an example of privacy zones provided in accordance with various aspects of the techniques described in this disclosure.

[0017] FIGS. 4J and 4K are diagrams illustrating the use of tiers of service of audio rendering in accordance with various aspects of the techniques described in this disclosure.

[0018] FIG. 4L is a state transition diagram illustrating state transitions in accordance with various aspects of the techniques described in this disclosure.

[0019] FIG. 4M is a diagram of a vehicle in accordance with various aspects of the techniques described in this disclosure.

[0020] FIG. 4N is a diagram of a moving vehicle in accordance with various aspects of the techniques described in this disclosure.

[0021] FIG. 4O is a flowchart illustrating example techniques of using authorization levels for controlling access to at least one of the plurality of audio streams based on timing information.

[0022] FIG. 4P is a flowchart illustrating example techniques of using a trigger and delay to control access to at least one of the plurality of audio streams based on timing information.

[0023] FIG.
5 is a diagram illustrating an example of a wearable device that may operate in accordance with various aspects of the techniques described in this disclosure.

[0024] FIGS. 6A and 6B are diagrams illustrating other example systems that may perform various aspects of the techniques described in this disclosure.
[0025] FIG. 7 is a block diagram illustrating example components of one or more of the source device and the content consumer device shown in the example of FIG. 1.

[0026] FIGS. 8A-8C are flowcharts illustrating example operation of the stream selection unit shown in the examples of FIGS. 1A-1C in performing various aspects of the stream selection techniques.

[0027] FIG. 9 is a conceptual diagram illustrating an example of a wireless communications system in accordance with aspects of the present disclosure.

DETAILED DESCRIPTION

[0028] Currently, rendering an XR scene with many audio sources, which may be obtained from audio capture devices in, for example, a live scene, may render audio sources that contain sensitive information better kept restricted or, if access is granted, granted only temporarily. According to the techniques of this disclosure, individual audio streams may be restricted from rendering or may be rendered on a temporary basis based on timing information, such as a time or a duration. Certain individual audio streams or clusters of audio streams may be enabled or disabled for a fixed duration for better audio interpolation. Accordingly, the techniques of this disclosure provide a flexible manner of controlling access to audio streams based on time.

[0029] There are a number of different ways to represent a soundfield. Example formats include channel-based audio formats, object-based audio formats, and scene-based audio formats. Channel-based audio formats refer to the 5.1 surround sound format, 7.1 surround sound formats, 22.2 surround sound formats, or any other channel-based format that localizes audio channels to particular locations around the listener in order to recreate a soundfield.

[0030] Object-based audio formats may refer to formats in which audio objects, often encoded using pulse-code modulation (PCM) and referred to as PCM audio objects, are specified in order to represent the soundfield.
Such audio objects may include location information, such as location metadata, identifying a location of the audio object relative to a listener or other point of reference in the soundfield, such that the audio object may be rendered to one or more speaker channels for playback in an effort to recreate the soundfield. The techniques described in this disclosure may apply to any of the following formats, including scene-based audio formats, channel-based audio formats, object-based audio formats, or any combination thereof.
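The timing-based access control summarized in paragraph [0028] (and elaborated in, e.g., claims 8-13) can be sketched in code. The following Python sketch is illustrative only, not the claimed implementation: the `AudioStream` class, its field names, and `select_accessible_streams` are hypothetical, and the interpretation of each comparison (start time reached, duration not elapsed, post-trigger delay expired) is one plausible reading of the claims.

```python
class AudioStream:
    """Hypothetical audio stream carrying the timing metadata described above."""
    def __init__(self, stream_id, start_time=None, duration=None, delay=None):
        self.stream_id = stream_id
        self.start_time = start_time  # time at which the stream begins to include audio content
        self.duration = duration      # how long the stream remains accessible on a timer
        self.delay = delay            # wait required after a detected trigger

def select_accessible_streams(streams, now, timer, trigger_timer=None):
    """Return the subset of streams whose timing information permits access."""
    subset = []
    for s in streams:
        # Start-time check (cf. claims 8-9): skip streams whose content has not started.
        if s.start_time is not None and now < s.start_time:
            continue
        # Duration check (cf. claims 10-11): skip streams whose access window has elapsed.
        if s.duration is not None and timer > s.duration:
            continue
        # Trigger/delay check (cf. claim 13): skip until the post-trigger delay expires.
        if s.delay is not None and (trigger_timer is None or trigger_timer < s.delay):
            continue
        subset.append(s)
    return subset
```

A stream excluded here would simply not be downloaded, received, or rendered, mirroring the "not downloading or receiving" language of claims 7 and 30.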
[0031] Scene-based audio formats may include a hierarchical set of elements that define the soundfield in three dimensions. One example of a hierarchical set of elements is a set of spherical harmonic coefficients (SHC). The following expression demonstrates a description or representation of a soundfield using SHC:

\[ p_i(t, r_r, \theta_r, \varphi_r) = \sum_{\omega=0}^{\infty} \left[ 4\pi \sum_{n=0}^{\infty} j_n(kr_r) \sum_{m=-n}^{n} A_n^m(k)\, Y_n^m(\theta_r, \varphi_r) \right] e^{j\omega t} \]

[0032] The expression shows that the pressure $p_i$ at any point $\{r_r, \theta_r, \varphi_r\}$ of the soundfield, at time $t$, can be represented uniquely by the SHC, $A_n^m(k)$. Here, $k = \omega/c$, $c$ is the speed of sound (~343 m/s), $\{r_r, \theta_r, \varphi_r\}$ is a point of reference (or observation point), $j_n(\cdot)$ is the spherical Bessel function of order $n$, and $Y_n^m(\theta_r, \varphi_r)$ are the spherical harmonic basis functions (which may also be referred to as a spherical basis function) of order $n$ and suborder $m$. It can be recognized that the term in square brackets is a frequency-domain representation of the signal (e.g., $S(\omega, r_r, \theta_r, \varphi_r)$) which can be approximated by various time-frequency transformations, such as the discrete Fourier transform (DFT), the discrete cosine transform (DCT), or a wavelet transform. Other examples of hierarchical sets include sets of wavelet transform coefficients and other sets of coefficients of multiresolution basis functions.

[0033] The SHC $A_n^m(k)$ can either be physically acquired (e.g., recorded) by various microphone array configurations or, alternatively, they can be derived from channel-based or object-based descriptions of the soundfield. The SHC (which also may be referred to as ambisonic coefficients) represent scene-based audio, where the SHC may be input to an audio encoder to obtain encoded SHC that may promote more efficient transmission or storage. For example, a fourth-order representation involving $(1+4)^2$ (25, and hence fourth order) coefficients may be used.

[0034] As noted above, the SHC may be derived from a microphone recording using a microphone array.
Various examples of how SHC may be physically acquired from microphone arrays are described in Poletti, M., “Three-Dimensional Surround Sound Systems Based on Spherical Harmonics,” J. Audio Eng. Soc., Vol. 53, No. 11, 2005 November, pp. 1004-1025.

[0035] The following equation may illustrate how the SHCs may be derived from an object-based description. The coefficients $A_n^m(k)$ for the soundfield corresponding to an individual audio object may be expressed as:

\[ A_n^m(k) = g(\omega)\,(-4\pi i k)\, h_n^{(2)}(kr_s)\, Y_n^{m*}(\theta_s, \varphi_s), \]
where $i$ is $\sqrt{-1}$, $h_n^{(2)}(\cdot)$ is the spherical Hankel function (of the second kind) of order $n$, and $\{r_s, \theta_s, \varphi_s\}$ is the location of the object. Knowing the object source energy $g(\omega)$ as a function of frequency (e.g., using time-frequency analysis techniques, such as performing a fast Fourier transform on the pulse code modulated (PCM) stream) may enable conversion of each PCM object and the corresponding location into the SHC $A_n^m(k)$. Further, it can be shown (since the above is a linear and orthogonal decomposition) that the $A_n^m(k)$ coefficients for each object are additive. In this manner, a number of PCM objects can be represented by the $A_n^m(k)$ coefficients (e.g., as a sum of the coefficient vectors for the individual objects). The coefficients may contain information about the soundfield (the pressure as a function of three dimensional (3D) coordinates), and the above represents the transformation from individual objects to a representation of the overall soundfield, in the vicinity of the observation point $\{r_r, \theta_r, \varphi_r\}$.

[0036] Computer-mediated reality systems (which may also be referred to as “extended reality systems” or “XR systems”) are being developed to take advantage of many of the potential benefits provided by ambisonic coefficients. For example, ambisonic coefficients may represent a soundfield in three dimensions in a manner that potentially enables accurate 3D localization of sound sources within the soundfield. As such, XR devices may render the ambisonic coefficients to speaker feeds that, when played via one or more speakers, accurately reproduce the soundfield.

[0037] As another example, the ambisonic coefficients may be translated or rotated to account for user movement without overly complex mathematical operations, thereby potentially accommodating the low latency requirements of XR devices.
In addition, the ambisonic coefficients are hierarchical and thereby naturally accommodate scalability through order reduction (which may eliminate ambisonic coefficients associated with higher orders), and thereby potentially enable dynamic adaptation of the soundfield to accommodate latency and/or battery requirements of XR devices.

[0038] The use of ambisonic coefficients for XR devices may enable development of a number of use cases that rely on the more immersive soundfields provided by the ambisonic coefficients, particularly for computer gaming applications and live video streaming applications. In these highly dynamic use cases that rely on low latency reproduction of the soundfield, the XR devices may prefer ambisonic coefficients over other representations that are more difficult to manipulate or involve complex rendering.
More information regarding these use cases is provided below with respect to FIGS. 1A-1C.

[0039] While described in this disclosure with respect to the VR device, various aspects of the techniques may be performed in the context of other devices, such as a mobile device. In this instance, the mobile device (such as a so-called smartphone) may present the acoustical space via a screen, which may be mounted to the head of the user 102 or viewed as would be done when normally using the mobile device. As such, any information on the screen can be part of the mobile device. The mobile device may be able to provide tracking information and thereby allow for both a VR experience (when head mounted) and a normal experience to view the acoustical space, where the normal experience may still allow the user to view the acoustical space, providing a VR-lite-type experience (e.g., holding up the device and rotating or translating the device to view different portions of the acoustical space).

[0040] FIGS. 1A-1C are diagrams illustrating systems that may perform various aspects of the techniques described in this disclosure. As shown in the example of FIG. 1A, system 10 includes a source device 12A and a content consumer device 14A. While described in the context of the source device 12A and the content consumer device 14A, the techniques may be implemented in any context in which any representation of a soundfield is encoded to form a bitstream representative of the audio data. Moreover, the source device 12A may represent any form of computing device capable of generating the representation of a soundfield, and is generally described herein in the context of being a VR content creator device.
Likewise, the content consumer device 14A may represent any form of computing device capable of implementing rendering techniques described in this disclosure as well as audio playback, and is generally described herein in the context of being a VR client device.

[0041] The source device 12A may be operated by an entertainment company or other entity that may generate mono and/or multi-channel audio content for consumption by operators of content consumer devices, such as the content consumer device 14A. In some VR scenarios, the source device 12A generates audio content in conjunction with video content. The source device 12A includes a content capture device 20, a content editing device 22, and a soundfield representation generator 24. The content capture device 20 may be configured to interface or otherwise communicate with a microphone 18.
[0042] The microphone 18 may represent an Eigenmike® or other type of 3D audio microphone capable of capturing and representing the soundfield as audio data 19, which may refer to one or more of the above noted scene-based audio data (such as ambisonic coefficients), object-based audio data, and channel-based audio data. Although described as being 3D audio microphones, the microphone 18 may also represent other types of microphones (such as omni-directional microphones, spot microphones, unidirectional microphones, etc.) configured to capture the audio data 19. Audio data 19 may represent an audio stream or include an audio stream.

[0043] The content capture device 20 may, in some examples, include an integrated microphone 18 that is integrated into the housing of the content capture device 20. The content capture device 20 may interface wirelessly or via a wired connection with the microphone 18. Rather than capture, or in conjunction with capturing, the audio data 19 via microphone 18, the content capture device 20 may process the audio data 19 after the audio data 19 is input via some type of removable storage, wirelessly, and/or via wired input processes. As such, various combinations of the content capture device 20 and the microphone 18 are possible in accordance with this disclosure.

[0044] The content capture device 20 may also be configured to interface or otherwise communicate with the content editing device 22. In some instances, the content capture device 20 may include the content editing device 22 (which in some instances may represent software or a combination of software and hardware, including the software executed by the content capture device 20 to configure the content capture device 20 to perform a specific form of content editing). The content editing device 22 may represent a unit configured to edit or otherwise alter content 21 received from content capture device 20, including the audio data 19.
The content editing device 22 may output edited content 23 and associated metadata 25 to the soundfield representation generator 24.

[0045] The soundfield representation generator 24 may include any type of hardware device capable of interfacing with the content editing device 22 (or the content capture device 20). Although not shown in the example of FIG. 1A, the soundfield representation generator 24 may use the edited content 23, including the audio data 19, and metadata 25 provided by the content editing device 22 to generate one or more bitstreams 27. In the example of FIG. 1A, which focuses on the audio data 19, the soundfield representation generator 24 may generate one or more representations of the same soundfield represented by the audio data 19 to obtain a bitstream 27 that includes the representations of the soundfield and the audio metadata 25.
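The idea in paragraph [0045] of carrying a soundfield representation together with its associated metadata in one bitstream can be illustrated schematically. The container layout below is purely hypothetical; the actual bitstream 27 would follow a codec-defined syntax (e.g., MPEG-H 3D Audio), and the function names and the JSON-plus-length framing are assumptions for illustration only.

```python
import json
import struct

def pack_bitstream(encoded_audio: bytes, metadata: dict) -> bytes:
    """Bundle metadata and an encoded soundfield payload into one byte blob.

    Illustrative layout: 4-byte big-endian metadata length, JSON-encoded
    metadata (e.g., timing information such as start time and duration),
    then the audio payload.
    """
    meta = json.dumps(metadata).encode("utf-8")
    return struct.pack(">I", len(meta)) + meta + encoded_audio

def unpack_bitstream(blob: bytes):
    """Recover (metadata, audio payload) from the illustrative container."""
    (meta_len,) = struct.unpack(">I", blob[:4])
    metadata = json.loads(blob[4:4 + meta_len].decode("utf-8"))
    return metadata, blob[4 + meta_len:]
```

A receiver could read the metadata portion first and apply the timing-based access checks before deciding whether to decode, or even download, the audio payload.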
[0046] For instance, to generate the different representations of the soundfield using ambisonic coefficients (which again is one example of the audio data 19), the soundfield representation generator 24 may use a coding scheme for ambisonic representations of a soundfield, referred to as Mixed Order Ambisonics (MOA), as discussed in more detail in U.S. Application Serial No. 15/672,058, entitled “MIXED-ORDER AMBISONICS (MOA) AUDIO DATA FOR COMPUTER-MEDIATED REALITY SYSTEMS,” filed August 8, 2017, and published as U.S. patent publication no. 20190007781 on January 3, 2019.

[0047] To generate a particular MOA representation of the soundfield, the soundfield representation generator 24 may generate a partial subset of the full set of ambisonic coefficients. For instance, each MOA representation generated by the soundfield representation generator 24 may provide precision with respect to some areas of the soundfield, but less precision in other areas. In one example, an MOA representation of the soundfield may include eight (8) uncompressed ambisonic coefficients, while the third order ambisonic representation of the same soundfield may include sixteen (16) uncompressed ambisonic coefficients. As such, each MOA representation of the soundfield that is generated as a partial subset of the ambisonic coefficients may be less storage-intensive and less bandwidth-intensive (if and when transmitted as part of the bitstream 27 over the illustrated transmission channel) than the corresponding third order ambisonic representation of the same soundfield generated from the ambisonic coefficients.

[0048] Although described with respect to MOA representations, the techniques of this disclosure may also be performed with respect to first-order ambisonic (FOA) representations in which all of the ambisonic coefficients associated with a first order spherical basis function and a zero order spherical basis function are used to represent the soundfield.
In other words, rather than represent the soundfield using a partial, non-zero subset of the ambisonic coefficients, the soundfield representation generator 24 may represent the soundfield using all of the ambisonic coefficients for a given order N, resulting in a total of ambisonic coefficients equaling $(N+1)^2$.

[0049] In this respect, the ambisonic audio data (which is another way to refer to the ambisonic coefficients in either MOA representations or full order representations, such as the first-order representation noted above) may include ambisonic coefficients associated with spherical basis functions having an order of one or less (which may be referred to as “1st order ambisonic audio data”), ambisonic coefficients associated with spherical basis
functions having a mixed order and suborder (which may be referred to as the “MOA representation” discussed above), or ambisonic coefficients associated with spherical basis functions having an order greater than one (which is referred to above as the “full order representation”).

[0050] The content capture device 20 or the content editing device 22 may, in some examples, be configured to wirelessly communicate with the soundfield representation generator 24. In some examples, the content capture device 20 or the content editing device 22 may communicate, via one or both of a wireless connection or a wired connection, with the soundfield representation generator 24. Via the connection between the content capture device 20 or the content editing device 22 and the soundfield representation generator 24, the content capture device 20 or the content editing device 22 may provide content in various forms, which, for purposes of discussion, are described herein as being portions of the audio data 19.

[0051] In some examples, the content capture device 20 may leverage various aspects of the soundfield representation generator 24 (in terms of hardware or software capabilities of the soundfield representation generator 24).
For example, the soundfield representation generator 24 may include dedicated hardware configured to (or specialized software that when executed causes one or more processors to) perform psychoacoustic audio encoding (such as a unified speech and audio coder denoted as “USAC” set forth by the Moving Picture Experts Group (MPEG), the MPEG-H 3D audio coding standard, the MPEG-I Immersive Audio standard, or proprietary standards, such as AptX™ (including various versions of AptX such as enhanced AptX (E-AptX), AptX live, AptX stereo, and AptX high definition (AptX-HD)), advanced audio coding (AAC), Audio Codec 3 (AC-3), Apple Lossless Audio Codec (ALAC), MPEG-4 Audio Lossless Streaming (ALS), enhanced AC-3, Free Lossless Audio Codec (FLAC), Monkey’s Audio, MPEG-1 Audio Layer II (MP2), MPEG-1 Audio Layer III (MP3), Opus, and Windows Media Audio (WMA)).

[0052] The content capture device 20 may not include the dedicated psychoacoustic audio encoder hardware or specialized software and instead may provide audio aspects of the content 21 in a non-psychoacoustic-audio-coded form. The soundfield representation generator 24 may assist in the capture of content 21 by, at least in part, performing psychoacoustic audio encoding with respect to the audio aspects of the content 21.

[0053] The soundfield representation generator 24 may also assist in content capture and transmission by generating one or more bitstreams 27 based, at least in part, on the audio
content (e.g., MOA representations and/or first order ambisonic representations) generated from the audio data 19 (in the case where the audio data 19 includes scene-based audio data). The bitstream 27 may represent a compressed version of the audio data 19 and any other different types of the content 21 (such as a compressed version of spherical video data, image data, or text data).

[0054] The soundfield representation generator 24 may generate the bitstream 27 for transmission, as one example, across a transmission channel, which may be a wired or wireless channel, a data storage device, or the like. The bitstream 27 may represent an encoded version of the audio data 19, and may include a primary bitstream and another side bitstream, which may be referred to as side channel information or metadata. In some instances, the bitstream 27 representing the compressed version of the audio data 19 (which again may represent scene-based audio data, object-based audio data, channel-based audio data, or combinations thereof) may conform to bitstreams produced in accordance with the MPEG-H 3D audio coding standard and/or the MPEG-I Immersive Audio standard.

[0055] The content consumer device 14 may be operated by an individual, and may represent a VR client device. Although described with respect to a VR client device, content consumer device 14 may represent other types of devices, such as an augmented reality (AR) client device, a mixed reality (MR) client device (or other XR client device), a standard computer, a headset, headphones, a mobile device (including a so-called smartphone), or any other device capable of tracking head movements and/or general translational movements of the individual operating the content consumer device 14. As shown in the example of FIG.
1A, the content consumer device 14 includes an audio playback system 16A, which may refer to any form of audio playback system capable of rendering the audio data for playback as mono and/or multi-channel audio content.

[0056] While shown in FIG. 1A as being directly transmitted to the content consumer device 14, the source device 12A may output the bitstream 27 to an intermediate device positioned between the source device 12A and the content consumer device 14A. The intermediate device may store the bitstream 27 for later delivery to the content consumer device 14A, which may request the bitstream 27. The intermediate device may include a file server, a web server, a desktop computer, a laptop computer, a tablet computer, a mobile phone, a smart phone, or any other device capable of storing the bitstream 27 for later retrieval by an audio decoder. The intermediate device may reside in a content delivery network capable of streaming the bitstream 27 (and possibly in conjunction with
transmitting a corresponding video data bitstream) to subscribers, such as the content consumer device 14, requesting the bitstream 27.[0057] Alternatively, the source device 12A may store the bitstream 27 to a storage medium, such as a compact disc, a digital video disc, a high definition video disc or other storage media, most of which are capable of being read by a computer and therefore may be referred to as computer-readable storage media or non-transitory computer-readable storage media. In this context, the transmission channel may refer to the channels by which content (e.g., in the form of one or more bitstreams 27) stored to the mediums are transmitted (and may include retail stores and other store-based delivery mechanisms). In any event, the techniques of this disclosure should not therefore be limited in this respect to the example of FIG. 1A.[0058] As noted above, the content consumer device 14 includes the audio playback system 16A. The audio playback system 16A may represent any system capable of playing back mono and/or multi-channel audio data. The audio playback system 16A may include a number of different renderers 32. The audio renderers 32 may each provide for a different form of rendering, where the different forms of rendering may include one or more of the various ways of performing vector-base amplitude panning (VBAP), and/or one or more of the various ways of performing soundfield synthesis. As used herein, “A and/or B” means “A or B”, or “both A and B”.[0059] The audio playback system 16A may further include an audio decoding device 34. The audio decoding device 34 may represent a device configured to decode bitstream 27 to output audio data 19’ (where the prime notation may denote that the audio data 19’ differs from the audio data 19 due to lossy compression, such as quantization, of the audio data 19).
Again, the audio data 19’ may include scene-based audio data that, in some examples, may form the full first (or higher) order ambisonic representation or a subset thereof that forms an MOA representation of the same soundfield, decompositions thereof, such as a predominant audio signal, ambient ambisonic coefficients, and the vector based signal described in the MPEG-H 3D Audio Coding Standard, or other forms of scene-based audio data.[0060] Other forms of scene-based audio data include audio data defined in accordance with an HOA (Higher Order Ambisonic) Transport Format (HTF). More information regarding the HTF can be found in a Technical Specification (TS) by the European Telecommunications Standards Institute (ETSI) entitled “Higher Order Ambisonics (HOA) Transport Format,” ETSI TS 103 589 V1.1.1, dated June 2018 (2018-06), and also
in U.S. Patent Publication No. 2019/0918028, entitled “PRIORITY INFORMATION FOR HIGHER ORDER AMBISONIC AUDIO DATA,” filed December 20, 2018. In any event, the audio data 19’ may be similar to a full set or a partial subset of the audio data 19, but may differ due to lossy operations (e.g., quantization) and/or transmission via the transmission channel.[0061] The audio data 19’ may include, as an alternative to, or in conjunction with, the scene-based audio data, object-based audio data and/or channel-based audio data. As such, the audio data 19’ may include any combination of scene-based audio data, object-based audio data, and channel-based audio data.[0062] The audio renderers 32 of audio playback system 16A may, after audio decoding device 34 has decoded the bitstream 27 to obtain the audio data 19’, render the audio data 19’ to output speaker feeds 35. The speaker feeds 35 may drive one or more speakers (which are not shown in the example of FIG. 1A for ease of illustration purposes). Various audio representations, including scene-based audio data (and possibly channel-based audio data and/or object-based audio data) of a soundfield may be normalized in a number of ways, including N3D, SN3D, FuMa, N2D, or SN2D.[0063] To select the appropriate renderer or, in some instances, generate an appropriate renderer, the audio playback system 16A may obtain speaker information 37 indicative of a number of speakers (e.g., loudspeakers or headphone speakers) and/or a spatial geometry of the speakers. In some instances, the audio playback system 16A may obtain the speaker information 37 using a reference microphone and may drive the speakers (which may refer to the output of electrical signals to cause a transducer to vibrate) in such a manner as to dynamically determine the speaker information 37.
In other instances, or in conjunction with the dynamic determination of the speaker information 37, the audio playback system 16A may prompt a user to interface with the audio playback system 16A and input the speaker information 37.[0064] The audio playback system 16A may select one of the audio renderers 32 based on the speaker information 37. In some instances, the audio playback system 16A may, when none of the audio renderers 32 are within some threshold similarity measure (in terms of the speaker geometry) to the speaker geometry specified in the speaker information 37, generate the one of the audio renderers 32 based on the speaker information 37. The audio playback system 16A may, in some instances, generate one of the audio
renderers 32 based on the speaker information 37 without first attempting to select an existing one of the audio renderers 32.[0065] When outputting the speaker feeds 35 to headphones, the audio playback system 16A may utilize one of the renderers 32 that provides for binaural rendering using head-related transfer functions (HRTF) or other functions capable of rendering to left and right speaker feeds 35 for headphone speaker playback, such as binaural room impulse response renderers. The terms “speakers” or “transducer” may generally refer to any speaker, including loudspeakers, headphone speakers, bone-conducting speakers, earbud speakers, wireless headphone speakers, etc. One or more speakers may then play back the rendered speaker feeds 35 to reproduce a soundfield.[0066] Although described as rendering the speaker feeds 35 from the audio data 19’, reference to rendering of the speaker feeds 35 may refer to other types of rendering, such as rendering incorporated directly into the decoding of the audio data from the bitstream 27. An example of the alternative rendering can be found in Annex G of the MPEG-H 3D Audio standard, where rendering occurs during the predominant signal formulation and the background signal formation prior to composition of the soundfield. As such, reference to rendering of the audio data 19’ should be understood to refer to rendering of either the actual audio data 19’ or decompositions or representations of the audio data 19’ (such as the above noted predominant audio signal, the ambient ambisonic coefficients, and/or the vector-based signal - which may also be referred to as a V-vector or as a multi-dimensional ambisonic spatial vector).[0067] The audio playback system 16A may also adapt the audio renderers 32 based on tracking information 41. That is, the audio playback system 16A may interface with a tracking device 40 configured to track head movements and possibly translational movements of a user of the VR device.
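The adaptation of the renderers to tracking information can be illustrated with a simplified sketch. One common technique for scene-based audio is to counter-rotate a first-order ambisonic soundfield by the tracked head yaw before rendering; note this sketch is illustrative only, not the procedure of any standard, and the channel ordering and sign conventions below are assumptions (conventions vary across ambisonic formats):

```python
import math

# Hedged sketch: rotate a first-order ambisonic frame about the
# vertical axis. This assumes X points to the front and Y to the left;
# W (omnidirectional) and Z (vertical) are invariant under yaw.
def rotate_foa_yaw(w, x, y, z, angle_rad):
    """Rotate the soundfield by angle_rad about the vertical axis."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    x_rot = x * c - y * s
    y_rot = x * s + y * c
    return w, x_rot, y_rot, z

def compensate_head_yaw(w, x, y, z, head_yaw_rad):
    # Counter-rotate so the rendered scene stays world-locked as the
    # listener's head turns.
    return rotate_foa_yaw(w, x, y, z, -head_yaw_rad)
```

A frontal plane wave (X component only) rotated by 90 degrees moves entirely into the lateral (Y) component, which is the world-locking behavior a head-tracked renderer relies on.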
The tracking device 40 may represent one or more sensors (e.g., a camera - including a depth camera, a gyroscope, a magnetometer, an accelerometer, light emitting diodes - LEDs, etc.) configured to track the head movements and possibly translational movements of a user of the VR device. The audio playback system 16A may adapt, based on the tracking information 41, the audio renderers 32 such that the speaker feeds 35 reflect changes in the head and possibly translational movements of the user to correctly reproduce the soundfield responsive to such movements.[0068] FIG. 1B is a block diagram illustrating another example system 50 configured to perform various aspects of the techniques described in this disclosure. The system 50 is
similar to the system 10 shown in FIG. 1A, except that the audio renderers 32 shown in FIG. 1A are replaced with a binaural renderer 42 capable of performing binaural rendering using one or more head-related transfer functions (HRTFs) or the other functions capable of rendering to left and right speaker feeds 43.[0069] The audio playback system 16B may output the left and right speaker feeds 43 to headphones 48, which may represent another example of a wearable device and which may be coupled to additional wearable devices to facilitate reproduction of the soundfield, such as a watch, the VR headset noted above, smart glasses, smart clothing, smart rings, smart bracelets or any other types of smart jewelry (including smart necklaces), and the like. The headphones 48 may couple wirelessly or via wired connection to the additional wearable devices.[0070] Additionally, the headphones 48 may couple to the audio playback system 16B via a wired connection (such as a standard 3.5 mm audio jack, a universal serial bus (USB) connection, an optical audio jack, or other forms of wired connection) or wirelessly (such as by way of a Bluetooth™ connection, a wireless network connection, and the like). The headphones 48 may recreate, based on the left and right speaker feeds 43, the soundfield represented by the audio data 19’. The headphones 48 may include a left headphone speaker and a right headphone speaker which are powered (or, in other words, driven) by the corresponding left and right speaker feeds 43.[0071] FIG. 1C is a block diagram illustrating another example system 60. The example system 60 is similar to the example system 10 of FIG. 1A, but source device 12B of system 60 does not include a content capture device. Source device 12B contains synthesizing device 29. Synthesizing device 29 may be used by a content developer to generate synthesized audio sources.
The synthesized audio sources may have location information associated therewith that may identify a location of the audio source relative to a listener or other point of reference in the soundfield, such that the audio source may be rendered to one or more speaker channels for playback in an effort to recreate the soundfield.[0072] For example, a content developer may generate synthesized audio streams for a video game. While the example of FIG. 1C is shown with the content consumer device 14 of the example of FIG. 1A, the source device 12B of the example of FIG. 1C may be used with the content consumer device 14B of FIG. 1B. In some examples, the source device 12B of FIG. 1C may also include a content capture device, such that bitstream 27 may contain captured audio streams and synthesized audio streams.
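How a renderer might consume such location metadata can be sketched as follows. All names here are hypothetical, and constant-power stereo panning stands in for whichever renderer is actually used; the sketch only illustrates that a source position relative to a reference point determines per-channel gains:

```python
import math

# Hypothetical sketch: a synthesized audio source carries location
# metadata so it can be placed relative to a listener or reference point.
class SynthSource:
    def __init__(self, samples, x, y):
        self.samples = samples  # mono sample values
        self.x, self.y = x, y   # position in the soundfield

def pan_to_stereo(source, ref_x=0.0, ref_y=0.0):
    """Constant-power pan of a mono source to (left, right) feeds based
    on the azimuth of the source relative to the reference point."""
    azimuth = math.atan2(source.x - ref_x, source.y - ref_y)
    # Map azimuth to a pan position in [0, 1] (0 = left, 1 = right).
    pan = 0.5 + max(-0.5, min(0.5, azimuth / math.pi))
    left_gain = math.cos(pan * math.pi / 2.0)
    right_gain = math.sin(pan * math.pi / 2.0)
    left = [s * left_gain for s in source.samples]
    right = [s * right_gain for s in source.samples]
    return left, right
```

A source directly ahead of the reference point receives equal gains in both channels, and the squared gains always sum to one (constant power), which avoids loudness changes as a source moves.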
[0073] As described above, the content consumer device 14A or 14B (either of which may be hereinafter referred to as content consumer device 14) may represent a VR device in which a human wearable display (which may also be referred to as a “head mounted display”) is mounted in front of the eyes of the user operating the VR device. FIG. 2 is a diagram illustrating an example of a VR device 1100 worn by a user 1102. The VR device 1100 is coupled to, or otherwise includes, headphones 1104, which may reproduce a soundfield represented by the audio data 19’ through playback of the speaker feeds 35. The speaker feeds 35 may represent an analog or digital signal capable of causing a membrane within the transducers of the headphones 1104 to vibrate at various frequencies, where such process is commonly referred to as driving the headphones 1104.[0074] Video, audio, and other sensory data may play important roles in the VR experience. To participate in a VR experience, the user 1102 may wear the VR device 1100 (which may also be referred to as a VR client device 1100) or other wearable electronic device. The VR client device (such as the VR device 1100) may include a tracking device (e.g., the tracking device 40) that is configured to track head movement of the user 1102, and adapt the video data shown via the VR device 1100 to account for the head movements, providing an immersive experience in which the user 1102 may experience an acoustical space shown in the video data in visual three dimensions. The acoustical space may refer to a virtual world (in which all of the world is simulated), an augmented world (in which portions of the world are augmented by virtual objects), or a physical world (in which a real world image is virtually navigated).[0075] While VR (and other forms of AR and/or MR) may allow the user 1102 to reside in the virtual world visually, often the VR device 1100 may lack the capability to place the user in the acoustical space audibly.
In other words, the VR system (which may include a computer responsible for rendering the video data and audio data - not shown in the example of FIG. 2 for ease of illustration purposes - and the VR device 1100) may be unable to support full three-dimensional immersion audibly (and in some instances realistically in a manner that reflects the displayed scene presented to the user via the VR device 1100).[0076] While described in this disclosure with respect to the VR device, various aspects of the techniques may be performed in the context of other devices, such as a mobile device. In this instance, the mobile device (such as a so-called smartphone) may present the acoustical space via a screen, which may be mounted to the head of the user 1102 or viewed as would be done when normally using the mobile device. As such, any
information on the screen can be part of the mobile device. The mobile device may be able to provide tracking information 41 and thereby allow for both a VR experience (when head mounted) and a normal experience to view the acoustical space, where the normal experience may still allow the user to view the acoustical space providing a VR-lite-type experience (e.g., holding up the device and rotating or translating the device to view different portions of the acoustical space).[0077] In any event, returning to the VR device context, the audio aspects of VR have been classified into three separate categories of immersion. The first category provides the lowest level of immersion, and is referred to as three degrees of freedom (3DOF). 3DOF refers to audio rendering that accounts for movement of the head in the three degrees of freedom (yaw, pitch, and roll), thereby allowing the user to freely look around in any direction. 3DOF, however, cannot account for translational head movements in which the head is not centered on the optical and acoustical center of the soundfield.[0078] The second category, referred to as 3DOF plus (3DOF+), provides for the three degrees of freedom (yaw, pitch, and roll) in addition to limited spatial translational movements due to the head movements away from the optical center and acoustical center within the soundfield. 3DOF+ may provide support for perceptual effects such as motion parallax, which may strengthen the sense of immersion.[0079] The third category, referred to as six degrees of freedom (6DOF), renders audio data in a manner that accounts for the three degrees of freedom in terms of head movements (yaw, pitch, and roll) but also accounts for translation of the user in space (x, y, and z translations). The spatial translations may be induced by sensors tracking the location of the user in the physical world or by way of an input controller.[0080] 3DOF rendering is the current state of the art for the audio aspects of VR.
As such, the audio aspects of VR are less immersive than the video aspects, thereby potentially reducing the overall immersion experienced by the user. However, VR is rapidly transitioning and may develop quickly to support both 3DOF+ and 6DOF, which may expose opportunities for additional use cases.[0081] For example, interactive gaming applications may utilize 6DOF to facilitate fully immersive gaming in which the users themselves move within the VR world and may interact with virtual objects by walking over to the virtual objects. Furthermore, an interactive live streaming application may utilize 6DOF to allow VR client devices to experience a live stream of a concert or sporting event as if present at the event themselves, allowing the users to move within the concert or sporting event.
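The three categories of immersion above differ only in which components of the listener pose the renderer consumes. A minimal sketch of that distinction (the names, the structure, and the 0.5 m clamp for 3DOF+ are all illustrative assumptions, not values drawn from any standard):

```python
from dataclasses import dataclass

# Illustrative pose model: rotational state (yaw, pitch, roll) plus
# translational state (x, y, z) in the acoustical space.
@dataclass
class ListenerPose:
    yaw: float = 0.0    # rotation about the vertical axis
    pitch: float = 0.0  # rotation about the lateral axis
    roll: float = 0.0   # rotation about the front-back axis
    x: float = 0.0      # translations, consumed only by 3DOF+/6DOF
    y: float = 0.0
    z: float = 0.0

def effective_pose(pose, category):
    """Zero out or limit the pose components a given category ignores."""
    if category == "3DOF":
        # Rotation only; translations are discarded.
        return ListenerPose(pose.yaw, pose.pitch, pose.roll)
    if category == "3DOF+":
        # Rotation plus limited translation near the optical/acoustical
        # center; the 0.5 unit bound is an illustrative assumption.
        clamp = lambda v: max(-0.5, min(0.5, v))
        return ListenerPose(pose.yaw, pose.pitch, pose.roll,
                            clamp(pose.x), clamp(pose.y), clamp(pose.z))
    return pose  # 6DOF: full rotation and translation
```

The renderer would then be driven by `effective_pose(tracked_pose, category)` rather than the raw tracked pose, so upgrading from 3DOF to 6DOF changes only which components survive this filter.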
[0082] There are a number of difficulties associated with these use cases. In the instance of fully immersive gaming, latency may need to remain low to enable gameplay that does not result in nausea or motion sickness. Moreover, from an audio perspective, latency in audio playback that results in loss of synchronization with video data may reduce the immersion. Furthermore, for certain types of gaming applications, spatial accuracy may be important to allow for accurate responses, including with respect to how sound is perceived by the users, as that allows users to anticipate actions that are not currently in view.[0083] In the context of live streaming applications, a large number of source devices 12A or 12B (either of which may hereinafter be referred to as source device 12) may stream content 21, where the source devices 12 may have widely different capabilities. For example, one source device may be a smartphone with a digital fixed-lens camera and one or more microphones, while another source device may be production level television equipment capable of obtaining video of a much higher resolution and quality than the smartphone. However, all of the source devices, in the context of the live streaming applications, may offer streams of varying quality from which the VR device may attempt to select an appropriate one to provide an intended experience.[0084] Moreover, similar to the gaming applications, latency in audio data such that loss of synchronization occurs with the video data may result in less immersion. Moreover, spatial accuracy may also be important such that the users may better understand the context or location of different audio sources.
Further, when users are live streaming using cameras and microphones, privacy may become an issue, as users may not want their live streams fully available to the public.[0085] In the context of streaming applications (live or recorded), there may be a large number of audio streams associated with varying levels of quality and/or content. The audio streams may represent any type of audio data, including scene-based audio data (e.g., ambisonic audio data, including FOA audio data, MOA audio data and/or HOA audio data), channel-based audio data, and object-based audio data. Selecting only one of a potentially large number of audio streams from which to recreate a soundfield may not provide an experience that ensures an adequate level of immersion. However, selecting multiple audio streams may create distractions due to different spatial localization between the multiple audio streams, thereby potentially reducing immersion.[0086] In accordance with the techniques described in this disclosure, the audio decoding device 34 may adaptively select between audio streams available via the bitstream 27
(which are represented by the bitstream 27 and hence the bitstream 27 may also be referred to as “audio streams 27”). The audio decoding device 34 may select between different audio streams of the audio streams 27 based on audio location information (ALI) (e.g., 45A in FIGS. 1A-1C), which, in some examples, may be included as metadata accompanying the audio streams 27, where the audio location information may define coordinates in the acoustical space for the microphones that capture the respective audio streams 27 or virtual coordinates where the audio streams were synthesized. The ALI 45A may be representative of a capture location in an acoustical space at which the corresponding one of the audio streams 27 was captured or virtual coordinates where the corresponding one of the audio streams was synthesized. The audio decoding device 34 may select, based on the ALI 45A, a subset of the audio streams 27, where the subset of the audio streams 27 excludes at least one of the audio streams 27. The audio decoding device 34 may output the subset of the audio streams 27 as the audio data 19’.[0087] In addition, the audio decoding device 34 may obtain the tracking information 41, which the content consumer device 14 may translate into device location information (DLI) (e.g., 45B in FIGS. 1A-1C). The DLI 45B may represent a virtual location or an actual location of the content consumer device 14 in the acoustical space, which may be defined as one or more device coordinates in the acoustical space. The content consumer device 14 may provide the DLI 45B to the audio decoding device 34. The audio decoding device 34 may then select, based on the ALI 45A and the DLI 45B, the audio data 19’ from the audio streams 27.
The audio playback system 16A may then reproduce, based on the audio data 19’, the corresponding soundfields.[0088] In this respect, the audio decoding device 34 may adaptively select a subset of the audio streams 27 to obtain the audio data 19’ that may result in a more immersive experience (compared to selecting a single audio stream or all of the audio streams 27). As such, various aspects of the techniques described in this disclosure may improve operation of the audio decoding device 34 (and the audio playback system 16A or 16B and the content consumer device 14) itself by possibly enabling the audio decoding device 34 to better spatialize sound sources within the soundfield and thereby improve immersion.[0089] In operation, the audio decoding device 34 may interface with one or more source devices 12 to determine the ALI 45A for each of the audio streams 27. As shown in the example of FIG. 1A, the audio decoding device 34 may include a stream selection unit
44, which may represent a unit configured to perform various aspects of the audio stream selection techniques described in this disclosure.[0090] The stream selection unit 44 may generate, based on the ALI 45A, a constellation map (CM) 47. The CM 47 may define the ALI 45A for each of the audio streams 27. The stream selection unit 44 may also perform an energy analysis with respect to each of the audio streams 27 to determine an energy map for each of the audio streams 27, storing the energy map along with the ALI 45A in the CM 47. The energy maps may jointly define an energy of a common soundfield represented by the audio streams 27.[0091] The stream selection unit 44 may next determine distance(s) between the device location represented by the DLI 45B and the capture location(s) or synthesis location(s) represented by the ALI 45A associated with at least one and possibly each of the audio streams 27. The stream selection unit 44 may then select, based on the distance(s), the audio data 19’ from the audio streams 27 as discussed in more detail below with respect to FIGS. 3A-3F.[0092] Further, in some examples, the stream selection unit 44 may also select, based on the energy maps stored to the CM 47, the ALI 45A, and the DLI 45B (jointly where the ALI 45A and the DLI 45B are presented in the form of the above noted distances, which may also be referred to as “relative distances”), the audio data 19’ from the audio streams 27. For example, the stream selection unit 44 may analyze the energy maps presented in the CM 47 to determine an audio source location (ASL) 49 of an audio source in the common soundfield emitting sound that is captured by microphones (such as the microphone 18) and represented by the audio streams 27. The stream selection unit 44 may then determine, based on ALI 45A, the DLI 45B, and the ASL 49, the audio data 19’ from the audio streams 27.
More information regarding how the stream selection unit 44 may select the streams is discussed below with respect to FIGS. 3A-3F.[0093] FIGS. 3A-3F are diagrams illustrating, in more detail, example operation of the stream selection unit 44 shown in the example of FIGS. 1A-1C. As shown in the example of FIG. 3A, the stream selection unit 44 may determine that the DLI 45B indicates that the content consumer device 14 (shown as the VR device 1100) is at virtual location 300A. The stream selection unit 44 may next determine the ALI 45A for one or more of audio elements 302A-302J (collectively referred to as audio elements 302), which may represent not just microphones, such as the microphone 18 shown in FIG. 1A, but other types of capture devices, including other XR devices, mobile phones - including so-called smartphones - and the like, or synthesized soundfields, etc.
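The constellation-map-based, distance-driven selection described in paragraphs [0090]-[0092] can be sketched as follows. The entry layout, two-dimensional coordinates, and the `max_streams` bound are illustrative assumptions; the sketch only shows the core step of ranking streams by their distance from the device location and keeping a proper subset:

```python
import math

# Assumed constellation-map entry: a stream identifier paired with its
# capture (or synthesis) coordinates, standing in for the ALI.
def select_streams(constellation_map, device_location, max_streams=2):
    """Rank streams by distance from the device location (the DLI) and
    keep the nearest ones, always excluding at least one stream."""
    def distance(entry):
        (sx, sy), (dx, dy) = entry["ali"], device_location
        return math.hypot(sx - dx, sy - dy)
    ranked = sorted(constellation_map, key=distance)
    # The selected subset excludes at least one stream, as in the text.
    keep = min(max_streams, len(ranked) - 1)
    return [entry["stream_id"] for entry in ranked[:max(keep, 1)]]

cm = [
    {"stream_id": "302A", "ali": (0.0, 1.0)},
    {"stream_id": "302B", "ali": (4.0, 1.0)},
    {"stream_id": "302C", "ali": (9.0, 5.0)},
]
print(select_streams(cm, device_location=(1.0, 1.0)))  # → ['302A', '302B']
```

An energy map per stream could be stored in the same entries and combined with these distances, as the text describes, to locate a dominant audio source before selection.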
[0094] As described above, the stream selection unit 44 may obtain the audio streams 27. The stream selection unit 44 may interface with audio elements 302A-302J to obtain the audio streams 27. In some examples, the stream selection unit 44 may interact with an interface (such as a receiver, a transmitter and/or a transceiver) to obtain the audio streams 27 in accordance with a fifth generation (5G) cellular standard, a personal area network (PAN), such as Bluetooth™, or some other open-source, proprietary or standardized communication protocol. Wireless communication of the audio streams is denoted as a lightning bolt in the examples of FIGS. 3A-3E, where the selected audio data 19’ is shown as communication from the selected one or more of the audio elements 302 to the VR device 1100.[0095] In any event, the stream selection unit 44 may next obtain energy maps in the manner described above, analyzing the energy maps to determine the audio source location 304, which may represent one example of the ASL 49 shown in the example of FIG. 1A. The energy maps may denote audio source location 304 as the energy at the audio source location 304 may be higher than the surrounding area. Given that each of the energy maps may denote this higher energy, the stream selection unit 44 may triangulate, based on the higher energy in the energy maps, the audio source location 304.[0096] Next, the stream selection unit 44 may determine an audio source distance 306A as a distance between the audio source location 304 and the virtual location 300A of the VR device 1100. The stream selection unit 44 may compare the audio source distance 306A to an audio source distance threshold. The stream selection unit 44 may, in some examples, derive the audio source distance threshold based on the energy of the audio source 308.
That is, when the audio source 308 has a higher energy (or, in other words, when the audio source 308 is louder), the stream selection unit 44 may increase the audio source distance threshold. When the audio source 308 has a lower energy (or, in other words, when the audio source 308 is quieter), the stream selection unit 44 may decrease the audio source distance threshold. In other examples, the stream selection unit 44 may obtain an audio source distance threshold that is statically defined or specified by the user 1102.[0097] In any event, the stream selection unit 44 may select, when the audio source distance 306A is greater than the audio source distance threshold (which is assumed in this example for purposes of illustration), a single audio stream of the audio streams 27 captured by the audio elements 302A-302J (“audio elements 302”). The stream selection
unit 44 may output the corresponding one of the audio streams 27, which the audio decoding device 34 may decode and output as the audio data 19’.[0098] Assuming that the user 1102 moves from the virtual location 300A to the virtual location 300B, the stream selection unit 44 may determine an audio source distance 306B as a distance between the audio source location 304 and the virtual location 300B. In some examples, the stream selection unit 44 may only update after some configurable release time, which may refer to a time until the receiver region increases after the listener stops moving.[0099] In any event, the stream selection unit 44 may again compare the audio source distance 306B to the audio source distance threshold. The stream selection unit 44 may select, when the audio source distance 306B is less than or equal to the audio source distance threshold (which is assumed in this example for purposes of illustration), multiple audio streams of the audio streams 27 captured by the audio elements 302A-302J (“audio elements 302”). The stream selection unit 44 may output the corresponding ones of the audio streams 27, which the audio decoding device 34 may decode and output as the audio data 19’.[0100] The stream selection unit 44 may also determine one or more proximity distances between the virtual location 300B and one or more (and possibly each) of the capture locations represented by the ALI 45A. The stream selection unit 44 may then compare the one or more proximity distances to a threshold proximity distance. The stream selection unit 44 may select, when the one or more proximity distances are greater than the threshold proximity distance, a smaller number of the audio streams 27 compared to when the one or more proximity distances are less than or equal to the threshold proximity distance to obtain the audio data 19’.
However, the stream selection unit 44 may select, when one or more of the proximity distances are less than or equal to the threshold proximity distance, a larger number of the audio streams 27 compared to when the one or more proximity distances are greater than the threshold proximity distance to obtain the audio data 19’.[0101] In other words, the stream selection unit 44 may attempt to select those of the audio streams 27 such that the audio data 19’ are most closely aligned to the virtual location 300B and surround the virtual location 300B. The proximity distance threshold may define such a threshold, which the user 1102 of the VR device 1100 may set or the stream selection unit 44 may again determine dynamically based on a quality of the audio elements 302F-302J, the gain or loudness of the audio source 308, tracking information
41 (e.g., to determine whether the user 1102 is facing the audio source 308), or any other factors.[0102] In this respect, the stream selection unit 44 may increase audio spatialization accuracy when the listener is at the location 300B. Furthermore, when the listener is at the location 300A, the stream selection unit 44 may reduce a bitrate, as only the audio stream captured by audio element 302A is used to reproduce the soundfield rather than multiple audio streams of audio elements 302B-302J.[0103] Referring next to the example of FIG. 3B, the stream selection unit 44 may determine that the audio stream of the audio element 302A is corrupted, noisy, or unavailable. The stream selection unit 44 may remove the audio stream from the CM 47 and reiterate through the audio streams 27 in accordance with the techniques described in more detail above to select a single one of the audio streams 27 (e.g., the audio stream captured by the audio element 302B in the example of FIG. 3B) given that the audio source distance 306A is greater than the audio source distance threshold.[0104] Referring next to the example of FIG. 3C, the stream selection unit 44 may obtain a new audio stream (the audio stream of the audio element 302K) and corresponding new audio information, e.g., metadata, that includes the ALI 45A. The stream selection unit 44 may add the new audio stream to the CM 47 representative of the audio streams 27. The stream selection unit 44 may then reiterate through the audio streams 27 in accordance with the techniques described in more detail above to select a single one of the audio streams 27 (e.g., the audio stream captured by the audio element 302B in the example of FIG. 3C) given that the audio source distance 306A is greater than the audio source distance threshold.[0105] In the example of FIG.
3D, the audio elements 302 are replaced with specific example devices 320A-320J (“devices 320”), where device 320A represents a dedicated microphone 320A, while devices 320B, 320C, 320D, 320G, 320H, and 320J represent smartphones. The devices 320E, 320F, and 320I may represent VR devices. Each of devices 320 may include the audio elements 302, which capture audio streams 27 that are to be selected in accordance with various aspects of the stream selection techniques described in this disclosure.[0106] FIG. 3E is a conceptual diagram illustrating an example concert with three or more audio elements. In the example of FIG. 3E, a number of musicians are depicted on stage 323. Singer 312 is positioned behind audio element 310A. A string section 314 is depicted behind audio element 310B. Drummer 316 is depicted behind audio element
310C. Other musicians 318 are depicted behind audio element 310D. Audio elements 310A-310D may represent captured audio streams that correspond to the sounds received by microphones. In some examples, microphones 310A-310D may represent synthesized audio streams. For example, audio element 310A may represent a captured audio stream(s) primarily associated with singer 312, but the audio stream(s) may also include sounds produced by other band members, such as the string section 314, the drummer 316 or the other musicians 318, while the audio element 310B may represent a captured audio stream(s) primarily associated with string section 314, but include sounds produced by other band members. In this manner, each of audio elements 310A-310D may represent a different audio stream(s).[0107] Also, a number of devices are depicted. These devices represent user devices located at a number of different listening positions. Headphones 321 are positioned near audio element 310A, but between audio element 310A and audio element 310B. As such, according to the techniques of this disclosure, stream selection unit 44 may select at least one of the audio streams to produce an audio experience for the user of the headphones 321 similar to the user being located where the headphones 321 are located in FIG. 3F. Similarly, VR goggles 322 are shown located behind the audio element 310C and between the drummer 316 and the other musicians 318. The stream selection unit 44 may select at least one audio stream to produce an audio experience for the user of the VR goggles 322 similar to the user being located where the VR goggles 322 are located in FIG. 3F.[0108] Smart glasses 324 are shown located fairly centrally between the audio elements 310A, 310C and 310D. The stream selection unit 44 may select at least one audio stream to produce an audio experience for the user of the smart glasses 324 similar to the user being located where the smart glasses 324 are located in FIG. 3F.
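The position-based selection described above can be sketched as follows. This is a minimal illustrative sketch, not the disclosure's implementation: the function name, the dictionary of capture coordinates, and the `k` parameter are all assumptions made for the example.

```python
import math

def select_nearest_streams(streams, listener_pos, k=1):
    """Pick the k capture streams closest to the desired listening position.
    'streams' maps a stream id to its capture coordinates."""
    ranked = sorted(streams.items(), key=lambda kv: math.dist(kv[1], listener_pos))
    return [stream_id for stream_id, _ in ranked[:k]]

# Hypothetical capture coordinates for the concert of FIG. 3E; the headphones
# are positioned near 310A, between 310A and 310B.
capture = {"310A": (0.0, 0.0), "310B": (4.0, 0.0), "310C": (0.0, 6.0), "310D": (4.0, 6.0)}
print(select_nearest_streams(capture, (1.0, 0.0), k=2))  # ['310A', '310B']
```

A real stream selection unit would weigh additional factors the text mentions (stream quality, gain of the audio source, head orientation), but the distance ranking above is the core of position-based selection.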
Additionally, device 326 (which may represent any device capable of implementing the techniques of this disclosure, such as a mobile handset, a speaker array, headphones, VR goggles, smart glasses, etc.) is shown located in front of audio element 310B. Stream selection unit 44 may select at least one audio stream to produce an audio experience for the user of the device 326 similar to the user being located where the device 326 is located in FIG. 3E. While specific devices were discussed with respect to particular locations, a user of any of the devices depicted may provide an indication of a desired listening position that is different than depicted in FIG. 3E.[0109] FIGS. 4A-4C are flowcharts illustrating an example of operation of the stream selection unit 44 shown in the examples of FIGS. 1A-1C to control access to at least one
of the plurality of audio streams based on timing information. In some examples, the timing information may be timing metadata. In some examples, the timing metadata may be included in audio metadata. In the example of FIG. 4A, the use of a start time is discussed.[0110] In many contexts, there are audio streams that may be inappropriate or offensive for some people. For example, at a live sporting event, there may be people using offensive language in the venue. The same may be true in some video games. At other live events, like a convention, there may be sensitive discussions occurring. With the use of a start time, the stream selection unit 44 of the content consumer device 14 may screen out the undesired or sensitive audio streams and exclude them from playback to the user. The timing information, such as timing metadata, may be associated with individual audio streams or with privacy zones (discussed in more detail with respect to FIGS. 4H and 4J).[0111] In some cases, the source device 12 may apply the start time. For example, at a convention where sensitive discussions are going to occur at a given time, the content creator or source may create and apply the start time when the discussions are going to begin so that only certain people with appropriate privileges are able to hear the discussions. For other people without the appropriate privileges, the stream selection unit 44 may screen out or otherwise exclude the audio stream(s) for the discussions.[0112] In other cases, such as the sporting event example, the content consumer device 14 may create and apply the start time. As such, a user may exclude the offensive language during audio playback.[0113] The use of the start time information, such as start time metadata, is now discussed (400). 
The stream selection unit 44 may take the incoming audio streams and metadata associated with the audio streams, including location information and start time information, and store them in the memory of the content consumer device 14 (401). The stream selection unit 44 may obtain location information (402). This location information may be associated with capture coordinates in the acoustical space, as discussed above. Start time information may be associated with each stream or with privacy zones (to be discussed more thoroughly with respect to FIG. 4F). For instance, at a live event, there may be sensitive discussions occurring, or there may be inappropriate language being used or topics being discussed for certain audiences. For instance, if a sensitive meeting at a convention is going to be held at 1:00 PM GMT, the content creator or source may set the start time for the audio stream(s) or privacy zone(s) containing the audio associated with that meeting to 1:00 PM GMT. In one example, the stream selection unit 44 may
compare the start time to the current time (403) and, if the current time is equal to or later than the start time, the stream selection unit 44 may screen out or otherwise exclude those audio streams or privacy zones with the associated start time (404). In some examples, content consumer device 14 may stop downloading the excluded audio streams.[0114] In another example, when the stream selection unit 44 screens out or excludes an audio stream or privacy zone, the content consumer device 14 may send a message to the source device 12 instructing the source device 12 to cease sending the excluded streams (405). This way the content consumer device does not receive the excluded streams and bandwidth within the transmission channel may be saved.[0115] In one example, the audio playback system 16 (which may represent either audio playback system 16A or audio playback system 16B, for simplicity purposes) may change the gain based upon the start time associated with the audio stream or privacy zone, boosting or attenuating the audio output. In another example, the audio playback system 16 may not change the gain. The audio decoding device 34 may also combine two or more selected audio streams together (406). The combining of selected audio streams could be done by way of mixing or interpolation or another variant of soundfield manipulation, for example. The audio decoding device may output the subset of audio streams (407).[0116] In one example, the audio playback system 16 may allow a user to override the start time. For example, content consumer device 14 may obtain, from user 1102, e.g., an override request to add at least one excluded audio stream of the plurality of audio streams (408).
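The start-time screening of steps 402-404 can be sketched as follows. This is one plausible reading of the comparison, with illustrative function and field names that are not from the disclosure: a stream is excluded once its start time has been reached.

```python
def screen_by_start_time(streams, current_time):
    """Split streams into (selected, excluded): a stream is excluded once the
    current time reaches its start time, hiding sensitive content from then on.
    A 'start_time' of None (or absent) means no timing restriction."""
    selected, excluded = [], []
    for stream in streams:
        start = stream.get("start_time")
        if start is not None and current_time >= start:
            excluded.append(stream)
        else:
            selected.append(stream)
    return selected, excluded

streams = [
    {"id": "open_mic"},                     # no restriction
    {"id": "meeting", "start_time": 13.0},  # sensitive from 1:00 PM on
]
print([s["id"] for s in screen_by_start_time(streams, 13.5)[0]])  # ['open_mic']
```

A user override (step 408) would simply move a stream from the excluded list back into the selected subset before rendering.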
In the example where the content consumer device 14 sends a message to tell the source device to stop sending the excluded audio streams or privacy zones (405), the content consumer device 14 would send a new message to tell the source device to restart the sending of those audio streams or privacy zones (409). If the start time is overridden, then the audio decoding device 34 may add or combine those respective streams or privacy zones with the subset of audio streams or privacy zones (410). The combining of selected audio streams could be done by way of mixing or interpolation or another variant of soundfield manipulation, for example. The audio decoding device 34 may include the selected streams in the audio output (411).[0117] FIG. 4B is a flowchart illustrating an example of operation of the stream selection unit shown in the examples of FIGS. 1A-1C to control access to at least one of the plurality of audio streams based on timing information. In this example, the timing information is a duration. In some examples, the timing information may be timing
metadata. In some examples, the timing metadata may be included in audio metadata. In some instances, a content creator or source may desire to provide a more complete experience for a temporary time period. For instance, a content provider or source may want to do so for an advertisement or a trial period when attempting to get a user to upgrade their level of service.[0118] Stream selection unit 44 may store the incoming audio streams and information, such as metadata, associated with them, including location information and start time metadata, in the memory of the content consumer device 14 (421). The stream selection unit 44 may obtain location information (422). The stream selection unit 44 may do this by reading the location information from memory, for example in the case of a single audio stream, or calculating it, for example in the case of a privacy zone. This location information may be associated with capture coordinates in the acoustical space, as discussed above. Duration metadata may be associated with each stream or with privacy zones and may be set to any duration. For instance, in the example of offering a full experience for a limited time period, the source device or the content consumer device may set the duration to be an hour, for example. The stream selection unit 44 may compare the duration with a timer (423). If the timer is equal to or greater than the duration, the stream selection unit 44 may exclude the audio streams or privacy zones associated with the duration, thereby selecting a subset of the audio streams (424). If the timer is less than the duration, the stream selection unit 44 would not exclude those streams or privacy zones (425).[0119] As with the example of FIG. 4A, the content consumer device 14 could send a message to the source device 12 telling it to cease sending the excluded streams and send another message to start resending the excluded streams if the duration is overridden (not shown for the sake of simplicity).
This way bandwidth within the transmission channel could be saved.[0120] In one example, the audio playback system 16 may change the gain based upon the duration associated with the audio stream or privacy zone, boosting or attenuating the audio output. In another example, the audio playback system may not change the gain. The audio decoding device 34 may combine two or more selected audio streams together (426). The combining of selected audio streams could be done by way of mixing or interpolation or another variant of soundfield manipulation, for example. The audio decoding device 34 may then output the subset of audio streams (427).
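The duration comparison of steps 423-425 can be sketched as follows; the function and field names are illustrative assumptions, not the disclosure's API.

```python
def screen_by_duration(streams, timer):
    """Return the subset of streams whose trial 'duration' has not yet
    elapsed; a stream is excluded once timer >= duration (steps 423-425).
    Streams without a duration carry no time limit."""
    return [s for s in streams
            if "duration" not in s or timer < s["duration"]]

streams = [
    {"id": "base"},                               # always available
    {"id": "premium_trial", "duration": 3600.0},  # one-hour trial period
]
print([s["id"] for s in screen_by_duration(streams, 1800.0)])  # both streams
print([s["id"] for s in screen_by_duration(streams, 3600.0)])  # ['base']
```

As the text notes, an override would reset or ignore the timer for the affected streams, restoring them to the subset.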
[0121] By using start time and/or duration as access controls, the stream selector unit 44 may maintain access control even when there is no connection to the source device. For example, when the content consumer device 14 is offline and is playing stored audio, the stream selector unit 44 may still compare the start time to the current time or the duration to the timer and effectuate offline access control.[0122] FIG. 4C is a flowchart illustrating an example of operation of the stream selection unit shown in the examples of FIGS. 1A-1C in performing various aspects of the stream selection techniques (430). The source device 12 may make available different soundfields, such as FOA soundfields, higher order ambisonic (HOA) soundfields or MOA soundfields. A user of the content consumer device 14 may make a request on content consumer device 14 through a user interface to change the audio experience (431). For example, the user who is experiencing FOA soundfields may desire an enhanced experience and request HOA or MOA soundfields. If the content consumer device is in receipt of the necessary coefficients and is configured to change the ambisonic soundfield type (432), it may then change the ambisonic soundfield type (433) and the stream selection unit 44 may output the audio streams (434). If the content consumer device 14 is not in receipt of the necessary coefficients or is not configured to change the ambisonic soundfield type, the content consumer device 14 may send a request to the source device 12 to make the change (435). The source device may make the change and send the new soundfields to the content consumer device 14. The audio decoding device 34 may then receive the new soundfields (436) and output the audio streams (437). The use of different types of ambisonic soundfields could also be used with the start time example of FIG. 4A and the duration example of FIG. 4B.
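One simple way a device holding the necessary coefficients could change the ambisonic soundfield type downward (e.g., from HOA to FOA) is channel truncation: an order-N ambisonic soundfield has (N+1)² channels, and keeping only the first (N+1)² channels in ACN ordering yields the lower-order soundfield. The sketch below assumes ACN ordering and illustrative names; it is not the disclosure's implementation, and upgrading to a higher order requires receiving the additional coefficients, as the text describes.

```python
def to_lower_order(hoa_channels, target_order):
    """Downgrade an ambisonic soundfield by keeping the first (N+1)^2
    channels (ACN channel ordering assumed); FOA is target_order=1."""
    keep = (target_order + 1) ** 2
    if len(hoa_channels) < keep:
        raise ValueError("soundfield order is lower than the requested order")
    return hoa_channels[:keep]

third_order = list(range(16))               # (3+1)^2 = 16 channels
print(len(to_lower_order(third_order, 1)))  # 4 channels (FOA: W, Y, Z, X)
```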
For example, the content consumer device 14 may use one ambisonic soundfield type until the current time is equal to or later than the start time and then another ambisonic soundfield type. Or the content consumer device 14 may use one ambisonic soundfield type until the timer is equal to or greater than the duration and then use another ambisonic soundfield type.[0123] FIGS. 4D and 4E are diagrams further illustrating the use of timing information, such as timing metadata, in accordance with various aspects of the techniques described in this disclosure. A static audio source 441, such as an open microphone, is shown. In some examples, the static audio source 441 may be a live audio source. In other examples, the static audio source 441 may be a synthetic audio source. A dynamic audio source 442, such as in a user operated mobile handset where the user sets when it is recording, is also shown. In some examples, the dynamic audio source may be a live
audio source. In other examples, the dynamic audio source 442 may be a synthetic source. One or more of the static audio source 441 and/or the dynamic audio source 442 may capture audio information 443. A controller 444 may process the audio information 443. In FIG. 4D, the controller 444 may be implemented in one or more processors 440 in the content consumer device 14. In FIG. 4E, the controller 444 may be implemented in one or more processors 448 in the source device 12. The controller 444 may compartmentalize the audio information into zones, create audio streams and tag the audio streams with information, such as metadata, including location information regarding the location of the audio sources 441 and 442, and the zonal compartmentalization, including the boundaries of the zones, through centroid and radius data, for example. In some examples, controller 444 may provide the location information in a manner other than as metadata. The controller 444 may perform these functions online or offline. The controller 444 may also assign timing information, such as timing metadata, to each of the audio streams or zones, such as start time information or duration information. The controller 444 may provide burst (e.g., periodic) or fixed (e.g., sustained) audio streams and associated information, such as metadata, to the content consumer device 14. The controller 444 may also assign gains and/or nulling to be applied to the audio streams.[0124] The stream selection unit 44 may use the timing metadata to provide bursts or fixed audio streams to the user during rendering. So the user’s experience may change based upon the timing metadata. The user may request the controller 444 over the link 447 to override the timing metadata and change the user’s access to the audio streams or privacy zones.[0125] FIGS. 4F and 4G are diagrams illustrating the use of a temporary request for greater access in accordance with various aspects of the techniques described in this disclosure.
In this example as shown in FIG. 4F, the content consumer device 14 is rendering to the user 470 audio streams 471, 472 and 473 which are represented by the depicted audio elements. The content consumer device 14 is not rendering the audio stream 474 also represented by an audio element. In this case if the user would like temporary elevation of their experience, they may send a request through a user interface to temporarily grant them access to the audio stream 474. The stream selector unit may then add in the audio stream 474 as shown in FIG. 4G. In some examples, the content consumer device 14 may send a message to the source device 12 asking for
access. In other examples, the stream selection unit 44 may add in the audio stream 474 without sending a message to the source device 12.[0126] FIGS. 4H and 4I are diagrams illustrating the concept of privacy zones in accordance with various aspects of the techniques described in this disclosure. The user 480 is shown near several groups of audio elements, each representing an audio stream. It may be useful to authorize which streams are used to create the audio experience of the user 480 in groups, rather than individually. For instance, in the example of the convention, multiple audio elements may be receiving the sensitive information. Therefore, privacy zones may be created.[0127] The source device 12 or the content consumer device 14 may assign the user an authorization level (e.g., a rank), and an authorization level (e.g., a rank) for each privacy zone, respectively. The controller 444, for example, may assign gain and nulling metadata and, in this example, a rank for each privacy zone. For example, privacy zone 481 may contain audio streams 4811, 4812 and 4813. Privacy zone 482 may contain audio streams 4821, 4822 and 4823. Privacy zone 483 may contain audio streams 4831, 4832 and 4833. As shown in Table 1, the controller 444 may tag these audio streams as belonging to their respective privacy zones and may associate gain and nulling metadata with them as well. As represented in Table 1, G is gain and N is nulling or excluding. In this example, the user 480 has a rank of 2 with respect to privacy zones 481 and 483, but a rank of 3 with respect to privacy zone 482. As indicated in the table, the stream selection unit 44 would exclude or null zone 482 and it would be unavailable for rendering unless the user 480 were to override it. The resulting rendering is shown in FIG. 4H.TABLE 1[0128] Timing information, such as timing metadata, may be used to temporarily change the rank of one or more of the privacy zones.
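The per-zone rank comparison can be sketched as follows. The convention assumed here (a numerically lower rank is more privileged, and a zone is nulled when the user's rank for it exceeds the zone's required rank) is an illustrative reading chosen to reproduce the outcome described above; the actual table contents are not reproduced in this text.

```python
def render_zones(zones, user_rank_for_zone):
    """Decide per privacy zone whether its streams are rendered or nulled by
    comparing the user's rank for that zone with the zone's required rank.
    Assumed convention: lower rank number = higher privilege."""
    rendered, nulled = [], []
    for zone in zones:
        if user_rank_for_zone[zone["id"]] <= zone["required_rank"]:
            rendered.append(zone["id"])
        else:
            nulled.append(zone["id"])
    return rendered, nulled

# Hypothetical required ranks; the user holds rank 2 for zones 481 and 483
# but only rank 3 for zone 482, so 482 is nulled, matching FIG. 4H.
zones = [{"id": 481, "required_rank": 2},
         {"id": 482, "required_rank": 2},
         {"id": 483, "required_rank": 2}]
print(render_zones(zones, {481: 2, 482: 3, 483: 2}))  # ([481, 483], [482])
```

A temporary rank change driven by timing metadata, as described next, would simply alter `user_rank_for_zone` (or a zone's required rank) for the stated duration before this comparison runs.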
For instance, the source device 12 may assign a duration for zone 482 that would raise the rank to a 2 for a period of time, 5 minutes for example. The stream selector unit 44 would then not exclude or null out the privacy zone 482 for that duration. In another example, source device 12 could assign a
start time to privacy zone 481 of 12:00 PM GMT that would lower the rank to a 3. The stream selector unit 44 would then exclude privacy zone 481. If the stream selector unit 44 were to do both, the user would receive the audio streams from the privacy zones 482 and 483, but not 481, as shown in FIG. 4I.[0129] Content consumer device 14 may use the timing information, such as timing metadata, and comparisons as time stamps and store them in memory as a way of maintaining a record of events for each zone.[0130] FIGS. 4J and 4K are diagrams illustrating the use of tiers of service in audio rendering according to aspects of this disclosure. A user 480 is depicted surrounded by audio elements. In this example, the audio elements in privacy zone 482 represent FOA soundfields. The audio elements inside the privacy zone 481 represent HOA or MOA soundfields. In FIG. 4J the content consumer device 14 is using FOA soundfields. In this example, certain individual streams or groups of streams may be enabled for better audio interpolation. The source device 12 may wish to make higher resolution rendering available for a temporary period of time, such as for an advertisement or a teaser for the higher resolution rendering. In another example, as discussed above with respect to FIG. 4C, the user may ask for the higher resolution rendering. The content consumer device 14 may then provide an enhanced experience as shown in FIG. 4K.[0131] Another way to utilize timing information, such as timing metadata, is for node modification as part of audio scene updates for 6DOF use cases as described below. Currently, audio scene updates occur instantaneously and that is not always desirable. FIG. 4L is a state transition diagram illustrating state transitions in accordance with various aspects of the techniques described in this disclosure. In this case, the timing information is timing metadata and the timing metadata is a delay (fireOnStartTime) and a duration (updateDuration).
This timing metadata may be included in the audio metadata.[0132] It may be desirable to update the audio scene experienced by a user based upon a condition occurring, but not update it immediately upon that condition occurring. It also may be desirable to stretch out the time it takes the content consumer device 14 to make the update. As such, stream selection unit 44 may use a modifiable fireOnStartTime to delay the beginning of the update and use an updateDuration to change the time it takes to complete the update and thereby affect the selection of streams and update the audio scene in a controlled manner. The source device 12 or the
content consumer device 14 may determine or modify the fireOnStartTime and/or the updateDuration.[0133] A condition (490) may occur, such as a nearby car being started, that may make a delayed update in the audio scene desirable. The source device 12 or the content consumer device 14 may set the delay by setting the fireOnStartTime (491). The fireOnStartTime may be a time of delay, or the time after the condition occurs that the audio scene update begins. The stream selection unit 44 may compare a timer to the fireOnStartTime and, if the timer is equal to or greater than the fireOnStartTime, begin the update of the audio scene (492). The stream selection unit 44 may update the audio scene during a transition duration (494) based upon the update duration (493) and finish the update (495) when the transition duration (494) has passed. The stream selection unit 44 may modify the audio scene as discussed in Table 2 below:[0134] FIG. 4M is an illustration of a vehicle 4000 in accordance with various aspects of the techniques described in this disclosure. The stream selection unit 44 may update sequentially three object sources (audio sources) of a vehicle based upon the modifiable timing parameters fireOnStartTime and updateDuration. The content consumer device
14 or the source device 12 may set or modify these parameters. In this example, the three object sources are the vehicle’s 4000 engine 4001, radio 4002 and exhaust 4003. The source device 12 or the content consumer device 14 may assign each object source, engine 4001, radio 4002 and exhaust 4003, its own native trigger time (fireOnStartTime) and duration to finish transitioning (updateDuration). The stream selection unit 44 may apply a fireOnStartTime irrespective of the interpolate attribute mentioned in Table 2. The stream selection unit 44 may also treat updateDuration as an effect of the interpolate attribute. For example, if the attribute is set to “true” then the stream selection unit 44 may utilize updateDuration and make the update over the course of the updateDuration, or else the stream selection unit 44 may transition the audio scene immediately.[0135] The following code provides an example according to various aspects of techniques described in this disclosure:

<!-- Define a condition for someone turning on a car when the listener gets close. The car’s audio elements were previously inactive, for example the car is parked and turned off. -->
<ListenerProximityCondition id="cond:listenerNearCar" region="geo:region1"/>
<Box id="geo:region1" position="5 0 -5" size="10 2 10" />
<Update time="0.2">
  <Modify id="engine" position="2.2 1.7 -1.25" />
  <Modify id="radio" position="2.1 1.5 -0.55" />
  <Modify id="exhaust" position="2.2 1.5 -0.95" />
</Update>
<Update condition="cond:listenerNearCar" fireOn="true">
  <Modify id="engine" active="true" interpolate="true" fireOnStartTime="0.1" updateDuration="0.05" />
  <Modify id="radio" active="true" interpolate="true" fireOnStartTime="0.2" updateDuration="0.1" />
  <Modify id="exhaust" active="true" interpolate="true" fireOnStartTime="0.2" updateDuration="0.1" />
</Update>
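The delayed, interpolated update that the fireOnStartTime and updateDuration attributes describe can be sketched for a single scene parameter as follows. The function name and linear interpolation are illustrative assumptions; the text notes that other interpolation schemes may be applied by the renderer.

```python
def scene_value(t, start, end, fire_on_start_time, update_duration, interpolate=True):
    """Value of an updated scene parameter (e.g., one position coordinate) at
    time t after the trigger condition fires: the update is delayed by
    fireOnStartTime and, when interpolate is set, spread linearly over
    updateDuration; otherwise it is applied immediately at the fire time."""
    if t < fire_on_start_time:
        return start          # update has not fired yet
    if not interpolate or update_duration <= 0:
        return end            # immediate transition
    frac = min((t - fire_on_start_time) / update_duration, 1.0)
    return start + frac * (end - start)

# Engine position x moving from 0.0 to 2.2, firing at t=1 s over a 2 s update.
print(scene_value(0.5, 0.0, 2.2, 1.0, 2.0))  # 0.0 (not yet fired)
print(scene_value(2.0, 0.0, 2.2, 1.0, 2.0))  # 1.1 (halfway through)
print(scene_value(5.0, 0.0, 2.2, 1.0, 2.0))  # 2.2 (update complete)
```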
[0136] FIG. 4N is an illustration of a moving vehicle 4100 in accordance with various aspects of the techniques described in this disclosure. This illustration represents a scenario where the stream selection unit 44 may update the audio scene positionally while the vehicle 4100 is navigating on a highway. In this example, there are five object sources: the engine 4101, the tire1 4102, the tire2 4103, the radio 4104 and the exhaust 4105. The positional update applied after the update duration has elapsed is the final position specified at the update time. The intermediate updates/interpolation during the update duration are applied as a part of the audio renderer, and the different schemes of interpolation can be applied as a personal preference or can be situational. An example is given in the following code:[0137]

<!-- Car moving along a highway... -->
<Update time="0.2">
</Update>
<Update condition="cond:listenerNearCar" fireOn="True">
  <Modify id="engine" position="32.2 31.7 -1.25" interpolate="True" updateDuration="30"/>
  <Modify id="tire1" position="32.1 30.4 0.75" interpolate="True" updateDuration="30"/>
  <Modify id="tire2" position="30.7 30.4 -0.95" interpolate="True" updateDuration="30"/>
  <Modify id="radio" position="32.0 31.7 -0.55" interpolate="True" updateDuration="30"/>
  <Modify id="exhaust" position="30.5 30.5 -0.95" interpolate="True"/>
</Update>

[0138] These techniques may be particularly useful in a virtual teleportation case. In such a case, an audio signal may be perceived by a user as emanating from the direction from
where a virtual teleported image is located. The virtual image may be a different passenger or driver in another vehicle or other fixed environment (e.g., a school, office, or a home). The virtual image, e.g., a virtual passenger, may include either two-dimensional avatar data or three-dimensional avatar data. When the virtual passenger speaks, it sounds as if the virtual passenger(s) is in the location (e.g., orientation on the screen) projected on the digital display of the headset device, or digital display viewed by the camera(s) that may be coupled to the headset device. That is, the virtual passenger(s) may be coupled to a two-dimensional audio signal or three-dimensional audio signal. The two-dimensional audio signal or three-dimensional audio signal may include one or more audio objects (e.g., the person’s voice) spatially located where the virtual image appears to be oriented relative to the position of the screen of the digital display on the headset device or the digital display coupled to the headset device. The loudspeakers that generate the two-dimensional or three-dimensional audio signal may be mounted and integrated into the headset device. In other embodiments, the loudspeakers may be distributed in different positions within the vehicle 4100, and the audio signal may be rendered such that the sound from the audio stream is perceived as being located where the virtual teleported image is located. In an alternate embodiment, a “teleportation” may be the sound being teleported but not the virtual image. As such, a person in a vehicle or wearing a headset device may hear a sound or voice of a person as if they are near them, e.g., next to them, in front of them, behind them, etc.[0139] It may be useful to include a “Listener Event Trigger” in the audio metadata in virtual teleportation use cases, as the controller may control listener navigation between positions by means of a trigger.
The controller could use this Listener Event Trigger to actuate teleportation.[0140] FIG. 4O is a flow diagram illustrating example techniques of using authorization levels for controlling access to at least one of the plurality of audio streams based on timing information. The use of authorization levels (430) is now discussed. Stream selection unit 44 may determine an authorization level for user 1102 (504). For example, user 1102 may have a rank associated with them, as discussed above with respect to FIGS. 4H and 4I. Stream selection unit 44 may compare the authorization level for user 1102 to authorization levels of one or more privacy zones. For example, each privacy zone may have an associated authorization level, as discussed above with respect to FIGS. 4H and 4I. Stream selection unit 44 may select the subset of the plurality of audio streams based on the comparison. For example, stream selection unit 44 may determine that user 1102
is not authorized to access privacy zone 482 of FIG. 4H and may exclude or null zone 482. Thus, audio streams 4821, 4822 and 4823 would be excluded from the subset of the plurality of audio streams.[0141] FIG. 4P is a flowchart illustrating example techniques of using a trigger and delay to control access to at least one of the plurality of audio streams based on timing information. The use of a trigger and delay (510) is now discussed. For example, stream selection unit 44 may detect a trigger (512). For example, stream selection unit 44 may detect a native trigger time, such as a fireOnStartTime, or a Listener Event Trigger. Stream selection unit 44 may compare the delay to a timer (514). For example, stream selection unit 44 may compare an updateDuration or other delay to the timer. If the timer is less than the delay (the “NO” path of FIG. 4P), stream selection unit 44 may continue to compare the delay to the timer. If the timer is greater than or equal to the delay, the stream selection unit may select a subset of the plurality of audio streams (516). In this manner, stream selection unit may wait until the timer is equal to or greater than the delay to select the subset of the plurality of audio streams.[0142] FIG. 5 is a diagram illustrating an example of a wearable device 500 that may operate in accordance with various aspects of the techniques described in this disclosure. In various examples, the wearable device 500 may represent a VR headset (such as the VR device 1100 described above), an AR headset, an MR headset, or any other type of extended reality (XR) headset. Augmented Reality (“AR”) may refer to computer rendered image or data that is overlaid over the real world where the user is actually located.
Mixed Reality (“MR”) may refer to computer rendered image or data that is world locked to a particular location in the real world, or may refer to a variant on VR in which part computer rendered 3D elements and part photographed real elements are combined into an immersive experience that simulates the user’s physical presence in the environment. Extended Reality (“XR”) may represent a catchall term for VR, AR, and MR. More information regarding terminology for XR can be found in a document by Jason Peterson, entitled “Virtual Reality, Augmented Reality, and Mixed Reality Definitions,” and dated July 7, 2017.[0143] The wearable device 500 may represent other types of devices, such as a watch (including so-called “smart watches”), glasses (including so-called “smart glasses”), headphones (including so-called “wireless headphones” and “smart headphones”), smart clothing, smart jewelry, and the like. Whether representative of a VR device, a watch, glasses, and/or headphones, the wearable device 500 may communicate with the
computing device supporting the wearable device 500 via a wired connection or a wireless connection.[0144] In some instances, the computing device supporting the wearable device 500 may be integrated within the wearable device 500 and, as such, the wearable device 500 may be considered the same device as the computing device supporting the wearable device 500. In other instances, the wearable device 500 may communicate with a separate computing device that may support the wearable device 500. In this respect, the term “supporting” should not be understood to require a separate dedicated device, but rather that one or more processors configured to perform various aspects of the techniques described in this disclosure may be integrated within the wearable device 500 or integrated within a computing device separate from the wearable device 500.[0145] For example, when the wearable device 500 represents the VR device 1100, a separate dedicated computing device (such as a personal computer including the one or more processors) may render the audio and visual content, while the wearable device 500 may determine the translational head movement upon which the dedicated computing device may render, based on the translational head movement, the audio content (as the speaker feeds) in accordance with various aspects of the techniques described in this disclosure. As another example, when the wearable device 500 represents smart glasses, the wearable device 500 may include the one or more processors that both determine the translational head movement (by interfacing with one or more sensors of the wearable device 500) and render, based on the determined translational head movement, the speaker feeds.[0146] As shown, the wearable device 500 includes a rear camera, one or more directional speakers, one or more tracking and/or recording cameras, and may include one or more light-emitting diode (LED) lights.
In some examples, the LED light(s) may be referred to as “ultra bright” LED light(s). In addition, the wearable device 500 includes one or more eye-tracking cameras, high sensitivity audio microphones, and optics/projection hardware. The optics/projection hardware of the wearable device 500 may include durable semi-transparent display technology and hardware.[0147] The wearable device 500 also includes connectivity hardware, which may represent one or more network interfaces that support multimode connectivity, such as 4G communications, 5G communications, etc. The wearable device 500 also includes ambient light sensors, one or more cameras and night vision sensors, and one or more bone conduction transducers. In some instances, the wearable device 500 may also
include one or more passive and/or active cameras with fisheye lenses and/or telephoto lenses. It will be appreciated that the wearable device 500 may exhibit a variety of different form factors.[0148] Furthermore, the tracking and recording cameras and other sensors may facilitate the determination of translational distance. Although not shown in the example of FIG. 5, wearable device 500 may include other types of sensors for detecting translational distance.[0149] Although described with respect to particular examples of wearable devices, such as the VR device 1100 discussed above with respect to the examples of FIG. 2 and other devices set forth in the examples of FIGS. 1A-1C, a person of ordinary skill in the art would appreciate that descriptions related to FIGS. 1A-1C, and 2 may apply to other examples of wearable devices. For example, other wearable devices, such as smart glasses, may include sensors by which to obtain translational head movements. As another example, other wearable devices, such as a smart watch, may include sensors by which to obtain translational movements. As such, the techniques described in this disclosure should not be limited to a particular type of wearable device, but any wearable device may be configured to perform the techniques described in this disclosure.[0150] FIGS. 6A and 6B are diagrams illustrating example systems that may perform various aspects of the techniques described in this disclosure. FIG. 6A illustrates an example in which the source device 12C further includes a camera 600. The camera 600 may be configured to capture video data, and provide the captured raw video data to the content capture device 20. The content capture device 20 may provide the video data to another component of the source device 12C, for further processing into viewport-divided portions.[0151] In the example of FIG. 6A, the content consumer device 14C also includes the VR device 1100. 
It will be understood that, in various implementations, the VR device 1100 may be included in, or externally coupled to, the content consumer device 14C. The VR device 1100 includes display hardware and speaker hardware for outputting video data (e.g., as associated with various viewports) and for rendering audio data.[0152] FIG. 6B illustrates an example in which the audio renderers 32 shown in FIG. 6A are replaced with a binaural renderer 42 capable of performing binaural rendering using one or more HRTFs or the other functions capable of rendering to left and right speaker feeds 43. The audio playback system 16C of content consumer device 14D may output the left and right speaker feeds 43 to headphones 48.
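As a rough illustration of the binaural rendering described in [0152], a mono stream can be convolved with a left and a right head-related impulse response (HRIR) to produce the two speaker feeds that drive the headphones. This is not the actual binaural renderer 42, and the one-tap “HRIRs” below are placeholder numbers, not measured responses:

```python
def convolve(signal, ir):
    # Direct-form convolution of a signal with an impulse response.
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def binaural_render(mono, hrir_left, hrir_right):
    # Convolve the mono stream with each ear's impulse response to
    # produce the left and right speaker feeds (cf. speaker feeds 43).
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Placeholder single-tap "HRIRs" that simply attenuate each ear differently.
left_feed, right_feed = binaural_render([1.0, 0.0, 0.5], [0.5], [0.25])
```

A real renderer would instead use measured, direction-dependent HRTF/HRIR pairs and frame-based (typically FFT) convolution.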
[0153] The headphones 48 may couple to the audio playback system 16C via a wired connection (such as a standard 3.5 mm audio jack, a universal serial bus (USB) connection, an optical audio jack, or other forms of wired connection) or wirelessly (such as by way of a Bluetooth™ connection, a wireless network connection, and the like). The headphones 48 may recreate, based on the left and right speaker feeds 43, the soundfield represented by the audio data 19’. The headphones 48 may include a left headphone speaker and a right headphone speaker which are powered (or, in other words, driven) by the corresponding left and right speaker feeds 43.[0154] FIG. 7 is a block diagram illustrating example components of one or more of the source device 12 and the content consumer device 14 shown in the examples of FIGS. 1A-1C. In the example of FIG. 7, the device 710 includes a processor 712 (which may be referred to as “one or more processors” or “processor(s)”), a graphics processing unit (GPU) 714, system memory 716, a display processor 718, one or more integrated speakers 740, a display 703, a user interface 720, antenna 721, and a transceiver module 722. In examples where the device 710 is a mobile device, the display processor 718 is a mobile display processor (MDP). In some examples, such as examples where the device 710 is a mobile device, the processor 712, the GPU 714, and the display processor 718 may be formed as an integrated circuit (IC).[0155] For example, the IC may be considered as a processing chip within a chip package and may be a system-on-chip (SoC). In some examples, two of the processor 712, the GPU 714, and the display processor 718 may be housed together in the same IC and the other in a different integrated circuit (e.g., different chip packages) or all three may be housed in different ICs or on the same IC.
However, it may be possible that the processor 712, the GPU 714, and the display processor 718 are all housed in different integrated circuits in examples where the device 710 is a mobile device.[0156] Examples of the processor 712, the GPU 714, and the display processor 718 include, but are not limited to, one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. The processor 712 may be the central processing unit (CPU) of the device 710. In some examples, the GPU 714 may be specialized hardware that includes integrated and/or discrete logic circuitry that provides the GPU 714 with massive parallel processing capabilities suitable for graphics processing. In some instances, GPU 714 may also include general purpose processing capabilities, and may be referred to as a general-purpose GPU (GPGPU) when implementing general purpose processing tasks (e.g., non-graphics related tasks). The display processor 718 may also be specialized integrated circuit hardware that is designed to retrieve image content from the system memory 716, compose the image content into an image frame, and output the image frame to the display 703.[0157] The processor 712 may execute various types of applications. Examples of the applications include web browsers, e-mail applications, spreadsheets, video games, other applications that generate viewable objects for display, or any of the application types listed in more detail above. The system memory 716 may store instructions for execution of the applications. The execution of one of the applications on the processor 712 causes the processor 712 to produce graphics data for image content that is to be displayed and the audio data 19 that is to be played (possibly via integrated speaker 740). The processor 712 may transmit graphics data of the image content to the GPU 714 for further processing based on instructions or commands that the processor 712 transmits to the GPU 714.[0158] The processor 712 may communicate with the GPU 714 in accordance with a particular application programming interface (API). Examples of such APIs include the DirectX® API by Microsoft®, the OpenGL® or OpenGL ES® API by the Khronos Group, and OpenCL™; however, aspects of this disclosure are not limited to the DirectX, the OpenGL, or the OpenCL APIs, and may be extended to other types of APIs. Moreover, the techniques described in this disclosure are not required to function in accordance with an API, and the processor 712 and the GPU 714 may utilize any process for communication.[0159] The system memory 716 may be the memory for the device 710. The system memory 716 may include one or more computer-readable storage media.
Examples of the system memory 716 include, but are not limited to, a random-access memory (RAM), an electrically erasable programmable read-only memory (EEPROM), flash memory, or other medium that can be used to carry or store desired program code in the form of instructions and/or data structures and that can be accessed by a computer or a processor.[0160] In some examples, the system memory 716 may include instructions that cause the processor 712, the GPU 714, and/or the display processor 718 to perform the functions ascribed in this disclosure to the processor 712, the GPU 714, and/or the display processor 718. Accordingly, the system memory 716 may be a computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors
(e.g., the processor 712, the GPU 714, and/or the display processor 718) to perform various functions.[0161] The system memory 716 may include a non-transitory storage medium. The term “non-transitory” indicates that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that the system memory 716 is non-movable or that its contents are static. As one example, the system memory 716 may be removed from the device 710 and moved to another device. As another example, memory, substantially similar to the system memory 716, may be inserted into the device 710. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in RAM).[0162] The user interface 720 may represent one or more hardware or virtual (meaning a combination of hardware and software) user interfaces by which a user may interface with the device 710. The user interface 720 may include physical buttons, switches, toggles, lights or virtual versions thereof. The user interface 720 may also include physical or virtual keyboards, touch interfaces such as a touchscreen, haptic feedback, and the like.[0163] The processor 712 may include one or more hardware units (including so-called “processing cores”) configured to perform all or some portion of the operations discussed above with respect to one or more of any of the modules, units or other functional components of the content creator device and/or the content consumer device. The antenna 721 and the transceiver module 722 may represent a unit configured to establish and maintain the connection between the source device 12 and the content consumer device 14.
The antenna 721 and the transceiver module 722 may represent one or more receivers and/or one or more transmitters capable of wireless communication in accordance with one or more wireless communication protocols, such as a fifth generation (5G) cellular standard, a personal area network (PAN) protocol, such as Bluetooth™, or other open-source, proprietary, or other communication standard. For example, the transceiver module 722 may receive and/or transmit a wireless signal. The transceiver module 722 may represent a separate transmitter, a separate receiver, both a separate transmitter and a separate receiver, or a combined transmitter and receiver. The antenna 721 and the transceiver module 722 may be configured to receive encoded audio data. Likewise, the antenna 721 and the transceiver module 722 may be configured to transmit encoded audio data.[0164] FIGS. 8A-8C are flowcharts illustrating example operation of the stream selection unit 44 shown in the examples of FIGS. 1A-1C in performing various aspects of the
stream selection techniques. Referring first to the example of FIG. 8A, the stream selection unit 44 may obtain audio stream 27 from all enabled audio elements, where the audio streams 27 may include corresponding audio information, e.g., metadata, such as the ALI 45A (800). The stream selection unit 44 may perform the energy analysis with respect to each of the audio streams 27 to calculate a respective energy map (802).[0165] The stream selection unit 44 may next iterate through different combinations of the audio elements (defined in the CM 47) based on proximity to the audio source 308 (as defined by audio source distance 306A and/or 306B) and the audio elements (as defined by the proximity distances discussed above) (804). As shown in FIG. 8A, the audio elements may be ranked or otherwise associated with different access rights. The stream selection unit 44 may iterate, based on the listener position (which is another way to refer to the “virtual location” or “device location”) represented by the DLI 45B, and the audio element positions represented by the ALI 45A, in the manner described above to identify whether a larger subset of the audio streams 27 or a reduced subset of the audio streams 27 is required (806, 808).[0166] When a larger subset of the audio streams 27 is required, the stream selection unit 44 may add audio element(s), or in other words, additional audio stream(s) to the audio data 19’ (such as when the user is closer to the audio source in the example of FIG. 3A) (810). When a reduced subset of the audio streams 27 is required, the stream selection unit 44 may remove audio element(s), or in other words, existing audio stream(s) from the audio data 19’ (such as when the user is farther from the audio source in the example of FIG.
3A) (812).[0167] In some examples, the stream selection unit 44 may determine that the current constellation of audio elements is an optimal set (or, in other words, that the existing audio data 19’ is to remain the same as the selection process described herein results in the same audio data 19’) (804), and the process may return to 802. However, when audio streams are added or removed from the audio data 19’, the stream selection unit 44 may update the CM 47 (814), generating a constellation history (815) (including positions, energy maps, etc.).[0168] In addition, the stream selection unit 44 may determine whether privacy settings enable or disable addition of the audio elements (where the privacy settings may refer to digital access rights that limit access to one or more of the audio streams 27, e.g., by way of a password, an authorization level or rank, a time, etc.) (816, 818). When privacy settings enable addition of an audio element, the stream selection unit 44 may add audio
element(s) to the updated CM 47 (which refers to addition of audio stream(s) to the audio data 19’) (820). When privacy settings disable addition of an audio element, the stream selection unit 44 may remove audio element(s) from the updated CM 47 (which refers to removal of audio stream(s) from the audio data 19’) (822). In this manner, the stream selection unit 44 may identify a new set of enabled audio elements (824).[0169] The stream selection unit 44 may iterate in this fashion and update various inputs according to any given frequency. For example, the stream selection unit 44 may update privacy settings at a user interface rate (meaning that updates are driven by way of updates entered via the user interface). The stream selection unit 44, as another example, may update positions at a sensor rate (meaning that positions are changed through movement of the audio element). The stream selection unit 44 may further update the energy maps at an audio frame rate (meaning that the energy maps are updated each frame).[0170] Referring next to the example of FIG. 8B, the stream selection unit 44 may operate in the manner described above with respect to FIG. 8A, except that the stream selection unit 44 may not base the determination of the CM 47 on energy maps. As such, the stream selection unit 44 may obtain audio stream 27 from all enabled audio elements, where the audio streams 27 may include corresponding audio information, e.g., metadata, such as the ALI 45A (840). The stream selection unit 44 may determine whether privacy settings enable or disable addition of the audio elements (where the privacy settings may refer to digital access rights that limit access to one or more of the audio streams 27, e.g., by way of a password, an authorization level or rank, a time, etc.)
(842, 844).[0171] When privacy settings enable addition of an audio element, the stream selection unit 44 may add audio element(s) to the updated CM 47 (which refers to addition of audio stream(s) to the audio data 19’) (846). When privacy settings disable addition of an audio element, the stream selection unit 44 may remove audio element(s) from the updated CM 47 (which refers to removal of audio stream(s) from the audio data 19’) (848). In this manner, the stream selection unit 44 may identify a new set of enabled audio elements (850). The stream selection unit 44 may iterate (852) through the different combinations of audio elements in the CM 47 to determine the constellation history (854), which is representative of the audio data 19’.[0172] The stream selection unit 44 may iterate in this fashion and update various inputs according to any given frequency. For example, the stream selection unit 44 may update privacy settings at a user interface rate (meaning that updates are driven by way of updates entered via the user interface). The stream selection unit 44, as another example, may
update positions at a sensor rate (meaning that positions are changed through movement of the audio element).[0173] Referring next to the example of FIG. 8C, the stream selection unit 44 may operate in the manner described above with respect to FIG. 8A, except that the stream selection unit 44 may not base the determination of the CM 47 on privacy-settings-enabled audio elements. As such, the stream selection unit 44 may obtain audio stream 27 from all enabled audio elements, where the audio streams 27 may include corresponding audio information, e.g., metadata, such as the ALI 45A (860). The stream selection unit 44 may perform the energy analysis with respect to each of the audio streams 27 to calculate a respective energy map (862).[0174] The stream selection unit 44 may next iterate through different combinations of the audio elements (defined in the CM 47) based on proximity to the audio source 308 (as defined by audio source distance 306A and/or 306B) and the audio elements (as defined by the proximity distances discussed above) (864). As shown in FIG. 8C, the audio elements may be ranked or otherwise associated with different access rights. The stream selection unit 44 may iterate, based on the listener position (which again is another way to refer to the “virtual location” or “device location” discussed above) represented by the DLI 45B, and the audio element positions represented by the ALI 45A, in the manner described above to identify whether a larger subset of the audio streams 27 or a reduced subset of the audio streams 27 is required (866, 868).[0175] When a larger subset of the audio streams 27 is required, the stream selection unit 44 may add audio element(s), or in other words, additional audio stream(s) to the audio data 19’ (such as when the user is closer to the audio source in the example of FIG. 3A) (870).
When a reduced subset of the audio streams 27 is required, the stream selection unit 44 may remove audio element(s), or in other words, existing audio stream(s) from the audio data 19’ (such as when the user is farther from the audio source in the example of FIG. 3A) (872).[0176] In some examples, the stream selection unit 44 may determine that the current constellation of audio elements is an optimal set (or, in other words, that the existing audio data 19’ is to remain the same as the selection process described herein results in the same audio data 19’) (864), and the process may return to 862. However, when audio streams are added or removed from the audio data 19’, the stream selection unit 44 may update the CM 47 (874), generating a constellation history (875).
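The FIG. 8A/8C iteration described above can be summarized in a short sketch: compute an energy value per stream, rank the privacy-permitted audio elements by distance to the listener position, and keep only those within the proximity distance. This is an illustrative reduction, not the actual stream selection unit 44; the field names (`pos`, `samples`, `authorized`), the mean-squared energy measure, and the single proximity threshold are all assumptions:

```python
import math

def energy(samples):
    # Energy analysis per audio stream (802/862): mean squared sample.
    return sum(s * s for s in samples) / len(samples)

def select_streams(elements, listener_pos, proximity, max_streams):
    # Privacy gate (816-822): drop elements the listener may not access.
    candidates = [e for e in elements if e["authorized"]]
    # Proximity ranking (804/864): order by distance to the listener.
    candidates.sort(key=lambda e: math.dist(e["pos"], listener_pos))
    # Keep elements within the proximity distance, capped at max_streams
    # (806/808: identify a larger or reduced subset).
    subset = [e for e in candidates
              if math.dist(e["pos"], listener_pos) <= proximity][:max_streams]
    # Updated constellation map (814/874) with a per-stream energy map.
    constellation = {"selected": [e["id"] for e in subset],
                     "energy": {e["id"]: energy(e["samples"]) for e in subset}}
    return subset, constellation
```

The returned `constellation` dictionary plays the role of the updated CM 47, which the unit would carry forward as constellation history.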
[0177] The stream selection unit 44 may iterate in this fashion and update various inputs according to any given frequency. For example, the stream selection unit 44 may update positions at a sensor rate (meaning that positions are changed through movement of the audio element). The stream selection unit 44 may further update the energy maps at an audio frame rate (meaning that the energy maps are updated each frame).[0178] FIG. 9 illustrates an example of a wireless communications system 100 in accordance with aspects of the present disclosure. The wireless communications system 100 includes base stations 105, UEs 115, and a core network 130. In some examples, the wireless communications system 100 may be a Long Term Evolution (LTE) network, an LTE-Advanced (LTE-A) network, an LTE-A Pro network, a 5th generation cellular network, or a New Radio (NR) network. In some cases, wireless communications system 100 may support enhanced broadband communications, ultra-reliable (e.g., mission critical) communications, low latency communications, or communications with low-cost and low-complexity devices.[0179] Base stations 105 may wirelessly communicate with UEs 115 via one or more base station antennas. Base stations 105 described herein may include or may be referred to by those skilled in the art as a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or giga-NodeB (either of which may be referred to as a gNB), a Home NodeB, a Home eNodeB, or some other suitable terminology. Wireless communications system 100 may include base stations 105 of different types (e.g., macro or small cell base stations).
The UEs 115 described herein may be able to communicate with various types of base stations 105 and network equipment including macro eNBs, small cell eNBs, gNBs, relay base stations, and the like.[0180] Each base station 105 may be associated with a particular geographic coverage area 110 in which communications with various UEs 115 are supported. Each base station 105 may provide communication coverage for a respective geographic coverage area 110 via communication links 125, and communication links 125 between a base station 105 and a UE 115 may utilize one or more carriers. Communication links 125 shown in wireless communications system 100 may include uplink transmissions from a UE 115 to a base station 105, or downlink transmissions from a base station 105 to a UE 115. Downlink transmissions may also be called forward link transmissions while uplink transmissions may also be called reverse link transmissions.
[0181] The geographic coverage area 110 for a base station 105 may be divided into sectors making up a portion of the geographic coverage area 110, and each sector may be associated with a cell. For example, each base station 105 may provide communication coverage for a macro cell, a small cell, a hot spot, or other types of cells, or various combinations thereof. In some examples, a base station 105 may be movable and therefore provide communication coverage for a moving geographic coverage area 110. In some examples, different geographic coverage areas 110 associated with different technologies may overlap, and overlapping geographic coverage areas 110 associated with different technologies may be supported by the same base station 105 or by different base stations 105. The wireless communications system 100 may include, for example, a heterogeneous LTE/LTE-A/LTE-A Pro, 5th generation, or NR network in which different types of base stations 105 provide coverage for various geographic coverage areas 110.[0182] UEs 115 may be dispersed throughout the wireless communications system 100, and each UE 115 may be stationary or mobile. A UE 115 may also be referred to as a mobile device, a wireless device, a remote device, a handheld device, or a subscriber device, or some other suitable terminology, where the “device” may also be referred to as a unit, a station, a terminal, or a client. A UE 115 may also be a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer. In examples of this disclosure, a UE 115 may be any of the audio sources described in this disclosure, including a VR headset, an XR headset, an AR headset, a vehicle, a smartphone, a microphone, an array of microphones, or any other device that includes a microphone or is able to transmit a captured and/or synthesized audio stream.
In some examples, a synthesized audio stream may be an audio stream that was stored in memory or was previously created or synthesized. In some examples, a UE 115 may also refer to a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or a machine-type communication (MTC) device, or the like, which may be implemented in various articles such as appliances, vehicles, meters, or the like.[0183] Some UEs 115, such as MTC or IoT devices, may be low cost or low complexity devices, and may provide for automated communication between machines (e.g., via Machine-to-Machine (M2M) communication). M2M communication or MTC may refer to data communication technologies that allow devices to communicate with one another or a base station 105 without human intervention. In some examples, M2M
communication or MTC may include communications from devices that exchange and/or use audio metadata that may include timing metadata used to affect audio streams and/or audio sources.[0184] In some cases, a UE 115 may also be able to communicate directly with other UEs 115 (e.g., using a peer-to-peer (P2P) or device-to-device (D2D) protocol). One or more of a group of UEs 115 utilizing D2D communications may be within the geographic coverage area 110 of a base station 105. Other UEs 115 in such a group may be outside the geographic coverage area 110 of a base station 105, or be otherwise unable to receive transmissions from a base station 105. In some cases, groups of UEs 115 communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE 115 transmits to every other UE 115 in the group. In some cases, a base station 105 facilitates the scheduling of resources for D2D communications. In other cases, D2D communications are carried out between UEs 115 without the involvement of a base station 105.[0185] Base stations 105 may communicate with the core network 130 and with one another. For example, base stations 105 may interface with the core network 130 through backhaul links 132 (e.g., via an S1, N2, N3, or other interface). Base stations 105 may communicate with one another over backhaul links 134 (e.g., via an X2, Xn, or other interface) either directly (e.g., directly between base stations 105) or indirectly (e.g., via core network 130).[0186] In some cases, wireless communications system 100 may utilize both licensed and unlicensed radio frequency spectrum bands. For example, wireless communications system 100 may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz Industrial, Scientific, Medical (ISM) band.
When operating in unlicensed radio frequency spectrum bands, wireless devices such as base stations 105 and UEs 115 may employ listen-before-talk (LBT) procedures to ensure a frequency channel is clear before transmitting data. In some cases, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (e.g., LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, peer-to-peer transmissions, or a combination of these. Duplexing in unlicensed spectrum may be based on frequency division duplexing (FDD), time division duplexing (TDD), or a combination of both.
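The listen-before-talk behavior in [0186] amounts to a clear-channel check before each transmission. A toy sketch, in which the measured `channel_energy` and the clear-channel `threshold` are assumed inputs rather than anything specified by an LBT standard:

```python
def lbt_transmit(channel_energy, threshold, send):
    # Listen-before-talk: transmit only if the measured energy on the
    # channel is below the clear-channel threshold; otherwise defer.
    if channel_energy < threshold:
        return send()
    return None  # channel busy: back off and try again later
```

Real LBT procedures additionally specify sensing durations and randomized back-off, which this sketch omits.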
[0187] According to the techniques of this disclosure, individual audio streams may be restricted from rendering or may be rendered on a temporary basis based on timing information, such as a time or a duration. Certain individual audio streams or clusters of audio streams may be enabled or disabled for a fixed duration for better audio interpolation. Accordingly, the techniques of this disclosure provide for a flexible manner of controlling access to audio streams based on time.[0188] It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined.[0189] It is to be recognized that depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.[0190] In some examples, the VR device (or the streaming device) may exchange messages with an external device using a network interface coupled to a memory of the VR/streaming device, where the exchange messages are associated with the multiple available representations of the soundfield. In some examples, the VR device may receive, using an antenna coupled to the network interface, wireless signals including data packets, audio packets, video packets, or transport protocol data associated with the multiple available representations of the soundfield.
In some examples, one or more microphone arrays may capture the soundfield.[0191] In some examples, the multiple available representations of the soundfield stored to the memory device may include a plurality of object-based representations of the soundfield, higher order ambisonic representations of the soundfield, mixed order ambisonic representations of the soundfield, a combination of object-based representations of the soundfield with higher order ambisonic representations of the soundfield, a combination of object-based representations of the soundfield with mixed order ambisonic representations of the soundfield, or a combination of mixed order representations of the soundfield with higher order ambisonic representations of the soundfield.
[0192] In some examples, one or more of the soundfield representations of the multiple available representations of the soundfield may include at least one high-resolution region and at least one lower-resolution region, and wherein the selected representation based on the steering angle provides a greater spatial precision with respect to the at least one high-resolution region and a lesser spatial precision with respect to the lower-resolution region.[0193] This disclosure includes the following examples.[0194] Example 1. A device configured to play one or more of a plurality of audio streams comprising: a memory configured to store timing metadata, the plurality of audio streams and corresponding audio metadata, and location information associated with coordinates of an acoustical space in which the corresponding one of the plurality of audio streams was captured; and one or more processors coupled to the memory, and configured to: select, based on the timing metadata and the location information, a subset of the plurality of audio streams, the subset of the plurality of audio streams excluding at least one of the plurality of audio streams.[0195] Example 2. The device of example 1, wherein the one or more processors are further configured to obtain the location information.[0196] Example 3. The device of example 2, wherein excluded streams are associated with one or more privacy zones and the one or more processors obtain the location information by determining the location information.[0197] Example 4. The device of example 2, wherein the one or more processors obtain the location information by reading the location information from the memory.[0198] Example 5. The device of any combination of examples 1-4, wherein the one or more processors are further configured to combine at least two of the subset of the plurality of audio streams.[0199] Example 6. 
The device of example 5, wherein the one or more processors combine the at least two of the subset of the plurality of audio streams by at least one of mixing or interpolation.[0200] Example 7. The device of any combination of examples 1-6, wherein the one or more processors are further configured to change a gain of one or more of the subset of the plurality of audio streams.[0201] Example 8. The device of any combination of examples 1-7, wherein the timing metadata comprises a start time of when at least one of the plurality of audio streams includes audio content.
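The selection recited in Examples 1-8 (choosing a subset of the captured streams based on timing metadata and location information, with privacy-zone exclusion as in Example 3) can be sketched as follows. This is an illustrative sketch only: the stream fields, the rectangular privacy-zone representation, and the "active during [start, start + duration)" reading of the timing metadata are assumptions, not structures prescribed by the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical stream record; the disclosure does not prescribe field names.
@dataclass
class AudioStream:
    stream_id: int
    x: float            # capture coordinates in the acoustical space
    y: float
    start_time: float   # timing metadata: when the stream has audio content
    duration: float     # timing metadata: how long the stream is available

def select_subset(streams: List[AudioStream], current_time: float,
                  privacy_zones: List[Tuple[float, float, float, float]]) -> List[AudioStream]:
    """Select streams that are active at current_time and captured outside
    every privacy zone; zones are (x_min, x_max, y_min, y_max) rectangles."""
    subset = []
    for s in streams:
        active = s.start_time <= current_time < s.start_time + s.duration
        in_zone = any(x0 <= s.x <= x1 and y0 <= s.y <= y1
                      for (x0, x1, y0, y1) in privacy_zones)
        if active and not in_zone:
            subset.append(s)
    return subset
```

By construction the returned subset excludes at least one stream whenever any stream is inactive or lies in a privacy zone, matching the "excluding at least one of the plurality of audio streams" language in those cases.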
[0202] Example 9. The device of example 8, wherein the one or more processors are configured to: compare the start time to a current time; and select, when the start time is equal to or greater than the current time, the subset of the plurality of audio streams.[0203] Example 10. The device of any combination of examples 1-9, wherein the timing metadata comprises a duration of at least one of the plurality of audio streams.[0204] Example 11. The device of example 10, wherein the one or more processors are configured to: compare the duration to a timer; and select, when the duration is equal to or greater than the timer, the subset of the plurality of audio streams.[0205] Example 12. The device of example 10, wherein the one or more processors are further configured to: select, based on the location information, a second subset of the plurality of audio streams, the second subset of the plurality of audio streams excluding at least one of the plurality of audio streams; and interpolate between the subset of the plurality of audio streams and the second subset of the plurality of audio streams through the duration.[0206] Example 13. The device of any combination of examples 1-12, wherein the one or more processors are further configured to: obtain from a user a request to select the subset of the plurality of audio streams; and based upon the user request, the location information, and the timing metadata, select the subset of the plurality of audio streams.[0207] Example 14. The device of any combination of examples 1-13, wherein the timing metadata is received from a source device.[0208] Example 15. The device of any combination of examples 1-13, wherein the one or more processors are further configured to generate the timing metadata.[0209] Example 16. 
The device of any combination of examples 1-15, wherein the one or more processors are configured to: obtain from a user a request for one of a plurality of ambisonic soundfield types; and reproduce corresponding soundfields, based on the request for the one of a plurality of ambisonic soundfield types, and the plurality of audio streams or the subset of the plurality of audio streams.[0210] Example 17. The device of example 16, wherein the plurality of ambisonic soundfield types comprises at least two of first order ambisonic soundfield (FOA), higher order ambisonic soundfield (HOA), and mixed order ambisonic soundfield (MOA).[0211] Example 18. The device of any combination of examples 1-17, further comprising a display device.
[0212] Example 19. The device of example 18, further comprising a microphone, wherein the one or more processors are further configured to receive a voice command from the microphone and control the display device based on the voice command.[0213] Example 20. The device of any combination of examples 1-19, further comprising one or more speakers.[0214] Example 21. The device of any combination of examples 1-20, wherein the device comprises an extended reality headset, and wherein the acoustical space comprises a scene represented by video data captured by a camera.[0215] Example 22. The device of any combination of examples 1-20, wherein the device comprises an extended reality headset, and wherein the acoustical space comprises a virtual world.[0216] Example 23. The device of any combination of examples 1-22, further comprising a head-mounted display configured to present the acoustical space.[0217] Example 24. The device of any combination of examples 1-20, wherein the device comprises a mobile handset.[0218] Example 25. The device of any combination of examples 1-24, further comprising a wireless transceiver, the wireless transceiver being coupled to the one or more processors and being configured to receive a wireless signal.[0219] Example 26. The device of example 25, wherein the wireless signal is Bluetooth.[0220] Example 27. The device of example 25, wherein the wireless signal is 5G.[0221] Example 28. The device of any combination of examples 1-27, wherein the device comprises a vehicle.[0222] Example 29. The device of any combination of examples 1-25, wherein the timing metadata comprises a delay and wherein the one or more processors are further configured to: detect a trigger; compare the delay to a timer; and wait until the delay is equal to or greater than the timer to select the subset of the plurality of audio streams.[0223] Example 30. 
A method of playing one or more of a plurality of audio streams comprising: storing, by a memory, timing metadata, the plurality of audio streams and corresponding audio metadata, and location information associated with coordinates of an acoustical space in which the corresponding one of the plurality of audio streams was captured; and selecting, by one or more processors and based on the timing metadata and the location information, a subset of the plurality of audio streams, the subset of the plurality of audio streams excluding at least one of the plurality of audio streams.
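The timing-metadata comparisons recited in Examples 9 and 11 (and restated for the method in Examples 38 and 40) reduce to two simple predicates. The sketch below follows the claim language literally, including the direction of each comparison; the function names and scalar time representation are assumptions for illustration.

```python
def start_time_gate(start_time: float, current_time: float) -> bool:
    """Per Examples 9 and 38: the subset is selected when the start time
    is equal to or greater than the current time. The comparison
    direction here mirrors the claim wording verbatim."""
    return start_time >= current_time

def duration_gate(duration: float, timer: float) -> bool:
    """Per Examples 11 and 40: the subset is selected while the duration
    is equal to or greater than the timer, i.e. while the running timer
    has not yet exhausted the stream's duration."""
    return duration >= timer
```

Either gate would be evaluated each rendering cycle, and the subset selection of Example 1/30 performed only when the gate returns True.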
[0224] Example 31. The method of example 30, further comprising obtaining, by the one or more processors, the location information.[0225] Example 32. The method of example 31, wherein excluded streams are associated with one or more privacy zones and the obtaining the location information is by determining the location information.[0226] Example 33. The method of example 31, wherein the obtaining the location information is by reading the location information from the memory.[0227] Example 34. The method of any combination of examples 31-33, further comprising combining, by the one or more processors, at least two of the subset of the plurality of audio streams.[0228] Example 35. The method of example 34, wherein the combining the at least two of the subset of the plurality of audio streams is by at least one of mixing or interpolation.[0229] Example 36. The method of any combination of examples 30-35, further comprising changing, by the one or more processors, a gain of one or more of the subset of the plurality of audio streams.[0230] Example 37. The method of any combination of examples 30-36, wherein the timing metadata comprises a start time of when at least one of the plurality of audio streams includes audio content.[0231] Example 38. The method of example 37, further comprising: comparing, by the one or more processors, the start time to a current time; and selecting, by the one or more processors, when the start time is equal to or greater than the current time, the subset of the plurality of audio streams.[0232] Example 39. The method of any combination of examples 30-38, wherein the timing metadata comprises a duration of at least one of the plurality of audio streams.[0233] Example 40. 
The method of example 39, further comprising: comparing, by the one or more processors, the duration to a timer; and selecting, by the one or more processors, when the duration is equal to or greater than the timer, the subset of the plurality of audio streams.[0234] Example 41. The method of example 39, further comprising: selecting, by the one or more processors, based on the location information, a second subset of the plurality of audio streams, the second subset of the plurality of audio streams excluding at least one of the plurality of audio streams; and interpolating, by the one or more processors, between the subset of the plurality of audio streams and the second subset of the plurality of audio streams through the duration.
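Examples 12 and 41 recite interpolating between two selected subsets "through the duration". One plausible realization is a linear crossfade between the rendered sample blocks of the two subsets; the block-based representation and the linear ramp are assumptions, as the disclosure does not fix an interpolation method.

```python
def crossfade(block_a, block_b, elapsed, duration):
    """Linearly interpolate between two rendered sample blocks as elapsed
    time advances through the duration: elapsed = 0 yields block_a only,
    elapsed >= duration yields block_b only (cf. Examples 12 and 41)."""
    alpha = min(max(elapsed / duration, 0.0), 1.0)  # clamp to [0, 1]
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(block_a, block_b)]
```

Calling this per audio block with a monotonically increasing `elapsed` gives a smooth hand-off from the first subset to the second, avoiding an audible discontinuity when the selection changes.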
[0235] Example 42. The method of any combination of examples 30-41, further comprising: obtaining from a user a request to select the subset of the plurality of audio streams; and based upon the user request, the location information, and the timing metadata, selecting, by the one or more processors, the subset of the plurality of audio streams.[0236] Example 43. The method of any combination of examples 30-42, wherein the timing metadata is received from a source device.[0237] Example 44. The method of any combination of examples 30-42, further comprising generating, by the one or more processors, the timing metadata.[0238] Example 45. The method of any combination of examples 30-44, further comprising: obtaining from a user a request for one of a plurality of ambisonic soundfield types; and reproducing, by the one or more processors, corresponding soundfields based on the request for the one of a plurality of ambisonic soundfield types, and the plurality of audio streams or the subset of the plurality of audio streams.[0239] Example 46. The method of example 45, wherein the plurality of ambisonic soundfield types comprises at least two of first order ambisonic soundfield (FOA), higher order ambisonic soundfield (HOA), and mixed order ambisonic soundfield (MOA).[0240] Example 47. The method of any combination of examples 30-46, further comprising receiving a voice command from a microphone and controlling, by the one or more processors, a display device based on the voice command.[0241] Example 48. The method of any combination of examples 30-47, further comprising outputting the subset of the plurality of audio streams to one or more speakers.[0242] Example 49. The method of any combination of examples 30-48, wherein the acoustical space comprises a scene represented by video data captured by a camera.[0243] Example 50. The method of any combination of examples 30-48, wherein the acoustical space comprises a virtual world.[0244] Example 51. 
The method of any combination of examples 30-50, further comprising presenting, by the one or more processors, the acoustical space on a head-mounted device.[0245] Example 52. The method of any combination of examples 30-51, further comprising presenting, by the one or more processors, the acoustical space on a mobile handset.[0246] Example 53. The method of any combination of examples 30-52, further comprising receiving a wireless signal.
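The delay-based gating recited in Example 29 (and restated in Examples 57, 85, and 113) detects a trigger, compares a delay from the timing metadata to a timer, and defers the subset selection until the comparison is satisfied. The claims' wording "wait until the delay is equal to or greater than the timer" is read here as waiting until a timer started at the trigger reaches the delay; the state-machine framing below is an assumption for illustration.

```python
class DelayGate:
    """Opens `delay` time units after a trigger is detected; a sketch of
    the trigger/delay/timer gating in Examples 29, 57, 85, and 113."""

    def __init__(self, delay: float):
        self.delay = delay
        self.timer = None   # None until a trigger has been detected

    def on_trigger(self) -> None:
        """Trigger detected: start the timer."""
        self.timer = 0.0

    def tick(self, dt: float) -> bool:
        """Advance the timer by dt; return True once the subset
        selection may proceed."""
        if self.timer is None:
            return False
        self.timer += dt
        return self.timer >= self.delay
```

A renderer would call `tick` once per processing block and perform the selection of Example 1/30 on the first True result.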
[0247] Example 54. The method of example 53, wherein the wireless signal is Bluetooth.[0248] Example 55. The method of example 53, wherein the wireless signal is 5G.[0249] Example 56. The method of any combination of examples 30-55, further comprising presenting, by the one or more processors, the acoustical space in a vehicle.[0250] Example 57. The method of any combination of examples 30-56, wherein the timing metadata comprises a delay and wherein the method further comprises: detecting, by the one or more processors, a trigger; comparing, by the one or more processors, the delay to a timer; and waiting until the delay is equal to or greater than the timer to select the subset of the plurality of audio streams.[0251] Example 58. A device configured to play one or more of a plurality of audio streams, the device comprising: means for storing timing metadata, the plurality of audio streams and corresponding audio metadata, and location information associated with coordinates of an acoustical space in which the corresponding one of the plurality of audio streams was captured; and means for selecting, based on the timing metadata and the location information, a subset of the plurality of audio streams, the subset of the plurality of audio streams excluding at least one of the plurality of audio streams.[0252] Example 59. The device of example 58, further comprising means for obtaining the location information.[0253] Example 60. The device of example 59, wherein excluded streams are associated with one or more privacy zones and the obtaining the location information is by determining the location information.[0254] Example 61. The device of example 59, wherein the obtaining the location information is by reading the location information from the memory.[0255] Example 62. The device of any combination of examples 58-60, further comprising means for combining at least two of the subset of the plurality of audio streams.[0256] Example 63. 
The device of example 62, wherein the combining the at least two of the subset of the plurality of audio streams is by at least one of mixing or interpolation.[0257] Example 64. The device of any combination of examples 58-63, further comprising means for changing a gain of one or more of the subset of the plurality of audio streams.
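Combining selected streams by mixing (Examples 5-6, 34-35, 62-63) and changing the gain of individual streams (Examples 7, 36, 64) are both simple per-sample operations. The sketch below assumes equal-length sample blocks and sum-based mixing; neither detail is mandated by the disclosure.

```python
def apply_gain(samples, gain):
    """Scale one stream's sample block by a gain factor (cf. Examples 7,
    36, and 64)."""
    return [gain * s for s in samples]

def mix(blocks):
    """Combine equal-length sample blocks by summation -- the "mixing"
    option named in Examples 6, 35, and 63 (interpolation being the
    other recited option)."""
    return [sum(frame) for frame in zip(*blocks)]
```

In practice the gains would be chosen to keep the mixed output within the representable amplitude range, e.g. attenuating each of N streams before summation.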
[0258] Example 65. The device of any combination of examples 58-64, wherein the timing metadata comprises a start time of when at least one of the plurality of audio streams includes audio content.[0259] Example 66. The device of example 65, further comprising: means for comparing the start time to a current time; and means for selecting, when the start time is equal to or greater than the current time, the subset of the plurality of audio streams.[0260] Example 67. The device of any combination of examples 58-66, wherein the timing metadata comprises a duration of at least one of the plurality of audio streams.[0261] Example 68. The device of example 67, further comprising: means for comparing the duration to a timer; and means for selecting, when the duration is equal to or greater than the timer, the subset of the plurality of audio streams.[0262] Example 69. The device of example 67, further comprising: means for selecting, based on the location information, a second subset of the plurality of audio streams, the second subset of the plurality of audio streams excluding at least one of the plurality of audio streams; and means for interpolating between the subset of the plurality of audio streams and the second subset of the plurality of audio streams through the duration.[0263] Example 70. The device of any combination of examples 58-69, further comprising: means for obtaining from a user a request to select the subset of the plurality of audio streams; and means for selecting, based upon the user request, the location information, and the timing metadata, the subset of the plurality of audio streams.[0264] Example 71. The device of any combination of examples 58-70, wherein the timing metadata is received from a source device.[0265] Example 72. The device of any combination of examples 58-70, further comprising means for generating the timing metadata.[0266] Example 73. 
The device of any combination of examples 58-72, further comprising: means for obtaining from a user a request for one of a plurality of ambisonic soundfield types; and means for reproducing corresponding soundfields, based on the request for the one of a plurality of ambisonic soundfield types, and the plurality of audio streams or the subset of the plurality of audio streams.[0267] Example 74. The device of example 73, wherein the plurality of ambisonic soundfield types comprises at least two of first order ambisonic soundfield (FOA), higher order ambisonic soundfield (HOA), and mixed order ambisonic soundfield (MOA).
[0268] Example 75. The device of any combination of examples 58-74, further comprising means for receiving a voice command and means for controlling a display device based on the voice command.[0269] Example 76. The device of any combination of examples 58-75, further comprising means for outputting the subset of the plurality of audio streams to one or more speakers.[0270] Example 77. The device of any combination of examples 58-76, wherein the acoustical space comprises a scene represented by video data captured by a camera.[0271] Example 78. The device of any combination of examples 58-76, wherein the acoustical space comprises a virtual world.[0272] Example 79. The device of any combination of examples 58-78, further comprising means for presenting the acoustical space on a head-mounted device.[0273] Example 80. The device of any combination of examples 58-78, further comprising means for presenting the acoustical space on a mobile handset.[0274] Example 81. The device of any combination of examples 58-80, further comprising means for receiving a wireless signal.[0275] Example 82. The device of example 81, wherein the wireless signal is Bluetooth.[0276] Example 83. The device of example 81, wherein the wireless signal is 5G.[0277] Example 84. The device of any combination of examples 58-83, further comprising means for presenting the acoustical space in a vehicle.[0278] Example 85. The device of any combination of examples 58-84, wherein the timing metadata comprises a delay and wherein the device further comprises: means for detecting a trigger; means for comparing the delay to a timer; and means for waiting until the delay is equal to or greater than the timer to select the subset of the plurality of audio streams.[0279] Example 86. 
A non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause one or more processors to: store timing metadata, the plurality of audio streams and corresponding audio metadata, and location information associated with coordinates of an acoustical space in which the corresponding one of the plurality of audio streams was captured; and select, based on the timing metadata and the location information, a subset of the plurality of audio streams, the subset of the plurality of audio streams excluding at least one of the plurality of audio streams.
[0280] Example 87. The non-transitory computer-readable storage medium of example 86, further comprising instructions that, when executed, cause one or more processors to obtain the location information.[0281] Example 88. The non-transitory computer-readable storage medium of example 87, wherein excluded streams are associated with one or more privacy zones and the one or more processors obtain the location information by determining the location information.[0282] Example 89. The non-transitory computer-readable storage medium of example 87, wherein the one or more processors obtain the location information by reading the location information from the memory.[0283] Example 90. The non-transitory computer-readable storage medium of any combination of examples 86-89, further comprising instructions that, when executed, cause one or more processors to combine at least two of the subset of the plurality of audio streams.[0284] Example 91. The non-transitory computer-readable storage medium of example 90, wherein the combining the at least two of the subset of the plurality of audio streams is by at least one of mixing or interpolation.[0285] Example 92. The non-transitory computer-readable storage medium of any combination of examples 86-91, further comprising instructions that, when executed, cause one or more processors to change a gain of one or more of the subset of the plurality of audio streams.[0286] Example 93. The non-transitory computer-readable storage medium of any combination of examples 86-92, wherein the timing metadata comprises a start time of when at least one of the plurality of audio streams includes audio content.[0287] Example 94. 
The non-transitory computer-readable storage medium of example 93, further comprising instructions that, when executed, cause one or more processors to: compare the start time to a current time; and select, when the start time is equal to or greater than the current time, the subset of the plurality of audio streams.[0288] Example 95. The non-transitory computer-readable storage medium of any combination of examples 86-94, wherein the timing metadata comprises a duration of at least one of the plurality of audio streams.[0289] Example 96. The non-transitory computer-readable storage medium of example 95, further comprising instructions that, when executed, cause one or more processors to:
compare the duration to a timer; and select, when the duration is equal to or greater than the timer, the subset of the plurality of audio streams.[0290] Example 97. The non-transitory computer-readable storage medium of example 95, further comprising instructions that, when executed, cause one or more processors to: select, based on the location information, a second subset of the plurality of audio streams, the second subset of the plurality of audio streams excluding at least one of the plurality of audio streams; and interpolate between the subset of the plurality of audio streams and the second subset of the plurality of audio streams through the duration.[0291] Example 98. The non-transitory computer-readable storage medium of any combination of examples 86-97, further comprising instructions that, when executed, cause one or more processors to: obtain from a user a request to select the subset of the plurality of audio streams; and based upon the user request, the location information, and the timing metadata, select the subset of the plurality of audio streams.[0292] Example 99. The non-transitory computer-readable storage medium of any combination of examples 86-98, wherein the timing metadata is received from a source device.[0293] Example 100. The non-transitory computer-readable storage medium of examples 86-99, further comprising instructions that, when executed, cause one or more processors to generate the timing metadata.[0294] Example 101. The non-transitory computer-readable storage medium of examples 86-100, further comprising instructions that, when executed, cause one or more processors to: obtain from a user a request for one of a plurality of ambisonic soundfield types; and reproduce corresponding soundfields, based on the request for the one of a plurality of ambisonic soundfield types, and the plurality of audio streams or the subset of the plurality of audio streams.[0295] Example 102. 
The non-transitory computer-readable storage medium of example 101, wherein the plurality of ambisonic soundfield types comprises at least two of first order ambisonic soundfield (FOA), higher order ambisonic soundfield (HOA), and mixed order ambisonic soundfield (MOA).[0296] Example 103. The non-transitory computer-readable storage medium of any combination of examples 86-102, further comprising instructions that, when executed, cause one or more processors to receive a voice command from a microphone and control a display device based on the voice command.
[0297] Example 104. The non-transitory computer-readable storage medium of any combination of examples 86-103, further comprising instructions that, when executed, cause one or more processors to output the subset of the plurality of audio streams to one or more speakers.[0298] Example 105. The non-transitory computer-readable storage medium of any combination of examples 86-104, wherein the acoustical space comprises a scene represented by video data captured by a camera.[0299] Example 106. The non-transitory computer-readable storage medium of any combination of examples 86-104, wherein the acoustical space comprises a virtual world.[0300] Example 107. The non-transitory computer-readable storage medium of any combination of examples 86-106, further comprising instructions that, when executed, cause one or more processors to present the acoustical space on a head-mounted display.[0301] Example 108. The non-transitory computer-readable storage medium of any combination of examples 86-107, further comprising instructions that, when executed, cause one or more processors to present the acoustical space on a mobile handset.[0302] Example 109. The non-transitory computer-readable storage medium of any combination of examples 86-108, further comprising instructions that, when executed, cause one or more processors to receive a wireless signal.[0303] Example 110. The non-transitory computer-readable storage medium of example 109, wherein the wireless signal is Bluetooth.[0304] Example 111. The non-transitory computer-readable storage medium of example 109, wherein the wireless signal is 5G.[0305] Example 112. The non-transitory computer-readable storage medium of any combination of examples 86-111, further comprising instructions that, when executed, cause one or more processors to present the acoustical space in a vehicle.[0306] Example 113. 
The non-transitory computer-readable storage medium of any combination of examples 86-112, wherein the timing metadata comprises a delay and the non-transitory computer-readable storage medium further comprises instructions that, when executed, cause one or more processors to: detect a trigger; compare the delay to a timer; and wait until the delay is equal to or greater than the timer to select the subset of the plurality of audio streams.[0307] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on
a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.[0308] By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. 
Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.[0309] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the
functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.[0310] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.[0311] Various examples have been described. These and other examples are within the scope of the following claims. |
A memory device (340) is provided which includes a write bit line (452), a read bit line (454), and at least one memory cell (410). The memory cell (410) includes a write access transistor (470), a read access transistor (480) coupled to the read bit line (454) and to the write access transistor (470), and a gated-lateral thyristor (GLT) device (460) coupled to the write access transistor (470). Among its many features, the memory cell (410) prevents read disturbances during read operations by decoupling the read and write bit lines (454, 452). 
CLAIMS What is claimed is: 1. A memory cell (410), comprising: a gated-lateral thyristor (GLT) device (460); a write access transistor (470), coupled to the gated-lateral thyristor (GLT) device (460), for controlling write access; and a read access transistor (480), coupled to the write access transistor (470), for controlling read access. 2. A memory cell (410) according to claim 1, further comprising: a sensing transistor (490), coupled to the GLT device (460), the write access transistor (470) and to the read access transistor (480). 3. A memory cell (410) according to claim 2, wherein the write access transistor (470), the read access transistor (480) and the sensing transistor (490) each comprise: P-channel field effect transistors. 4. A memory cell (410) according to claim 3, wherein the GLT device (460) comprises: an NPNP device (464, 463, 468, 466) comprising a first N-region (464) and a second N-region (468); a capacitor (463, 408, 465) coupled to the second N-region (468). 5. A memory cell (410) according to claim 2, wherein the write access transistor (470), the read access transistor (480) and the sensing transistor (490) each comprise: N-channel field effect transistors. 6. A memory cell (410) according to claim 5, wherein the GLT device (460) comprises: a PNPN device (464, 463, 468, 466) comprising a first P-region (464) and a second P-region (468); a capacitor (463, 408, 465) coupled to the second P-region (468). 7. 
A memory cell (410) according to claim 5, wherein the write access transistor (470) is coupled to a first node (441/633), wherein the read access transistor (480) is coupled to a second node (442) and to the write access transistor (470) at a third node (443), wherein the gated-lateral thyristor (GLT) device (460) is coupled to the write access transistor (470) at a fourth node (444), wherein the sensing transistor (490) is coupled to the GLT device (460) and the write access transistor (470) at the fourth node (444) and to the read access transistor (480) at a fifth node (445). 8. A memory cell (410) according to claim 7, wherein the write access transistor (470) further comprises: a first source electrode (472) coupled to the first node (441/633); a first drain electrode (474) coupled to the fourth node (444); and a first gate electrode (475). 9. A memory cell (410) according to claim 8, wherein the GLT device (460) comprises: a cathode node (464) coupled to the first drain electrode (474) at the fourth node (444); a gated electrode (465) coupled to a sixth node (446); and an anode node (466) coupled to the sensing transistor (490). 10. A memory cell (410) according to claim 9, wherein the read access transistor (480) comprises: a second source electrode (482) coupled to the second node (442); a second drain electrode (484) coupled to the fifth node (445); and a second gate electrode (485) coupled to and integral with the first gate electrode (475). 11. A memory cell (410) according to claim 10, wherein the sensing transistor (490) comprises: a third source electrode (492) coupled to the second drain electrode (484) at the fifth node (445); a third drain electrode (494) coupled to the anode node (466) at a seventh node (432/635); and a third gate electrode (495) coupled to the first drain electrode (474) and the cathode node (464) at the fourth node (444). 12.
A memory device (340), comprising: a supply line (432/632); a write bit line (452); a read bit line (454); a write access transistor (470) coupled to one of the write bit line (452) and the supply line (632); a read access transistor (480) coupled to the read bit line (454) and to the write access transistor (470); and a gated-lateral thyristor (GLT) device (460) coupled to the write access transistor (470). 13. A memory device (340) according to claim 12, further comprising: a sensing transistor (490) coupled to the GLT device (460), the write access transistor (470), and the read access transistor (480). 14. A memory device (340) according to claim 13, further comprising: a write enable line (430) coupled to the GLT device (460). 15. A memory device (340) according to claim 14, wherein the write access transistor (470) comprises a first gate electrode (475), and wherein the read access transistor (480) comprises a second gate electrode (485), and further comprising: a first word line (420) comprising the first gate electrode (475) and the second gate electrode (485). 16. A memory device (340) according to claim 15, wherein the write access transistor (470) further comprises: a first source electrode (472) coupled to the write bit line (452); a first drain electrode (474); and a first gate electrode (475) comprising a portion of the first word line (420). 17. A memory device (340) according to claim 16, wherein the read access transistor (480) comprises: a second source electrode (482) coupled to the read bit line (454); a second drain electrode (484); and a second gate electrode (485) comprising another portion of the first word line (420), wherein the second gate electrode (485) and the first gate electrode (475) are formed from a common conductive layer. 18.
A memory device (340) according to claim 17, wherein the sensing transistor (490) comprises: a third source electrode (492) coupled to the second drain electrode (484); a third gate electrode (495) coupled to the first drain electrode (474) and the cathode (464); and a third drain electrode (494) coupled to the supply line (432). 19. A memory device (340) according to claim 18, wherein the GLT device (460) comprises: a cathode node (464) coupled to the first drain electrode (474); a gated electrode (465) coupled to the write enable line (430); and an anode node (466) coupled to the supply line (432). 20. A memory device (340) according to claim 15, wherein the write access transistor (470) further comprises: a first source electrode (472) coupled to the supply line (632); a first drain electrode (474); and a first gate electrode (475) comprising a portion of the first word line (420). 21. A memory device (340) according to claim 20, wherein the read access transistor (480) comprises: a second source electrode (482) coupled to the read bit line (454); a second drain electrode (484); and a second gate electrode (485) comprising another portion of the first word line (420), wherein the second gate electrode (485) and the first gate electrode (475) are formed from a common conductive layer. 22. A memory device (340) according to claim 21, wherein the GLT device (460) comprises: a cathode node (464) coupled to the first drain electrode (474); a gated electrode (465) coupled to the write enable line (430); and an anode node (466) coupled to the write bit line (452). 23. A memory device (340) according to claim 22, wherein the sensing transistor (490) comprises: a third source electrode (492) coupled to the second drain electrode (484); and a third gate electrode (495) coupled to the first drain electrode (474) and the cathode (464); and a third drain electrode (494) coupled to the write bit line (452) and the anode node (466). 24. 
A memory device (340), comprising: a write enable line (430); a write bit line (452); a read bit line (454); a first transistor (470) comprising a first gate electrode (475), a first source electrode (472), and a first drain electrode (474); a second transistor (480) comprising a second source electrode (482) coupled to the read bit line (454), a second gate electrode (485) coupled to the first gate electrode (475), and a second drain electrode (484); a gated-lateral thyristor (GLT) device (460) comprising an anode node (466), a gated electrode (465) coupled to the write enable line (430), and a cathode node (464) coupled to the first drain electrode (474); and a third transistor (490) comprising a third drain electrode (494), a third source electrode (492) coupled to the second drain electrode (484), and a third gate electrode (495) coupled to the first drain electrode (474) and to the cathode node (464) at a common node (444). 25. A memory device (340) according to claim 24, further comprising: a supply line (432) coupled to the anode node (466) and to the third drain electrode (494), and wherein the write bit line (452) is coupled to the first source electrode (472). 26. A memory device (340) according to claim 24, further comprising: a supply line (432) coupled to the first source electrode (472), wherein the anode node (466) is coupled to the third drain electrode (494), and wherein the write bit line (452) is coupled to the anode node (466) and to the third drain electrode (494).
GATED LATERAL THYRISTOR-BASED RANDOM ACCESS MEMORY (GLTRAM) CELLS WITH SEPARATE READ AND WRITE ACCESS TRANSISTORS, MEMORY DEVICES AND INTEGRATED CIRCUITS INCORPORATING THE SAME TECHNICAL FIELD [0001] Embodiments of the present invention relate generally to semiconductor memory devices. More particularly, embodiments of the present invention relate to gated lateral thyristor-based random access memory (GLTRAM) memory cell structures and memory devices which implement such GLTRAM memory cells, and methods of fabricating the same. BACKGROUND [0002] Integrated circuit memories include static random access memory (SRAM). Many SRAM cell structures utilize six-transistor or eight-transistor memory cells. The large layout areas associated with the six-transistor and eight-transistor memory cells used in many implementations of SRAM cells have limited the design of high-density SRAM devices. [0003] Given these drawbacks, there have been attempts to build a thyristor-based memory cell with a simple layout and reduced layout area in comparison to conventional memory cells. A thyristor is a bi-stable, three-terminal device which consists of a four-layer structure including a P-type anode region, an N-type base region, a P-type base region coupled to a gated electrode, and an N-type cathode region arranged in a PNPN configuration. PN junctions are formed between the P-type anode region and the N-type base region, between the N-type base region and the P-type base region, and between the P-type base region and the N-type cathode region. Contacts are made to the P-type anode region, the N-type cathode region, and the P-type base region. [0004] F. Nemati and J. D. Plummer have disclosed a two-device thyristor-based SRAM (T-RAM) cell that includes an access transistor and a gate-assisted, vertical PNPN thyristor, where the vertical thyristor is operated in a gate-enhanced switching mode. See F. Nemati and J.D.
Plummer, A Novel Thyristor-based SRAM Cell (T-RAM) for High-Speed, Low-Voltage, Giga-scale Memories, Center for Integrated Systems, Stanford University, Stanford, CA., 1999. The performance of the T-RAM cell depends on the turn-off characteristics of the vertical thyristor. The turn-off characteristics depend on the stored charge and carrier transit time in the P-type base region of the PNPN thyristor. By reverse biasing the thyristor for a write-zero operation, and by using a gated electrode to assist with turn-off switching of the vertical thyristor to discharge the stored charge, the turn-off characteristics for the vertical thyristor are improved from milliseconds to nanoseconds. [0005] FIG. 1 is a circuit schematic 100 which illustrates an array of conventional thyristor-based Random Access Memory (T-RAM) cells including T-RAM cell 110. [0006] As shown in FIG. 1, T-RAM cell 110 consists of word lines 120, 130, a common bit line 150, and a Thin Capacitively-Coupled Thyristor (TCCT) device 160 in series with an NMOS access transistor 170. The TCCT device 160 provides an active storage element which comprises a thyristor 162 and a capacitor 165 coupled to the gate of the thyristor 162. The NMOS access transistor 170 is coupled between a cathode node 146 of the TCCT device 160 and the common bit line 150. An anode node 148 of the TCCT device 160 is fixed at a positive bias. The TCCT device 160 exhibits a bi-stable current-versus-voltage (I-V) characteristic. The bi-stable current-versus-voltage characteristic results in a wide read margin between the logical one (1) and logical zero (0) data states because the on/off current ratio between the two states is greater than 1 x 10^5. See F. Nemati et al. The bi-stable current-versus-voltage characteristic also results in good read current because at a logical one (1) data state, the TCCT device 160 is in forward diode mode, resulting in higher current.
To store a logical one (1) in the T-RAM cell 110, a constant current greater than a standby or holding current is applied through the TCCT device 160 and the NMOS access transistor 170. The current from each of the memory cells is collected through the common bit line 150. During the read operation, the voltage level on the common bit line 150 must be maintained at a certain level (e.g., ground or one-half (Vdd)). If current flows from each of the memory cells connected to the common bit line 150, the voltage level on the common bit line 150 will fluctuate. This can cause the read operation to be disturbed (also referred to as a "read disturbance" problem) since the voltage level on the common bit line 150 is changed by both the selected cell as well as the amount of leakage current from the unselected cells. [0007] FIG. 2 is a circuit schematic 200 which illustrates an array of conventional Thin Capacitively-Coupled Thyristor (TCCT)-DRAM cells including TCCT-DRAM cells 210, 270. In contrast to conventional DRAM cells, which usually include a MOSFET device and a capacitor, the TCCT-DRAM cell 210 consists of a single TCCT device 260 and three control lines including a write enable line 230, a word line 240, and a bit line 250. Notably, the TCCT-DRAM cell 210 does not require an access transistor. The TCCT device 260 consists of a thyristor 262 which includes an anode node 248 connected to the bit line 250, a cathode node 246 connected to the word line 240, and a gate capacitor 265 connected directly above a P-base region (not shown) of the thyristor 262 to a gate line which functions as the write enable line 230. The TCCT-DRAM cell 210 is operated using basic read/write operations which include a standby mode, a write logic one (1) operation, a write logic zero (0) operation, and a read operation. [0008] In standby mode, both the bit line 250 and the word line 240 are at Vdd, and the stored data is maintained by the charge state of the P-base region of the thyristor.
The word line 240 in the TCCT-DRAM activates the TCCT cells connected along the write enable line 230. During a write logic one (1) operation, the voltage applied on the bit line 250 is kept high and the write enable line 230 is pulsed while the word line 240 is held at ground level, triggering the TCCT device 260 to latch. The bias scheme for the write zero (0) operation is the same as the write one (1) operation except that the voltage applied on the bit line 250 is kept low so that the pulsing of the write enable line 230 switches the TCCT device 260 into its blocking state. During a read operation, the word line 240 is held low and the change in the voltage or the current of the bit line 250 is read into a sense amplifier. [0009] During a standby mode or "holding period," which occurs after the write zero (0) operation, the P-base region (not shown) of the thyristor is negatively charged and the potential of the P-base region gradually increases due to a reverse leakage current that flows from the anode node 248 to the cathode node 246. Because of this leakage current, the TCCT-DRAM cell 210 must be periodically refreshed during operation to reset the charge state of the TCCT-DRAM cell 210. The refresh operation involves reading a stored value from the TCCT-DRAM cell 210 and then writing the stored value back to the TCCT-DRAM cell 210. [0010] Accordingly, there is a need for memory devices and memory cell structures which have a small memory cell size and fast operational speed, and for methods for fabricating such memory devices and memory cell structures. It would be desirable if such memory devices and memory cell structures could also eliminate the need to perform a periodic refresh operation. It would also be desirable if such memory devices and memory cell structures could reduce and/or eliminate problems such as the read disturbance that can occur during read operations.
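The TCCT-DRAM operations described in paragraphs [0007]-[0009] can be summarized with a minimal behavioral sketch. This model is not from the patent: it reduces the control-line voltages to booleans, names the class and methods for illustration only, and treats the latched/blocking thyristor states abstractly.

```python
class TCCTDRAMCell:
    """Behavioral sketch of the single-device TCCT-DRAM cell 210 described
    above. 'latched' models the conducting thyristor state (logic 1);
    'blocking' models logic 0. The power-up state is an assumption."""

    def __init__(self):
        self.state = "blocking"

    def write(self, bit_line_high, write_enable_pulsed, word_line_grounded):
        # A write pulses the write enable line while the word line is held
        # at ground; the bit-line level selects latch (1) or blocking (0).
        if write_enable_pulsed and word_line_grounded:
            self.state = "latched" if bit_line_high else "blocking"

    def read(self, word_line_low):
        # With the word line held low, a latched thyristor conducts and
        # changes the bit-line voltage/current sensed by the amplifier.
        if not word_line_low:
            return None  # cell not selected
        return 1 if self.state == "latched" else 0
```

Under this sketch, the refresh operation of paragraph [0009] would be a read followed by a write-back of the same value, e.g. `cell.write(cell.read(True) == 1, True, True)`.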
BRIEF SUMMARY [0011] According to one embodiment, a memory device is provided which includes a write bit line, a read bit line, and at least one memory cell. The memory cell includes a write access transistor, a read access transistor coupled to the read bit line and to the write access transistor, and a gated-lateral thyristor (GLT) device coupled to the write access transistor. Among its many features, the memory cell prevents read disturbances during read operations by decoupling the read and write bit lines. BRIEF DESCRIPTION OF THE DRAWINGS [0012] A more complete understanding of the present invention may be derived by referring to the detailed description and claims when considered in conjunction with the following figures, where: [0013] FIG. 1 is a circuit schematic which illustrates an array of conventional thyristor-based Random Access Memory (T-RAM) cells; [0014] FIG. 2 is a circuit schematic which illustrates an array of conventional Thin Capacitively-Coupled Thyristor (TCCT)-DRAM cells; [0015] FIG. 3 is a block diagram of a memory system which can be used with embodiments of the present invention; [0016] FIG. 4 is a circuit schematic which illustrates a memory cell in accordance with an embodiment of the present invention; [0017] FIGS. 5, 7, 8, 10-11, 13-14, and 16-21 illustrate, in cross section, a memory cell of FIG. 4 and method steps for its fabrication in accordance with the various embodiments of the invention; [0018] FIGS. 6, 9, 12, 15, and 22 illustrate, in top plan view, the memory cell of FIG. 4 and method steps for its fabrication in accordance with various embodiments of the invention; [0019] FIG. 23 is a timing diagram which illustrates voltages applied to control lines during operation of the memory cell of FIG. 4 in accordance with an embodiment of the present invention; [0020] FIG. 24 is a circuit schematic which illustrates a memory cell in accordance with another embodiment of the present invention; [0021] FIGS.
5, 7, 8, 10-11, 13-14, and 16-21 illustrate, in cross section, a memory cell of FIG. 24 and method steps for its fabrication in accordance with the various embodiments of the invention; [0022] FIGS. 6, 9, 10, 12, and 25 illustrate, in top plan view, the memory cell of FIG. 24 and method steps for its fabrication in accordance with various embodiments of the invention; and [0023] FIG. 26 is a timing diagram which illustrates voltages applied to control lines during operation of the memory cell of FIG. 24 in accordance with an embodiment of the present invention. DETAILED DESCRIPTION [0024] The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. All of the implementations described below are exemplary implementations provided to enable persons skilled in the art to make or use the invention and are not intended to limit the scope of the invention, which is defined by the claims. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description. [0025] For the sake of brevity, conventional techniques related to transistor design and manufacturing, the control of memory devices, memory cell programming, memory cell erasing, and other functional aspects of the devices and systems (and the individual operating components of the devices and systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent exemplary functional relationships and/or physical couplings between the various elements.
It should be noted that many alternative or additional functional relationships or physical connections may be present in an embodiment of the invention. [0026] The following description refers to elements or nodes or features being "connected" or "coupled" together. As used herein, unless expressly stated otherwise, "connected" means that one element, node or feature is directly joined to (or directly communicates with) another element, node or feature. Likewise, unless expressly stated otherwise, "coupled" means that one element, node or feature is directly or indirectly joined to (or directly or indirectly communicates with) another element, node or feature. [0027] In the description and the claims, numerical ordinals, such as the terms "first," "second," "third," "fourth," if any, may be used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable. Under appropriate circumstances, embodiments of the invention described herein are capable of fabrication or operation in sequences other than those illustrated or otherwise described herein. [0028] Furthermore, the terms "comprise," "include," "have," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, article, or apparatusthat comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. [0029] FIG. 3 is a block diagram of a memory system 340 which can be used with embodiments of the present invention. The memory system 340 is a simplified representation of an exemplary embodiment, and an actual system 340 may also include conventional elements, logic, components, and functionality not shown in FIG. 3. 
The memory system 340 can perform operations including write one (1), read one (1), write zero (0), and read zero (0) with respect to a memory array 342. [0030] The memory system 340 includes the memory array 342, which comprises a plurality of memory cells whose word lines and bit lines are commonly arranged into rows and columns, respectively, row and column decoders 344, 348, and sense amplifier circuitry 346. Each memory cell is designated with a row address and a column address. For a particular memory cell, a particular word line controls access to its particular storage element by allowing or preventing the signal (representing a logic "0" or a logic "1") carried on a particular bit line to be written to or read from the storage element. Thus, each memory cell can store one bit of data as a logical "0" or logical "1." [0031] The bit lines of the memory array 342 can be connected to the sense amplifier circuit 346, while its word lines can be connected to the row decoder 344. Address and control signals are input on address/control lines 361 into the memory system 340. The address/control lines 361 are connected to the column decoder 348, the sense amplifier circuit 346 and the row decoder 344. The address/control lines 361 are used, among other things, to gain read and write access to the memory array 342. [0032] The column decoder 348 is connected to the sense amplifier circuit 346 via control and column select signals on column select lines 362. The sense amplifier circuitry 346 receives input data destined for the memory array 342 and outputs data read from the memory array 342 over input/output (I/O) data lines 363. Data is read from the cells of the memory array 342 by activating a word line (via the row decoder 344), which couples all of the memory cells corresponding to that word line to respective bit lines 360, which define the columns of the array. One or more bit lines are also activated.
When a particular word line and bit lines are activated, thereby selecting a bit or bits, the sense amplifier circuitry 346 connected to a bit line detects and amplifies the data in the selected bit by measuring the potential difference between the activated bit line and a reference line. [0033] FIG. 4 is a circuit schematic which illustrates a memory cell 410 in accordance with an embodiment of the present invention. While a single memory cell 410 is illustrated in FIG. 4, it will be appreciated by those skilled in the art that in practical implementations, the memory cell 410 is likely to be one of a large number of memory cells that are interconnected in an integrated circuit. Those of skill in the art will understand that memory cell 410 is likely to be implemented in a memory cell array that can include thousands or more of such memory cells. In one embodiment, the memory cell 410 can be implemented as one of the memory cells within the memory array 342 of the memory system 340 illustrated in FIG. 3. [0034] The memory cell 410 comprises a gated lateral thyristor (GLT) device 460, a write access transistor 470, a read access transistor 480 and a sensing transistor 490. A plurality of control lines are used to operate the memory cell 410, including a word line 420, a write enable line 430, a supply line 432, a write bit line 452, and a read bit line 454. In one implementation, the word line 420 comprises polysilicon, the write enable line 430 and the supply line 432 each comprise a first metal layer, and the write bit line 452 and the read bit line 454 each comprise a second metal layer. [0035] In one implementation, each of the transistors 470, 480, 490 is a MOSFET and thus includes a source electrode, a drain electrode, and a gate electrode.
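The addressing flow of paragraphs [0030]-[0032] can be sketched in a few lines. This is an illustrative model, not from the patent: the array contents and the 0.5 V reference are assumed values, and the function name is hypothetical.

```python
def read_bit(cell_voltages, row_addr, col_addr, v_ref=0.5):
    """Sketch of a read: the row decoder activates one word line, the
    column decoder selects one bit line, and the sense amplifier compares
    the selected bit-line potential against a reference line."""
    active_row = cell_voltages[row_addr]   # word-line activation (row decoder)
    bit_line_v = active_row[col_addr]      # column select (column decoder)
    return 1 if bit_line_v > v_ref else 0  # sense-amplifier comparison

# Assumed bit-line potentials driven by a 2x2 array of cells.
cell_voltages = [[0.9, 0.1],
                 [0.1, 0.9]]
print(read_bit(cell_voltages, 0, 0))  # 1
print(read_bit(cell_voltages, 1, 0))  # 0
```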
Although the term "MOSFET" properly refers to a device having a metal gate electrode and an oxide gate insulator, that term will be used throughout to refer to any semiconductor device that includes a conductive gate electrode (whether metal or other conductive material) that is positioned over a gate insulator (whether oxide or other insulator) which, in turn, is positioned over a semiconductor substrate (whether silicon or other semiconductor material). The MOSFET transistors can be either NMOSFETs or PMOSFETs depending on the implementation. In FIG. 4, the write access transistor 470 includes a source electrode 472, a drain electrode 474, and a gate electrode 475 that is coupled to the word line 420. The read access transistor 480 includes a source electrode 482, a drain electrode 484, and a gate electrode 485. The sensing transistor 490 includes a source electrode 492, a drain electrode 494, and a gate electrode 495. [0036] The gated-lateral thyristor (GLT) device is represented by symbol 460 in FIG. 4. It is to be understood that the GLT device 460 comprises a thyristor 462 (represented as two diodes in series) and a Metal Oxide Semiconductor (MOS) capacitor connected to the thyristor 462, as illustrated, for instance, in FIG. 20. In general, the thyristor is a bi-stable, three-terminal device which comprises a gated electrode 465, a cathode region 464, an anode region 466, and a pair of base regions (not shown) disposed between the anode region 466 and the cathode region 464. Contacts are made to the anode region 466 to create an anode terminal, to the cathode region 464 to create a cathode terminal, and to the gated electrode 465 to create a gate terminal. PN or NP junctions are formed between the anode region 466 and one of the base regions, between the pair of base regions, and between the other one of the base regions and the cathode region 464.
In the GLT device 460, the MOS capacitor (not shown) is connected to one of the base regions (not shown) of the thyristor 462. [0037] In one exemplary embodiment of the memory cell 410, which will be described below with respect to FIGS. 5-20, the transistors 470, 480, 490 are NMOSFETs, and the GLT device 460 comprises a PNPN thyristor 462 coupled to a MOS capacitor. As illustrated in FIG. 20, the PNPN thyristor 462 includes a gated electrode 465 (that serves as one plate of the MOS capacitor), a P-type anode region 466, an N-type base region 468, a P-type base region 463 and an N-type cathode region 464 arranged in a PNPN configuration, where the N-type and P-type base regions 468, 463 are laterally disposed between the P-type anode region 466 and the N-type cathode region 464. As above, contacts are made to the P-type anode region 466, to the N-type cathode region 464, and to the gated electrode 465. A PN junction is formed between the P-type anode region 466 and the N-type base region 468, another PN junction is formed between the N-type base region 468 and the P-type base region 463, and yet another PN junction is formed between the P-type base region 463 and the N-type cathode region 464. The MOS capacitor of the GLT device 460 includes the gated electrode 465, the P-type base region, and a gate insulator layer disposed between the gated electrode 465 and the P-type base region. The gate insulator layer serves as the capacitor dielectric. The N-type base region and the P-type base region are adjacent one another. The MOS capacitor is connected to the P-base region of the thyristor. In an alternative exemplary embodiment, the transistors 470, 480, 490 are PMOSFETs, and the GLT device 460 comprises a thyristor coupled to a MOS capacitor, where the thyristor is arranged in an NPNP configuration, and the MOS capacitor is connected to an N-base. [0038] FIG.
4 illustrates various nodes 441, 442, 443, 444, 445, 446, 448, 449 to help illustrate the electrical and/or physical couplings between the different devices 460, 470, 480, 490 and the various control lines 420, 430, 432, 452, 454 that make up the memory cell 410. The various nodes do not necessarily imply that the different devices 460, 470, 480, 490 and control lines 420, 430, 432, 452, 454 that make up the memory cell 410 are directly connected to one another, and in some embodiments additional intervening devices (not illustrated) may be present between a particular device and a given node. [0039] The cathode node 464 of the GLT device 460 is coupled to the drain electrode 474 of the write access transistor 470 and the gate electrode 495 of the sensing transistor 490 at node 444. The gated electrode 465 of the GLT device 460 is coupled to the write enable line 430 at node 446, and the anode node 466 of the GLT device 460 is coupled to the supply line 432 at node 448. [0040] The sensing transistor 490 is coupled to the supply line 432 at node 449, and coupled to the drain electrode 474 of the write access transistor 470 and the cathode node 464 of the GLT device 460 at node 444. The source electrode 492 of the sensing transistor 490 is coupled to the drain electrode 484 of the read access transistor 480 at node 445. The sensing transistor 490 senses the voltage at node 444. For example, if the GLT device 460 stores a logical one (1), the voltage level at node 444 will be "high" (e.g., greater than 0.5 volts) and large enough to turn on the sensing transistor 490, and the sensing transistor 490 induces a voltage change on the read bit line 454. If the GLT device 460 stores a logical zero (0), the voltage level at node 444 will be approximately 0.0 volts and the sensing transistor 490 does not induce a voltage change on the read bit line 454, as the sensing transistor 490 will remain off. [0041] In the schematic of FIG.
4, the write access transistor 470 and the read access transistor 480 are illustrated as being coupled to the word line 420, and the gate electrode 485 of the read access transistor 480 is illustrated as being coupled to the gate electrode 475 of the write access transistor 470 at node 443. Even though the gate electrodes 475, 485 are illustrated as being coupled at node 443, it will be appreciated by those skilled in the art that the gate electrodes 475, 485 are actually portions of the word line 420 and are formed from a common layer of conductive material, such as polysilicon. [0042] In the embodiment illustrated in FIG. 4, the source electrode 472 of the write access transistor 470 is coupled to the write bit line 452 at node 441, the source electrode 482 of the read access transistor 480 is coupled to the read bit line 454 at node 442, and the drain electrode 494 of the sensing transistor 490 is coupled to the supply line 432 at node 449. The write access transistor 470 controls write access during a write operation via the write bit line 452 by switching only when the write bit line 452 is not in standby mode. The standby mode refers to a holding state between read and write operations during which the word line 420 is at a holding voltage. The read access transistor 480 controls read access during a read operation via the read bit line 454. By providing separate write and read bit lines 452, 454 along with a separate write access transistor 470 and a separate read access transistor 480, the reading and writing operations are completely isolated from each other since the read and write paths are decoupled from one another, thereby eliminating the read disturbance issues mentioned above. Operation of the memory cell 410 will be described in greater detail below with reference to FIG. 23 following a description of the method steps used to fabricate the memory cell 410. [0043] FIGS.
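The decoupled write and read paths of memory cell 410, as described in paragraphs [0039]-[0042], can be captured in a short behavioral model. This sketch is not from the patent: the class name, the boolean control levels, and the treatment of node 444 as a simple stored voltage are illustrative assumptions; only the 0.5 V sensing threshold is taken from paragraph [0040].

```python
class GLTRAMCell410:
    """Hypothetical behavioral model of memory cell 410: writes drive
    the GLT storage node (node 444) through write access transistor 470;
    reads sense node 444 through sensing transistor 490 and read access
    transistor 480 onto the read bit line 454."""

    V_SENSE = 0.5  # turn-on level of sensing transistor 490 at node 444

    def __init__(self):
        self.node_444_v = 0.0  # GLT cathode / sensing-transistor gate

    def write(self, word_line_on, write_bit_line_v):
        if word_line_on:                        # transistor 470 conducts
            self.node_444_v = write_bit_line_v  # GLT 460 latches or blocks

    def read(self, word_line_on):
        if not word_line_on:
            return None
        # Sensing transistor 490 turns on only when node 444 is high, and
        # it drives only the read bit line 454; the write bit line 452 is
        # untouched, so a read cannot disturb the stored state.
        return 1 if self.node_444_v > self.V_SENSE else 0
```

In this model, reading never modifies `node_444_v` or touches the write path, which is the decoupling property the paragraph above attributes to the separate read and write bit lines.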
5-22 illustrate a memory cell 410 and method steps for its fabrication in accordance with various embodiments of the invention. In particular, FIGS. 6, 9, 12, 15, 22 illustrate top plan views of the memory cell 410 and method steps for its fabrication, whereas FIGS. 5, 7, 8, 10-11, 13-14, and 16-21 illustrate cross sectional views of the memory cell 410 and method steps for its fabrication. The plan views illustrated in FIGS. 6, 9, 12, 15, 22 include upper and lower section lines. FIGS. 7, 11, 13, 16, 18, and 20 illustrate cross sectional views of the memory cell 410 taken across the upper section line, whereas FIGS. 8, 10, 14, 17, 19, and 21 illustrate cross sectional views of the memory cell 410 taken across the lower section line. [0044] In the illustrative embodiments which are described below, the exemplary memory cell 410 comprises three N-channel MOS (NMOS) transistors 470, 480, 490 and a GLT device 460 which comprises a PNPN thyristor coupled to a MOS capacitor. However, as will be explained below, similar method steps can be used to manufacture another memory cell comprising three P-channel MOS (PMOS) transistors and a GLT device which comprises a NPNP thyristor coupled to a MOS capacitor. [0045] Various steps in the manufacture of memory cells, MOS transistors and thyristors are well known and so, in the interest of brevity, many conventional steps will only be mentioned briefly herein or will be omitted entirely without providing the well known process details. As noted above, as used herein, the term "MOS transistor" is to be interpreted non-restrictively and refers to any semiconductor device that includes a conductive gate electrode that is positioned over a gate insulator which, in turn, is positioned over a semiconductor substrate. [0046] The initial steps in the fabrication of memory cell 410 are conventional so the initial steps themselves are not shown and will not be described in detail.
The manufacture begins with providing a semiconductor structure or substrate 401 in and on which a memory cell 410 is fabricated. The semiconductor substrate 401 can be either a bulk semiconductor material or a semiconductor-on-insulator (SOI) substrate. In accordance with an embodiment of the invention illustrated in FIG. 5, the semiconductor substrate 401 is illustrated as an SOI structure 401 which comprises at least one thin layer of semiconductor material 406 disposed on or over a buried oxide insulating layer 404 which, in turn, is supported by a carrier wafer or substrate 402 so that the buried oxide insulating layer 404 is disposed between the carrier wafer 402 and the semiconductor layer 406. Those of skill in the semiconductor art will appreciate that the semiconductor layer 406 can be a silicon layer, a germanium layer, a gallium arsenide layer, or other semiconductor materials. In one embodiment, the semiconductor layer 406 comprises a thin monocrystalline layer of silicon on the buried oxide insulating layer 404. The thin monocrystalline layer of silicon can be a silicon substrate having a (100) surface crystal orientation. The thin silicon layer preferably has a resistivity of at least about 1-35 Ohms per square. As used herein, the term "silicon layer" will be used to encompass the relatively pure silicon materials or lightly impurity-doped monocrystalline silicon materials typically used in the semiconductor industry as well as silicon admixed with small amounts of other elements such as germanium, carbon, and the like, as well as impurity dopant elements such as boron, phosphorus, and arsenic, to form a substantially monocrystalline semiconductor material. In one embodiment, the buried oxide insulating layer 404 can be, for example, a silicon dioxide layer, which preferably has a thickness of about 40-200 nm.
[0047] The semiconductor layer 406 can be impurity doped either with N-type conductivity determining impurities or P-type conductivity determining impurities depending on the conductivity type of the GLT device 460 and MOS transistors 470, 480, 490 to be formed. In an NMOS embodiment, the semiconductor layer 406 is doped with P-type conductivity determining impurities to create P-well regions 463, 471, 486, 493 in the semiconductor layer 406. Impurity doping can take place, for example, by the implantation and subsequent thermal annealing of dopant ions such as boron. Alternatively, in a PMOS embodiment, the semiconductor layer 406 can be doped with N-type conductivity determining impurities to create N-well regions (not shown) in the semiconductor layer 406. Impurity doping can take place, for example, by the implantation and subsequent thermal annealing of dopant ions such as phosphorus and arsenic. [0048] Once the P-well regions 463, 471, 486, 493 are formed, trenches can be etched into the semiconductor layer 406 for the formation of dielectric isolation regions (not shown) between adjacent memory cells. For example, the memory cell 410 can be electrically isolated from other memory cells (not shown) by a dielectric isolation region (not shown), preferably a shallow trench isolation (STI) region. As is well known, there are many processes that can be used to form the STI, so the process need not be described here in detail. In general, STI includes a shallow trench that is etched into the surface of the semiconductor layer 406 that is subsequently filled with an insulating material. After the trench is filled with an insulating material, such as an oxide, the surface is usually planarized, for example, by chemical mechanical planarization (CMP). [0049] As illustrated in FIGS. 
6-8, a layer of gate insulating material 408 is formed over the semiconductor layer 406 and gate electrodes 465, 475, 485, 495 are formed overlying the gate insulating material 408 and impurity-doped P-well regions 463, 471, 486, 493, respectively. The layer of gate insulating material 408 can be a layer of thermally grown silicon dioxide or, alternatively, a deposited insulator such as silicon oxide, silicon nitride, or an insulator material having a high dielectric constant (K) relative to silicon dioxide. Examples of "high-κ dielectric" materials include hafnium and zirconium silicates, and their oxides, including, but not limited to, hafnium oxide (HfO2), hafnium silicate (HfSiO), or the like. Deposited insulators can be deposited, for example, by chemical vapor deposition (CVD), low pressure chemical vapor deposition (LPCVD), plasma enhanced chemical vapor deposition (PECVD) or atomic layer deposition (ALD). The gate insulator layer 408 preferably has a thickness of about 1-10 nm, although the actual thickness can be determined based on the circuit being implemented. [0050] Gate electrodes 465, 475, 485, 495 are preferably formed by depositing a layer (not illustrated) of gate forming material overlying the layer of gate insulating material 408, and then patterning and etching the layer of gate forming material (as well as the underlying layer of gate insulating material 408) to form strips 420, 421, 422 of gate forming material that overlie remaining portions of the gate insulating material 408 as illustrated in FIG. 6. The layer of gate forming material, and hence the gate electrodes 465, 475, 485, 495, can be formed from a layer of polycrystalline silicon or other conductive materials such as metals. In one embodiment, the layer of gate forming material comprises a layer of undoped polycrystalline silicon having a thickness of about 100-300 nm.
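The gate-insulator options above (thermal SiO2 versus a high-K deposited film, 1-10 nm thick) trade off thickness against dielectric constant. The following Python sketch is purely illustrative and not part of the patent: the function name is hypothetical and the K values are nominal textbook figures, used only to show the parallel-plate relation C = ε0·K/t that governs the gate (and, later, MOS-capacitor) coupling.

```python
# Illustrative only: capacitance per unit area C = eps0 * K / t for a gate
# insulator of dielectric constant K and thickness t. Names and K values
# are assumptions, not taken from the patent text.

EPS0 = 8.854e-12  # F/m, vacuum permittivity

def cap_per_area_fF_um2(k: float, thickness_nm: float) -> float:
    """Return capacitance per unit area in fF/um^2 for a dielectric of
    constant k and thickness given in nanometers."""
    c_per_m2 = EPS0 * k / (thickness_nm * 1e-9)  # F/m^2
    return c_per_m2 * 1e3                        # 1 F/m^2 == 1e3 fF/um^2

# Thinner insulator -> higher capacitance; higher-K material (e.g. HfO2,
# nominal K ~ 20) -> higher capacitance at the same thickness.
assert cap_per_area_fF_um2(3.9, 2.0) > cap_per_area_fF_um2(3.9, 10.0)
assert cap_per_area_fF_um2(20.0, 5.0) > cap_per_area_fF_um2(3.9, 5.0)
```

This is why a high-K film can be made physically thicker (reducing leakage) while keeping the same capacitive coupling as a thin oxide.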
The polycrystalline silicon can be deposited, for example, by the reduction of silane (SiH4) in a CVD reaction such as a low pressure chemical vapor deposition (LPCVD). [0051] After the layer of gate forming material and the layer of gate insulating material 408 are patterned and etched, the gate electrodes 465, 475, 485, 495 have been formed, each overlying a remaining portion of the gate insulating material 408. As illustrated in FIGS. 9-11, openings in the gate insulating material 408 expose portions of the P-well regions 463, 471, 486, 493 adjacent the gate electrodes 465, 475, 485, 495, and a mask layer 498 is formed overlying a portion of the P-well region 463. At least a surface portion of the exposed portions of P-well regions 463, 471, 486, 493 can be impurity doped with N-type conductivity determining impurities to create lightly doped extension regions 456 in the semiconductor layer 406 adjacent the gate electrodes 465, 475, 485, 495. Impurity doping can take place, for example, by the implantation and subsequent thermal annealing of dopant ions such as arsenic. [0052] As illustrated in FIGS. 12-14, sidewall spacers 469 and insulating spacer block 467 are then formed. In one embodiment, a blanket layer of insulating material (not illustrated), such as a dielectric layer of silicon oxide and/or silicon nitride, is conformally deposited overlying the gate electrodes 465, 475, 485, 495 and exposed portions of the semiconductor layer 406 including the lightly doped extension regions 456. A layer of photosensitive material, such as photoresist, is then applied over the blanket layer of insulating material, and is patterned to leave a remaining portion 496 and to expose other portions of the blanket insulating layer.
The exposed portions of the blanket insulating layer (i.e., those not covered by remaining photosensitive material 496) are then anisotropically etched with etchants, for example, by reactive ion etching (RIE), to form sidewall spacers 469 on sidewalls 412, 413, 414, 416, 417, 418, 419 of the gate electrodes 465, 475, 485, 495 and to form an insulating spacer block 467 on sidewall 415 of gate electrode 465. Silicon oxide and silicon nitride can be etched, for example, in a CHF3, CF4, or SF6 chemistry. The insulating spacer block 467 overlies a portion of the semiconductor layer 406, a portion of gate electrode 465, and a sidewall 415 of gate electrode 465. The remaining portions of the photosensitive material 496 are then removed. [0053] As illustrated in FIGS. 15-17, another layer of masking material, which can be, for example, a layer of photoresist, is then applied and patterned to provide an ion implant mask 499. The ion implant mask 499 covers regions of the semiconductor layer 406 which correspond to the eventual locations of the N-type base region/anode region 468, 466, and exposes regions of the semiconductor layer 406 which correspond to the eventual locations of a source region 472, a common drain/cathode region 474, 464, a source region 482, a common drain/source region 484, 492, and drain region 494. The source region 472, drain/cathode region 474, 464, source region 482, common drain/source region 484, 492, and drain region 494 are implanted at approximately zero degrees as represented by the arrows 497. In this exemplary embodiment, N-type conductivity determining ions, such as phosphorus or arsenic, are implanted. The layer of masking material 499 is then removed. [0054] As illustrated in FIGS. 
15, 18 and 19, a layer of masking material 501, which can be, for example, a layer of photoresist, is then applied over the gate electrodes 465, 475, 485, 495, and patterned to provide an ion implant mask which exposes regions of the semiconductor layer 406 which correspond to the eventual locations of an N-base region 468 and an anode region 466. The N-base region 468 is implanted at an angle with respect to a line 504 that is perpendicular to an upper surface of the semiconductor layer 406, as represented by the arrows 503, to create the N-base region 468 which extends under the insulating spacer block 467. The N-base region 468 is preferably implanted at an angle that is greater than zero (0) degrees and less than or equal to forty-five (45) degrees with respect to a line 504 that is perpendicular to an upper surface of the semiconductor layer 406. In this exemplary embodiment, N-type conductivity determining ions, such as phosphorus or arsenic, are implanted. Next, as illustrated in FIGS. 15, 20 and 21, the anode region 466 is implanted at approximately zero degrees as represented by the arrows 505 with P-type conductivity determining ions, such as boron, using a high-energy ion beam to form P-type anode region 466 of the GLT device 460. In an alternate embodiment, N-type conductivity determining ions, such as phosphorus or arsenic, are implanted. Formation of the P-type anode region 466 splits the N-type base region/anode region 468, 466 into two portions: an N-type base region 468 and a P-type anode region 466 of the GLT device 460. The N-type base region 468 is disposed between the P-well region 463 and the P-type anode region 466. [0055] The layer of masking material 501 is then removed, and the resultant memory cell 410 structure is subjected to a rapid thermal anneal (RTA) process by exposing the memory cell 410 to controlled periods of high temperature.
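The tilted N-base implant described above reaches under the insulating spacer block because a beam inclined from the surface normal displaces the implanted profile laterally. As a rough first-order geometric sketch, under the assumption that lateral reach scales as depth times the tangent of the tilt angle (the function name and numbers are illustrative, not from the patent):

```python
import math

# Assumption: for a beam tilted by `tilt_deg` from the wafer normal, the
# lateral displacement of the implanted profile under a mask edge is
# approximately depth * tan(tilt). This is a geometric sketch only.

def lateral_reach_nm(implant_depth_nm: float, tilt_deg: float) -> float:
    """Approximate lateral penetration (nm) under a mask/spacer edge for a
    tilted implant of the given projected depth and tilt angle."""
    return implant_depth_nm * math.tan(math.radians(tilt_deg))

# At 0 degrees the beam lands straight down (no reach under the spacer);
# at the 45-degree upper bound the lateral reach equals the implant depth.
assert lateral_reach_nm(100.0, 0.0) == 0.0
assert abs(lateral_reach_nm(100.0, 45.0) - 100.0) < 1e-9
```

This illustrates why the patent bounds the tilt between 0 and 45 degrees: enough tilt to push the N-base region 468 under the spacer block 467, without excessive lateral spread.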
The RTA step electrically activates the ions in the N-type source region 472, the N-type drain/cathode region 474, 464, the N-type base region 468, the P-type anode region 466, the N-type source region 482, the N-type common drain/source region 484, 492, and the N-type drain region 494 and causes outward lateral diffusion (not illustrated) of dopant ions implanted in those regions. In addition, although not illustrated, silicide regions (not illustrated) can then be formed on the surface of exposed regions of the gate electrodes 465, 475, 485, 495, the N-type source region 472, the N-type drain/cathode region 474, 464, the N-type base region 468, the P-type anode region 466, the N-type source region 482, the N-type common drain/source region 484, 492, and the N-type drain region 494. The silicide regions provide a mechanism for electrically coupling contacts to these regions. In addition, the N-type drain/cathode region 474, 464 can be electrically coupled to the gate electrode 495 via a silicide region 444, as illustrated in FIG. 22. [0056] As illustrated in FIG. 22, the memory cell 410 can be completed by well-known steps (not illustrated) such as depositing a layer of dielectric material, etching openings through the dielectric material, and forming metallization that extends through the openings to electrically contact the various devices. For example, insulating material can be deposited overlying the gate electrodes 465, 475, 485, 495 and the exposed portions of the semiconductor layer 406 including the N-type source region 472, the N-type drain/cathode region 474, 464, the P-type anode region 466, the N-type source region 482, the N-type common drain/source region 484, 492, and the N-type drain region 494, and etched to form contact holes or openings that extend through the insulating material to the N-type source region 472, the P-type anode region 466, the N-type source region 482, and the N-type drain region 494.
A conductive layer (not shown) of interconnect metal or other conductive material can then be deposited in the contact holes and patterned to leave remaining portions that comprise the interconnection metallization to silicide regions (not illustrated) formed on the N-type source region 472, the P-type anode region 466, the N-type source region 482 and the N-type drain region 494. Vias can then be formed that extend through another layer of insulating material to the interconnection metallization to provide an electrical pathway to the interconnection metallization. A metal-1 layer can then be deposited overlying at least the vias and patterned to form a write enable line 430 that electrically contacts the gate electrode 465 and N-type base region 468 of the GLT device 460 and a supply line 432 that electrically contacts a silicide region of the P-type anode region 466 of the GLT device 460 and a silicide region formed on the N-type drain region 494 of the sensing transistor 490. Another layer of insulating material (not shown) can then be deposited overlying the write enable line 430 and the supply line 432, vias 451, 455 can be formed that extend through the insulating material, and a metal-2 layer can then be deposited overlying at least the vias 451, 455 and patterned to form a write bit line 452 that electrically contacts via 451 and a read bit line 454 that electrically contacts via 455. [0057] Thus, as illustrated in FIGS. 4 and 22, the memory cell 410 comprises the GLT device 460, the NMOS write access transistor 470, the NMOS read access transistor 480 and the sensing transistor 490. The NMOS write access transistor 470 is fabricated adjacent the NMOS read access transistor 480 and the GLT device 460 on the semiconductor layer 406, and the sensing transistor 490 is fabricated adjacent the NMOS read access transistor 480 and the GLT device 460 on the semiconductor layer 406.
[0058] The GLT device 460 comprises a lateral NPNP thyristor coupled to a MOS capacitor 463, 408, 465. The lateral NPNP thyristor comprises alternating N-type and P-type material which include a P-type anode region 466, an N-type base region 468, a P-type base region 463 and an N-type cathode region 464, where the base regions 463, 468 are laterally disposed between the P-type anode region 466 and N-type cathode region 464. A PN junction (J1) is formed between the P-type anode region 466 and the N-type base region 468, another PN junction (J2) is formed between the N-type base region 468 and the P-type base region 463, and yet another PN junction (J3) is formed between the P-type base region 463 and the N-type cathode region 464. The MOS capacitor 463, 408, 465 of the GLT device 460 includes a gate electrode 465, the P-type base region 463, and a gate insulator layer 408 disposed between the gate electrode 465 and the P-type base region 463. The gate insulator layer 408 serves as the capacitor dielectric. The N-type base region 468 and the P-type base region 463 are adjacent one another. When the P-type anode region 466 is at a positive potential with respect to the N-type cathode region 464 (with no voltage applied at the gate electrode 465), then PN junction (J1) and PN junction (J3) are forward biased, while PN junction (J2) is reverse biased. As PN junction (J2) is reverse biased, no conduction takes place (off state). If a positive potential applied to the P-type anode region 466 is increased beyond a breakdown voltage (VBK) of the thyristor, avalanche breakdown of PN junction (J2) takes place and the thyristor starts conducting (on state). If a positive potential (VG) is applied at the gate electrode 465 with respect to the N-type cathode region 464, the breakdown of PN junction (J2) occurs at a lower value of the positive potential. By selecting an appropriate value of VG, the thyristor can be quickly switched into the on state.
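The switching behavior described in paragraph [0058] can be captured as a first-order condition: the thyristor conducts when the anode-cathode voltage exceeds a breakover threshold, and a positive gate bias lowers that threshold. The Python model below is a hypothetical sketch only; the constants VBK and the linear gate-sensitivity factor are illustrative assumptions, not values from the patent.

```python
# Hypothetical first-order model (not from the patent) of the gated
# thyristor turn-on condition: the device conducts once the anode-cathode
# voltage exceeds a breakover voltage that a positive gate bias reduces.

V_BK = 2.0    # volts; nominal breakover voltage (illustrative assumption)
K_GATE = 1.5  # gate sensitivity factor (illustrative assumption)

def glt_conducts(v_anode_cathode: float, v_gate: float = 0.0) -> bool:
    """True when the thyristor enters its on (conducting) state.

    With no gate bias, junction J2 blocks until avalanche breakdown at
    V_BK. A positive gate voltage lowers the effective breakover point.
    """
    effective_breakover = V_BK - K_GATE * max(v_gate, 0.0)
    return v_anode_cathode > effective_breakover

assert not glt_conducts(1.0)          # below breakover: J2 blocks, off state
assert glt_conducts(2.5)              # above breakover: avalanche, on state
assert glt_conducts(1.0, v_gate=1.0)  # gate bias lowers the turn-on voltage
```

The last assertion mirrors the text: by choosing an appropriate gate voltage VG, the same anode potential that would otherwise leave J2 blocking is enough to switch the thyristor on.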
[0059] The MOS capacitor 463, 408, 465 is capacitively coupled to the P-base region 463 of the thyristor, and holds charge thereby controlling the potential of the P-base region 463 of the thyristor. The voltage level of the P-base region 463 determines whether or not NPN action of the N-type base region 468, the P-type base region 463, and the N-type cathode region 464 is triggered. [0060] Although the example above is an NMOS embodiment, those skilled in the art will appreciate that an alternative PMOS embodiment can be fabricated by switching conductivity types of the various regions that make up the devices. For example, in an alternative exemplary embodiment, the transistors 470, 480, 490 comprise PMOS transistors, and the GLT device 460 comprises a thyristor arranged in a PNPN configuration with the MOS capacitor connected to an N-base of the thyristor. In the PMOS embodiment (not illustrated), the well regions 463, 471, 486, 493 are N-well regions, and exposed portions of the N-well regions 463, 471, 486, 493 can be doped with P-type conductivity determining impurities to create lightly doped extension regions and source/drain regions in the semiconductor layer 406. Impurity doping can take place, for example, by the implantation and subsequent thermal annealing of dopant ions such as boron difluoride (BF2) for lightly doped extension regions and boron for source/drain regions. [0061] As will be described below with reference to FIG. 23, memory cell 410 is operated using a plurality of control lines which include word line 420, write enable line 430, supply line 432, write bit line 452, and read bit line 454. This memory cell 410 arrangement, among other things, prevents read disturbances during read operations by decoupling the read and write bit lines 454, 452, as will be described below with reference to FIG. 23. [0062] FIG.
23 is a timing diagram which illustrates voltage waveforms 510, 520, 530, 540 applied to control lines 420, 430, 454, 452 of the memory cell 410 of FIG. 4 during reading and writing operations of the memory cell 410 in accordance with an embodiment of the present invention. As described in detail below, the memory cell 410 can be operated in any one of a number of different modes including write one (1) mode 590, read one (1) mode 592, write zero (0) mode 594, and read zero (0) mode 596. [0063] The memory cell 410 can be designed to operate using different voltages, and any values specified below are merely exemplary and provided to illustrate one particular non- limiting implementation. The power supply line 432 is grounded throughout operation of the memory cell 410, and therefore is not illustrated in FIG. 23. The voltage waveform 510 applied to the word line 420 ranges from a low value of approximately 0.0 volts to a high value of approximately 1.2 volts. Voltage waveform 510 transitions from the low value to the high value when the word line 420 is activated. The voltage waveform 520 applied to the write enable line 430 ranges from a low value of approximately -1.5 volts to a high value of approximately 0.0 volts. Voltage waveform 520 transitions from the low value to the high value when the write enable line 430 is activated during either a write one (1) operation that occurs during the write one (1) mode 590 or a write zero (0) operation that occurs during the write zero (0) mode 594. The voltage waveforms 530, 540 applied to the write and read bit lines 452, 454 range from a low value of approximately 0.0 volts to a high value of approximately 2.0 volts. 
In particular, voltage waveform 530 transitions from the low value to the high value when the read bit line 454 is activated during a read one (1) mode 592, and the voltage waveform 540 applied on the write bit line 452 transitions from the low value to the high value when the write bit line 452 is activated during the write zero (0) mode 594. [0064] During either write operation, the memory cell 410 is selected or activated by applying high voltage (Vdd) to the word line 420, and applying a low voltage to the read bit line 454 to turn "off" the read access transistor 480 of the memory cell 410. When the write enable line 430 is at low voltage relative to the anode region 466 of the GLT device 460, no current flows in the GLT device 460 until a voltage pulse 522 (e.g., 0.0 volts) is applied to the write enable line 430. Writing operations take place by applying a voltage pulse 522, 526 to the write enable line 430, which causes a current to flow in the GLT device 460 allowing either a zero (0) or one (1) to be written to the memory cell 410. [0065] For the write one (1) operation that occurs during the write one (1) mode 590, a low voltage, for example, between 0.0 volts and 0.5 volts, is applied to both the read and write bit lines 452, 454 thereby applying a low voltage to the source electrode 472 of the write access transistor 470 and the source electrode 482 of the read access transistor 480, and high voltage is applied to the word line 420 and hence to the gate electrodes 475, 485 of the write access transistor 470 and the read access transistor 480. The write enable line 430 is coupled to the gated electrode 465 of the GLT device 460. A one (1) is written to the memory cell 410 when voltage pulse 526 is applied to the write enable line 430.
[0066] For the write zero (0) operation that occurs during the write zero (0) mode 594, high voltage is applied to the write bit line 452 thereby applying a high voltage to the source electrode 472 of the write access transistor 470, while the word line 420 is held at high potential thereby applying a high voltage to the gate electrodes 475, 485 of the write access transistor 470 and the read access transistor 480, and the read bit line 454 is held at low voltage thereby applying a low voltage to the source electrode 482 of the read access transistor 480. The write enable line 430 is coupled to the gated electrode 465 which is capacitively coupled to the p-base 463 of the GLT device 460. A zero (0) is written to the memory cell 410 when voltage pulse 522 is applied to the write enable line 430 since the voltage pulse 522 decreases the potential of the p-base 463 of the GLT device 460 thereby turning off the GLT device 460. [0067] During either read operation, the memory cell 410 is selected or activated by applying high voltage to the word line 420, applying a low voltage to or grounding the write bit line 452, and applying low voltage to the write enable line 430 so that no current flows in the GLT device 460 thereby preventing a write operation from taking place. Because the write bit line 452 is kept at low voltage during read operations 592, 596, the read disturbance problem can be eliminated. Moreover, memory cell 410 can be operated without a periodic refreshing operation because the current between the cathode region 464 and the anode region 466 is not limited during the standby mode or "holding state" that occurs between read operations 596, 592 and write operations 594, 590. [0068] For the read one (1) operation that occurs during the read one (1) mode 592, the memory cell 410 will have previously been written with a one (1).
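The drive conditions described in paragraphs [0064] through [0067] can be summarized as a small lookup table. The Python snippet below is an illustrative sketch only: the dictionary structure and names are hypothetical, while the voltage levels are the exemplary figures given in the text for memory cell 410 (word line 0.0/1.2 V, write enable -1.5/0.0 V, bit lines 0.0/2.0 V).

```python
# Illustrative summary (names are hypothetical; levels are the exemplary
# values from the text) of how memory cell 410's control lines are driven.

WL_HIGH, LOW = 1.2, 0.0        # word line swing, volts
WE_PULSE, WE_IDLE = 0.0, -1.5  # write enable pulse / idle level, volts
BL_HIGH = 2.0                  # bit line high level, volts

CELL_410_MODES = {
    #             word line       write enable     write bit        read bit
    "write_1": {"word": WL_HIGH, "we": WE_PULSE, "wbl": LOW,     "rbl": LOW},
    "write_0": {"word": WL_HIGH, "we": WE_PULSE, "wbl": BL_HIGH, "rbl": LOW},
    "read_1":  {"word": WL_HIGH, "we": WE_IDLE,  "wbl": LOW,     "rbl": BL_HIGH},
    "read_0":  {"word": WL_HIGH, "we": WE_IDLE,  "wbl": LOW,     "rbl": LOW},
}

# Both read modes keep the write enable line idle (so no GLT current flows,
# preventing an accidental write) and hold the write bit line low, which is
# what eliminates the read-disturbance problem in this layout.
for mode in ("read_1", "read_0"):
    assert CELL_410_MODES[mode]["we"] == WE_IDLE
    assert CELL_410_MODES[mode]["wbl"] == LOW
```

In the "read_1" row, the high read bit line level represents the sensed outcome (waveform 530 rising) rather than an externally forced drive; the line starts pre-charged and rises only if the cell stores a one.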
The GLT device 460 will be in a high state (also referred to as a "forward breaking mode") that raises the potential of the node 444 between the GLT device 460 and the write access transistor 470. High potential at node 444 turns the sensing transistor 490 "on." The read bit line 454 is pre-charged to ground (0.0 volts). When high voltage is applied to the word line 420 the read access transistor 480 turns on, and the sensing transistor 490 and read access transistor 480 allow a current to pass from the anode 466 to the read bit line 454 via supply line 432. When the voltage applied on bit line 454 increases, the sense amplifier circuit 346 senses that data one (1) is being read from the memory cell 410. [0069] For the read zero (0) operation that occurs during the read zero (0) mode 596, the memory cell 410 will have previously been written with a zero (0). The GLT device 460 will be in a low state (also referred to as a "reverse breaking mode"). The potential at node 444 between the GLT device 460 and the write access transistor 470 is approximately zero and no current is passing through the GLT device 460. When zero bias at node 444 is applied to the sensing transistor 490, the sensing transistor 490 will be in its "off" state and current cannot flow from the anode 466 to the read bit line 454. If the voltage on the pre-charged read bit line 454 does not change, then the sense amplifier circuit 346 senses that data zero (0) is being read from the memory cell 410. [0070] FIG. 24 is a circuit schematic which illustrates a memory cell 610 in accordance with another embodiment of the present invention. The memory cell 610 of FIG. 24 includes many of the same elements and interconnections as the memory cell 410 of FIG. 4. The same reference numerals used in FIG. 4 are reused in FIG. 24 unless the arrangement or structure of memory cell 610 has changed. For sake of brevity, commonly numbered elements in FIGS.
4 and 24 will not be described in detail here again, and only the differences between the memory cell 610 of FIG. 24 and that of FIG. 4 will be described below. As in FIG. 4, the memory cell 610 comprises a gated lateral thyristor (GLT) device 460, a write access transistor 470, a read access transistor 480 and a sensing transistor 490, and a plurality of control lines are used to operate the memory cell 610 including a word line 420, a write enable line 430, a supply line 632, a write bit line 452, and a read bit line 454. [0071] The memory cell 610 illustrated in FIG. 24 differs from the memory cell 410 of FIG. 4 in that the supply line 632 is relocated such that it is coupled to the source electrode 472 of the write access transistor 470 at node 633. In addition, the anode 466 of the GLT device 460 and drain 494 of the sensing transistor 490 are coupled to one another via conductive line 634 that couples node 448 to node 449. Nodes 448, 449 are also coupled to the write bit line 452 at node 635. The sensing transistor 490 senses the voltage at node 444 in a similar way as described above with respect to FIG. 4, the write access transistor 470 controls write access in a similar way as described above with respect to FIG. 4, and the read access transistor 480 controls read access in a similar way as described above with respect to FIG. 4. As such, operation of these elements will not be described herein again. As in FIG. 4, the memory cell 610 can eliminate the read disturbance problem mentioned above by providing separate write and read bit lines 452, 454 to decouple the read and write paths from one another. Operation of the memory cell 610 will be described in greater detail below with reference to FIG. 26 following a description of method steps used to fabricate the memory cell 610. [0072] FIGS. 5-21 and 25 illustrate a memory cell 610 and method steps for its fabrication in accordance with various embodiments of the invention. FIGS.
5-21 have been described above, and for sake of brevity will not be repeated. Method steps for the fabrication of memory cell 610 will now be described with reference to FIG. 25, which illustrates a top plan view of the memory cell 610. In the alternative memory cell 610 layout of FIG. 25, a metal-1 layer is deposited overlying the vias 442, 446, 448, 449 and remaining portions of the layer of insulating material 409, and patterned, for example by etching, to form a supply line 632, a write enable line 430 and metal line 634 that couples via 448 to via 449. Via 448 electrically contacts a silicide region (not illustrated) formed on the P-type anode 466 of the GLT device 460, and via 449 electrically contacts a silicide region (not illustrated) formed on the N-type drain region 494 of the sensing transistor 490. The supply line 632 electrically contacts via 441, which electrically contacts a silicide region (not illustrated) of the source electrode 472 of the write access transistor 470. Another layer of insulating material (not illustrated) is deposited overlying the insulating material 409, the supply line 632, the write enable line 430 and metal line 634, and portions of the insulating material are then anisotropically etched to form a via hole that extends through the insulating material 411 to via 442 and the metal line 634. The via hole can then be filled with conductive material to form a via that electrically contacts the via 442 and the metal line 634. Thereafter, a metal-2 layer (not shown) can then be deposited overlying at least vias 455, 635 and remaining portions of the layer of insulating material, and patterned to form a write bit line 452 that electrically contacts via 635 and a read bit line 454 that electrically contacts via 455. [0073] FIG. 26 is a timing diagram which illustrates voltage waveforms 710, 720, 730, 740 applied to control lines 420, 430, 454, 452 of the memory cell 610 of FIG.
24 during reading and writing operations of the memory cell 610 in accordance with an embodiment of the present invention. As described in detail below, the memory cell 610 can be operated in any one of a number of different modes including write one (1) mode 790, read one (1) mode 792, write zero (0) mode 794, and read zero (0) mode 796. [0074] The memory cell 610 can be designed to operate using different voltages, and any values specified below are merely exemplary and provided to illustrate one particular non-limiting implementation. The power supply line 632 is grounded throughout operation of the memory cell 610, and therefore is not illustrated in FIG. 26. The voltage waveform 710 applied to the word line 420 ranges from a low value of approximately 0.0 volts to a high value of approximately 1.2 volts. Voltage waveform 710 transitions from the low value to the high value when the word line 420 is activated. The voltage waveform 720 applied to the write enable line 430 ranges from a low value of approximately -1.5 volts to a high value of approximately 0.0 volts. Voltage waveform 720 transitions from the low value to the high value when the write enable line 430 is activated during either a write one (1) operation that occurs during the write one (1) mode 790 or a write zero (0) operation that occurs during the write zero (0) mode 794. The voltage waveforms 730, 740 applied to the write and read bit lines 452, 454 range from a low value of approximately 0.0 volts to a high value of approximately 1.2 volts. In particular, voltage waveform 730 transitions from the low value of zero (0) volts to the high value when the read bit line 454 is activated during a read one (1) mode 792, and the voltage waveform 740 applied on the write bit line 452 transitions from the high value to the low value when the write bit line 452 is activated during the write zero (0) mode 794.
[0075] During either write operation, the memory cell 610 is selected or activated by applying high voltage (Vdd) to the word line 420, and applying a low voltage to the read bit line 454 to turn "off" the read access transistor 480 of the memory cell 610. When the write enable line 430 is at low voltage relative to the anode region 466 of the GLT device 460, no current flows in the GLT device 460 until a voltage pulse 722 (e.g., 0.0 volts) is applied to the write enable line 430. Writing operations take place by applying a voltage pulse 722, 726 to the write enable line 430, which causes a current to flow in the GLT device 460 allowing either a zero (0) or one (1) to be written to the memory cell 610. [0076] For the write one (1) operation that occurs during the write one (1) mode 790, a low voltage, for example, between 0.0 volts and 0.5 volts, is applied to the read bit line 454 thereby applying a low voltage to the source electrode 482 of the read access transistor 480, a high voltage, for example, between 1.0 volts and 1.5 volts, is applied to the write bit line 452 thereby applying a high voltage to the source electrode 472 of the write access transistor 470, and high voltage is applied to the word line 420 and hence to the gate electrodes 475, 485 of the write access transistor 470 and the read access transistor 480. The write enable line 430 is coupled to the gated electrode 465 of the GLT device 460. A one (1) is written to the memory cell 610 when voltage pulse 726 is applied to the write enable line 430. 
[0077] For the write zero (0) operation that occurs during the write zero (0) mode 794, a low voltage between 0.0 volts and 0.5 volts is applied to the write bit line 452 thereby applying a low voltage to the source electrode 472 of the write access transistor 470, while the word line 420 is held at high potential thereby applying a high voltage to the gate electrodes 475, 485 of the write access transistor 470 and the read access transistor 480, and the read bit line 454 is held at low voltage thereby applying a low voltage to the source electrode 482 of the read access transistor 480. The write enable line 430 is coupled to the gated electrode 465 which is capacitively coupled to the p-base 463 of the GLT device 460. A zero (0) is written to the memory cell 610 when voltage pulse 722 is applied to the write enable line 430 since the voltage pulse 722 decreases the potential of the p-base 463 of the GLT device 460. [0078] During either read operation, the memory cell 610 is selected or activated by applying high voltage to the word line 420, applying a high voltage to the write bit line 452, and applying low voltage to the write enable line 430 so that no current flows in the GLT device 460 thereby preventing a write operation from taking place. Because the write bit line 452 is kept at high voltage during read operations 792, 796, the read disturbance problem can be eliminated. Moreover, memory cell 610 can be operated without a periodic refreshing operation because the current between anode and cathode 464 is not limited during the standby mode or "holding state" that occurs between read operations 796, 792 and write operations 794, 790. [0079] For the read one (1) operation that occurs during the read one (1) mode 792, the memory cell 610 will have previously been written with a one (1). 
The GLT device 460 will be in a high state (also referred to as a "forward breaking mode") that raises the potential of the node 444 between the GLT device 460 and the write access transistor 470. High potential at node 444 turns the sensing transistor 490 "on." The read bit line 454 is pre-charged to ground (0.0 volts). When high voltage is applied to the word line 420 the read access transistor 480 turns on, and the sensing transistor 490 and read access transistor 480 allow a current to pass from the anode 466 to the write bit line 452 and to the drain 494 of sensing transistor 490 via line 634. When the voltage applied on bit line 454 increases, the sense amplifier circuit 346 senses that data one (1) is being read from the memory cell 610. [0080] For the read zero (0) operation that occurs during the read zero (0) mode 796, the memory cell 610 will have previously been written with a zero (0). The GLT device 460 will be in a low state (also referred to as a "reverse breaking mode"). The potential at node 444 between the GLT device 460 and the write access transistor 470 is approximately zero and no current is passing through the GLT device 460. When zero bias at node 444 is applied to the sensing transistor 490, the sensing transistor 490 will be in its "off" state and current cannot flow from the anode 466 to the write bit line 452 and to the drain 494 of sensing transistor 490 via line 634. If the voltage on the pre-charged read bit line 454 does not change, then the sense amplifier circuit 346 senses that data zero (0) is being read from the memory cell 610. [0081] While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the invention in any way. 
Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or exemplary embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the invention as set forth in the appended claims and the legal equivalents thereof. |
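For orientation, the four operating modes described above for memory cell 610 can be summarized as control-line states. This is an editor's illustrative sketch, not part of the patent; the names (`MODES`, `is_write`) are invented, and the voltage levels are the approximate values given in paragraphs [0074]-[0080].

```python
# Illustrative summary of the memory cell 610 operating modes (FIG. 26).
# Assumption: levels are the approximate voltages stated in the description.

HIGH, LOW = 1.2, 0.0          # word line / bit line swing (volts)
WE_HIGH, WE_LOW = 0.0, -1.5   # write enable line 430 swing (volts)

# Line states while the cell is selected (word line 420 HIGH in every mode).
MODES = {
    "write_one_790":  {"word_420": HIGH, "write_bit_452": HIGH,
                       "read_bit_454": LOW,  "write_enable_430": WE_HIGH},
    "write_zero_794": {"word_420": HIGH, "write_bit_452": LOW,
                       "read_bit_454": LOW,  "write_enable_430": WE_HIGH},
    # During reads the write bit line stays HIGH and the write enable line
    # stays LOW so no current flows in the GLT device (no read disturb);
    # the read bit line is pre-charged to ground and then sensed.
    "read_792_796":   {"word_420": HIGH, "write_bit_452": HIGH,
                       "read_bit_454": None, "write_enable_430": WE_LOW},
}

def is_write(mode):
    """A write occurs only when the write enable line 430 is pulsed high."""
    return MODES[mode]["write_enable_430"] == WE_HIGH
```

The table makes the key protocol point explicit: the write enable pulse, not the bit lines alone, is what commits a write to the cell.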
A multiple channel transistor provides a transistor with an improved drive current and speed by using tunable hot carrier effects. A thin gate oxide has a carrier confinement layer formed on top thereof. Holes produced by hot carrier effects are retained by the carrier confinement layer directly above the gate oxide layer. The holes switch on the bottom transistor of the multi-channel transistor, thereby increasing the drive current. |
What is claimed is:1. A multiple channel transistor comprising:a silicon substrate;a first gate oxide layer on the substrate;a carrier confinement layer directly on the first gate oxide layer;a silicon layer with a heavily doped region at one end on the carrier confinement layer, the silicon layer and the carrier confinement layer having sidewalls;a second gate oxide layer on the silicon layer;a gate on the second gate oxide layer; and source and drain regions in the substrate and on the silicon layer sidewalls and the carrier confinement layer sidewalls.2. The transistor of claim 1, wherein the carrier confinement layer is between about 50 Å to about 200 Å thick.3. The transistor of claim 2, wherein the carrier confinement layer consists of SiGe.4. The transistor of claim 2, wherein the carrier confinement layer consists of doped Ge.5. The transistor of claim 2, wherein the carrier confinement layer consists of a high dielectric constant semiconductor.6. The transistor of claim 1, wherein the silicon layer is a strained silicon layer.7. The transistor of claim 1, wherein the substrate includes a strained silicon layer.8. The transistor of claim 1, wherein the carrier confinement layer is a hole confinement layer.9. The transistor of claim 1, wherein the carrier confinement layer is an electron confinement layer.10. The transistor of claim 1, wherein the heavily doped region is doped to a concentration between 1×10^17 to 1×10^20. |
This application is a continuation of application Ser. No. 10/754,619, filed Jan. 12, 2004, now abandoned.FIELD OF THE INVENTIONThe present invention relates to the field of semiconductor devices, and more particularly, to multi-channel devices.BACKGROUND OF THE INVENTIONA conventional MOSFET operates by driving current through a channel region between the source and drain of a device. The conductivity of the channel region is modulated by the application of a voltage on the conducting gate above the channel surface and insulated from it. Efforts are ongoing within many MOS integrated circuit manufacturing companies as well as at many universities and government laboratories to improve the speed and available drive currents of MOSFETs, to reduce their power consumption, and to improve their reliability and radiation hardness for applications in harsher remote environments, including space.FIG. 1 shows a conventional partially depleted SOI (silicon-on-insulator) MOSFET that has been provided to achieve some of the improvements in speed and drive currents that have been needed. The SOI transistor 10 includes a silicon substrate 12 on which a buried oxide layer 14 is provided. A body layer 16, made of silicon, forms the area in which the semiconductor devices are located. The SOI transistor 10 includes a source region 18, a drain region 20 and a gate 26 that is provided on a gate oxide layer 22. Spacers 24 are formed on the sidewalls of the gate 26 and are employed as masks during the source/drain implantation process.One of the concerns of a traditional partially depleted SOI MOSFET, such as the SOI transistor 10 of FIG. 1, is the decrease in the threshold voltage Vt of the transistor 10 due to hot carrier effects. As is well known, hot carrier effects in a transistor generate electron/hole pairs. Driven by electric fields, the electrons drift towards the gate 26, while the holes tend to drift toward the buried oxide layer 14. 
This movement of the holes toward the buried oxide layer undesirably decreases the threshold voltage Vt of the transistor 10.A plot of Ids vs. Vds is shown in FIG. 2 for a conventional SOI MOSFET transistor 10, such as that depicted in FIG. 1. As can be readily appreciated, the well-known "kink effect" depicted in FIG. 2 is due to the holes that have drifted near the buried oxide, the uncontrolled kinking increasing the substrate bias and thereby decreasing the threshold voltage Vt.One of the goals in semiconductor processing is to maximize the use of the available silicon area. This allows increased miniaturization of the electronic circuitry. In particular, it is desirable to maximize the drive current for a given silicon area.SUMMARY OF THE INVENTIONThere is a need for providing a MOSFET in which the transistor drive current is increased, without increasing the gate voltage or increasing leakage current. The structure should be compatible with existing fabrication techniques and improve transistor operating speed without requiring more lithography levels or changes in overall layout designs.These and other needs are met by embodiments of the present invention that provide a multiple channel transistor comprising a silicon substrate, a first gate oxide layer on the substrate, and a carrier confinement layer on the first gate oxide layer. A silicon layer is provided on the carrier confinement layer, the silicon layer and the carrier confinement layer having sidewalls. A second gate oxide layer is provided on the silicon layer, and a gate is formed on the second gate oxide layer. Source and drain regions are provided in the substrate and on the silicon layer sidewalls and the carrier confinement layer sidewalls.The use of a carrier confinement layer formed on a first gate oxide layer provides a multiple channel transistor in which charge carriers are confined to the region above the gate oxide layer, which is much thinner than the buried oxide layer of SOI MOSFETs. 
Hence, the hot carrier effects produce holes, for example, that are confined by the carrier confinement layer above the gate oxide layer. These holes near the bottom gate oxide layer switch on the bottom channel formed in the substrate. The hot carrier effect is thereby tunable to produce a controlled kinking in the Ids vs. Vds plot. This improves the drive current in comparison to conventional devices.The earlier stated needs are also met by other embodiments of the present invention which provide a multi-channel transistor comprising a first channel and a second channel, and a carrier confinement layer between the first and second channels. The carrier confinement layer operates to confine carriers produced by hot carrier effects in the first channel, these carriers switching on the second channel.The foregoing and other features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 is a schematic, cross-sectional view of an SOI MOSFET constructed in accordance with prior art methodologies.FIG. 2 is a plot of Ids vs. Vds for the SOI MOSFET of FIG. 1.FIG. 3 is a plot of Ids vs. Vds for the multi-channel transistor of FIG. 9 of the present invention.FIG. 4 is a schematic, cross-sectional view of a multi-channel transistor during one phase of manufacture in accordance with embodiments of the present invention.FIG. 5 shows the structure of FIG. 4 following the deposition of a sacrificial layer, a gate oxide layer and a polysilicon gate layer in accordance with embodiments of the present invention.FIG. 6 shows the structure of FIG. 5 after etching to form the gate in accordance with embodiments of the present invention.FIG. 7 depicts the structure of FIG. 6 following the formation of a protective layer over the gate in accordance with embodiments of the present invention.FIG. 
8 shows the structure of FIG. 7 after etching of the protective layer to form protective sidewall spacers on the gate and a removal of the sacrificial layer in accordance with embodiments of the present invention.FIG. 9 depicts the structure of FIG. 8 following the conformal deposition of a silicon layer and etching of the silicon layer to form source and drain regions in accordance with embodiments of the present invention.DETAILED DESCRIPTIONThe present invention addresses and solves problems related to increasing the drive current of transistors and achieves this, in part, by the proper adjustment and tuning of hot carrier effects in a multi-channel transistor. The invention provides for a multi-channel transistor having a relatively thin gate oxide layer provided on a silicon substrate and a charge confinement layer on the gate oxide layer. The charge confinement layer, which may be silicon germanium or other high dielectric constant semiconductor material, operates to confine charge carriers produced by the hot carrier effects. Rather than allowing the holes created by hot carrier effects to drift towards a buried oxide layer and decrease the threshold voltage of a transistor, the holes created by the hot carrier effects in the present invention are confined by the charge confinement layer directly above a gate oxide layer to switch on the channel formed in the substrate underneath the gate oxide layer. This produces an increased drive current for a given silicon area. The multiple-channel approach of the present invention achieves an improved MOSFET drive current with no increase in the gate voltage or leakage current, and improves transistor operating speed without increasing the number of lithography levels or requiring a change in layout designs.FIGS. 4-9 describe the method of making a multiple-channel device in accordance with embodiments of the present invention. 
The description will discuss certain materials and process steps in an exemplary manner, but it should be recognized that these materials and process steps are exemplary only as other materials and process steps may be employed without departing from the scope of the present invention.FIG. 4 is a schematic, cross-sectional view of a portion of a semiconductor device during one phase of manufacture in accordance with embodiments of the present invention. A stack 30 has been created on a substrate 32 by a dry etching of layers that have been previously formed. The stack 30 of FIG. 4 includes a first gate oxide layer 34 that has been formed to a thickness of between about 10 to about 20 Å in exemplary embodiments. Conventional techniques for forming a gate oxide layer may be employed.A charge confinement layer 36 is formed on the first gate oxide layer 34. The charge confinement layer 36, in preferred embodiments, is a high dielectric constant semiconductor material. Candidate materials include silicon germanium and p-Ge material. These materials are particularly well suited for use as a charge confinement layer for confining holes for n-channel devices. For p-channel devices, a material should be selected that will confine electrons suitably. Candidate materials include SiGeC. These materials are exemplary only, as other charge confinement materials may be employed without departing from the scope of the present invention. An exemplary thickness range for the charge confinement layer is between about 50 Å to about 200 Å.A silicon layer 38 is formed on the carrier confinement layer 36. In certain embodiments of the invention, the silicon layer 38 is a strained silicon layer as there is a mismatch between the silicon in layer 38 and the silicon germanium (or p-Ge) in layer 36. This serves to improve carrier mobility in the silicon layer 38.A specially doped region 40 is provided that has been doped by an angled, or tilted, doping to create a more heavily doped region. 
For example, the doping may be between 1×10^17 to 1×10^20. The specially doped region will provide the desired hot carrier effects in the present invention. Suitable techniques for creating a specially doped region, such as by tilted doping or angled doping, are well-known to those of skill in the art.Once the layers are formed, the stack 30 is created by a dry etch technique, such as by reactive ion etching. The doping of region 40 is performed after the dry etching has been performed. Conventional spacer formation, extension implants and source/drain implants are then performed at this stage of the process.FIG. 5 shows the structure of FIG. 4 following the further processing of layers on the substrate 32 and the stack 30. These layers include a sacrificial layer 41 that is deposited and then planarized. An exemplary material for the sacrificial layer 41 is a nitride, such as silicon nitride. The planarization may be accomplished by chemical-mechanical planarization, for example.Following the formation of the sacrificial layer 41, a second gate oxide layer 42 is formed by conventional methodologies. An exemplary thickness for the second gate oxide layer 42 is between about 10 to about 30 Å, for example. This range of thicknesses is exemplary only, however. A polysilicon gate layer 44 is deposited over the gate oxide layer 42 in a conventional manner.FIG. 6 shows the structure of FIG. 5 following the dry etching of the polysilicon gate layer 44 and the second gate oxide layer 42. The etching is selective so that the etch stops upon the sacrificial layer 41.The sides of the gate 44 must be protected from contact with the source and drain regions that will be formed on the sides thereof. Accordingly, in FIG. 7, a protective layer 46 is deposited over the gate 44, as well as being formed on the sacrificial layer 41. In certain embodiments of the invention, the protective layer 46 is an oxide, for example. 
A relatively thin layer 46 may be employed, and is advantageous in that etching may be performed more rapidly with a thinner layer, while still providing a sufficiently thick protective spacer on the sidewalls of the gate 44. Conventional deposition and etching techniques may be employed to form the protective layer 46.FIG. 8 depicts the structure of FIG. 7 following the dry, anisotropic etching of the protective layer 46, leaving the protective spacers 48 on the sidewalls of the gate 44. Following this first dry etching of the protective layer 46, a wet etching is then performed to remove the sacrificial layer 41. A selective wet etch is used to remove the nitride, for example, of the sacrificial layer 41 without etching the protective spacers 48 or the gate oxides 42, 34.FIG. 9 shows the structure of FIG. 8 after a silicon layer has been deposited, by chemical vapor deposition (CVD), for example. Deposition of a doped silicon layer is performed, or a post-deposition implantation is used to dope the deposited silicon. Dopant concentration in these regions 50 is between about 5×10^17 to about 1×10^20, for example. A silicon etch is employed to form the silicon regions 50 that contact the stack 30. These silicon regions 50 are electrically isolated from the gate electrode 44 by the second gate oxide 42 and the protective spacers 48. Hence, source and drain regions 50 are created by the silicon regions.In operation, the specially doped region 40 serves to create hot carrier effects that produce holes (in an n-channel transistor) that drift towards the first gate oxide layer 34. These charge carriers are retained by the carrier confinement layer 36. Without such a layer, the carriers would tend to drift throughout the silicon region 38 and not provide the desired effect. Thus, with the holes retained by the carrier confinement layer 36 directly above the first gate oxide layer 34, a channel is formed in the substrate 32, switching the transistor on. 
Also, in the silicon region 38, another channel is formed underneath the second gate oxide layer 42. There are therefore two channels formed in this multi-channel transistor. The hot carrier effects are controlled by the present invention to produce increased drive current, as shown in FIG. 3, that depicts an exemplary plot of Ids vs. Vds. The kink is a controlled kink, leading to the increased drive current provided by the multi-channel transistor of the present invention, including the carrier confinement layer 36.Although the present invention has been described and illustrated in detail, it is to be clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the scope of the present invention being limited only by the terms of the appended claims. |
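The layer stack built up in FIGS. 4-9 can be captured as data for reference. This is an editor's sketch, not from the patent: the list and helper names are invented, and only the exemplary thickness ranges (in angstroms) and doping ranges stated in the description are encoded.

```python
# Illustrative summary of the multi-channel transistor stack (FIGS. 4-9).
# Assumption: thickness ranges are the exemplary values given in the text;
# layers with no stated thickness carry None.

STACK = [  # bottom to top; (layer name, thickness range in angstroms)
    ("substrate_32",           None),
    ("first_gate_oxide_34",    (10, 20)),
    ("carrier_confinement_36", (50, 200)),   # SiGe, p-Ge, or similar
    ("silicon_layer_38",       None),        # optionally strained silicon
    ("second_gate_oxide_42",   (10, 30)),
    ("gate_44",                None),        # polysilicon
]

DOPED_REGION_40 = (1e17, 1e20)   # angled/tilted implant concentration range
SOURCE_DRAIN_50 = (5e17, 1e20)   # source/drain silicon regions

def in_range(value, bounds):
    """Check that a proposed value falls in a stated exemplary range."""
    lo, hi = bounds
    return lo <= value <= hi
```

Encoding the ranges this way makes the two-channel geometry explicit: both channels are separated only by the thin confinement layer, which is why the confined holes can switch on the bottom channel.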
An apparatus and method are described for a sharing aware snoop filter. For example, one embodiment of a processor comprises: a plurality of caches, each of the caches comprising a plurality of cache lines, at least some of which are to be shared by two or more of the caches; a snoop filter to monitor accesses to the plurality of cache lines shared by the two or more caches, the snoop filter comprising: a primary snoop filter comprising a first plurality of entries, each entry associated with one of the plurality of cache lines and comprising N unique identifiers to uniquely identify up to N of the plurality of caches currently storing the cache line; an auxiliary snoop filter comprising a second plurality of entries, each entry associated with one of the plurality of cache lines, wherein once a particular cache line has been shared by more than N caches, an entry for that cache line is allocated in the auxiliary snoop filter to uniquely identify one or more additional caches storing the cache line. |
CLAIMSWhat is claimed is:1. A processor comprising:a plurality of caches, each of the caches comprising a plurality of cache lines, at least some of which are to be shared by two or more of the caches;a snoop filter to monitor accesses to the plurality of cache lines shared by the two or more caches, the snoop filter comprising:a primary snoop filter comprising a first plurality of entries, each entry associated with one of the plurality of cache lines and comprising N unique identifiers to uniquely identify up to N of the plurality of caches currently storing the cache line;an auxiliary snoop filter comprising a second plurality of entries, each entry associated with one of the plurality of cache lines, wherein once a particular cache line has been shared by more than N caches, an entry for that cache line is allocated in the auxiliary snoop filter to uniquely identify one or more additional caches storing the cache line.2. The processor as in claim 1 wherein the N identifiers comprise first identification data uniquely identifying a current owner of the cache line and second identification data uniquely identifying a first sharer of the cache line.3. The processor as in claim 1 wherein once the cache line is stored by more than the current owner and the first sharer, an allocation for the cache line is made in the auxiliary snoop filter to uniquely identify one or more additional caches storing the cache line.4. The processor as in claim 1 wherein a first entry associated with a first cache line is to be initially stored in the primary snoop filter but not in the auxiliary snoop filter, the first entry to be copied from the primary snoop filter to the auxiliary snoop filter when the first cache line has been shared by more than N caches.5. 
The processor as in claim 4 wherein the first entry is removed from the primary snoop filter upon being copied to the auxiliary snoop filter, the first entry to be copied back to the primary snoop filter and removed from the auxiliary snoop filter upon being shared by N or fewer caches.6. The processor as in claim 1 wherein a first entry associated with a first cache line is to be initially allocated in the primary snoop filter, a first auxiliary entry to be allocated within the auxiliary snoop filter when the first cache line has been shared by more than N caches, the first auxiliary entry to include identification data for each individual cache sharing the first cache line.7. The processor as in claim 6 wherein the first entry in the primary snoop filter includes coarse-grained identification data identifying a group of caches which may be caching the first cache line.8. The processor as in claim 1 wherein the plurality of caches comprise a plurality of intra-core caches.9. The processor as in claim 8 wherein the intra-core caches comprise Level 1 (L1) and/or Level 2 (L2) caches.10. The processor as in claim 9 wherein the plurality of caches include one or more uncore caches.11. A method comprising:allocating a first entry for a first cache line in a primary snoop filter, the first entry comprising N unique identifiers to uniquely identify up to N caches currently storing the cache line;detecting that a number of caches currently storing the first cache line is greater than N; and responsively allocating a first auxiliary entry for the first cache line in an auxiliary snoop filter to uniquely identify one or more additional caches storing the cache line.12. The method as in claim 11 wherein the N identifiers comprise first identification data uniquely identifying a current owner of the first cache line and second identification data uniquely identifying a first sharer of the first cache line.13. 
The method as in claim 11 wherein once the first cache line is stored by more than the current owner and the first sharer, an allocation for the first cache line is made in the auxiliary snoop filter to uniquely identify one or more additional caches storing the cache line.14. The method as in claim 11 wherein a first entry associated with the first cache line is to be initially stored in the primary snoop filter but not in the auxiliary snoop filter, the first entry to be copied from the primary snoop filter to the auxiliary snoop filter when the first cache line has been shared by more than N caches.15. The method as in claim 14 wherein the first entry is removed from the primary snoop filter upon being copied to the auxiliary snoop filter, the first entry to be copied back to the primary snoop filter and removed from the auxiliary snoop filter upon being shared by N or fewer caches.16. The method as in claim 11 wherein a first auxiliary entry is to be allocated within the auxiliary snoop filter when the first cache line has been shared by more than N caches, the first auxiliary entry to include identification data for each individual cache sharing the first cache line.17. The method as in claim 16 wherein the first entry in the primary snoop filter includes coarse-grained identification data identifying a group of caches which may be caching the first cache line.18. The method as in claim 11 wherein the plurality of caches comprise a plurality of intra-core caches.19. The method as in claim 18 wherein the intra-core caches comprise Level 1 (L1) and/or Level 2 (L2) caches.20. The method as in claim 19 wherein the plurality of caches include one or more uncore caches.21. 
A system comprising:a memory to store instructions and data;a processor to execute the instructions and process the data; a graphics processor to perform graphics operations in response to graphics instructions;a network interface to receive and transmit data over a network;an interface for receiving user input from a mouse or cursor control device, the plurality of cores executing the instructions and processing the data responsive to the user input;the processor comprising:a plurality of caches, each of the caches comprising a plurality of cache lines, at least some of which are to be shared by two or more of the caches;a snoop filter to monitor accesses to the plurality of cache lines shared by the two or more caches, the snoop filter comprising:a primary snoop filter comprising a first plurality of entries, each entry associated with one of the plurality of cache lines and comprising N unique identifiers to uniquely identify up to N of the plurality of caches currently storing the cache line;an auxiliary snoop filter comprising a second plurality of entries, each entry associated with one of the plurality of cache lines, wherein once a particular cache line has been shared by more than N caches, an entry for that cache line is allocated in the auxiliary snoop filter to uniquely identify one or more additional caches storing the cache line.22. The system as in claim 21 wherein the N identifiers comprise first identification data uniquely identifying a current owner of the cache line and second identification data uniquely identifying a first sharer of the cache line.23. The system as in claim 21 wherein once the cache line is stored by more than the current owner and the first sharer, an allocation for the cache line is made in the auxiliary snoop filter to uniquely identify one or more additional caches storing the cache line.24. 
The system as in claim 21 wherein a first entry associated with a first cache line is to be initially stored in the primary snoop filter but not in the auxiliary snoop filter, the first entry to be copied from the primary snoop filter to the auxiliary snoop filter when the first cache line has been shared by more than N caches.25. The system as in claim 24 wherein the first entry is removed from the primary snoop filter upon being copied to the auxiliary snoop filter, the first entry to be copied back to the primary snoop filter and removed from the auxiliary snoop filter upon being shared by N or fewer caches. |
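The primary/auxiliary allocation scheme recited in the claims can be sketched as a toy model. This is an editor's illustration under assumed semantics, not the patented implementation: the class and method names are invented, and the real filter tracks sharers with hardware CV bits rather than Python lists.

```python
# Toy model of the claimed sharing-aware snoop filter: the primary filter
# tracks up to N unique sharers per cache line; once a line is shared by
# more than N caches, an auxiliary entry is allocated to uniquely identify
# the additional sharers (cf. claims 1 and 11).

class SharingAwareSnoopFilter:
    def __init__(self, n=2):
        self.n = n           # sharers tracked perfectly in the primary filter
        self.primary = {}    # cache line -> list of up to n cache ids
        self.auxiliary = {}  # cache line -> list of overflow cache ids

    def record_access(self, line, cache_id):
        """Record that cache_id now holds a copy of the given line."""
        sharers = self.primary.setdefault(line, [])
        if cache_id in sharers or cache_id in self.auxiliary.get(line, []):
            return  # already tracked
        if len(sharers) < self.n:
            sharers.append(cache_id)          # tracked uniquely in primary
        else:
            # overflow: allocate/extend an auxiliary entry for this line
            self.auxiliary.setdefault(line, []).append(cache_id)

    def sharers(self, line):
        """Exact sharer list; invalidations need no spurious snoops."""
        return self.primary.get(line, []) + self.auxiliary.get(line, [])
```

The point of the split is that the common case (a line held by at most N caches) stays in the small primary structure, while widely shared lines still get exact tracking instead of aliased coarse-grained CV bits.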
SHARING AWARE SNOOP FILTER APPARATUS AND METHODBACKGROUNDField of the Invention[0001] This invention relates generally to the field of computer processors. More particularly, the invention relates to a sharing aware snoop filter apparatus and method.Description of the Related Art1. Processor Microarchitectures[0002] An instruction set, or instruction set architecture (ISA), is the part of the computer architecture related to programming, including the native data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O). It should be noted that the term "instruction" generally refers herein to macro-instructions - that is instructions that are provided to the processor for execution - as opposed to micro-instructions or micro-ops - that is the result of a processor's decoder decoding macro-instructions. The micro-instructions or micro-ops can be configured to instruct an execution unit on the processor to perform operations to implement the logic associated with the macro-instruction.[0003] The ISA is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, Intel® Pentium 4 processors, Intel® Core™ processors, and processors from Advanced Micro Devices, Inc. of Sunnyvale, CA implement nearly identical versions of the x86 instruction set (with some extensions that have been added with newer versions), but have different internal designs. 
For example, the same register architecture of the ISA may be implemented in different ways in different microarchitectures using well-known techniques, including dedicated physical registers, one or more dynamically allocated physical registers using a register renaming mechanism (e.g., the use of a Register Alias Table (RAT), a Reorder Buffer (ROB) and a retirement register file). Unless otherwise specified, the phrases register architecture, register file, and register are used herein to refer to that which is visible to the software/programmer and the manner in which instructions specify registers. Where a distinction is required, the adjective "logical," "architectural," or "software visible" will be used to indicate registers/files in the register architecture, while different adjectives will be used to designate registers in a given microarchitecture (e.g., physical register, reorder buffer, retirement register, register pool).

2. Snoop Filters

[0004] The snoop filter (SF), also known as a "tag-directory," is an on-die structure that tracks the presence of cache lines in the different levels of the cache hierarchy across the tiles on a die. The term "tiles" is used here to represent an agent that has a cache associated with it and accesses memory via an intra-die interconnection network. A tile, for example, can be associated with a single core, multiple cores, accelerators and/or I/O agents. Cache line tracking in the SF is done using presence bits, known as core-valid bits (CV bits), and helps maintain coherence for data streams and instruction streams across the caches. To reduce the area occupied by the SF, the tracking bits in each entry follow an encoded pattern. For every cached line, up to two unique sharers of the line can be tracked perfectly, i.e., their exact identity is stored in the SF entry. If a cache line is shared by more than 2 caches, however, it is tracked in a coarse-grained manner, with each CV bit used to represent multiple caches/tiles.
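The coarse-grained CV-bit tracking just described, and the aliasing it causes, can be illustrated with a minimal sketch (the tile count, CV-bit count, and tile-to-bit grouping below are hypothetical values chosen only to show the effect):

```python
# Minimal sketch of coarse-grained CV-bit tracking. Hypothetical sizing:
# 16 tiles tracked with 8 CV bits, so each bit covers a group of 2 tiles.
NUM_TILES = 16
NUM_CV_BITS = 8
TILES_PER_BIT = NUM_TILES // NUM_CV_BITS

def set_cv_bit(cv_bits, tile_id):
    """Mark tile_id as (possibly) caching the line."""
    return cv_bits | (1 << (tile_id // TILES_PER_BIT))

def possible_sharers(cv_bits):
    """Tiles that must receive invalidations: every tile in every set group."""
    tiles = []
    for bit in range(NUM_CV_BITS):
        if cv_bits & (1 << bit):
            tiles.extend(range(bit * TILES_PER_BIT, (bit + 1) * TILES_PER_BIT))
    return tiles

cv = 0
cv = set_cv_bit(cv, 4)       # only tile 4 actually caches the line...
print(possible_sharers(cv))  # ...yet tile 5 aliases onto the same CV bit
```

Here only tile 4 caches the line, yet tile 5 would also receive a spurious invalidation because both tiles map onto the same CV bit.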
As the number of tiles (and hence caches) on the die increases, the level of coarse-grained encoding used for the CV bits has also been steadily increasing (from one bit representing 1 tile to one bit representing 6 or more tiles). This coarse-grained representation leads to CV bit aliasing, whereby a core which never accesses a line can appear as currently caching it.

[0005] The CV bit aliasing outlined above can lead to the following inefficiencies in the coherence protocol. First, aliased CV bits can cause multiple spurious messages to be sent out on the intra-die interconnect network, leading to performance loss, extra network messages and energy usage. For example, read-for-ownership requests and capacity evictions from the SF, which send invalidate messages based on the number of CV bits set in the SF, will send unnecessary invalidations to cores that never even cached the address and cause extra acknowledgement messages. Second, due to the aliased nature of CV bits for widely shared lines, evictions from the core caches are unable to clear the CV bit that represents them in the SF, leading to stale entries (i.e., entries which are no longer required but still have valid bits). In order to counter this problem, certain processors provision the SF to be larger than the capacity of the on-die caches so that back-invalidations from the SF do not incur a performance loss. As applications become more multi-threaded and more cores are integrated on-die, this problem is expected to be exacerbated. While all application segments with parallel codes could be impacted by this problem, high-performance computing (HPC) in particular is a key target for this inefficiency as it tends to use many parallel threads.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:

[0007] FIGS.
1A and 1B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention;

[0008] FIGS. 2A-D are block diagrams illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention;

[0009] FIG. 3 is a block diagram of a register architecture according to one embodiment of the invention;

[0010] FIG. 4A is a block diagram illustrating both an exemplary in-order fetch, decode, retire pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;

[0011] FIG. 4B is a block diagram illustrating both an exemplary embodiment of an in-order fetch, decode, retire core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention;

[0012] FIG. 5A is a block diagram of a single processor core, along with its connection to an on-die interconnect network;

[0013] FIG. 5B illustrates an expanded view of part of the processor core in FIG. 5A according to embodiments of the invention;

[0014] FIG. 6 is a block diagram of a single core processor and a multicore processor with integrated memory controller and graphics according to embodiments of the invention;

[0015] FIG. 7 illustrates a block diagram of a system in accordance with one embodiment of the present invention;

[0016] FIG. 8 illustrates a block diagram of a second system in accordance with an embodiment of the present invention;

[0017] FIG. 9 illustrates a block diagram of a third system in accordance with an embodiment of the present invention;

[0018] FIG. 10 illustrates a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present invention;

[0019] FIG.
11 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention;

[0020] FIGS. 12A-B illustrate exemplary snoop filter operations;

[0021] FIG. 13 illustrates exemplary percentages of cache lines having greater than 2 sharers for different applications;

[0022] FIG. 14 illustrates frequency of accesses to shared data for a particular HPC application;

[0023] FIG. 15 illustrates one embodiment of a processor architecture on which a sharing aware snoop filter may be implemented;

[0024] FIGS. 16A-B illustrate two embodiments of a sharing aware snoop filter; and

[0025] FIG. 17 illustrates a method in accordance with one embodiment of the invention.

DETAILED DESCRIPTION

[0026] In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. It will be apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the embodiments of the invention.

EXEMPLARY PROCESSOR ARCHITECTURES AND DATA TYPES

[0027] An instruction set includes one or more instruction formats. A given instruction format defines various fields (number of bits, location of bits) to specify, among other things, the operation to be performed (opcode) and the operand(s) on which that operation is to be performed. Some instruction formats are further broken down through the definition of instruction templates (or subformats).
For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, October 2011; and see Intel® Advanced Vector Extensions Programming Reference, June 2011).

Exemplary Instruction Formats

[0028] Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

A. Generic Vector Friendly Instruction Format

[0029] A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations).
While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

[0030] Figures 1A-1B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention. Figure 1A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the invention; while Figure 1B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the invention. Specifically, a generic vector friendly instruction format 100 is shown for which are defined class A and class B instruction templates, both of which include no memory access 105 instruction templates and memory access 120 instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

[0031] While embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, less and/or different vector
operand sizes (e.g., 256 byte vector operands) with more, less, or different data element widths (e.g., 128 bit (16 byte) data element widths).

[0032] The class A instruction templates in Figure 1A include: 1) within the no memory access 105 instruction templates there is shown a no memory access, full round control type operation 110 instruction template and a no memory access, data transform type operation 115 instruction template; and 2) within the memory access 120 instruction templates there is shown a memory access, temporal 125 instruction template and a memory access, non-temporal 130 instruction template. The class B instruction templates in Figure 1B include: 1) within the no memory access 105 instruction templates there is shown a no memory access, write mask control, partial round control type operation 112 instruction template and a no memory access, write mask control, vsize type operation 117 instruction template; and 2) within the memory access 120 instruction templates there is shown a memory access, write mask control 127 instruction template.

[0033] The generic vector friendly instruction format 100 includes the following fields listed below in the order illustrated in Figures 1A-1B.

[0034] Format field 140 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

[0035] Base operation field 142 - its content distinguishes different base operations.

[0036] Register index field 144 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g.
32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

[0037] Modifier field 146 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 105 instruction templates and memory access 120 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, less, or different ways to perform memory address calculations.

[0038] Augmentation operation field 150 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 168, an alpha field 152, and a beta field 154.
The augmentation operation field 150 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

[0039] Scale field 160 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

[0040] Displacement Field 162A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

[0041] Displacement Factor Field 162B (note that the juxtaposition of displacement field 162A directly over displacement factor field 162B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 174 (described later herein) and the data manipulation field 154C. The displacement field 162A and the displacement factor field 162B are optional in the sense that they are not used for the no memory access 105 instruction templates and/or different embodiments may implement only one or none of the two.

[0042] Data element width field 164 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions).
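The address generation described for the scale and displacement fields in paragraphs [0039]-[0041] can be sketched as follows (the function and parameter names are illustrative, not taken from the specification):

```python
def effective_address(base, index, scale, disp_factor, access_size):
    # 2^scale * index + base + scaled displacement, where the compressed
    # displacement factor is multiplied by the memory access size N (in bytes)
    return (index << scale) + base + disp_factor * access_size

# A displacement factor of 3 with a 64-byte access yields a 192-byte displacement:
print(effective_address(base=0x1000, index=2, scale=3, disp_factor=3, access_size=64))
```

This shows why redundant low-order bits can be ignored: the stored factor is always implicitly scaled by N, so only multiples of the access size are representable.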
This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

[0043] Write mask field 170 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in another embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 170 allows for partial vector operations, including loads, stores, arithmetic, logical, etc.
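The distinction between merging- and zeroing-writemasking can be sketched as follows (an illustrative per-element model, not the hardware implementation):

```python
def apply_writemask(result, dest, mask, zeroing):
    """Per-element write masking: where the mask bit is 1 the new result is
    written; where it is 0, a merging mask preserves the old destination
    element while a zeroing mask writes 0."""
    out = []
    for i, r in enumerate(result):
        if (mask >> i) & 1:
            out.append(r)                      # mask bit 1: take the new result
        else:
            out.append(0 if zeroing else dest[i])
    return out

res  = [10, 20, 30, 40]
dest = [1, 2, 3, 4]
print(apply_writemask(res, dest, mask=0b0101, zeroing=False))  # [10, 2, 30, 4]
print(apply_writemask(res, dest, mask=0b0101, zeroing=True))   # [10, 0, 30, 0]
```

Note that the masked-off positions need not be consecutive, matching the text's observation that partial vector operations are not limited to a contiguous span of elements.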
While embodiments of the invention are described in which the write mask field's 170 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 170 content indirectly identifies that masking to be performed), alternative embodiments instead or additionally allow the mask write field's 170 content to directly specify the masking to be performed.

[0044] Immediate field 172 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate and it is not present in instructions that do not use an immediate.

[0045] Class field 168 - its content distinguishes between different classes of instructions. With reference to Figures 1A-B, the contents of this field select between class A and class B instructions. In Figures 1A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 168A and class B 168B for the class field 168 respectively in Figures 1A-B).

Instruction Templates of Class A

[0046] In the case of the non-memory access 105 instruction templates of class A, the alpha field 152 is interpreted as an RS field 152A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 152A.1 and data transform 152A.2 are respectively specified for the no memory access, round type operation 110 and the no memory access, data transform type operation 115 instruction templates), while the beta field 154 distinguishes which of the operations of the specified type is to be performed.
In the no memory access 105 instruction templates, the scale field 160, the displacement field 162A, and the displacement scale field 162B are not present.

No-Memory Access Instruction Templates - Full Round Control Type Operation

[0047] In the no memory access full round control type operation 110 instruction template, the beta field 154 is interpreted as a round control field 154A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field 154A includes a suppress all floating point exceptions (SAE) field 156 and a round operation control field 158, alternative embodiments may encode both these concepts into the same field or only have one or the other of these concepts/fields (e.g., may have only the round operation control field 158).

[0048] SAE field 156 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 156 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.

[0049] Round operation control field 158 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 158 allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 158 content overrides that register value.

No Memory Access Instruction Templates - Data Transform Type Operation

[0050] In the no memory access data transform type operation 115 instruction template, the beta field 154 is interpreted as a data transform field 154B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).
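The four rounding operations named in paragraph [0049] can be modeled as follows (a sketch that assumes IEEE-754-style ties-to-even behavior for round-to-nearest, which the text itself does not specify):

```python
import math

# Illustrative models of the four rounding operations selectable per instruction.
ROUNDERS = {
    "round-up":           math.ceil,
    "round-down":         math.floor,
    "round-towards-zero": math.trunc,
    # Python's round() uses ties-to-even, matching IEEE 754 round-to-nearest-even.
    "round-to-nearest":   lambda x: round(x),
}

x = -2.5
for mode, fn in ROUNDERS.items():
    print(mode, fn(x))
```

The value -2.5 illustrates how the modes diverge: round-down gives -3, while the other three give -2 (round-to-nearest resolves the tie toward the even integer).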
[0051] In the case of a memory access 120 instruction template of class A, the alpha field 152 is interpreted as an eviction hint field 152B, whose content distinguishes which one of the eviction hints is to be used (in Figure 1A, temporal 152B.1 and non-temporal 152B.2 are respectively specified for the memory access, temporal 125 instruction template and the memory access, non-temporal 130 instruction template), while the beta field 154 is interpreted as a data manipulation field 154C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 120 instruction templates include the scale field 160, and optionally the displacement field 162A or the displacement scale field 162B.

[0052] Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Memory Access Instruction Templates - Temporal

[0053] Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory Access Instruction Templates - Non-Temporal

[0054] Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction.
This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

[0055] In the case of the instruction templates of class B, the alpha field 152 is interpreted as a write mask control (Z) field 152C, whose content distinguishes whether the write masking controlled by the write mask field 170 should be a merging or a zeroing.

[0056] In the case of the non-memory access 105 instruction templates of class B, part of the beta field 154 is interpreted as an RL field 157A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 157A.1 and vector length (VSIZE) 157A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 112 instruction template and the no memory access, write mask control, VSIZE type operation 117 instruction template), while the rest of the beta field 154 distinguishes which of the operations of the specified type is to be performed. In the no memory access 105 instruction templates, the scale field 160, the displacement field 162A, and the displacement scale field 162B are not present.

[0057] In the no memory access, write mask control, partial round control type operation 112 instruction template, the rest of the beta field 154 is interpreted as a round operation field 159A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).

[0058] Round operation control field 159A - just as round operation control field 158, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 159A allows for the changing of the rounding mode on a per instruction basis.
In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 159A content overrides that register value.

[0059] In the no memory access, write mask control, VSIZE type operation 117 instruction template, the rest of the beta field 154 is interpreted as a vector length field 159B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte).

[0060] In the case of a memory access 120 instruction template of class B, part of the beta field 154 is interpreted as a broadcast field 157B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 154 is interpreted as the vector length field 159B. The memory access 120 instruction templates include the scale field 160, and optionally the displacement field 162A or the displacement scale field 162B.

[0061] With regard to the generic vector friendly instruction format 100, a full opcode field 174 is shown including the format field 140, the base operation field 142, and the data element width field 164. While one embodiment is shown where the full opcode field 174 includes all of these fields, the full opcode field 174 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 174 provides the operation code (opcode).
[0062] The augmentation operation field 150, the data element width field 164, and the write mask field 170 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.

[0063] The combination of write mask field and data element width field create typed instructions in that they allow the mask to be applied based on different data element widths.

[0064] The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the invention.
Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.

B. Exemplary Specific Vector Friendly Instruction Format

[0065] Figure 2 is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention. Figure 2 shows a specific vector friendly instruction format 200 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 200 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figure 1 into which the fields from Figure 2 map are illustrated.

[0066] It should be understood that, although embodiments of the invention are described with reference to the specific vector friendly instruction format 200 in the context of the generic vector friendly instruction format 100 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 200 except where claimed.
For example, the generic vector friendly instruction format 100 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 200 is shown as having fields of specific sizes. By way of specific example, while the data element width field 164 is illustrated as a one bit field in the specific vector friendly instruction format 200, the invention is not so limited (that is, the generic vector friendly instruction format 100 contemplates other sizes of the data element width field 164).

[0067] The specific vector friendly instruction format 200 includes the following fields listed below in the order illustrated in Figure 2A.

[0068] EVEX Prefix (Bytes 0-3) 202 - is encoded in a four-byte form.

[0069] Format Field 140 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 140 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention).

[0070] The second through fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

[0071] REX field 205 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), EVEX.X bit field (EVEX Byte 1, bit [6] - X), and EVEX.B bit field (EVEX Byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e. ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

[0072] REX' field 110 - this is the first part of the REX' field 110 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set.
In one embodiment of the invention, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the invention do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.[0073] Opcode map field 215 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3).[0074] Data element width field 164 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).[0075] EVEX.vvvv 220 (EVEX Byte 2, bits [6:3]-vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 220 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.[0076] EVEX.U 168 Class field (EVEX byte 2, bit [2]-U) - If EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.[0077] Prefix encoding field 225 (EVEX byte 2, bits [1:0]-pp) - provides additional bits for the base operation field.
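A hedged sketch of the inverted EVEX.vvvv encoding described in paragraph [0075]. The function name, and the modeling of the extra extension bit as an already de-inverted `v_prime` argument, are assumptions for illustration:

```python
def decode_vvvv(evex_byte2: int, v_prime: int = 0) -> int:
    """Recover the first source register specifier from EVEX byte 2.

    vvvv occupies bits [6:3] and is stored in inverted (1s complement)
    form; an extra EVEX bit (modeled as v_prime, assumed de-inverted)
    extends the specifier to 5 bits, covering 32 registers.
    """
    vvvv = (~(evex_byte2 >> 3)) & 0xF  # flip the four stored bits
    return (v_prime << 4) | vvvv
```

With this model, the reserved stored pattern 1111b decodes to specifier 0, and a register number n is stored as its 4-bit complement.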
In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.[0078] Alpha field 152 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.[0079] Beta field 154 (EVEX byte 3, bits [6:4]-SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.[0080] REX' field 110 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers.
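The 2-bit prefix compaction described in paragraph [0077] can be sketched as a lookup plus an expansion step. The specific pp-to-prefix pairing below (00 = none, 01 = 66H, 10 = F3H, 11 = F2H) is the conventional VEX/EVEX assignment and is an assumption here, not a value quoted from the text:

```python
# Assumed mapping of the 2-bit prefix encoding field (pp) to the legacy
# SIMD prefix byte it compacts; None means no SIMD prefix.
PP_TO_LEGACY_PREFIX = {0b00: None, 0b01: 0x66, 0b10: 0xF3, 0b11: 0xF2}

def expand_pp(evex_byte2: int):
    """Expand the pp field (bits [1:0] of EVEX byte 2) back into the
    legacy SIMD prefix handed to the decoder's PLA, per the text's
    runtime-expansion scheme."""
    return PP_TO_LEGACY_PREFIX[evex_byte2 & 0b11]
```

The compaction saves a full prefix byte per instruction while still letting an unmodified PLA see the legacy prefix form.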
In other words, V'VVVV is formed by combining EVEX.V', EVEX.vvvv.[0081] Write mask field 170 (EVEX byte 3, bits [2:0]-kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the invention, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).[0082] Real Opcode Field 230 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.[0083] MOD R/M Field 240 (Byte 5) includes MOD field 242, Reg field 244, and R/M field 246. As previously described, the MOD field's 242 content distinguishes between memory access and non-memory access operations. The role of Reg field 244 can be summarized to two situations: encoding either the destination register operand or a source register operand, or be treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 246 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.[0084] Scale, Index, Base (SIB) Byte (Byte 6) - As previously described, the scale field's 150 content is used for memory address generation. SIB.xxx 254 and SIB.bbb 256 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.[0085] Displacement field 162A (Bytes 7-10) - when MOD field 242 contains 10, bytes 7-10 are the displacement field 162A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.[0086] Displacement factor field 162B (Byte 7) - when MOD field 242 contains 01, byte 7 is the displacement factor field 162B.
The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 162B is a reinterpretation of disp8; when using displacement factor field 162B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 162B substitutes the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 162B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).[0087] Immediate field 172 operates as previously described.Full Opcode Field[0088] Figure 2B is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the full opcode field 174 according to one embodiment of the invention.
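The disp8*N compression of paragraph [0086] can be modeled directly. The helper names are hypothetical; the arithmetic (stored byte = displacement / N, recovered displacement = sign-extended byte × N, disp32 fallback otherwise) follows the text:

```python
def encode_disp8N(displacement: int, n: int):
    """Try to encode a byte displacement as disp8*N (one stored byte)
    for a memory operand of size N; return None when disp32 (four
    bytes) would be required instead."""
    if n > 0 and displacement % n == 0:
        factor = displacement // n
        if -128 <= factor <= 127:      # must fit a signed byte
            return factor & 0xFF       # the single stored byte
    return None                        # not a multiple of N, or too far

def decode_disp8N(stored_byte: int, n: int) -> int:
    """Recover the actual displacement: sign-extend the stored byte,
    then scale by the memory operand size N."""
    factor = stored_byte - 256 if stored_byte >= 128 else stored_byte
    return factor * n
```

For a 64-byte memory access, a displacement of 4096 bytes (far outside plain disp8's ±127 range) fits in a single disp8*N byte, whereas a displacement that is not a multiple of N still falls back to disp32.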
Specifically, the full opcode field 174 includes the format field 140, the base operation field 142, and the data element width (W) field 164. The base operation field 142 includes the prefix encoding field 225, the opcode map field 215, and the real opcode field 230.Register Index Field[0089] Figure 2C is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the register index field 144 according to one embodiment of the invention. Specifically, the register index field 144 includes the REX field 205, the REX' field 210, the MODR/M.reg field 244, the MODR/M.r/m field 246, the VVVV field 220, the xxx field 254, and the bbb field 256.Augmentation Operation Field[0090] Figure 2D is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the augmentation operation field 150 according to one embodiment of the invention. When the class (U) field 168 contains 0, it signifies EVEX.U0 (class A 168A); when it contains 1, it signifies EVEX.U1 (class B 168B). When U=0 and the MOD field 242 contains 11 (signifying a no memory access operation), the alpha field 152 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 152A. When the rs field 152A contains a 1 (round 152A.1), the beta field 154 (EVEX byte 3, bits [6:4]-SSS) is interpreted as the round control field 154A. The round control field 154A includes a one bit SAE field 156 and a two bit round operation field 158. When the rs field 152A contains a 0 (data transform 152A.2), the beta field 154 (EVEX byte 3, bits [6:4]-SSS) is interpreted as a three bit data transform field 154B.
When U=0 and the MOD field 242 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 152 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 152B and the beta field 154 (EVEX byte 3, bits [6:4]-SSS) is interpreted as a three bit data manipulation field 154C.[0091] When U=1, the alpha field 152 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 152C. When U=1 and the MOD field 242 contains 11 (signifying a no memory access operation), part of the beta field 154 (EVEX byte 3, bit [4]-S0) is interpreted as the RL field 157A; when it contains a 1 (round 157A.1) the rest of the beta field 154 (EVEX byte 3, bits [6-5]-S2-1) is interpreted as the round operation field 159A, while when the RL field 157A contains a 0 (VSIZE 157A.2) the rest of the beta field 154 (EVEX byte 3, bits [6-5]-S2-1) is interpreted as the vector length field 159B (EVEX byte 3, bits [6-5]-L1-0). When U=1 and the MOD field 242 contains 00, 01, or 10 (signifying a memory access operation), the beta field 154 (EVEX byte 3, bits [6:4]-SSS) is interpreted as the vector length field 159B (EVEX byte 3, bits [6-5]-L1-0) and the broadcast field 157B (EVEX byte 3, bit [4]-B).C. Exemplary Register Architecture[0092] Figure 3 is a block diagram of a register architecture 300 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 310 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-16. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15.
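The case analysis in paragraphs [0090]-[0091] amounts to a small dispatch on the class bit U, the MOD field, and the rs (U=0) or RL (U=1) control bit. A hypothetical summary (field names abbreviated; this is a paraphrase of the enumerated cases, not the patented decode logic):

```python
def interpret_beta(u: int, mod: int, ctrl: int) -> str:
    """Name the role of the beta field given the class bit (U), the MOD
    field, and the rs (U=0) or RL (U=1) control bit, following the
    cases enumerated in the text."""
    mem_access = mod != 0b11       # MOD=11 signifies no memory access
    if u == 0:
        if mem_access:
            return "data manipulation field"
        return "round control field" if ctrl else "data transform field"
    if mem_access:
        return "vector length + broadcast fields"
    return "round operation field" if ctrl else "vector length field"
```

For instance, a class A (U=0), register-only (MOD=11) instruction with rs=1 reads its beta bits as round control, while a class B memory-access instruction reads them as vector length plus broadcast.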
The specific vector friendly instruction format 200 operates on this overlaid register file as illustrated in the below tables.[0093] In other words, the vector length field 159B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 159B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 200 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.[0094] Write mask registers 315 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 315 are 16 bits in size. As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.[0095] General-purpose registers 325 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands.
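The zmm/ymm/xmm overlay and the halving vector lengths described above can be sketched as views over one 512-bit value. This is an illustrative model only; the constant and function names are assumptions:

```python
MAX_BITS = 512  # maximum vector length in the illustrated embodiment

def visible_bits(vector_length_field: int) -> int:
    """Each shorter length selected by the vector length field is half
    the preceding one, starting from the 512-bit maximum."""
    return MAX_BITS >> vector_length_field

def ymm_view(zmm_value: int) -> int:
    """A ymm register is the lower order 256 bits of its zmm register."""
    return zmm_value & ((1 << 256) - 1)

def xmm_view(zmm_value: int) -> int:
    """An xmm register is the lower order 128 bits of its ymm/zmm register."""
    return zmm_value & ((1 << 128) - 1)
```

So a write visible through the xmm view is also visible through the ymm and zmm views of the same architectural register, since all three name the same low-order storage.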
These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.[0096] Scalar floating point stack register file (x87 stack) 345, on which is aliased the MMX packed integer flat register file 350 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.[0097] Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, less, or different register files and registers.D. Exemplary Core Architectures, Processors, and Computer Architectures[0098] Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput).
Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.[0099] Figure 4A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 4B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 4A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.[00100] In Figure 4A, a processor pipeline 400 includes a fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a renaming stage 410, a scheduling (also known as a dispatch or issue) stage 412, a register read/memory read stage 414, an execute stage 416, a write back/memory write stage 418, an exception handling stage 422, and a commit stage 424.[00101] Figure 4B shows processor core 490 including a front end unit 430 coupled to an execution engine unit 450, and both are coupled to a memory unit 470. The core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.[00102] The front end unit 430 includes a branch prediction unit 432 coupled to an instruction cache unit 434, which is coupled to an instruction translation lookaside buffer (TLB) 436, which is coupled to an instruction fetch unit 438, which is coupled to a decode unit 440. The decode unit 440 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 440 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. 
In one embodiment, the core 490 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 440 or otherwise within the front end unit 430). The decode unit 440 is coupled to a rename/allocator unit 452 in the execution engine unit 450.[00103] The execution engine unit 450 includes the rename/allocator unit 452 coupled to a retirement unit 454 and a set of one or more scheduler unit(s) 456. The scheduler unit(s) 456 represents any number of different schedulers, including reservations stations, central instruction window, etc. The scheduler unit(s) 456 is coupled to the physical register file(s) unit(s) 458. Each of the physical register file(s) units 458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 458 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 458 is overlapped by the retirement unit 454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 454 and the physical register file(s) unit(s) 458 are coupled to the execution cluster(s) 460. The execution cluster(s) 460 includes a set of one or more execution units 462 and a set of one or more memory access units 464.
The execution units 462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 456, physical register file(s) unit(s) 458, and execution cluster(s) 460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.[00104] The set of memory access units 464 is coupled to the memory unit 470, which includes a data TLB unit 472 coupled to a data cache unit 474 coupled to a level 2 (L2) cache unit 476. In one exemplary embodiment, the memory access units 464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 472 in the memory unit 470. The instruction cache unit 434 is further coupled to a level 2 (L2) cache unit 476 in the memory unit 470.
The L2 cache unit 476 is coupled to one or more other levels of cache and eventually to a main memory.[00105] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 400 as follows: 1) the instruction fetch 438 performs the fetch and length decoding stages 402 and 404; 2) the decode unit 440 performs the decode stage 406; 3) the rename/allocator unit 452 performs the allocation stage 408 and renaming stage 410; 4) the scheduler unit(s) 456 performs the schedule stage 412; 5) the physical register file(s) unit(s) 458 and the memory unit 470 perform the register read/memory read stage 414; the execution cluster 460 perform the execute stage 416; 6) the memory unit 470 and the physical register file(s) unit(s) 458 perform the write back/memory write stage 418; 7) various units may be involved in the exception handling stage 422; and 8) the retirement unit 454 and the physical register file(s) unit(s) 458 perform the commit stage 424.[00106] The core 490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.[00107] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a
combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).[00108] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 434/474 and a shared L2 cache unit 476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.[00109] Figures 5A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.[00110] Figure 5A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 502 and with its local subset of the Level 2 (L2) cache 504, according to embodiments of the invention. In one embodiment, an instruction decoder 500 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 506 allows low-latency accesses to cache memory into the scalar and vector units.
While in one embodiment (to simplify the design), a scalar unit 508 and a vector unit 510 use separate register sets (respectively, scalar registers 512 and vector registers 514) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 506, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).[00111] The local subset of the L2 cache 504 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 504. Data read by a processor core is stored in its L2 cache subset 504 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 504 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012-bits wide per direction.[00112] Figure 5B is an expanded view of part of the processor core in Figure 5A according to embodiments of the invention. Figure 5B includes an L1 data cache 506A part of the L1 cache 506, as well as more detail regarding the vector unit 510 and the vector registers 514. Specifically, the vector unit 510 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 528), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 520, numeric conversion with numeric convert units 522A-B, and replication with replication unit 524 on the memory input.
Write mask registers 526 allow predicating resulting vector writes.[00113] Figure 6 is a block diagram of a processor 600 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 6 illustrate a processor 600 with a single core 602A, a system agent 610, a set of one or more bus controller units 616, while the optional addition of the dashed lined boxes illustrates an alternative processor 600 with multiple cores 602A-N, a set of one or more integrated memory controller unit(s) 614 in the system agent unit 610, and special purpose logic 608.[00114] Thus, different implementations of the processor 600 may include: 1) a CPU with the special purpose logic 608 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 602A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 602A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 602A-N being a large number of general purpose in-order cores. Thus, the processor 600 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips.
The processor 600 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.[00115] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 606, and external memory (not shown) coupled to the set of integrated memory controller units 614. The set of shared cache units 606 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 612 interconnects the integrated graphics logic 608, the set of shared cache units 606, and the system agent unit 610/integrated memory controller unit(s) 614, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 606 and cores 602A-N.[00116] In some embodiments, one or more of the cores 602A-N are capable of multi-threading. The system agent 610 includes those components coordinating and operating cores 602A-N. The system agent unit 610 may include for example a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 602A-N and the integrated graphics logic 608.
The display unit is for driving one or more externally connected displays.[00117] The cores 602A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 602A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.[00118] Figures 7-10 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.[00119] Referring now to Figure 7, shown is a block diagram of a system 700 in accordance with one embodiment of the present invention. The system 700 may include one or more processors 710, 715, which are coupled to a controller hub 720. In one embodiment the controller hub 720 includes a graphics memory controller hub (GMCH) 790 and an Input/Output Hub (IOH) 750 (which may be on separate chips); the GMCH 790 includes memory and graphics controllers to which are coupled memory 740 and a coprocessor 745; the IOH 750 couples input/output (I/O) devices 760 to the GMCH 790.
Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 740 and the coprocessor 745 are coupled directly to the processor 710, and the controller hub 720 in a single chip with the IOH 750.

[00120] The optional nature of additional processors 715 is denoted in Figure 7 with broken lines. Each processor 710, 715 may include one or more of the processing cores described herein and may be some version of the processor 600.

[00121] The memory 740 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 720 communicates with the processor(s) 710, 715 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 795.

[00122] In one embodiment, the coprocessor 745 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 720 may include an integrated graphics accelerator.

[00123] There can be a variety of differences between the physical resources 710, 715 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

[00124] In one embodiment, the processor 710 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 710 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 745. Accordingly, the processor 710 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 745.
Coprocessor(s) 745 accept and execute the received coprocessor instructions.

[00125] Referring now to Figure 8, shown is a block diagram of a first more specific exemplary system 800 in accordance with an embodiment of the present invention. As shown in Figure 8, multiprocessor system 800 is a point-to-point interconnect system, and includes a first processor 870 and a second processor 880 coupled via a point-to-point interconnect 850. Each of processors 870 and 880 may be some version of the processor 600. In one embodiment of the invention, processors 870 and 880 are respectively processors 710 and 715, while coprocessor 838 is coprocessor 745. In another embodiment, processors 870 and 880 are respectively processor 710 and coprocessor 745.

[00126] Processors 870 and 880 are shown including integrated memory controller (IMC) units 872 and 882, respectively. Processor 870 also includes as part of its bus controller units point-to-point (P-P) interfaces 876 and 878; similarly, second processor 880 includes P-P interfaces 886 and 888. Processors 870, 880 may exchange information via a point-to-point (P-P) interface 850 using P-P interface circuits 878, 888. As shown in Figure 8, IMCs 872 and 882 couple the processors to respective memories, namely a memory 832 and a memory 834, which may be portions of main memory locally attached to the respective processors.

[00127] Processors 870, 880 may each exchange information with a chipset 890 via individual P-P interfaces 852, 854 using point to point interface circuits 876, 894, 886, 898. Chipset 890 may optionally exchange information with the coprocessor 838 via a high-performance interface 839.
In one embodiment, the coprocessor 838 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

[00128] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

[00129] Chipset 890 may be coupled to a first bus 816 via an interface 896. In one embodiment, first bus 816 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

[00130] As shown in Figure 8, various I/O devices 814 may be coupled to first bus 816, along with a bus bridge 818 which couples first bus 816 to a second bus 820. In one embodiment, one or more additional processor(s) 815, such as coprocessors, high-throughput MIC processors, GPGPU's, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 816. In one embodiment, second bus 820 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 820 including, for example, a keyboard and/or mouse 822, communication devices 827 and a storage unit 828 such as a disk drive or other mass storage device which may include instructions/code and data 830, in one embodiment. Further, an audio I/O 824 may be coupled to the second bus 820. Note that other architectures are possible.
For example, instead of the point-to-point architecture of Figure 8, a system may implement a multi-drop bus or other such architecture.

[00131] Referring now to Figure 9, shown is a block diagram of a second more specific exemplary system 900 in accordance with an embodiment of the present invention. Like elements in Figures 8 and 9 bear like reference numerals, and certain aspects of Figure 8 have been omitted from Figure 9 in order to avoid obscuring other aspects of Figure 9.

[00132] Figure 9 illustrates that the processors 870, 880 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively. Thus, the CL 872, 882 include integrated memory controller units and include I/O control logic. Figure 9 illustrates that not only are the memories 832, 834 coupled to the CL 872, 882, but also that I/O devices 914 are also coupled to the control logic 872, 882. Legacy I/O devices 915 are coupled to the chipset 890.

[00133] Referring now to Figure 10, shown is a block diagram of a SoC 1000 in accordance with an embodiment of the present invention. Similar elements in Figure 6 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 10, an interconnect unit(s) 1002 is coupled to: an application processor 1010 which includes a set of one or more cores 202A-N and shared cache unit(s) 606; a system agent unit 610; a bus controller unit(s) 616; an integrated memory controller unit(s) 614; a set of one or more coprocessors 1020 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1030; a direct memory access (DMA) unit 1032; and a display unit 1040 for coupling to one or more external displays.
In one embodiment, the coprocessor(s) 1020 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

[00134] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

[00135] Program code, such as code 830 illustrated in Figure 8, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

[00136] The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

[00137] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

[00138] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

[00139] Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

[00140] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof.
The instruction converter may be on processor, off processor, or part on and part off processor.

[00141] Figure 11 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 11 shows that a program in a high level language 1102 may be compiled using an x86 compiler 1104 to generate x86 binary code 1106 that may be natively executed by a processor with at least one x86 instruction set core 1116. The processor with at least one x86 instruction set core 1116 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1104 represents a compiler that is operable to generate x86 binary code 1106 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1116.
Similarly, Figure 11 shows that the program in the high level language 1102 may be compiled using an alternative instruction set compiler 1108 to generate alternative instruction set binary code 1110 that may be natively executed by a processor without at least one x86 instruction set core 1114 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1112 is used to convert the x86 binary code 1106 into code that may be natively executed by the processor without an x86 instruction set core 1114. This converted code is not likely to be the same as the alternative instruction set binary code 1110 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1112 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1106.

SHARING AWARE SNOOP FILTER APPARATUS AND METHOD

[00142] Embodiments of the invention augment the traditional snoop filter (SF) with an auxiliary structure that can be used to perfectly track a few frequently-used shared cache lines. These embodiments also provide an efficient allocation algorithm for the auxiliary structure that improves performance and reduces network traffic while incurring minimal area overhead.

[00143] As mentioned, the snoop filter, also known as a "tag-directory," is an on-die structure that tracks the presence of cache lines in the different levels of the cache hierarchy across the tiles on a die.
The term "tile" is used here to represent an agent that has a cache associated with it and accesses memory via an intra-die interconnection network. A tile, for example, can be associated with a single core, multiple cores, accelerators and/or I/O agents. Cache line tracking in the SF is done using presence bits, known as core-valid bits (CV bits), and helps maintain coherence for data streams and instruction streams across the caches. To reduce the area occupied by the SF, the tracking bits in each entry follow an encoded pattern. For every cached line, up to two unique sharers of the line can be tracked perfectly, i.e., their exact identity is stored in the SF entry. If a cache line is shared by more than 2 caches, however, it is tracked in a coarse-grained manner, with each CV bit used to represent multiple caches/tiles. As the number of tiles (and hence caches) on the die increases, the level of coarse-grained encoding used for the CV bits has also been steadily increasing (from one bit representing 1 tile to one bit representing 6 or more tiles). This coarse-grained representation leads to CV bit aliasing, whereby a core which never accesses a line can appear as currently caching it.

[00144] The CV bit aliasing outlined above can lead to the following inefficiencies in the coherence protocol. First, aliased CV bits can cause multiple spurious messages to be sent out on the intra-die interconnect network, leading to performance loss, extra network messages and energy usage. For example, read-for-ownership requests and capacity evictions from the SF, which send invalidate messages based on the number of CV bits set in the SF, will send unnecessary invalidations to cores that never even cached the address and cause extra acknowledgement messages.
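The aliasing effect described above can be sketched in a few lines. The sketch below is illustrative only: the function names and the tile-to-bit mapping (tile t maps to bit t mod B) are assumptions, not details taken from the patent. With 64 tiles but only 8 CV bits, one real sharer makes eight tiles appear to cache the line.

```python
# Hedged sketch of coarse-grained CV-bit aliasing. The modulo mapping of
# tiles to CV bits is an assumed encoding for illustration purposes.

def cv_bits_for(sharers, num_bits):
    """CV bit vector produced by the tiles that actually cache the line."""
    bits = 0
    for tile in sharers:
        bits |= 1 << (tile % num_bits)
    return bits

def apparent_sharers(bits, num_tiles, num_bits):
    """All tiles that *appear* to cache the line under this encoding."""
    return {t for t in range(num_tiles) if bits & (1 << (t % num_bits))}

# 64 tiles tracked with 8 CV bits: the single real sharer (tile 3) makes
# every tile congruent to 3 mod 8 look like a sharer, so an invalidate
# would be multicast to seven tiles that never cached the address.
bits = cv_bits_for({3}, num_bits=8)
aliased = apparent_sharers(bits, num_tiles=64, num_bits=8)
```

This is exactly the spurious-invalidation scenario of paragraph [00144]: any coherence action keyed off the set CV bit must target all eight aliased tiles, not just tile 3.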
Second, due to the aliased nature of CV bits for widely shared lines, evictions from the core caches are unable to clear the CV bit that represents them in the SF, leading to stale entries (i.e., entries which are no longer required but still have valid bits). In order to counter this problem, certain processors provision the SF to be larger than the capacity of the on-die caches so that back-invalidations from the SF do not incur a performance loss. As applications become more multi-threaded and more cores are integrated on-die, this problem is expected to be exacerbated. While all application segments with parallel codes could be impacted by this problem, high-performance computing (HPC) in particular is a key target for this inefficiency as it tends to use many parallel threads.

[00145] Several proposals in industry and academia have attempted to improve the tracking accuracy and efficiency of snoop filters. As shown in Figure 12A, an aliased CV bit requires a single message sent to one cache 1205 to be compulsorily multicast to all other caches 1206-1208 that map to its bit, causing extra interconnection network traffic and power. Moreover, data evictions from the core caches are unable to clean aliased CV bits since they cannot be sure that the line is not present in any of the other caches that map to this bit.

[00146] A simple but costly improvement would be to add as many CV bits in each snoop filter line as there are tiles on the die, as illustrated in Figure 12B. This would increase its area dramatically but also improve performance of several applications. For example, perfectly tracking 64 tiles in the snoop filter would increase its area by 57%.
However, the performance advantages of such a perfect snoop filter can be significant.

[00147] To address the severe area overhead of a perfect SF, hierarchical snoop filters have been proposed which provide a dedicated CV bit per tile on-die but group the tiles in hierarchical groups called domains, so that any one SF line only has to track its domain perfectly. This reduces the tracking state but makes the coherence protocol difficult to verify due to the expanding number of coherence states possible between the different SF levels.

[00148] The embodiments of the invention take advantage of the insight that of all unique cache lines that are tracked by the snoop filter, only a small fraction of them are simultaneously cached by more than two unique tiles (sometimes referred to as "widely-shared lines"). Hence, the performance and energy benefits of incorporating perfect CV bits for the entire SF may be achieved by tracking the small fraction of widely-shared lines perfectly. One embodiment supplements the basic snoop filter with a second, smaller structure that can perfectly encode all on-die caches with minimal area impact, resulting in a sharing-aware SF. The sharing-aware SF can be turned off if required without impacting the basic SF operation.

[00149] Figure 13 shows the percent of SF lines that have more than two consumers for a number of key HPC applications. As can be seen, provisioning just 12% of the SF lines with the extra bits to perfectly track the cores can achieve all the performance advantages of full tracking with a minimal area overhead. An analysis of the data shows that while lines that are shared by more than two caches are a small fraction of the total cached lines, they are accessed very frequently.

[00150] Figure 14 shows the frequency of accesses to shared data for an 8-threaded run of a particular application (UMT). Over 25% of the total accesses are to lines shared by more than two caches (the buckets marked Th3 and more).
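The "widely shared" fraction quoted above is just the share of SF lines whose sharer count exceeds two. The snippet below illustrates the calculation on an invented sharer-count histogram; the histogram values are made up solely to mirror the ~12% figure reported for Figure 13 and are not data from the patent.

```python
# Hypothetical histogram: sharer count -> number of SF lines.
# The values are invented to illustrate the widely-shared fraction.
histogram = {1: 7000, 2: 1800, 3: 700, 4: 300, 8: 200}

total_lines = sum(histogram.values())
# Lines cached by more than two unique tiles are the "widely-shared" ones
# that would need perfect CV bits in the auxiliary structure.
widely_shared = sum(n for sharers, n in histogram.items() if sharers > 2)
fraction = widely_shared / total_lines
```

Under these assumed numbers, only 12% of the lines would need auxiliary entries, which is why an auxiliary structure far smaller than the baseline SF can capture nearly all the benefit of full tracking.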
This explains some of the high performance benefits realized from the embodiments of the invention.

[00151] In one embodiment, rather than change some of the entries of an existing SF to encode more bits, a new, smaller perfect SF is added to the baseline SF to track these 12% of lines. Using a separate structure means that all applications which run well on the coarse-grained SF will see no negative impact on performance or power, and all the applications that can benefit from the auxiliary structure will benefit from it.

[00152] Figure 15 illustrates an exemplary processor architecture on which the embodiments of the invention may be implemented, which includes a core region 1501 and an uncore region 1510. The core region 1501 includes a plurality of cores 1501a-c, which may be multithreaded cores capable of concurrently executing multiple instruction streams. Although only three cores 1501a-c are illustrated in Figure 15, it will be appreciated that the core region 1501 may include any number of cores and/or other forms of processing devices with a local cache (e.g., accelerators). Each of the cores 1501a-c may include well known instruction pipeline components for performing out-of-order or in-order execution of the instruction streams including an instruction fetch unit; a decode unit; an execution unit; a write-back/retirement unit; general purpose, vector, and mask registers; a branch prediction unit; a translation lookaside buffer (TLB); and various cache levels including a Level 1 (L1) cache, and Level 2 (L2) cache (illustrated generally as cache(s) 1505a-c). However, it should be noted, that the underlying principles of the invention are not limited to any particular processor architecture.
[00153] In the illustrated embodiment, an interconnect 1506 such as a point-to-point interconnect communicatively couples the cores 1501a-c to one another and to various components within the uncore 1510 including shared cache(s) 1520 (e.g., a L3 cache), an integrated memory controller 1530 providing access to a system memory 1560, and one or more input/output (I/O) agents 1535 (e.g., such as a PCI Express or similar agent interface).

[00154] In one embodiment, a sharing aware snoop filter 1550 is coupled to the caching agents 1507a-c of the core caches 1505a-c, the shared cache(s) 1520, the I/O agent, and/or any other processor elements (not shown) adapted to coherently cache data/instructions on the processor 1500. As mentioned, in one embodiment, the sharing aware snoop filter 1550 includes an auxiliary SF component which perfectly tracks a subset of frequently accessed cache lines.

[00155] Figures 16A-B illustrate two possible embodiments of the snoop filter 1550, which includes a primary snoop filter 1610 and the auxiliary snoop filter 1611. Only the SF bits that are relevant for the embodiments of the invention are illustrated; other bits representing the physical address are not shown. The primary snoop filter 1610 may operate using one or more traditional modes of operation. For example, in one embodiment, in a first mode ("Mode 0"), the SF may perfectly represent up to two sharers of a cache line. That is, the owner 1601 and a first sharer 1602 can be uniquely identified in the first mode. In a second mode ("Mode 1"), the SF can represent the owner 1601 of the cache line and several CV bits 1603, each of which represents some number of tiles. The number of tiles that alias to a bit depends on the total number of on-die tiles. Each line also includes a valid bit 1600 to indicate whether the line is valid.

[00156] In one embodiment, the auxiliary snoop filter 1611 holds widely shared lines.
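The two primary-SF tracking modes can be modeled compactly. The sketch below is a toy model under stated assumptions: the field names, the 8-bit CV width, and the modulo tile-to-bit mapping are all illustrative choices, not details disclosed for the primary snoop filter 1610.

```python
class PrimarySFEntry:
    """Toy model of one primary-SF line (fields 1600-1603); names assumed."""
    NUM_CV_BITS = 8   # assumed width; each CV bit aliases several tiles

    def __init__(self, owner):
        self.valid = True      # valid bit 1600
        self.owner = owner     # owner field 1601
        self.mode = 0          # Mode 0: owner plus one exact sharer
        self.sharer = None     # first sharer 1602 (Mode 0 only)
        self.cv_bits = 0       # coarse-grained CV bits 1603 (Mode 1 only)

    def add_sharer(self, tile):
        if tile == self.owner:
            return
        if self.mode == 0 and (self.sharer is None or self.sharer == tile):
            self.sharer = tile   # still perfectly tracked in Mode 0
            return
        if self.mode == 0:
            # A third unique tile: fall back to coarse-grained Mode 1.
            self.mode = 1
            self.cv_bits = (1 << (self.owner % self.NUM_CV_BITS)) | \
                           (1 << (self.sharer % self.NUM_CV_BITS))
        self.cv_bits |= 1 << (tile % self.NUM_CV_BITS)

entry = PrimarySFEntry(owner=0)
entry.add_sharer(1)    # Mode 0: owner 0 and exact sharer 1
entry.add_sharer(2)    # third unique tile switches the entry to Mode 1
```

The key behavior mirrored here is the one the text describes: identity is exact while at most two tiles hold the line, and precision is lost the moment a third tile joins.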
The auxiliary structure has the same valid 1600 and owner 1601 fields as the primary snoop filter 1610. However, it additionally holds CV bits 1603 which include bits to uniquely identify each tile on the die.

[00157] A first embodiment is shown in Figure 16A, in which the auxiliary snoop filter 1611 is mutually exclusive with the baseline snoop filter 1610, also referred to as the "Overflow" design. As long as a cache line has up to two sharers, it resides in the baseline SF 1610. In this embodiment, if a third tile accesses the line, it is migrated to the auxiliary snoop filter 1611 and it stays there as long as it is cached by more than two tiles. If and when the line drops down to being accessed by two or fewer tiles, it is moved back to the primary snoop filter 1610. This auxiliary snoop filter 1611 may be provisioned to hold up to 20% of the baseline SF 1610 lines to avoid frequent capacity evictions, and increases the total area of the SF by 2% (compared to 57% for a perfect SF).

[00158] The design in Figure 16A migrates the cache line between the two SF structures 1610-1611, which may not be desirable for certain applications. Hence, in a second embodiment shown in Figure 16B, the baseline SF 1610 exists in both modes (i.e., one sharer or N tiles per bit as indicated at 1612) but the auxiliary snoop filter 1611 acts like a cache for frequently accessed, widely shared lines. This design is called the "Cache" design. As long as a cache line has up to two sharers, it resides in the baseline SF 1610 as usual. If a third tile accesses the line, it is maintained in the baseline SF 1610, which is switched to coarse-grained mode (i.e., N tiles per bit or "Mode 1"). In addition, an entry is allocated for that cache line in the auxiliary SF 1611, which tracks its CV bits perfectly (e.g., uses 1 bit per tile to uniquely identify each tile). All subsequent updates to the cache line are made to both the primary snoop filter 1610 and the auxiliary snoop filter 1611 structures.
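The Overflow design's migration rule can be sketched as a pair of dictionaries standing in for the two structures. This is a behavioral sketch only: the class and method names are invented, and sharer sets stand in for the hardware CV-bit encodings.

```python
class OverflowSF:
    """Sketch of the mutually exclusive "Overflow" design of Figure 16A."""

    def __init__(self):
        self.baseline = {}    # line -> sharer set (at most two sharers)
        self.auxiliary = {}   # line -> full sharer set (perfect tracking)

    def add_sharer(self, line, tile):
        if line in self.auxiliary:
            self.auxiliary[line].add(tile)
            return
        sharers = self.baseline.setdefault(line, set())
        sharers.add(tile)
        if len(sharers) > 2:
            # A third tile touched the line: migrate it to the auxiliary SF.
            self.auxiliary[line] = self.baseline.pop(line)

    def remove_sharer(self, line, tile):
        if line in self.auxiliary:
            self.auxiliary[line].discard(tile)
            if len(self.auxiliary[line]) <= 2:
                # Two or fewer sharers again: migrate back to the baseline.
                self.baseline[line] = self.auxiliary.pop(line)
        elif line in self.baseline:
            self.baseline[line].discard(tile)

sf = OverflowSF()
for tile in (0, 1):
    sf.add_sharer(0x40, tile)   # two sharers: line stays in the baseline
sf.add_sharer(0x40, 2)          # third sharer: line moves to the auxiliary
```

A line is therefore never present in both structures at once, which is the defining property of the Overflow design versus the Cache design described next.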
A line can be dropped from the auxiliary SF 1611 in the Cache design since an up-to-date coarse-grained version of the line exists in the baseline SF 1610 and no migration back to the baseline SF is required. In this design, 20% of the lines do not need to be active in the auxiliary SF 1611 since they already have a copy in the baseline SF 1610, resulting in a 3-7% area increase of the SF.

[00159] Both the Overflow design and the Cache design dramatically reduce the area overhead of perfect tracking (2% and up to 7%, respectively) as compared to a perfect SF (57%) and are still able to achieve the performance improvements from a perfect SF.

[00160] While the embodiments of the invention described herein focus on the snoop filter structure, products with last-level caches (LLCs) also use similar encoding schemes for the CV bits. Consequently, the auxiliary SF structure can be used for them as well. As mentioned, the embodiments of the invention may be utilized for any architecture comprising multiple coherent caches.

[00161] A method in accordance with one embodiment of the invention is illustrated in Figure 17. The method may be implemented within the context of the system/processor architectures described above but is not limited to any particular architecture.

[00162] At 1701, a first entry for a first cache line is allocated in a primary snoop filter. As mentioned above, the entry may include a valid indication, a current owner indication, and one or more sharer indications. Once the first cache line is being shared by more than N caches, determined at 1702, then a first auxiliary entry for the first cache line is allocated in the auxiliary snoop filter at 1704. Until the cache line is shared by more than N caches, the first entry is maintained only in the primary snoop filter at 1703.

[00163] In the foregoing specification, the embodiments of the invention have been described with reference to specific exemplary embodiments thereof.
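The Figure 17 flow (steps 1701-1704) can be sketched in a few lines. The sketch below is a hedged illustration: the function name, the dictionary-based data structures, and the choice of sharer sets in place of CV bits are all assumptions made for clarity; only the step numbering follows the description above.

```python
def record_sharer(primary, auxiliary, line, cache_id, n=2):
    """Allocation flow of Figure 17 (representation choices are assumed)."""
    sharers = primary.setdefault(line, set())   # 1701: allocate primary entry
    sharers.add(cache_id)
    if len(sharers) > n:                        # 1702: shared by > N caches?
        auxiliary[line] = set(sharers)          # 1704: allocate auxiliary entry
    # 1703: otherwise the line is tracked only in the primary SF

primary, auxiliary = {}, {}
for cache_id in (0, 1):
    record_sharer(primary, auxiliary, 0x80, cache_id)
shared_early = 0x80 in auxiliary            # still only two sharers
record_sharer(primary, auxiliary, 0x80, 2)  # third sharer crosses N
```

Note that, as in the Cache design, the primary entry is retained alongside the auxiliary entry rather than being migrated out.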
It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

[00164] Embodiments of the invention may include various steps, which have been described above. The steps may be embodied in machine-executable instructions which may be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.

[00165] As described herein, instructions may refer to specific configurations of hardware such as application specific integrated circuits (ASICs) configured to perform certain operations or having a predetermined functionality, or software instructions stored in memory embodied in a non-transitory computer readable medium. Thus, the techniques shown in the Figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., an end station, a network element, etc.). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer machine-readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase-change memory) and transitory computer machine-readable communication media (e.g., electrical, optical, acoustical or other form of propagated signals - such as carrier waves, infrared signals, digital signals, etc.).
In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more busses and bridges (also termed as bus controllers). The storage device and signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware. Throughout this detailed description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without some of these specific details. In certain instances, well known structures and functions were not described in elaborate detail in order to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the invention should be judged in terms of the claims which follow.
A method and apparatus for debugging are described. In one embodiment, a target construct is selected for debugging. Data related to an operation of the target construct is accessed by a debug construct in real time. At least a portion of this data is retrieved without disturbing the operation of the target construct to debug the target construct. |
1. A method for interactive debugging in a multi-channel, multi-service system, the method comprising:providing a shared memory in which separate portions of the shared memory are assigned to a plurality of executing services and a debug construct, respectively; selecting a target construct from the plurality of executing services for debugging; accessing data in the shared memory related to an operation of the target construct by the debug construct in real time; monitoring at least a portion of the accessed data without disturbing the operation of the target construct; and debugging the target construct using the monitored portion of the accessed data. 2. The method of claim 1 further comprising modifying at least a portion of the data.3. The method of claim 1 wherein the target construct is one selected from the group consisting of a service, a socket, a service stack, a set of services, and a set of sockets.4. The method of claim 1 wherein the debug construct comprises at least one service, at least one socket, or a combination of at least one service and at least one socket.5. The method of claim 1 wherein selecting a target construct further comprises:providing information about a plurality of services; and selecting the target construct from the plurality of services. 6. The method of claim 5 wherein the information includes a current state of each of the plurality of services.7. The method of claim 5 further comprising:providing information about a plurality of sockets; and selecting the target construct from the plurality of sockets. 8. The method of claim 7 wherein the information includes a current state of each of the plurality of services.9. The method of claim 1 further comprising accessing a memory of the target construct by the debug construct, the accessing corresponding to reading the memory or writing to the memory.10. 
The method of claim 1 further comprising accessing state of the target construct by the debug construct, the accessing corresponding to reading the state or modifying the state.11. The method of claim 1 further comprising dynamically allocating the debug construct.12. The method of claim 1 further comprising dynamically de-allocating the debug construct once the monitoring is completed.13. The method of claim 1 further comprising collecting statistics related to the target construct.14. The method of claim 1 further comprising transmitting the data to at least one host system.15. The method of claim 14 wherein the data is transmitted based upon a request sent by a host application.16. The method of claim 14 wherein an operating system determines which data is to be transmitted.17. The method of claim 14 wherein the debug construct specifies which data is to be transmitted.18. The method of claim 1 further comprising notifying the debug construct upon a completion of a certain operation by the target construct.19. The method of claim 14 further comprising:measuring bandwidth required to transmit the data; and transmitting at least a portion of data based upon available bandwidth. 20. The method of claim 1 wherein debugging is performed in a multi-channel, multi-service environment.21. The method of claim 15 wherein sending the request and transmitting the response are performed over a network.22. The method of claim 1 further comprising:collecting at least a portion of the data; allocating a copy of the target construct in a simulated environment; and debugging the operation of the target construct using the collected data in the simulated environment. 23. The method of claim 1 further comprising:generating a request by a host application; transmitting the request to an operating system; performing the request by the operating system; and sending a response to the host application. 24. 
A method for multi-channel, multi-service debugging, comprising:providing information about a plurality of running services; selecting a target construct for debugging from the plurality of running services; providing a shared memory in which separate portions of the shared memory are assigned to the plurality of running services and a debug construct, respectively; and dynamically loading one or more of the plurality of running services into the target construct. 25. The method of claim 24 wherein the information about the plurality of running services includes a current state of each service.26. The method of claim 24 further comprising:providing information about at least one socket; maintaining an isolated debugging environment for each of the at least one socket; and selecting a target construct for debugging from the at least one socket. 27. The method of claim 26 wherein the information about the at least one socket includes a current state of each socket.28. The method of claim 24 wherein the target construct is one selected from the group consisting of a service, a socket, a service stack, a set of services, and a set of sockets.29. The method of claim 28 further comprising switching between services and sockets during a debugging process.30. The method of claim 24 wherein the isolated debugging environment is maintained by an operating system in cooperation with a host application.31. The method of claim 24 wherein the target construct is selected based upon a request from a host application.32. The method of claim 24 further comprising:generating a request by a host application; transmitting the request to an operating system; performing the request by the operating system; and sending a response to the host application. 33. The method of claim 32 wherein transmitting the request and sending a response are performed over a network.34. 
The method of claim 24 further comprising:sending a request by a host application; and receiving a response by the host application once a requested operation is completed. 35. The method of claim 34 wherein sending a request and receiving a response are performed over a network.36. The method of claim 24 further comprising:receiving a request by an operating system; performing a requested operation; and transmitting a response once the requested operation is completed. 37. The method of claim 36 wherein receiving a request and transmitting a response are performed over a network.38. The method of claim 24 further comprising dynamically allocating at least one service into the target construct.39. The method of claim 38 further comprising instantiating any of at least one service, at least one service stack, and at least one socket.40. The method of claim 24 further comprising substituting input and output data for at least one socket.41. The method of claim 40 further comprising:collecting data for at least one socket; allocating a copy of the target construct in a simulated environment; and debugging the operation of the target construct using the collected data. 42. An apparatus for interactive debugging comprising:a shared memory in which separate portions of the shared memory are assigned to a plurality of executing services and a debug construct, respectively; means for selecting a target construct from the plurality of executing services for debugging; means for accessing data in the shared memory related to an operation of the target construct by the debug construct in real time; means for monitoring at least a portion of the accessed data without disturbing the operation of the target construct; and means for debugging the target construct using the monitored portion of the accessed data. 43. 
An apparatus for multi-channel, multi-service debugging, comprising:means for providing information about a plurality of running services; means for selecting a target construct for debugging from the plurality of running services; a shared memory in which separate portions of the shared memory are assigned to the plurality of running services and a debug construct, respectively; and means for dynamically loading one or more of the plurality of running services into the target construct. 44. An apparatus for interactive debugging in a multi-channel, multi-service system comprising:a shared memory in which separate portions of the shared memory are assigned to a plurality of executing services and a debug construct, respectively; and a target construct selected from the plurality of executing services, wherein the debug construct is configured to access data in the shared memory related to an operation of the target construct in real time and to monitor at least a portion of the data without disturbing the operation of the target construct. 45. The apparatus of claim 44 wherein the debug construct is further configured to modify at least a portion of the data.46. The apparatus of claim 44 wherein the target construct is one selected from the group consisting of a service, a socket, a service stack, a set of services, and a set of sockets.47. The apparatus of claim 44 wherein the debug construct comprises at least one service, at least one socket, or a combination of at least one service and at least one socket.48. The apparatus of claim 44 further comprising a user interface for providing information about a plurality of services and selecting the target construct from the plurality of services upon a user request.49. The apparatus of claim 48 wherein the information about a plurality of services includes a current state of each of the plurality of services.50. 
The apparatus of claim 48 wherein the user interface further provides information about a plurality of sockets and allows the user to select the target construct from the plurality of sockets.51. The apparatus of claim 50 wherein the information about a plurality of sockets includes a current state of each of the plurality of sockets.52. The apparatus of claim 48 wherein the user interface is a text-based interface or graphical user interface.53. The apparatus of claim 44 further comprising a platform control socket configured to dynamically allocate the debug construct.54. The apparatus of claim 44 further comprising a platform control socket further configured to dynamically de-allocate the debug construct once the monitoring is completed.55. The apparatus of claim 44 further comprising a profiler collecting statistics related to the target construct.56. The apparatus of claim 44 further comprising:at least one host processor; and a communications infrastructure for transmitting the data to the host processor. 57. The apparatus of claim 56 further comprising an operating system configured to determine which data is to be transmitted, measure bandwidth required to transmit the data, and determine a portion of the data to be transmitted based upon available bandwidth.58. The apparatus of claim 56 wherein the debug construct is further configured to specify which portion of the data is to be transmitted.59. The apparatus of claim 56 wherein the data is transmitted based upon the request sent by a host application.60. The apparatus of claim 44 wherein debugging is performed in a multi-channel, multi-service environment.61. The apparatus of claim 56 further comprising:a host application generating a request; a communications infrastructure transmitting the request to the debug construct; and the debug construct configured to perform the request and to send a response to the host application. 62. 
The apparatus of claim 61 wherein the communications infrastructure is a network.63. The apparatus of claim 56 further comprising a host application sending a request and receiving a response once a requested operation is completed.64. The apparatus of claim 63 wherein the host application sends a request and receives a response over a network.65. The apparatus of claim 56 wherein the debug construct is further configured to receive a request, perform a requested operation, and transmit a response once the requested operation is completed.66. The apparatus of claim 65 wherein the debug construct receives the request and transmits the response over a network.67. An apparatus for multi-channel, multi-service debugging, comprising:a graphical user interface for providing information about a plurality of running services; a debug core configured to select a target construct for debugging from the plurality of running services upon a user request; a shared memory in which separate portions of the shared memory are assigned to the plurality of running services and the debug core, respectively; and means for dynamically loading one or more of the plurality of running services into the target construct. 68. The apparatus of claim 67 wherein the information about the at least one service includes a current state of each service.69. The apparatus of claim 67 wherein the graphical user interface provides information about at least one socket, the operating system maintains an isolated debugging environment for each of the at least one socket, and the debug core is configured to select a target construct for debugging from the at least one socket upon a user request.70. The apparatus of claim 69 wherein the information about the at least one socket includes a current state of each socket.71. The apparatus of claim 67 wherein the target construct is one selected from the group consisting of a service, a socket, a service stack, a set of services, and a set of sockets.72. 
The apparatus of claim 67 wherein the debug core is further configured to switch between services and sockets during a debugging process upon a user request.73. The apparatus of claim 67 further comprising a host application configured to send a request to select the target construct.74. The apparatus of claim 73 further comprising:a communications infrastructure transmitting the request to an operating system; and the operating system configured to perform the request. 75. The apparatus of claim 74 wherein the communications infrastructure is a network.76. The apparatus of claim 67 further comprising a host application sending a request for a debugging operation and receiving a response once the operation is completed.77. The apparatus of claim 67 wherein the operating system receives a request for a debugging operation, performs the operation, and transmits a response once the requested operation is completed.78. The apparatus of claim 67 further comprising a host application requesting to dynamically allocate at least one service into the target construct and to instantiate at least one service or at least one service stack.79. The apparatus of claim 67 wherein a host application cooperates with the operating system to substitute input and output data for at least one socket.80. The apparatus of claim 79 wherein the host application is configured to request to collect data for at least one socket, to allocate a copy of the target construct in a simulated environment, and to debug the operation of the target construct using the collected data in the simulated environment.81. 
A computer readable medium comprising instructions, which when executed on a processor, perform a method for interactive debugging in a multi-channel, multi-service system, the method comprising:providing a shared memory in which separate portions of the shared memory are assigned to a plurality of executing services and a debug construct, respectively; selecting a target construct for debugging from the plurality of executing services; accessing data in the shared memory related to an operation of the target construct by the debug construct in real time; monitoring at least a portion of the accessed data without disturbing the operation of the target construct; and debugging the target construct using the monitored portion of the accessed data. 82. A computer readable medium comprising instructions, which when executed on a processor, perform a method for multi-channel, multi-service debugging, comprising:providing information about a plurality of running services; selecting a target construct for debugging from the plurality of running services; providing a shared memory in which separate portions of the shared memory are assigned to the plurality of running services and a debug construct, respectively; and dynamically loading one or more of the plurality of running services into the target construct. |
RELATED APPLICATION

The present application is related to U.S. patent application Ser. No. 09/564,592, filed on May 3, 2000, entitled "System And Method For Multi-Channel Transfer Of Data." This application is also related to U.S. patent application Ser. No. 09/565,580, filed May 4, 2000, entitled "Multi-Channel, Multi-Service Development Architecture."

FIELD OF THE INVENTION

The present invention relates to interactive debugging and, more specifically, to interactive debugging in a multi-channel, multi-service environment.

BACKGROUND OF THE INVENTION

Traditionally, Digital Signal Processors (DSPs) have been used to run single channels, such as, for example, a single DS0 or time division multiplexed (TDM) slot, that handle single services, such as modem, vocoder, or packet processing. Multiple services or multiple channels require multiple DSPs, each running its own small executive program (small kernel) and application. The executive programs reserve some area in memory for application code. When applications need to be switched, these executive programs overlay this memory with the new application.

Channels may take one of the following forms: one channel carried on a physical wire or wireless medium between systems (also referred to as a circuit); time division multiplexed (TDM) channels, in which signals from several sources, such as telephones and computers, are merged into a single stream of data and separated by a time interval; and frequency division multiplexed (FDM) channels, in which signals from many sources are transmitted over a single cable by modulating each signal on a carrier at a different frequency.

Recent advances in processing capacity now allow a single chip to run multiple channels. 
With this increase in capacity has come a desire to run different services simultaneously and to switch between services.

A current method of implementing multiple services or multiple channels involves writing custom versions of all control, overlay, and task-switching code. This requirement causes additional engineering overhead for the development and debugging of the applications. In addition, not all services may fit into the memory available to the DSP, and the services must be swapped in from the host system. This swapping, known as overlaying, adds significant complexity to the implementation of the DSP services. The extra development activity consumes DSP application development time.

The fact that DSPs have a single thread of control creates problems for developing and debugging in the multi-channel, multi-service environment. Debugging an application on a single processor stops all other applications and channels running on that processor. If the processor is running, real-time diagnostics on a channel or service cannot be obtained without interfering with the operation of the other channels and services. In addition, a debugging system typically needs to have direct access to the chip being diagnosed. That is, a conventional debugging system must use a special development board or a physical debug interface (such as a Joint Test Action Group (JTAG) interface) to provide debugging access. This makes debugging in a production environment an inflexible and cumbersome process.

Therefore, what is required is an efficient way of debugging a target application in a multi-channel, multi-service environment that allows the developer to obtain real-time diagnostics without interfering with the operation of the target application and other running applications, and that allows debugging services to be performed remotely.

SUMMARY OF THE INVENTION

A method and apparatus for debugging are described. In one embodiment, a target construct is selected for debugging. 
Data related to an operation of the target construct is accessed by a debug construct in real time. At least a portion of this data is retrieved without disturbing the operation of the target construct to debug the target construct.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements.

FIG. 1 is a system architecture of one embodiment for a multi-channel, multi-service system;

FIG. 2 is a block diagram of one embodiment for a processing chip of FIG. 1;

FIG. 3 is a block diagram of one embodiment for multiple sockets/services within a processing chip;

FIG. 4a is an exemplary diagram of channel sockets within the system of FIG. 1;

FIG. 4b is a block diagram of one embodiment for a service control socket (SCS) configuration;

FIG. 5a is a block diagram of one embodiment for an interactive debugging system;

FIGS. 5b and 5c are block diagrams of two alternate embodiments for an interactive debugging system operating over a network;

FIG. 6 is a block diagram of one embodiment for a debugging process;

FIG. 7 is a flow diagram of one embodiment for an interactive debugging system;

FIG. 8 is a flow diagram of one embodiment for a multi-channel, multi-service debugging system; and

FIG. 9 illustrates an exemplary implementation of one embodiment for a multi-channel, multi-service debugging system.

DETAILED DESCRIPTION

A method and system for interactive debugging are described. In one embodiment, a target construct is selected for debugging. Data related to an operation of the target construct is accessed by a debug construct in real time. 
At least a portion of this data is then retrieved without disturbing the operation of the target construct to debug the target construct.In the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. 
Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. 
It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.

FIG. 1 is a system architecture of one embodiment for a multi-channel, multi-service system 100. Referring to FIG. 1, host 102 is connected via system bus 104 and bridge 106 to one or more processing chips 108, 110, 112, 114. In addition, bridge 106 is connected to buffer memory 116. Bridge 106 is connected via bus 118 to the processing chips 108-114. Processing chips 108-114 are connected via bus 120 to time division multiplexing (TDM) interface 122. TDM interface 122 is connected to a number of modules and ports installed on the TDM bus 124. In addition, TDM interface 122 is connected to TDM signaling interface 126.

TDM is a base-band technology in which individual channels of data or voice are interleaved into a single stream of bits (or framed bits) on a communications channel. Each input channel receives an interleaved time segment so that all channels equally share the medium used for transmission. If a channel has nothing to send, the slot is still dedicated to the channel and remains empty.

In one embodiment, an operating system running within multi-channel, multi-service system 100 supports telecommunication and data communication applications. These applications involve running multiple channels of protocol stacks built from multiple services. Multi-channel, multi-service system 100 enables the dynamic configuration of services within the embedded telecommunication and data communication environment. In addition, the operating system automatically defines the allocation of resources for the channels within system 100.

FIG. 2 is a block diagram of one embodiment for a processing chip 108. Each processing chip 108 contains clusters 202 and main processor 204. Each cluster 202 contains a cluster processor 208 and a number of basic functional units (BFUs) 210. 
Main processor 204 is configured to perform all control code and operations, including receiving control messages from host 102 and allocating channels to the various clusters 202.

Processing chip 108 also includes a shared static random access memory (shared SRAM) 206. Shared SRAM 206 may be accessed directly by all the cluster processors 202 and main processor 204. An instruction store contained within the BFUs 210 can also access shared SRAM 206. Shared SRAM 206 is used for storing operating system and application code, as well as hosting the data for code running on main processor 204.

Each cluster 202 contains cluster SRAM 212. Cluster SRAM 212 is responsible for maintaining channel data running on each individual cluster 202. Cluster SRAM 212 includes I/O buffers and program stacks. The operating system of system 100 uses the hardware to enforce memory protection to prevent a channel from inadvertently corrupting another channel's data or code.

External dynamic random access memory (DRAM) 214 may be used for application data too large to fit in the on-chip cluster SRAM 212 or shared SRAM 206 and may be used as a swap area for application code.

Each processing chip 108 includes two line side ports 216 and two bus ports 218. These ports are used for packet side data and control transport. In addition, host port 220 is used to communicate with the host 102 and is accessible only from main processor 204, and serial boot port 222 is used to send the boot stream to the chip.

FIG. 3 is a block diagram of another embodiment for a portion of a multi-channel, multi-service system 100. Referring to FIG. 3, service 302 is a self-contained set of instructions that has data input/output, control, and a defined interface. Service 302 performs defined processing upon a certain amount and a certain format of data. In addition, service 302 emits a certain amount and a certain format of data. In an alternate embodiment, service 302 may process data in a bidirectional manner. 
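As a rough illustration of the constructs of FIG. 3, a service can be modeled as a self-contained unit with a defined processing interface, and a service stack as an ordered collection of such units processed in-order. The following sketch is illustrative only; the class names, method names, and example transformations are assumptions and do not appear in the specification.

```python
# Illustrative sketch only: a "service" as a self-contained processing
# unit and a "service stack" as an ordered collection of services
# processed in-order. All names here are assumptions for illustration.

class Service:
    """A self-contained unit with data input/output and a defined interface."""
    def process(self, data: bytes) -> bytes:
        raise NotImplementedError

class Uppercase(Service):
    """Example service: transforms its input (standing in for real DSP work)."""
    def process(self, data: bytes) -> bytes:
        return data.upper()

class Append(Service):
    """Example service: emits its input plus a fixed suffix."""
    def __init__(self, suffix: bytes):
        self.suffix = suffix
    def process(self, data: bytes) -> bytes:
        return data + self.suffix

class ServiceStack:
    """An ordered collection of services; each service runs in-order."""
    def __init__(self, services):
        self.services = list(services)
    def process(self, data: bytes) -> bytes:
        for service in self.services:
            data = service.process(data)
        return data
```

Because the stack is processed in-order, `ServiceStack([Append(b"a"), Uppercase()])` and `ServiceStack([Uppercase(), Append(b"a")])` produce different results for the same input, which mirrors the uniqueness of ordering in a service stack.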
Service stack 304 is a linked set of services 302 that provide a larger processing unit. Service stack 304 is a unique, ordered collection of services 302, such as, for example, echo cancellation services, tone detection services, and voice conferencing services. The services 302 within the service stack 304 are processed in-order.Socket 306 is a virtual construct that provides a set of services 302 in the form of a service stack 304. The operating system processes services 302 that are encapsulated in socket 306 including connecting the line and/or packet data flow. Processing within socket 306 is data driven. That is, services 302 are invoked by sockets 306 only after the required data has arrived at socket 306. In one embodiment, applications may build protocol stacks by installing a service stack 304 into a socket 306. Services 302, service stacks 304, and sockets 306 are allocated and de-allocated as required by system 100.FIG. 4a is an exemplary diagram of channel sockets (CSs) 430 (422, 424, 426) within system 100. CSs 430 are specialized sockets 306 that direct the flow of information through the system 100 between two or more devices or end points 402, 404, 406, 408. End points may be, for example, physical devices. CS 430 is a socket 306 that accepts a service stack 304 and processes channel data. CS 430 connects any line side slot or bus channel on one end of CS 430 to any other line side slot or bus channel on the opposite end of CS 430. CS 430 is defined by external, physical interface points and provides the ability to process the service stack 304. Information may flow from a physical end point 402 via connection 418 to CS 424. The information is processed by services 302 within CS 424 and is transferred via connection 420 to end point 406. The operating system may dynamically change the flow of information through different CSs 430 depending upon the needs of the end points 402-408. 
For example, data may be initially set to flow from end point 404 via connection 410 through CS 422 and via connection 412 to end point 408. However, if service stack 304 within CS 422 is incompatible with the data, CS 422 notifies the operating system to break the flow and redirect the information. The operating system then redirects the flow to an existing CS 430 with the proper service stack 304 or creates a new CS 430. Referring to FIG. 4a, the operating system may redirect the flow from end point 404 to end point 408 through connection 414, CS 426, and connection 416. In addition, the operating system may replace the service stack in CS 422 with another stack compatible with the data.

A CS 430 is defined by the external, physical interface end points 402, 404, 406, and 408 and the data flowing through the CS 430. Each end point 402-408 may be a different physical device, or the end points may share the same physical interface or device. CS 422 services may perform a conversion of data. The CS 430 mechanism allows a service stack 304 to be built into the information flow, in which services 302 may direct or process the data as it flows through the system. For example, if a first service outputs a 40-byte data frame and a second service uses an 80-byte frame, in one embodiment, the second service waits until the first service has output enough data for the second service to process. In an alternate embodiment, the first service delays sending data to the second service until it accumulates enough data. Services 302 are independent modules and standalone plug-ins. Thus, in one embodiment, services 302 may be dynamically downloaded into shared SRAM 206 in real time to build CSs 430 as required by the data.

Applications may be written without regard for particular input/output channels or physical interfaces. The operating system is in charge of dynamically allocating and deallocating sockets and connecting input/output components. 
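The frame-size adaptation described above, in which a service consuming 80-byte frames waits for the 40-byte frames emitted upstream to accumulate into a full input frame, can be sketched as follows. The class name and buffering policy are assumptions chosen for illustration.

```python
# Illustrative sketch of frame-size adaptation between two services:
# small upstream frames are buffered until a full downstream frame is
# available. The class name and API are assumptions, not the patent's.

class FrameAccumulator:
    """Buffers upstream frames until a full output frame can be emitted."""
    def __init__(self, out_frame_size: int):
        self.out_frame_size = out_frame_size
        self.buffer = bytearray()

    def push(self, frame: bytes) -> list:
        """Accept one upstream frame; return zero or more full output frames."""
        self.buffer.extend(frame)
        full_frames = []
        # Emit complete output frames as soon as enough bytes have arrived.
        while len(self.buffer) >= self.out_frame_size:
            full_frames.append(bytes(self.buffer[:self.out_frame_size]))
            del self.buffer[:self.out_frame_size]
        return full_frames
```

With 40-byte inputs and an 80-byte output frame, the first `push` returns no frames and the second returns one complete 80-byte frame, matching the waiting behavior described for the second service.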
Thus, the CS 430 mechanism provides single-channel programming with multiple-channel execution. In addition, an application may be written to provide flow of information between end points 402-408 independent of the type of the operating system and independent of the type of data being processed. CS 430 functions are independent of both the operating system and the hardware configuration. The mechanism also relieves applications of the management of channels and places the management into the operating system, thus producing channel-independent applications. In addition, the CS 430 mechanism allows the applications and services 302 to be platform independent.

FIG. 4b is a block diagram of another embodiment for a portion of a multi-channel, multi-service system 100. Referring to FIG. 4b, system 100 includes SCS 452, which is connected to a host and to a plurality of CSs 450. Service control socket (SCS) 452 is a socket 306 containing the control portion of the services 302 for a service stack 304. Each unique service stack 454 has its own SCS 452. Each SCS 452 controls multiple instances of the same CS 450. Each service 302 within SCS 452 is the control portion for the respective service 302 within CS 450. Services 302 in a CS 450 service stack may receive control messages from that stack's SCS 452. Each service 302 has a data domain and a control domain. The data domain is maintained within socket 306, and the control domain is maintained within SCS 452.

In one embodiment (not shown), a specialized socket, a platform control socket (PCS), runs on the main processor when the system boots. It is the only socket 306 that has knowledge of system-wide resources. The PCS manages all resources, including allocating the SCSs to clusters 202, allocating TDM time slots, and allocating bus channels. Applications may not allocate or deallocate any services within the PCS. 
Specifically, the PCS boots clusters 202 and chips 108, loads and unloads services 302, creates and destroys SCSs, sends a heartbeat to the host 102, and detects if a cluster 202 is inoperative.

In one embodiment, the CS 430 mechanism is used in debugging of applications and services. Since services may be loaded dynamically, the user may choose not to have the debugger in the system if there is no need for debugging operations.

FIG. 5a is a block diagram of one embodiment for an interactive debugging system. Referring to FIG. 5a, debugging system 500 includes debug core 520, graphical user interface (GUI) 510, and abstract machine interface (AMI) 530. Debug core 520 is coupled to GUI 510 via a text-based bi-directional interface 505. GUI 510 provides an application developer with a simple and convenient way of debugging an application or a service. The tools provided by GUI 510 may include, for example, top-level menus, context menus, windows, dialog boxes, and setting of user preferences. Text-based interface 505 provides two-way communication between debug core 520 and GUI 510. In one embodiment, GUI 510 may receive a command from the application developer and send it to debug core 520 using text-based interface 505. Debug core 520, in turn, may send data to GUI 510 using text-based interface 505. GUI 510 may then display this data to the application developer in various ways. For example, debug core 520 may pass information about currently running sockets and services to GUI 510. GUI 510 may then display this information, allow the application developer to select a socket or service for debugging, and transfer data identifying the selected socket or service back to debug core 520.

Debug core 520 is coupled to AMI 530 via text-based bi-directional interface 525. AMI 530 directly communicates with chip 550 or simulator 540. Chip 550 represents processing chips 108-114.
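A text-based bi-directional interface such as 505 or 525 can be sketched as a simple verb-plus-arguments command dispatcher. The specific verbs and response strings below are hypothetical; the patent only specifies that the interface is text-based and two-way.

```python
def handle_command(line, handlers):
    """Parse one text command ("VERB arg1 arg2 ...") and dispatch it to a
    handler, returning a text response string."""
    parts = line.strip().split()
    if not parts:
        return "ERR empty command"
    verb, args = parts[0].upper(), parts[1:]
    handler = handlers.get(verb)
    if handler is None:
        return "ERR unknown command %s" % verb
    return handler(args)


# Hypothetical commands a debug core might exchange with an AMI-like layer.
handlers = {
    "LIST": lambda args: "OK sockets=3 services=7",
    "BREAK": lambda args: "OK breakpoint at %s" % args[0],
}
```

Because both request and response are plain text, the same exchange can be driven by a GUI, by a stand-alone tool, or by an automated QA script, which matches the modularity benefits the text attributes to interfaces 505 and 525.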
Simulator 540 may be used to perform diagnostics of an application or a service in a simulated environment. Simulator 540 allows loading and running an application as if it were running on the chip itself. All the features and capabilities inherent in chip 550 are available through simulator 540.

In one embodiment, AMI 530 provides an abstract view of multi-channel, multi-service system 100 at the hardware and operating system level. AMI 530 may work with a single target chip or simulator at a time and may view the target chip or simulator as a single entity. AMI 530 allows debug core 520 to provide an isolated debugging environment for each socket or service. In one embodiment, debug core 520 uses AMI 530 to provide an application developer with the ability to control all possible debugging and diagnostic activity on a target socket or service.

Text-based interface 525 enables two-way communication between debug core 520 and AMI 530. The use of text-based interface 525 simplifies the development process by allowing the design of debug core 520 and AMI 530 as independent modules. In addition, text-based interface 525 allows running debug core 520 and AMI 530 as stand-alone applications. Text-based interface 525 may also improve the quality assurance (QA) process by providing a QA user with the ability to enter a command and get the response back in an automated environment.

In one embodiment, debugging system 500 may operate in various modes. For example, a simulator direct mode (Simulator Direct) allows debug core 520 to communicate with simulator 540 using AMI 530. This mode may provide significant visibility into the BFUs 210 and the state of the system 108, but may not be aware of sockets and other high-level operating system constructs. Simulator Direct provides full control over the simulator. Hence, debug core 520 may obtain all performance analysis results that are supported by the simulator.
In one embodiment, AMI 530 may analyze the run-time state of system 108 to determine information about sockets and services directly from the data structures of the operating system.

Debugging system 500 may also operate in an in-circuit emulator mode (ICE). ICE allows debug core 520 to communicate with chip 550 through AMI 530 using the Joint Test Action Group (JTAG) interface of chip 550. ICE supports debugging of the operating system by controlling the cluster processors 208. ICE does not provide access to BFUs 210 and is not capable of controlling or accessing sockets, although one skilled in the art will realize that such functionality can be added easily.

Another exemplary mode is an application debug mode (Application Debug). Application Debug may work with either simulator 540 or chip 550. Application Debug relies on the assistance of the operating system to provide access to system resources (e.g., BFUs 210 and cluster processors 208). Application Debug is capable of controlling and accessing sockets and allows debug core 520 to maintain information about running sockets and services. In one embodiment, this information includes the current state of sockets and/or services, which may be identified as, for example, running, stopped, or not started. Debug core 520 may communicate the information to GUI 510. GUI 510 may then present this information to the application developer for selecting a target construct on which to perform debugging operations. It will be recognized by one skilled in the art that the modes described above are merely exemplary and that a wide variety of modes other than those discussed above may be used by debugging system 500 without loss of generality.

FIGS. 5b and 5c are block diagrams of two alternate embodiments for an interactive debugging system operating over a network. Referring to FIG. 5b, client computer system 560 includes a debugger which communicates with server computer system 570 over a network connection 564.
Client 560 contains a debug core and GUI 562. Network connection 564 may include, for example, a local area network and a wide area network. Server 570 includes server application 572, which enables communication between chip 574 residing on server 570 and the debugger residing on client 560. In one embodiment, the debugger may operate in ICE debugging mode. In this embodiment, server application 572 communicates commands from the debugger to chip 574 and then communicates the resulting data from chip 574 to client 560.

Alternatively, the debugger may operate in Application Debug mode. In Application Debug mode, a debugging request from client 560 is sent over network 564 to server 570. Server application 572 communicates the request directly to chip 574. The operating system on chip 574 interprets the request into commands (e.g., set breakpoints or watchpoints, stop the execution, read memory, get status, or display a variable), performs these commands, and generates the appropriate response. The response is then transferred back to client 560 over network connection 564 using server application 572. Network connection 564 may be packet-based (e.g., TCP/IP), cell-based (e.g., ATM) or serial-based (e.g., SpiceBus or Utopia). In one embodiment, in a multi-channel, multi-service environment, the operating system on chip 574 may transfer information about running services to client 560 over network connection 564 and allow the debugger on client 560 to operate on an individual service or on a set of services.

Referring to FIG. 5c, another embodiment for a debugging system operating over a network is illustrated. In this embodiment, the debugger on client computer 560 described above in conjunction with FIG. 5b communicates with access router 590 over a network connection. The network connection may include, for example, a local area network such as Ethernet 586 and a wide area network such as ATM 584.
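A remote Application Debug request/response exchange of the kind described above could be framed as a length-prefixed message. The JSON payload and 4-byte length prefix here are assumptions for illustration; the patent does not specify a wire format, only that requests (e.g., set breakpoints, read memory, get status) travel over a packet-, cell-, or serial-based connection.

```python
import json
import struct


def encode_request(command, params):
    """Serialize a debug request as a 4-byte big-endian length prefix
    followed by a JSON payload (a hypothetical framing)."""
    payload = json.dumps({"command": command, "params": params}).encode()
    return struct.pack(">I", len(payload)) + payload


def decode_request(data):
    """Inverse of encode_request: strip the length prefix and parse
    the JSON payload back into a dict."""
    (length,) = struct.unpack(">I", data[:4])
    return json.loads(data[4:4 + length].decode())


msg = encode_request("set_breakpoint", {"socket": 3, "address": "0x4000"})
assert decode_request(msg)["command"] == "set_breakpoint"
```

The same encoding works unchanged whether the bytes are carried by TCP/IP, ATM cells, or a serial link, which is one reason to keep the framing independent of the transport.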
The debugger on client 560 may operate in ICE debugging mode or Application Debug mode as described above in conjunction with FIG. 5b. Router 590 includes host processor 592, which controls operations on router 590 and enables communication between the debugger on client 560 and one or more chips 594 on router 590. Host processor 592 may provide more than one network connection (e.g., Ethernet 586 and ATM 584) between client 560 and router 590 at the same time.

FIG. 6 is a block diagram of one embodiment for a debugging process. Referring to FIG. 6, processing environment 600 may have a number of processing elements (or constructs) running. In one embodiment, construct 610 may run a real time application and construct 660 may run a control task or an operating system task. Construct 610 has independent local memory 620, and construct 660 has independent local memory 640. In one embodiment, constructs 610 and 660 may have shared memory 630, in which separate portions of memory 630 may be assigned to constructs 610 and 660, respectively. Within processing environment 600, each construct has a state. Such state may include the current value of program counters, registers, or performance counters. State 650 illustrates the state of construct 610. In one embodiment, construct 660 may act as a debug agent and may have the capability of accessing data related to the operation of target construct 610. Debug construct 660 may communicate with host 102, or host 560 over a network, and perform the commands received from host 102 or 560.

In one embodiment, debug construct 660 may access and monitor the data related to the operation of target construct 610 without affecting the real time environment of target construct 610. For example, debug construct 660 may be able to look at ("snoop" on) local memory 620, state 650, and the portion of shared memory 630 which is assigned to target construct 610.
In one embodiment, debug construct 660 is configured to monitor the above data on a regular basis, e.g., reading local memory 620 every 10 milliseconds and retrieving certain data in real time. Alternatively, a minor modification may be made to the application run by target construct 610 to notify (e.g., send a control signal to) debug construct 660 when target construct 610 completes a certain task. This notification allows debug construct 660 to avoid reading the data while this data is being modified by target construct 610.

In one embodiment, the data read by debug construct 660 may be transferred to host 102 or 560. Host 102 or 560 may then present data to application developers in real time and may allow them to request a certain level of detail and a particular type of data to be retrieved. Thus, an application developer can visualize the operation of target construct 610 from outside of construct 610 without interfering with the real time environment of target construct 610. In a multi-channel, multi-service environment, the application developer can monitor the operation of multiple services at the same time.

In another embodiment, the debugging process may directly intercede in the real time environment of construct 610. Debug construct 660 may, for example, modify state 650 to set a breakpoint register or a watchpoint register, request a notification when target construct 610 hits a breakpoint, and stop the operation of target construct 610. Subsequently, debug construct 660 may restart the operation of target construct 610 upon receiving a command from host 102 or 560.

FIG. 7 is a flow diagram of one embodiment for an interactive debugging system. Initially at processing block 712, a target construct is selected for debugging. In one embodiment, the target construct is a service operating in the processing environment 600 in real time. In alternate embodiments, the target construct may be a set of services, a service stack, a socket, or a set of sockets.
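The notification-gated snooping described above (a debug construct that polls on a timer but reads memory only after the target signals a consistent point) can be sketched as follows. The class name and the dict-as-shared-memory model are illustrative assumptions.

```python
class DebugSnooper:
    """Reads a snapshot of a target's memory only after the target
    signals task completion, so data is never read mid-update, and
    never writes to the target's memory."""

    def __init__(self, target_memory):
        self.target_memory = target_memory   # shared region, read-only here
        self.safe_to_read = False
        self.snapshots = []

    def notify_task_complete(self):
        # The target construct signals a consistent point (the "minor
        # modification" to the target application described in the text).
        self.safe_to_read = True

    def poll(self):
        """Called on a timer (e.g., every 10 ms); snapshots memory only
        at consistent points. Returns True if a snapshot was taken."""
        if self.safe_to_read:
            self.snapshots.append(dict(self.target_memory))
            self.safe_to_read = False
            return True
        return False


mem = {"pc": 0x100, "frames_done": 0}
snooper = DebugSnooper(mem)
assert snooper.poll() is False        # no notification yet: skip this tick
mem["frames_done"] = 1                # target makes progress...
snooper.notify_task_complete()        # ...and signals a safe point
assert snooper.poll() is True
```

Because the snooper only reads, the target's real time behavior is unaffected; the snapshots can then be shipped to a host for display, as the text describes.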
At processing block 714, data related to an operation of the target construct is accessed by a debug construct in real time. The debug construct may be a service, a set of services, a service stack, or a socket. In one embodiment, the debug construct may be dynamically allocated on the chip by the operating system similarly to other services and sockets described above. When the debugging operation is completed, the operating system may deallocate the debug construct. Alternatively, debugging can be performed on the simulator. The simulator has all the features and capabilities inherent in the chip. An application developer may load and run an application on the simulator as if the application were running on the chip. In addition, the simulator includes a profiler, which provides detailed statistics about running applications.

In yet another embodiment, data may be collected during the real-time operation of the chip. Subsequently, a service, a set of services, a socket, or a set of sockets may be initialized in a simulated environment using the collected data to reproduce and thoroughly debug a problem that occurred in the real-time system.

At processing block 716, the data related to the operation of the target construct, or a certain portion of this data, is monitored by the debug construct. That is, the debug construct snoops on a local memory of the target construct, a section of a shared memory which is assigned to the target construct, or the state of the target construct. The debug construct monitors the above data without disturbing the operation of the target construct.

In one embodiment, the operating system can decide which data is to be snooped on. In addition, the operating system may retrieve (e.g., command the debug construct to retrieve) this data in real time and send it to a host application to provide interactive debugging. In one embodiment, the host system may run a debugger which communicates with the operating system running the debug construct.
The host system may present the retrieved data to application developers, receive their input, and communicate it back to the debug construct. In one embodiment, the host system includes a GUI which simplifies the use of the debugging system by application developers. For example, the GUI provides the application developers with easy-to-use tools for selecting a chip for debugging, creating new sockets and service stacks on the chip, setting up input and output files for each created socket, and monitoring the operation of any socket or service stack on the chip. In one embodiment, the debug construct and the host system communicate remotely through a communications infrastructure.

In one embodiment, the operating system may measure the bandwidth required to transfer the retrieved data. The operating system may then make a decision on the completeness of the data to be sent based on the available bandwidth. In one embodiment, the data may be sent over a network. Various network interfaces may be used, including, for example, a packet-based network interface, a cell-based network interface, or a serial interface. In one embodiment, more than one host system may communicate with the operating system on the chip. In this embodiment, host processors may interface an external network protocol (e.g., TCP/IP) to an internal protocol (e.g., serial) connecting to the chip.

FIG. 8 is a flow diagram of one embodiment for a multi-channel, multi-service debugging system. At processing block 812, information about a plurality of services currently running on processor 108, 110, 112 or 114 is provided to the application developer. In one embodiment, the information includes the current state of the plurality of services. The information may also relate to one or more sockets and include the current state of each socket.
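The bandwidth-based decision on data completeness mentioned above might look like the sketch below: given a measured bandwidth and a per-frame time budget, pick how much diagnostic detail can be shipped. The three detail levels and the numeric thresholds are invented for illustration.

```python
def select_detail_level(data_size_bytes, bandwidth_bps, frame_budget_s):
    """Decide how complete a diagnostic transfer can be, given the
    measured bandwidth and the time budget available per frame."""
    budget_bytes = bandwidth_bps / 8 * frame_budget_s
    if data_size_bytes <= budget_bytes:
        return "full"            # everything fits: send it all
    if data_size_bytes <= budget_bytes * 4:
        return "summary"         # e.g., counters only, drop raw buffers
    return "state_only"          # e.g., just running/stopped status


# 1 Mbit/s link, 10 ms budget -> 1250 bytes fit per frame.
assert select_detail_level(1_000, 1_000_000, 0.01) == "full"
```

The operating system could re-run such a check as the measured bandwidth changes, degrading gracefully from full dumps to summaries to bare state rather than stalling the real-time system.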
The current state may be identified as running, stopped, or not started.

In one embodiment, the information is obtained by the operating system, which passes it to the host system. In one embodiment, the host system includes a debugger running on the host system. The debugger presents the information about currently running services to the application developer.

At processing block 814, an isolated debugging environment is maintained for a plurality of running services. The isolated debugging environment may provide a separate context (e.g., breakpoints, watchpoints, or variable display) for each running service. In one embodiment, the debugger running on the host system and the operating system running on processor 108, 110, 112 or 114 cooperate to provide the isolated environment for each running service.

At processing block 816, a target construct is selected for debugging from the plurality of running services. In one embodiment, the target construct may be a service, a set of services, a socket, or a set of sockets. Thus, more than one service or socket may be selected by the application developer for performing simultaneous debugging operations.

In one embodiment, the debugger allows the user to dynamically load services into the target construct. The debugger may then cooperate with the operating system to create one or more instantiations of loaded services. In addition, the debugger may allow the user to specify input/output data that supersedes physical interfaces. The substitution may be done for a certain socket or on a whole-interface level in cooperation with the operating system or the debug construct. In one embodiment, all input/output data and socket data is saved on each frame.
Subsequently, this data may be read into a simulator for more controlled debugging.

In one embodiment, the operating system provides a debugging environment that allows the application developer to debug the operation of the target construct without affecting the real time environment of other running services. The application developer may debug the operation of the target construct by setting breakpoints on each selected service and may arbitrarily switch between the services during the debugging process. In one embodiment, the multi-channel, multi-service debugging may be performed remotely over a network. Remote debugging is described in more detail above.

FIG. 9 illustrates an exemplary display window of one embodiment for a multi-channel, multi-service debugging system. Referring to FIG. 9, various views on an application are provided by the debugger. The application developer may see, for example, input and output files, C++ classes, and raw memory addresses. In addition, the debugger provides the application developer with a list of currently running sockets and services. The application developer may select one or more services from the list and view various information related to the operation of the selected service.

A method and system for interactive debugging have been described. The method allows selecting a target construct for debugging. The method may provide for accessing data related to an operation of the target construct by a debug construct in real time. At least a portion of this data may be monitored without disturbing the operation of the target construct to debug the target construct. If needed, the method may retrieve at least the portion of this data and transfer it to a host application. Further, the method may allow the host application to communicate with the debug construct over a network. The method may operate in a multi-channel, multi-service environment.
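The isolated per-service debugging environment described above (processing block 814, with a separate context of breakpoints, watchpoints, and variable displays for each running service) can be sketched as a small session object. The names and the context fields are illustrative assumptions.

```python
class DebugSession:
    """Keeps a separate debugging context per running service, so
    operations on one service never leak into another's environment."""

    def __init__(self):
        self.contexts = {}

    def context(self, service_id):
        """Return (creating on first use) the isolated context for one
        service: its breakpoints, watchpoints, and current state."""
        return self.contexts.setdefault(
            service_id,
            {"breakpoints": set(), "watchpoints": set(),
             "state": "not started"})

    def set_breakpoint(self, service_id, address):
        self.context(service_id)["breakpoints"].add(address)


# Breakpoints set on one service do not appear in another's context,
# so the developer can arbitrarily switch between services.
session = DebugSession()
session.set_breakpoint("service_a", 0x4000)
session.set_breakpoint("service_b", 0x2000)
assert session.context("service_b")["breakpoints"] == {0x2000}
```

A host-side debugger and the on-chip operating system would each hold their half of such state and keep them synchronized, which is the cooperation the text describes.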
With the present invention, an efficient way of debugging a target application in a multi-channel, multi-service environment is provided, which allows obtaining real-time diagnostics without interfering with the operation of the target application and other running applications and which is capable of performing debugging services remotely.

Several variations in the implementation of the method for interactive debugging have been described. The specific arrangements and methods described here are illustrative of the principles of this invention. Numerous modifications in form and detail may be made by those skilled in the art without departing from the true spirit and scope of the invention. Although this invention has been shown in relation to a particular embodiment, it should not be considered so limited. Rather, it is limited only by the appended claims.
An air mover external to a mobile computing device provides enhanced cooling to the device by generating forced air delivered to the device via cooling channels connected to openings in the device chassis. If the mobile computing device is passively cooled (i.e., a fanless device), the enhanced cooling can enable the device or device components to operate at a higher power consumption level without exceeding device/component thermal limits, or can allow features that consume high amounts of power (e.g., fast charging) to be incorporated into the device. The air mover can be integrated into or attached to a cable that provides power to the mobile computing device. The air mover can be powered by the cable. The air mover can dynamically adjust the flow rate of the forced air based on device/component performance information (temperature, power consumption, current consumption) or operational state information of the device.
1. An apparatus comprising:
a cable comprising a plurality of wires;
one or more cooling channels external to the cable;
an air mover connected to the one or more cooling channels, the air mover to generate forced air and provide the forced air to the cooling channels; and
a connector located at an end of the cable to connect the wires and the cooling channels to a mobile computing device.
2. The apparatus of claim 1, wherein the air mover is integrated into the cable.
3. The apparatus of claim 1 or 2, wherein the air mover is external to the cable.
4. The apparatus of any one of claims 1-3, wherein the end of the cable is a first end of the cable and the air mover is positioned at a point along the cable between the first end of the cable and a second end of the cable.
5. The apparatus of any one of claims 1-4, wherein the connector is a first connector and the end is a first end of the cable, the air mover connected to a power wire of the plurality of wires at a point along the cable between the first connector and a second connector located at a second end of the cable.
6. The apparatus of any one of claims 1-5, wherein the cooling channels are internal to the cable.
7. The apparatus of any one of claims 1-6, wherein the connector comprises a first connector portion that encloses the wires and one or more second connector portions that enclose the cooling channels, the one or more second connector portions releasably attachable to the first connector portion.
8. The apparatus of any one of claims 1-7, further comprising an air mover controller to control a flow rate of the forced air generated by the air mover through the cooling channels.
9. The apparatus of claim 8, further comprising current sensing circuitry, the air mover controller to control the flow rate of the forced air based on a measure of how much current is flowing through a power wire of the plurality of wires provided by the current sensing circuitry.
10. The apparatus of claim 8, wherein the cable further comprises one or more data wires, the air mover controller to receive mobile computing device power consumption information over the one or more data wires, the air mover controller to control the flow rate of the forced air based on the mobile computing device power consumption information.
11. The apparatus of claim 8, wherein the cable further comprises one or more data wires, the air mover controller to receive mobile computing device current consumption information over the one or more data wires, the air mover controller to control the flow rate of the forced air based on the mobile computing device current consumption information.
12. The apparatus of claim 8, wherein the cable further comprises one or more data wires, the air mover controller to receive mobile computing device operational state information over the one or more data wires, the air mover controller to control the flow rate of the forced air based on the mobile computing device operational state information.
13. The apparatus of claim 8, wherein the cable further comprises one or more data wires, the air mover controller to receive information indicating an operating temperature of the mobile computing device and a critical temperature of the mobile computing device, the air mover controller to control the flow rate of the forced air based on a difference between the critical temperature of the mobile computing device and the operating temperature of the mobile computing device.
14. The apparatus of claim 8, wherein the cable further comprises one or more data wires, the air mover controller to receive information indicating an operating temperature of a mobile computing device component and a critical temperature of the mobile computing device component, the air mover controller to control the flow rate of the forced air based on a difference between the critical temperature of the mobile computing device component and the operating temperature of the mobile computing device component.
15. The apparatus of any one of claims 1-14, wherein the apparatus is a power adapter to convert an external power supply signal to an input power supply signal that is suitable for use by the mobile computing device.
BACKGROUND

Some existing mobile computing devices utilize a passive cooling approach as a thermal management solution to achieve a sleek profile and quiet operation. These fanless devices can use heat pipes, heat sinks, vapor chambers, heat spreaders, and other passive cooling elements to dissipate heat generated by device components. Passive cooling approaches may not provide the same level of cooling as active cooling approaches, and as a result, passively cooled mobile computing devices may not be able to perform at the same performance level as actively cooled mobile computing devices due to the tighter thermal constraints that passive cooling approaches can place on device performance.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A-1C illustrate perspective and cross-sectional views of an example air mover that can provide forced air to a mobile computing device.
FIG. 2 shows example mobile computing device chassis openings to which a connector comprising cooling channels can connect.
FIG. 3 illustrates a cross-sectional view of an example cooling channel.
FIG. 4 is a block diagram of an example cable with an integrated air mover connected to a mobile computing device.
FIG. 5 illustrates a first example method of controlling the flow rate of forced air provided to a mobile computing device.
FIG. 6 illustrates a second example method of controlling the flow rate of forced air provided to a mobile computing device.
FIG. 7 illustrates a third example method of controlling the flow rate of forced air provided to a mobile computing device.
FIG. 8 illustrates a fourth example method of controlling the flow rate of forced air provided to a mobile computing device.
FIG. 9 is a block diagram of an exemplary mobile computing device to which enhanced cooling can be applied in accordance with any of the embodiments disclosed herein.
FIG.
10 is a block diagram of an exemplary processor unit that can execute instructions as part of implementing technologies described herein.

DETAILED DESCRIPTION

The thermal constraints placed on mobile computing device performance by a passive cooling thermal management solution (e.g., heat pipes, heat spreaders, heat sinks, vapor chambers) can preclude the use of adaptive performance technologies that allow for the dynamic adjustment of device performance in such devices (e.g., Intel® Dynamic Thermal and Performance Framework (DTPF) and Intel® Dynamic Tuning Technology (DTT)). Even when a passively cooled mobile computing device is connected to an external power source, the performance of the device may not be able to be increased as an increase in power consumption may cause the device to exceed its thermal limits.

The external cooling technologies described herein provide for the enhanced air cooling of computing devices. Passively cooled mobile computing devices in particular can take advantage of the disclosed technologies. The addition of the flow of forced air over heat-generating components in a passively cooled device can create a power budget margin that can be utilized to operate the device at a higher level of power consumption and remain within thermal limits. The enhanced air cooling is provided by an air mover located external to a computing device. The air mover can be integrated into or attached to a cable that provides power (and additionally, in some embodiments, data connections) to a computing device. The enhanced air cooling can dynamically adjust the amount of forced air supplied to the computing device based on the performance or operational state of the computing device. The increased cooling of mobile computing devices can further allow for more comfortable usage by a user.
For example, a user who is using a laptop computer or tablet computing device that employs the technologies described herein may find the device more comfortable to use since the device may be less prone to overheating the user's lap due to the device operating at a lower temperature. That is, the power budget margin created by the enhanced air cooling is utilized to operate the device at a lower temperature rather than to increase its performance. Further, utilization of the enhanced air cooling technologies disclosed herein may allow for the incorporation of adaptive performance technologies, such as Intel® DTPF and DTT, into passively cooled mobile computing devices.

In the following description, specific details are set forth, but embodiments of the technologies described herein may be practiced without these specific details. Well-known circuits, structures, and techniques have not been shown in detail to avoid obscuring an understanding of this description. Phrases such as "an embodiment," "various embodiments," "some embodiments," and the like may include features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics.

Some embodiments may have some, all, or none of the features described for other embodiments. "First," "second," "third," and the like describe a common object and indicate different instances of like objects being referred to. Such adjectives do not imply objects so described must be in a given sequence, either temporally or spatially, in ranking, or any other manner. "Connected" may indicate elements are in direct physical or electrical contact with each other and "coupled" may indicate elements co-operate or interact with each other, but they may or may not be in direct physical or electrical contact. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.
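The dynamic adjustment of forced air based on device state, outlined above and elaborated in the methods of FIGS. 5-8 (e.g., control based on the difference between a critical and an operating temperature), might be sketched as a simple duty-cycle mapping. The linear ramp and the 20 °C full-margin threshold below are illustrative assumptions, not values from the disclosure.

```python
def flow_rate_duty(operating_temp_c, critical_temp_c,
                   min_duty=0.2, max_duty=1.0):
    """Map thermal headroom (critical minus operating temperature) to an
    air mover duty cycle: less headroom means more airflow."""
    headroom = critical_temp_c - operating_temp_c
    if headroom <= 0:
        return max_duty       # at or past the limit: full speed
    if headroom >= 20.0:
        return min_duty       # ample margin: quiet minimum airflow
    # Linear interpolation between full speed (0 C headroom) and the
    # minimum (20 C headroom).
    return max_duty - (max_duty - min_duty) * (headroom / 20.0)


# 5 C of headroom below a 100 C critical temperature -> 80% duty.
assert abs(flow_rate_duty(95.0, 100.0) - 0.8) < 1e-9
```

The same mapping could take component-level rather than device-level temperatures as input, or be driven instead by power or current consumption reported over the cable's data wires.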
Terms modified by the word "substantially" include arrangements, orientations, dimensions, spacings, or positions that vary slightly from the meaning of the unmodified term. For example, reference to a dimension that is substantially an indicated amount covers dimensions that vary within a few percent of the indicated amount.

As used herein, the term "integrated circuit component" refers to a packaged or unpackaged integrated circuit product. A packaged integrated circuit component comprises one or more integrated circuits mounted on a package substrate. In one example, a packaged integrated circuit component contains one or more processor units mounted on a substrate, with an exterior surface of the substrate comprising a solder ball grid array (BGA). In one example of an unpackaged integrated circuit component, a single monolithic integrated circuit die comprises solder bumps attached to contacts on the die. The solder bumps allow the die to be directly attached to a printed circuit board. An integrated circuit component can comprise one or more of any computing system component described or referenced herein or any other computing system component, such as a processor unit (e.g., system-on-a-chip (SoC), processor core, graphics processor unit (GPU), accelerator), I/O controller, chipset processor, memory, or network interface controller.

Reference is now made to the drawings, which are not necessarily drawn to scale, wherein similar or same numbers may be used to designate same or similar parts in different figures. The use of similar or same numbers in different figures does not mean all figures including similar or same numbers constitute a single or same embodiment. Like numerals having different letter suffixes may represent different instances of similar components.
The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.

In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate a description thereof. The intention is to cover all modifications, equivalents, and alternatives within the scope of the claims.

FIGS. 1A-1C illustrate perspective and cross-sectional views of an example external air mover that can provide forced air to a mobile computing device. FIG. 1A illustrates a first perspective view of an air mover 104 connected to a mobile computing device 108, FIG. 1B illustrates a perspective cross-sectional view of the air mover 104 taken along the line A-A' of FIG. 1A, and FIG. 1C illustrates a second perspective view of the air mover 104. The air mover 104 generates forced air and provides the forced air to the cooling channels 112, which deliver the forced air to the mobile computing device 108. The air mover 104 is integrated into a cable 116 that encloses (or carries) one or more wires (or lines) that provide power, ground, and/or data signals to the mobile computing device 108. A connector 122 connects the cooling channels 112 to cooling channel chassis openings 132 in a chassis 134 of the mobile computing device 108 and also connects the wires enclosed in the cable 116 to the device 108. The wires in the cable 116 are connected to the device 108 by an electrical connector 124 of the cable 116. The forced air flows through the cooling channels 112 and the mobile computing device 108 as indicated by arrows 126.
After flowing through the cooling channels 112 and into the mobile computing device 108, the forced air can pass over one or more device components, such as an SoC, charging circuitry, or one or more other integrated circuit components, to absorb heat generated by those components. The heated forced air can exit the device 108 at one or more exhaust vents.

The air mover 104 can comprise a fan, blower, synthetic jet, or another component suitable for generating forced air. A portion of the cable 116 is enclosed by an air mover housing 136, and the air mover 104 is powered through connection to one or more of the wires in the cable 116 that deliver power to the mobile computing device 108. An air mover can be considered to be integrated into a cable if wires carried by the cable pass through an air mover housing or if the air mover is connected to one or more wires enclosed by the cable to, for example, receive power or data. An air mover can be considered to be external to a cable if the air mover housing is separate from and external to a housing of the cable (e.g., housing 140) and the air mover receives power in a manner other than being connected to a power wire carried by the cable. For example, the air mover can be powered by a power wire carried by a second cable. Although the air mover 104 is illustrated in FIG. 1A as being cylindrical and axially aligned with the cable 116, the air mover 104 can have any shape and can be oriented in any manner relative to the cable 116. The mobile computing device 108 can be any mobile computing device described or referenced herein, such as a laptop, tablet, or smartphone, or any other mobile computing device.

FIG. 2 illustrates a side view of a portion of the mobile computing device chassis of FIGS. 1A-1B. The chassis 134 comprises an electrical connection opening 146 through which the electrical connector 124 of the connector 122 can be inserted to connect the electrical connector 124 to a corresponding connector of a mobile computing device.
In some embodiments, the electrical connector 124 can be a Universal Serial Bus Type-C (USB-C) plug that connects to a USB-C receptacle of the mobile computing device 108. When the electrical connector 124 is plugged into the electrical connector opening 146, the cooling channels 112 align with and connect to the cooling channel chassis openings 132. Although two cooling channel chassis openings 132 are illustrated in FIGS. 1A-1B and 2, in other embodiments, the chassis 134 can comprise additional or fewer cooling channel chassis openings 132. A first cooling channel chassis opening 132A is located adjacent to a first edge or end 142 of the electrical connection opening 146 and a second cooling channel chassis opening 132B is located adjacent to a second edge or end 148 of the opening 146. In other embodiments, the cooling channel openings 132 can be located adjacent to additional or other edges or ends of the opening 146, such as a top edge or end 152 and a bottom edge or end 156. For example, in some embodiments, a chassis can comprise four cooling channel chassis openings, with one cooling channel chassis opening being adjacent to each of the edges or ends 142, 148, 152, and 156 of the opening 146. In other embodiments, a chassis with two cooling channel chassis openings can be present, with the openings adjacent to the top and bottom edges 152 and 156. In some embodiments, more than one cooling channel chassis opening can be adjacent to an electrical connection chassis opening edge or end.

In some embodiments, the chassis 134 comprises spring-loaded doors 160 that open toward the interior of a mobile computing device to cover the cooling channel chassis openings 132 when cooling channels are not connected to the chassis openings 132. The doors 160 can aid in providing a more aesthetically pleasing industrial design or keep dust and other debris from entering the mobile computing device 108.
In some embodiments, elastomeric rings can be fitted to the cooling channel chassis openings 132 to prevent forced air from leaking from the cooling channels 112 to the outside environment when the cooling channels 112 are connected.

Returning to FIGS. 1A-1C, the air mover 104 is positioned at a point along the cable 116 that is proximate to the end of the cable comprising the connector 122. The closer an air mover is positioned to the connector that connects to a computing device, the lower the resistance that the cooling channels 112 present to the forced air. Thus, the closer an air mover is located to the connector 122, the higher the flow rate the air mover 104 may be able to provide to a connected mobile computing device. In other embodiments, an air mover can be positioned at any point along a cable between the connectors located at the cable ends. The air mover 104 can tap into a power line of the cable at any point along the cable to power the air mover 104. The cooling channels 112 are external to the cable 116; that is, a cable housing 140 does not enclose the cooling channels 112. In other embodiments, the cooling channels 112 are internal to the cable 116 and the cable housing 140 encloses the cooling channels 112. A housing 138 of the connector 122 encloses a portion of the cooling channels 112 and the electrical connector 124.

The cable 116 can be of any length, comprise any number of wires, and comprise connectors of any type. In some embodiments, the cable 116 can be part of a power adapter that converts an external power supply signal (e.g., "wall power") to an input power supply signal suitable for use by the mobile computing device.
In other embodiments, the cable 116 can be a charging cable, such as a USB, Ethernet, Thunderbolt, or HDMI (High-Definition Multimedia Interface) cable, that delivers power to the mobile computing device in addition to providing data communication capabilities between the mobile computing device 108 and another computing device.

In some embodiments, the air mover 104 and the cooling channels 112 can be part of an air mover component that is separate from the cable 116. In such embodiments, the cooling channels 112 can connect to the mobile computing device 108 via a connector that is separate from a connector that connects wires carried by a cable to a mobile computing device. In some embodiments, the separate air mover component is releasably attachable to the cable and/or the cable connector. For example, the air mover component can be snapped to, clipped to, pulled over, or otherwise releasably attached to the cable and/or the cable connector. The air mover component can comprise one or more cooling channel connectors that house a portion of the cooling channels and releasably attach to a cable connector. Thus, in embodiments where a separate air mover component is releasably attachable to an electrical connector of a cable, a connector that connects the cooling channels and the wires carried by the cable to a mobile computing device can comprise a first connector portion (e.g., electrical connector 124) that connects the cable wires to the device and one or more second connector portions that connect the cooling channels to the device, with the second connector portions being releasably attachable to the first connector portion.

FIG. 3 illustrates a cross-sectional view of an example cooling channel. The cooling channel 300 comprises an internal volume 304, a metal coil spring 308, and a shim 312 between the internal volume 304 and the metal coil spring 308.
The shim 312 can provide an airtight structure to prevent leakage of forced air from the internal volume 304 to the external environment, have a smooth interior surface (e.g., a surface with a low surface roughness (Ra)) to present a low resistance to forced air flowing through the cooling channel 300, and provide stiffness to prevent sharp bends in the cooling channel 300. The metal coil spring 308 can reinforce the shim 312 and allow for some bending of the channel 300. The thickness of the shim 312, the thickness of the metal coil spring 308, and the inner and outer diameters of the cooling channel 300 can have any suitable values. In some embodiments, the shim 312 can have a thickness in the range of 0.05-0.10 mm. In some embodiments, the metal coil spring 308 can have a thickness 316 in the range of 0.5-0.8 mm. In some embodiments, the cooling channel 300 can have an inner diameter 320 of substantially 4.0 mm and an outer diameter 324 of substantially 5.0 mm.

The external cooling technologies disclosed herein can adjust the rate at which forced air flows through cooling channels based on information indicating the mobile computing device's performance (e.g., power consumption information, current consumption information, temperature information) or operational state, or by determining the amount of current flowing through a power line in a cable connected to the computing device.

FIG. 4 is a block diagram of an example cable with an integrated air mover connected to a mobile computing device. The air mover 404 is integrated into a cable 416 that connects a mobile computing device 408 to a remote device 410 (e.g., power adapter, computing device).
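A rough estimate of the flow resistance of a cooling channel with the dimensions given above can be sketched with the Hagen-Poiseuille relation for laminar pipe flow. The flow model, the air viscosity value, and the example flow rate below are assumptions for illustration; the disclosure does not specify how its pressure-drop figures were calculated.

```python
import math

# Dynamic viscosity of air near room temperature (assumed value), in Pa*s.
AIR_VISCOSITY = 1.84e-5


def pressure_drop_pa(inner_diameter_m: float, length_m: float,
                     flow_m3_per_s: float) -> float:
    """Laminar pressure drop along a round channel.

    Hagen-Poiseuille: dP = 128 * mu * L * Q / (pi * d^4).
    """
    return (128.0 * AIR_VISCOSITY * length_m * flow_m3_per_s
            / (math.pi * inner_diameter_m ** 4))


# Example: a 25 cm channel with a 4.0 mm inner diameter carrying roughly
# 1 CFM of air (4.719e-4 m^3/s); both figures are illustrative assumptions.
dp = pressure_drop_pa(0.004, 0.25, 4.719e-4)
```

The d^4 dependence is the reason channel inner diameter dominates the sizing of the air mover: shrinking the diameter from 4 mm to 3 mm roughly triples the pressure drop at the same flow rate.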
The air mover 404 is in-line with the cable 416, is powered by a power line 440 that delivers power to the mobile computing device 408, and receives data carried on one or more data wires 444 that allow communication between the mobile computing device 408 and a remote device 410.

The air mover 404 comprises an air mover controller 448 and a blower, fan, or another component 452 capable of generating forced air that is provided to the device 408 by one or more cooling channels 412. The air mover controller 448 can control a flow rate of the generated forced air based on information received from the mobile computing device 408 over the data lines 444. For example, in embodiments where the forced air is generated by a blower or fan, the air mover controller 448 can control the flow rate of the forced air by controlling the speed at which the fan or blower spins. In embodiments where the flow rate of the forced air is based on a frequency at which a component vibrates, such as in piezoelectrically driven synthetic jets, the air mover controller 448 can control the flow rate of the forced air by controlling the frequency at which the vibrating component vibrates. In some embodiments, the blower, fan, or another component 452 can be controlled via a pulse-width-modulated control signal or a variable supply voltage generated by the air mover controller 448. In some embodiments, the air mover controller 448 can comprise any of the processing units described herein.

In some embodiments, the air mover controller 448 can control the flow rate of the forced air based on information received over the one or more data wires 444, such as mobile computing device performance information (e.g., power consumption information, current consumption information, temperature information), mobile computing device operational state information, or user presence information.
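One way the controller logic above could map reported temperature information to a pulse-width-modulated control signal is a simple clamped linear ramp. This is a minimal sketch, not the disclosed implementation; the idle-threshold temperature and the linear mapping are assumptions.

```python
def pwm_duty_from_temperature(operating_c: float, critical_c: float,
                              floor_c: float = 45.0) -> int:
    """Map a reported operating temperature to a PWM duty cycle (0-100).

    Below floor_c the air mover idles (0% duty); at or above critical_c it
    runs at full speed; in between, the duty cycle rises linearly.
    floor_c and the linear ramp are illustrative assumptions.
    """
    if operating_c <= floor_c:
        return 0
    if operating_c >= critical_c:
        return 100
    span = critical_c - floor_c
    return round(100.0 * (operating_c - floor_c) / span)
```

A real controller would likely add hysteresis or rate limiting so the fan speed does not oscillate around the thresholds.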
Mobile computing device power consumption information can comprise, for example, information indicating an amount of power consumed by the device 408 as a whole, the amount of power consumed by an individual component of the device 408 (such as an SoC or a charging circuit), or the amount of power consumed by multiple components of the device 408. Mobile computing device current consumption information can comprise, for example, information indicating an amount of current drawn by the device 408, a component of the device 408, or multiple components of the device 408. Mobile computing device temperature information can comprise, for example, information indicating an operating temperature of the device 408 or of a component of the device 408. The power consumption information, current consumption information, or temperature information can comprise a power consumption metric, current consumption metric, or temperature metric sampled on a periodic or another basis; a power consumption metric, current consumption metric, or temperature metric averaged over a period; or information derived from a power consumption metric, current consumption metric, or temperature metric on another suitable basis.

In some embodiments, the mobile computing device performance information can comprise critical temperature information. The critical temperature information can comprise information indicating a critical temperature that a component of the mobile computing device 408 (e.g., charging circuitry, SoC, another integrated circuit component) or the mobile computing device 408 itself is not to exceed.
In such embodiments, the air mover controller 448 can determine a difference between the operating temperature of the mobile computing device 408 or a mobile computing device component and the critical temperature of the mobile computing device 408 or the mobile computing device component, and control the flow rate of the forced air based on the temperature difference.

Mobile computing device operational state information can comprise information indicating that the device 408 or a device component is in a particular state, such as a processing unit or integrated circuit component being in a particular active state or idle state, or that a particular feature or mode of the device 408 is active. As used herein, the term "active state" when referring to the state of a processor unit refers to a state in which the processor unit is executing instructions. As used herein, the term "idle state" means a state in which a processor unit is not executing instructions. Modern processor units can have various idle states in which they can be placed, with the idle states being distinguished by how much power the processor unit consumes in the idle state and by idle state exit costs (e.g., how much time and how much power it takes for the processor unit to transition from the idle state to an active state).

Idle states for some existing processor units can be referred to as "C-states". In one example of a set of idle states, some Intel® processors can be placed in C1, C1E, C3, C6, C7, and C8 idle states. This is in addition to a "C0" state, which is the processor's active state. P-states can further describe the active state of some Intel® processors, with the various P-states indicating the processor's power supply voltage and operating frequency. The C1/C1E states are "auto halt" states in which all processor cores in a processor unit are performing a HALT or MWAIT instruction and the processor unit core clock is stopped.
In the C1E state, the processor unit is operating in a state with its lowest frequency and supply voltage and with PLLs (phase-locked loops) still operating. In the C3 state, the processor unit's L1 (Level 1) and L2 (Level 2) caches are flushed to lower-level caches (e.g., L3 (Level 3) or LLC (last level cache)), the core clock and PLLs are stopped, and the processor unit operates at an operating voltage sufficient to allow it to maintain its state. In the C6 and deeper idle states, the processor unit stores its state to memory and its operating voltage is reduced to zero. As modern integrated circuit components can comprise multiple processor units, the individual processor units can be in their own idle states. These states can be referred to as C-states (core-states). Package C-states (PC-states) refer to idle states of integrated circuit components comprising multiple cores.

In some embodiments, the operational state information can comprise information indicating a physical configuration of the mobile computing device 408. For example, operational state information for a convertible mobile computing device can indicate that the mobile computing device is in a desktop configuration (in which the angle between a display portion of the device and a base portion of the device is within a first range of angles, the display portion rotated away from the base portion such that the display portion is conveniently viewable by a user interacting with a keyboard of the base portion) or in a tent configuration (in which the angle between the display portion and the base portion is within a second range of angles that is greater than the first range of angles, the display portion rotated behind the base portion to act as a stand to support the display portion).
In embodiments where the mobile computing device comprises a display portion that is separable from a base portion (such as an attachable keyboard), the operational state information can comprise information indicating that the device is in a tablet mode when the display portion is separated from the base portion. In some embodiments, if the operational state information indicates that the mobile computing device is in a desktop configuration, the air mover controller 448 can reduce the flow rate of the forced air or set the flow rate of the forced air to a minimum value or a lower value relative to other flow rate settings. In some embodiments, if the operational state information indicates that the mobile computing device is in a tent configuration, the air mover controller 448 can likewise reduce the flow rate of the forced air or set it to a minimum value or a lower value relative to other flow rate settings. In some embodiments, if the operational state information indicates that the mobile computing device is in a tablet configuration, the air mover controller 448 can increase the flow rate of the forced air or set the flow rate of the forced air to a maximum value or a higher value relative to other flow rate settings.

User presence information can indicate the presence of a user at the mobile computing device. User presence at a mobile computing device can be determined by, for example, an operating system of the mobile computing device determining that input has been provided at an input device (e.g., keyboard, mouse, microphone) within a threshold period of time, or by determining the presence of a user based on image data generated by a camera of the mobile computing device. If the user presence information indicates that no user is present, the air mover controller 448 can increase the flow rate of the forced air or set the flow rate of the forced air to a maximum value or a high value relative to other flow rate settings.
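The configuration- and presence-based policies described above can be sketched as a small decision function. The configuration names and the relative flow levels are illustrative assumptions, not values from the disclosure.

```python
# Relative flow-rate settings (assumed ordering; e.g., "level 1..5" above).
FLOW_LEVELS = {"min": 1, "low": 2, "medium": 3, "high": 4, "max": 5}


def select_flow_level(configuration: str, user_present: bool) -> int:
    """Pick a relative flow level from device configuration and presence."""
    if not user_present:
        # No user at the device: run at maximum (e.g., to precool it).
        return FLOW_LEVELS["max"]
    if configuration in ("desktop", "tent"):
        # Device in use on a surface: keep the air mover quiet.
        return FLOW_LEVELS["min"]
    if configuration == "tablet":
        # Tablet mode: favor more cooling.
        return FLOW_LEVELS["high"]
    # Unknown configuration: fall back to a middle setting (assumption).
    return FLOW_LEVELS["medium"]
```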
Increasing the flow rate of the forced air if no user is present can serve, for example, to precool the mobile computing device prior to being used or to increase device performance to allow for quicker completion of one or more tasks, operations, or workloads executing on the device.

The component 452 of the air mover 404 that generates the forced air can be sized based on the resistance presented by the cooling channel 412 to the forced air. For example, based on analytical calculations, a cooling channel 25 cm in length and having an inner diameter of 3 mm has an estimated pressure drop along the length of the cooling channel of 2 inches of water (498 Pa). Some existing miniature air movers have a size of 40 mm × 40 mm × 28 mm and can provide 32 cubic feet per minute (CFM) of air flow at 4 inches of water.

In embodiments where the operational information received by the air mover 404 comprises information indicating an active or idle state for a processing unit, package, or system (e.g., P-state, C-state, PC-state), the air mover can access a look-up table or other suitable data structure that indicates the control signal that the air mover controller 448 is to send to the component 452.

The air mover controller 448 can similarly control the component 452 based on operational state information indicating that a particular feature or mode of the device 408 is enabled. For example, in response to the air mover controller 448 receiving information indicating that a fast charging feature of the computing device 408 is enabled, the air mover controller 448 can access a look-up table or other data structure to retrieve information indicating the control signal that the air mover controller 448 is to send to the component 452. A fast charging mode of a computing device can be any charging mode in which a battery internal to the device 408 is charged at a faster rate than in one or more other charging modes of the device 408.
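The look-up-table approach described above can be sketched as a table keyed by reported idle/active state, with enabled features (such as fast charging) able to override the table entry. The duty-cycle values, the default for unknown states, and the max-wins override rule are all illustrative assumptions.

```python
# Assumed mapping from reported processor state to a PWM duty cycle (0-100).
STATE_TO_DUTY = {
    "C0": 80,   # active state: substantial airflow
    "C1": 40,
    "C1E": 30,
    "C3": 20,
    "C6": 10,
    "C8": 0,    # deep idle: airflow off
}

# Assumed duty-cycle floors for enabled device features.
FEATURE_TO_DUTY = {"fast_charging": 90}


def control_duty(state: str, enabled_features: set) -> int:
    """Look up the control signal for a reported state and feature set."""
    duty = STATE_TO_DUTY.get(state, 50)  # default for unknown states
    for feature in enabled_features:
        # A power-hungry feature raises the duty cycle to at least its floor.
        duty = max(duty, FEATURE_TO_DUTY.get(feature, 0))
    return duty
```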
The amount of heat generated by the device's charging circuitry can scale with the rate at which the charging circuitry charges the battery. Fast charging rates that are achievable in some existing computing devices may not be achievable in passively cooled mobile computing devices due to thermal limitations.

In some embodiments, the air mover controller 448 can control the flow rate of the forced air based on current consumption information passed over one or more data lines in the cable in accordance with a cable or connector protocol (e.g., USB-C). For example, the air mover controller 448 can receive information being passed along one or more data lines of a cable in accordance with a cable or connector protocol indicating an amount of current being drawn by the mobile computing device 408, an amount of current being consumed by a charging component or charging circuitry of the device 408, and/or an amount of current being consumed by a component of the device 408.

In some embodiments, the air mover controller 448 can adjust a flow rate of the forced air based on air mover control information received from the mobile computing device 408. For example, the air mover control information can indicate that the air mover is not to provide forced air, is to be powered down, is to be powered up, or is to provide forced air at a specified flow rate. The specified flow rate can be a relative flow rate (e.g., low, medium, high; level 1, 2, 3, 4, 5) or a specific flow rate (e.g., a flow rate indicated in cubic feet per minute).

In some embodiments, the air mover controller 448 can control the forced air flow rate based on a measure of how much current is flowing through a power line carried by the cable 416. The current flowing through the power line can be used as a proxy for how much power is being consumed by the device 408.
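The current-as-power-proxy scheme above can be sketched with a shunt (current-sensing) resistor in series with the power wire: the controller recovers the line current from the amplified voltage across the resistor (I = V / R) and scales it to a duty cycle. The resistor value, amplifier gain, and full-scale current below are assumptions for illustration.

```python
SHUNT_OHMS = 0.010  # assumed 10 mOhm sense resistor in-line with the power wire
AMP_GAIN = 50.0     # assumed gain of the sense amplifier ahead of the ADC


def line_current_amps(adc_volts: float) -> float:
    """Recover the power-wire current from the amplified shunt voltage."""
    return adc_volts / (SHUNT_OHMS * AMP_GAIN)


def duty_from_current(amps: float, full_scale_amps: float = 5.0) -> int:
    """Scale measured current linearly to a 0-100% air mover duty cycle."""
    return min(100, max(0, round(100.0 * amps / full_scale_amps)))
```

For example, a 1.0 V ADC reading corresponds to 2 A of line current under these assumed component values, which this sketch maps to a mid-range duty cycle.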
The measure of how much current is flowing through the power line carried by the cable 416 can be an analog or digital signal generated by current sensing circuitry located in the air mover 404. In some embodiments, the current sensing circuitry can comprise a current sensing resistor located in-line with a power wire carried by the cable 416, and the measure of how much current is flowing through the power line is a measure of how much current is flowing through the current sensing resistor as sensed or determined by the current sensing circuitry. In some embodiments, the air mover controller 448 can cease providing forced air based on the information received from the mobile computing device. For example, the air mover controller 448 can cease providing forced air if it receives operational state information indicating that the mobile computing device is in an idle state.

Thus, by being able to adjust the flow rate of forced air provided to a mobile computing device based on the performance or operational state information of the mobile computing device, the technologies described herein provide a closed-loop dynamic cooling solution.

FIG. 5 illustrates a first example method of controlling the flow rate of forced air provided to a mobile computing device. At 510 of method 500, forced air is provided to a mobile computing device by an air mover via one or more cooling channels connected to the air mover and the mobile computing device. At 520, a measure of current flowing through a power wire carried by a cable connected to the mobile computing device is determined by the air mover. At 530, a flow rate of the forced air is controlled by the air mover based on the measure of how much current is flowing through the power wire.

FIG. 6 illustrates a second example method of controlling the flow rate of forced air provided to a mobile computing device.
At 610 of method 600, forced air is provided to a mobile computing device by an air mover via one or more cooling channels connected to the air mover and the mobile computing device. At 620, mobile computing device performance information is received by the air mover over one or more data wires carried by a cable connected to the mobile computing device. At 630, a flow rate of the forced air is controlled by the air mover based on the mobile computing device performance information.

FIG. 7 illustrates a third example method of controlling the flow rate of forced air provided to a mobile computing device. At 710 of method 700, forced air is provided to a mobile computing device by an air mover via one or more cooling channels connected to the air mover and the mobile computing device. At 720, mobile computing device operational state information is received by the air mover over one or more data wires carried by a cable connected to the mobile computing device. At 730, a flow rate of the forced air is controlled by the air mover based on the mobile computing device operational state information.

FIG. 8 illustrates a fourth example method of controlling the flow rate of forced air provided to a mobile computing device. At 810 of method 800, forced air is provided to a mobile computing device by an air mover via one or more cooling channels connected to the air mover and the mobile computing device. At 820, user presence information is received by the air mover over one or more data wires carried by a cable connected to the mobile computing device.
At 830, a flow rate of the forced air is controlled by the air mover based on the user presence information.

The technologies described herein can be performed by or implemented in any of a variety of computing devices, including mobile computing devices (e.g., smartphones, handheld computers, tablet computers, laptop computers, portable gaming consoles, 2-in-1 convertible computers, portable all-in-one computers), non-mobile computing systems (e.g., desktop computers, servers, workstations, stationary gaming consoles, set-top boxes, smart televisions, rack-level computing solutions (e.g., blade, tray, or sled computing systems)), and embedded computing systems (e.g., computing systems that are part of a vehicle, smart home appliance, consumer electronics product or equipment, or manufacturing equipment). As used herein, the term "computing system" includes computing devices and includes systems comprising multiple discrete physical components. In some embodiments, the computing systems are located in a data center, such as an enterprise data center (e.g., a data center owned and operated by a company and typically located on company premises), a managed services data center (e.g., a data center managed by a third party on behalf of a company), a colocated data center (e.g., a data center in which data center infrastructure is provided by the data center host and a company provides and manages its own data center components (servers, etc.)), a cloud data center (e.g., a data center operated by a cloud services provider that hosts companies' applications and data), or an edge data center (e.g., a data center, typically having a smaller footprint than other data center types, located close to the geographic area that it serves).

FIG. 9 is a block diagram of an example computing system in which technologies described herein may be implemented. Generally, components shown in FIG.
9 can communicate with other shown components, although not all connections are shown, for ease of illustration. The computing system 900 is a multiprocessor system comprising a first processor unit 902 and a second processor unit 904 coupled via point-to-point (P-P) interconnects. A point-to-point (P-P) interface 906 of the processor unit 902 is coupled to a point-to-point interface 907 of the processor unit 904 via a point-to-point interconnection 905. It is to be understood that any or all of the point-to-point interconnects illustrated in FIG. 9 can be alternatively implemented as a multi-drop bus, and that any or all buses illustrated in FIG. 9 could be replaced by point-to-point interconnects.

The processor units 902 and 904 comprise multiple processor cores. Processor unit 902 comprises processor cores 908 and processor unit 904 comprises processor cores 910. Processor cores 908 and 910 can execute computer-executable instructions in a manner similar to that discussed below in connection with FIG. 10, or in other manners.

Processor units 902 and 904 further comprise cache memories 912 and 914, respectively. The cache memories 912 and 914 can store data (e.g., instructions) utilized by one or more components of the processor units 902 and 904, such as the processor cores 908 and 910. The cache memories 912 and 914 can be part of a memory hierarchy for the computing system 900. For example, the cache memories 912 can locally store data that is also stored in a memory 916 to allow for faster access to the data by the processor unit 902. In some embodiments, the cache memories 912 and 914 can comprise multiple cache levels, such as level 1 (L1), level 2 (L2), level 3 (L3), level 4 (L4), and/or other caches or cache levels. In some embodiments, one or more levels of cache memory (e.g., L2, L3, L4) can be shared among multiple cores in a processor unit or among multiple processor units in an integrated circuit component.
In some embodiments, the last level of cache memory on an integrated circuit component can be referred to as a last level cache (LLC). One or more of the higher cache levels (the smaller and faster caches) in the memory hierarchy can be located on the same integrated circuit die as a processor core, and one or more of the lower cache levels (the larger and slower caches) can be located on integrated circuit dies that are physically separate from the processor core integrated circuit dies.

Although the computing system 900 is shown with two processor units, the computing system 900 can comprise any number of processor units. Further, a processor unit can comprise any number of processor cores. A processor unit can take various forms such as a central processing unit (CPU), a graphics processing unit (GPU), a general-purpose GPU (GPGPU), an accelerated processing unit (APU), a field-programmable gate array (FPGA), a neural network processing unit (NPU), a data processor unit (DPU), an accelerator (e.g., graphics accelerator, digital signal processor (DSP), compression accelerator, artificial intelligence (AI) accelerator), a controller, or other types of processing units. As such, the processor unit can be referred to as an XPU (or xPU). Further, a processor unit can comprise one or more of these various types of processing units. In some embodiments, the computing system comprises one processor unit with multiple cores, and in other embodiments, the computing system comprises a single processor unit with a single core. As used herein, the terms "processor unit" and "processing unit" can refer to any processor, processor core, component, module, engine, circuitry, or any other processing element described or referenced herein.

In some embodiments, the computing system 900 can comprise one or more processor units that are heterogeneous or asymmetric to another processor unit in the computing system.
There can be a variety of differences between the processing units in a system in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, power consumption characteristics, and the like. These differences can effectively manifest themselves as asymmetry and heterogeneity among the processor units in a system.

The processor units 902 and 904 can be located in a single integrated circuit component (such as a multi-chip package (MCP) or multi-chip module (MCM)) or they can be located in separate integrated circuit components. An integrated circuit component comprising one or more processor units can comprise additional components, such as embedded DRAM, stacked high bandwidth memory (HBM), shared cache memories (e.g., L3, L4, LLC), input/output (I/O) controllers, or memory controllers. Any of the additional components can be located on the same integrated circuit die as a processor unit, or on one or more integrated circuit dies separate from the integrated circuit dies comprising the processor units. In some embodiments, these separate integrated circuit dies can be referred to as "chiplets". In some embodiments where there is heterogeneity or asymmetry among processor units in a computing system, the heterogeneity or asymmetry can be among processor units located in the same integrated circuit component. In embodiments where an integrated circuit component comprises multiple integrated circuit dies, interconnections between dies can be provided by the package substrate, one or more silicon interposers, one or more silicon bridges embedded in the package substrate (such as Intel® embedded multi-die interconnect bridges (EMIBs)), or combinations thereof.

Processor units 902 and 904 further comprise memory controller logic (MC) 920 and 922. As shown in FIG. 9, MCs 920 and 922 control memories 916 and 918 coupled to the processor units 902 and 904, respectively.
The memories 916 and 918 can comprise various types of volatile memory (e.g., dynamic random-access memory (DRAM), static random-access memory (SRAM)) and/or non-volatile memory (e.g., flash memory, chalcogenide-based phase-change non-volatile memories), and comprise one or more layers of the memory hierarchy of the computing system. While MCs 920 and 922 are illustrated as being integrated into the processor units 902 and 904, in alternative embodiments, the MCs can be external to a processor unit.

Processor units 902 and 904 are coupled to an Input/Output (I/O) subsystem 930 via point-to-point interconnections 932 and 934. The point-to-point interconnection 932 connects a point-to-point interface 936 of the processor unit 902 with a point-to-point interface 938 of the I/O subsystem 930, and the point-to-point interconnection 934 connects a point-to-point interface 940 of the processor unit 904 with a point-to-point interface 942 of the I/O subsystem 930. Input/Output subsystem 930 further includes an interface 950 to couple the I/O subsystem 930 to a graphics engine 952. The I/O subsystem 930 and the graphics engine 952 are coupled via a bus 954.

The Input/Output subsystem 930 is further coupled to a first bus 960 via an interface 962. The first bus 960 can be a Peripheral Component Interconnect Express (PCIe) bus or any other type of bus. Various I/O devices 964 can be coupled to the first bus 960. A bus bridge 970 can couple the first bus 960 to a second bus 980. In some embodiments, the second bus 980 can be a low pin count (LPC) bus. Various devices can be coupled to the second bus 980 including, for example, a keyboard/mouse 982, audio I/O devices 988, and a storage device 990, such as a hard disk drive, solid-state drive, or another storage device for storing computer-executable instructions (code) 992 or data. The code 992 can comprise computer-executable instructions for performing methods described herein.
Additional components that can be coupled to the second bus 980 include communication device(s) 984, which can provide for communication between the computing system 900 and one or more wired or wireless networks 986 (e.g., Wi-Fi, cellular, or satellite networks) via one or more wired or wireless communication links (e.g., wire, cable, Ethernet connection, radiofrequency (RF) channel, infrared channel, Wi-Fi channel) using one or more communication standards (e.g., the IEEE 802.11 standard and its supplements).

In embodiments where the communication devices 984 support wireless communication, the communication devices 984 can comprise wireless communication components coupled to one or more antennas to support communication between the computing system 900 and external devices. The wireless communication components can support various wireless communication protocols and technologies such as Near Field Communication (NFC), IEEE 802.11 (Wi-Fi) variants, WiMax, Bluetooth, Zigbee, 4G Long Term Evolution (LTE), Code Division Multiple Access (CDMA), Universal Mobile Telecommunication System (UMTS), Global System for Mobile Telecommunication (GSM), and 5G broadband cellular technologies. In addition, the wireless modems can support communication with one or more cellular networks for data and voice communications within a single cellular network, between cellular networks, or between the computing system and a public switched telephone network (PSTN).

The system 900 can comprise removable memory such as flash memory cards (e.g., SD (Secure Digital) cards), memory sticks, and Subscriber Identity Module (SIM) cards. The memory in system 900 (including caches 912 and 914, memories 916 and 918, and storage device 990) can store data and/or computer-executable instructions for executing an operating system 994 and application programs 996.
Example data includes web pages, text messages, images, sound files, and video data to be sent to and/or received from one or more network servers or other devices by the system 900 via the one or more wired or wireless networks 986, or for use by the system 900. The system 900 can also have access to external memory or storage (not shown) such as external hard drives or cloud-based storage.

The operating system 994 can control the allocation and usage of the components illustrated in FIG. 9 and support the one or more application programs 996. The application programs 996 can include common computing system applications (e.g., email applications, calendars, contact managers, web browsers, messaging applications) as well as other computing applications.

The computing system 900 can support various additional input devices, such as a touchscreen, microphone, monoscopic camera, stereoscopic camera, trackball, touchpad, trackpad, proximity sensor, light sensor, electrocardiogram (ECG) sensor, PPG (photoplethysmogram) sensor, galvanic skin response sensor, and one or more output devices, such as one or more speakers or displays. Other possible input and output devices include piezoelectric and other haptic I/O devices. Any of the input or output devices can be internal to, external to, or removably attachable with the system 900. External input and output devices can communicate with the system 900 via wired or wireless connections.

In addition, the computing system 900 can provide one or more natural user interfaces (NUIs). For example, the operating system 994 or applications 996 can comprise speech recognition logic as part of a voice user interface that allows a user to operate the system 900 via voice commands.
Further, the computing system 900 can comprise input devices and logic that allow a user to interact with the computing system 900 via body, hand, or face gestures.

The system 900 can further include at least one input/output port comprising physical connectors (e.g., USB, IEEE 1394 (FireWire), Ethernet, RS-232), a rechargeable battery, charging circuitry to charge the battery, a global navigation satellite system (GNSS) receiver (e.g., GPS receiver), a gyroscope, an accelerometer, and/or a compass. A GNSS receiver can be coupled to a GNSS antenna. The computing system 900 can further comprise one or more additional antennas coupled to one or more additional receivers, transmitters, and/or transceivers to enable additional functions.

In addition to those already discussed, integrated circuit components, integrated circuit constituent components, and other components in the computing system 900 can communicate using interconnect technologies such as Intel® QuickPath Interconnect (QPI), Intel® Ultra Path Interconnect (UPI), Compute Express Link (CXL), cache coherent interconnect for accelerators (CCIX®), serializer/deserializer (SERDES), Nvidia® NVLink, ARM Infinity Link, Gen-Z, or Open Coherent Accelerator Processor Interface (OpenCAPI). Other interconnect technologies may be used, and a computing system 900 may utilize one or more interconnect technologies.

It is to be understood that FIG. 9 illustrates only one example computing system architecture. Computing systems based on alternative architectures can be used to implement technologies described herein. For example, instead of the processors 902 and 904 and the graphics engine 952 being located on discrete integrated circuits, a computing system can comprise an SoC (system-on-a-chip) integrated circuit incorporating multiple processors, a graphics engine, and additional components.
Further, a computing system can connect its constituent components via bus or point-to-point configurations different from that shown in FIG. 9. Moreover, the illustrated components in FIG. 9 are not required or all-inclusive, as shown components can be removed and other components added in alternative embodiments.

FIG. 10 is a block diagram of an example processor unit 1000 to execute computer-executable instructions as part of implementing technologies described herein. The processor unit 1000 can be a single-threaded core or a multithreaded core in that it may include more than one hardware thread context (or "logical processor") per processor unit.

FIG. 10 also illustrates a memory 1010 coupled to the processor unit 1000. The memory 1010 can be any memory described herein or any other memory known to those of skill in the art. The memory 1010 can store computer-executable instructions 1015 (code) executable by the processor unit 1000.

The processor unit comprises front-end logic 1020 that receives instructions from the memory 1010. An instruction can be processed by one or more decoders 1030. The decoder 1030 can generate as its output a micro-operation, such as a fixed-width micro-operation in a predefined format, or generate other instructions, microinstructions, or control signals, which reflect the original code instruction. The front-end logic 1020 further comprises register renaming logic 1035 and scheduling logic 1040, which generally allocate resources and queue operations corresponding to an instruction for execution.

The processor unit 1000 further comprises execution logic 1050, which comprises one or more execution units (EUs) 1065-1 through 1065-N. Some processor unit embodiments can include a number of execution units dedicated to specific functions or sets of functions. Other embodiments can include only one execution unit or one execution unit that can perform a particular function.
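The front-end flow described above (decode into micro-operations, rename registers, queue operations for the execution units) can be illustrated with a toy software model. The stage names follow the text; the instruction format and micro-operation layout are invented purely for illustration and do not describe any actual instruction set.

```python
# Toy model of the decode -> rename -> schedule -> execute flow described
# for processor unit 1000. Not a real microarchitecture; instruction
# syntax and micro-op fields are invented for this sketch.
import itertools

_phys = itertools.count()  # physical register allocator for renaming

def decode(instruction):
    """Decoder 1030: split an instruction into fixed-format micro-ops."""
    op, *regs = instruction.split()
    return [{"uop": op, "regs": regs}]

def rename(uops):
    """Register renaming logic 1035: map architectural to physical registers."""
    for u in uops:
        u["phys"] = [f"p{next(_phys)}" for _ in u["regs"]]
    return uops

def schedule(uops, queue):
    """Scheduling logic 1040: queue micro-ops for the execution units."""
    queue.extend(uops)

def execute(queue):
    """Execution logic 1050: drain the queue in order. A real core may
    execute out of order while retiring in order (retirement logic 1075)."""
    return [q["uop"] for q in queue]

queue = []
for instr in ["add r1 r2", "mul r3 r1"]:
    schedule(rename(decode(instr)), queue)
print(execute(queue))  # ['add', 'mul']
```

Renaming removes false dependencies between micro-ops that reuse the same architectural register name, which is what permits the out-of-order execution with in-order retirement mentioned below.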
The execution logic 1050 performs the operations specified by code instructions. After completion of execution of the operations specified by the code instructions, back-end logic 1070 retires instructions using retirement logic 1075. In some embodiments, the processor unit 1000 allows out-of-order execution but requires in-order retirement of instructions. Retirement logic 1075 can take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like).

The processor unit 1000 is transformed during execution of instructions, at least in terms of the output generated by the decoder 1030, hardware registers and tables utilized by the register renaming logic 1035, and any registers (not shown) modified by the execution logic 1050.

Software and firmware may be embodied as instructions and/or data stored on non-transitory computer-readable storage media. As used herein, the term "circuitry" can comprise, singly or in any combination, non-programmable (hardwired) circuitry, programmable circuitry such as processor units, state machine circuitry, and/or firmware that stores instructions executable by programmable circuitry. A computing system referred to as being programmed to perform a method can be programmed to perform the method via software, hardware, firmware, or combinations thereof.

Any of the disclosed methods (or a portion thereof) can be implemented as computer-executable instructions or a computer program product. Such instructions can cause a computing system or one or more processor units capable of executing computer-executable instructions to perform any of the disclosed methods. As used herein, the term "computer" refers to any computing system, device, or machine described or mentioned herein as well as any other computing system, device, or machine capable of executing instructions.
Thus, the term "computer-executable instruction" refers to instructions that can be executed by any computing system, device, or machine described or mentioned herein as well as any other computing system, device, or machine capable of executing instructions.

The computer-executable instructions or computer program products as well as any data created and/or used during implementation of the disclosed technologies can be stored on one or more tangible or non-transitory computer-readable storage media, such as volatile memory (e.g., DRAM, SRAM), non-volatile memory (e.g., flash memory, chalcogenide-based phase-change non-volatile memory), optical media discs (e.g., DVDs, CDs), and magnetic storage (e.g., magnetic tape storage, hard disk drives). Computer-readable storage media can be contained in computer-readable storage devices such as solid-state drives, USB flash drives, and memory modules. Alternatively, any of the methods disclosed herein (or a portion thereof) may be performed by hardware components comprising non-programmable circuitry. In some embodiments, any of the methods herein can be performed by a combination of non-programmable hardware components and one or more processing units executing computer-executable instructions stored on computer-readable storage media.

The computer-executable instructions can be part of, for example, an operating system of the computing system, an application stored locally to the computing system, or a remote application accessible to the computing system (e.g., via a web browser). Any of the methods described herein can be performed by computer-executable instructions performed by a single computing system or by one or more networked computing systems operating in a network environment.
Computer-executable instructions and updates to the computer-executable instructions can be downloaded to a computing system from a remote server.

Further, it is to be understood that implementation of the disclosed technologies is not limited to any specific computer language or program. For instance, the disclosed technologies can be implemented by software written in C++, C#, Java, Perl, Python, JavaScript, Adobe Flash, assembly language, or any other programming language. Likewise, the disclosed technologies are not limited to any particular computer system or type of hardware.

Furthermore, any of the software-based embodiments (comprising, for example, computer-executable instructions for causing a computer to perform any of the disclosed methods) can be uploaded, downloaded, or remotely accessed through a suitable communication means. Such suitable communication means include, for example, the Internet, the World Wide Web, an intranet, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, ultrasonic, and infrared communications), electronic communications, or other such communication means.

As used in this application and the claims, a list of items joined by the term "and/or" can mean any combination of the listed items. For example, the phrase "A, B and/or C" can mean A; B; C; A and B; A and C; B and C; or A, B and C. As used in this application and the claims, a list of items joined by the term "at least one of" can mean any combination of the listed terms. For example, the phrase "at least one of A, B or C" can mean A; B; C; A and B; A and C; B and C; or A, B, and C. Moreover, as used in this application and the claims, a list of items joined by the term "one or more of" can mean any combination of the listed terms.
For example, the phrase "one or more of A, B and C" can mean A; B; C; A and B; A and C; B and C; or A, B, and C.

The disclosed methods, apparatuses, and systems are not to be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and nonobvious features and aspects of the various disclosed embodiments, alone and in various combinations and subcombinations with one another. The disclosed methods, apparatuses, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed embodiments require that any one or more specific advantages be present or problems be solved.

Theories of operation, scientific principles, or other theoretical descriptions presented herein in reference to the apparatuses or methods of this disclosure have been provided for the purposes of better understanding and are not intended to be limiting in scope. The apparatuses and methods in the appended claims are not limited to those apparatuses and methods that function in the manner described by such theories of operation.

Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it is to be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth herein. For example, operations described sequentially may in some cases be rearranged or performed concurrently.
Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods can be used in conjunction with other methods.

The following examples pertain to additional embodiments of technologies disclosed herein.

Example 1 is an apparatus comprising: a cable comprising a plurality of wires; one or more cooling channels external to the cable; an air mover connected to the one or more cooling channels, the air mover to generate forced air and provide the forced air to the cooling channels; and a connector located at an end of the cable to connect the wires and the cooling channels to a mobile computing device.

Example 2 comprises the apparatus of Example 1, wherein the air mover is integrated into the cable.

Example 3 comprises the apparatus of Example 1, wherein the air mover is external to the cable.

Example 4 comprises the apparatus of any one of Examples 1-3, wherein the end of the cable is a first end of the cable and the air mover is positioned at a point along the cable between the first end of the cable and a second end of the cable.

Example 5 comprises the apparatus of any one of Examples 1-4, wherein the connector is a first connector and the end is a first end of the cable, the air mover connected to a power wire of the plurality of wires at a point along the cable between the first connector and a second connector located at a second end of the cable.

Example 6 comprises the apparatus of any one of Examples 1-5, wherein the cooling channels are internal to the cable.

Example 7 comprises the apparatus of any one of Examples 1-6, wherein the air mover is releasably attachable to the cable.

Example 8 comprises the apparatus of any one of Examples 1-7, wherein the connector comprises a first connector portion that encloses the wires and one or more second connector portions that enclose the cooling channels, the one or more second connector portions releasably attachable to the first connector portion.

Example 9
comprises the apparatus of any one of Examples 1-8, further comprising an air mover controller to control a flow rate of the forced air generated by the air mover through the cooling channels.

Example 10 comprises the apparatus of Example 9, further comprising current sensing circuitry, the air mover controller to control the flow rate of the forced air based on a measure of how much current is flowing through a power wire of the plurality of wires provided by the current sensing circuitry.

Example 11 comprises the apparatus of Example 10, wherein the current sensing circuitry comprises a current sensing resistor in-line with the power wire, the measure of how much current is flowing through the power wire provided by the current sensing circuitry indicating an amount of current flowing through the current sensing resistor.

Example 12 comprises the apparatus of Example 9, wherein the cable further comprises one or more data wires, the air mover controller to receive mobile computing device power consumption information over the one or more data wires, the air mover controller to control the flow rate of the forced air based on the mobile computing device power consumption information.

Example 13 comprises the apparatus of Example 9, wherein the cable further comprises one or more data wires, the air mover controller to receive mobile computing device current consumption information over the one or more data wires, the air mover controller to control the flow rate of the forced air based on the mobile computing device current consumption information.

Example 14 comprises the apparatus of Example 13, wherein the mobile computing device current consumption information indicates an amount of current drawn by charging circuitry of the mobile computing device.

Example 15 comprises the apparatus of Example 13, wherein the mobile computing device current consumption information indicates an amount of current drawn by a system-on-a-chip (SoC) of the mobile computing
device.

Example 16 comprises the apparatus of Example 9, wherein the cable further comprises one or more data wires, the air mover controller to receive mobile computing device temperature information over the one or more data wires, the air mover controller to control the flow rate of the forced air based on the mobile computing device temperature information.

Example 17 comprises the apparatus of Example 9, wherein the cable further comprises one or more data wires, the air mover controller to receive mobile computing device operational state information over the one or more data wires, the air mover controller to control the flow rate of the forced air based on the mobile computing device operational state information.

Example 18 comprises the apparatus of Example 9, wherein the cable further comprises one or more data wires, the air mover controller to receive information indicating an operating temperature of the mobile computing device and a critical temperature of the mobile computing device, the air mover controller to control the flow rate of the forced air based on a difference between the critical temperature of the mobile computing device and the operating temperature of the mobile computing device.

Example 19 comprises the apparatus of Example 9, wherein the cable further comprises one or more data wires, the air mover controller to receive information indicating an operating temperature of a mobile computing device component and a critical temperature of the mobile computing device component, the air mover controller to control the flow rate of the forced air based on a difference between the critical temperature of the mobile computing device component and the operating temperature of the mobile computing device component.

Example 20 comprises the apparatus of Example 9, wherein the cable further comprises one or more data wires, the air mover controller to receive user presence information over the one or more data wires, the air mover controller to
control the flow rate of the forced air based on the user presence information.

Example 21 comprises the apparatus of Example 20, wherein the air mover controller is to increase the flow rate of the forced air if the user presence information indicates that a user is present at the mobile computing device.

Example 22 comprises the apparatus of any one of Examples 1-21, wherein the connector comprises: a first connector portion to connect the wires to the mobile computing device; and one or more second connector portions to connect the cooling channels to the mobile computing device.

Example 23 comprises the apparatus of any one of Examples 1-22, wherein the apparatus is a power adapter to convert an external power supply signal to an input power supply signal that is suitable for use by the mobile computing device.

Example 24 is a method comprising: providing forced air to a mobile computing device by an air mover via one or more cooling channels connected to the air mover and the mobile computing device; determining, by the air mover, a measure of current flowing through a power wire carried by a cable connected to the mobile computing device; and controlling, by the air mover, a flow rate of the forced air based on the measure of how much current is flowing through the power wire.

Example 25 comprises the method of Example 24, wherein the measure of current flowing through the power wire is a measure of current flowing through a current sensing resistor.

Example 26 is a method comprising: providing forced air to a mobile computing device by an air mover via one or more cooling channels connected to the air mover and the mobile computing device; receiving, by the air mover, mobile computing device performance information over one or more data wires carried by a cable connected to the mobile computing device; and controlling, by the air mover, a flow rate of the forced air based on the mobile computing device performance information.

Example 27 comprises the method of
Example 26, wherein the mobile computing device performance information comprises mobile computing device power consumption information, the controlling the flow rate of the forced air based on the mobile computing device performance information comprising controlling the flow rate of the forced air based on the mobile computing device power consumption information.

Example 28 comprises the method of Example 26, wherein the mobile computing device performance information comprises mobile computing device current consumption information, the controlling the flow rate of the forced air based on the mobile computing device performance information comprising controlling the flow rate of the forced air based on the mobile computing device current consumption information.

Example 29 comprises the method of Example 28, wherein the mobile computing device current consumption information indicates an amount of current drawn by a charging circuit of the mobile computing device, the controlling the flow rate of the forced air based on the mobile computing device current consumption information comprising controlling the flow rate of the forced air based on the amount of current drawn by the charging circuit of the mobile computing device.

Example 30 comprises the method of Example 28, wherein the mobile computing device current consumption information indicates an amount of current drawn by an integrated circuit component of the mobile computing device, the controlling the flow rate of the forced air based on the mobile computing device current consumption information comprising controlling the flow rate of the forced air based on the amount of current drawn by the integrated circuit component of the mobile computing device.

Example 31 comprises the method of Example 26, wherein the mobile computing device performance information comprises mobile computing device temperature information, the controlling the flow rate of the forced air based on the mobile computing device performance
information comprising controlling the flow rate of the forced air based on the mobile computing device temperature information.

Example 32 comprises the method of Example 26, wherein the mobile computing device performance information comprises information indicating an operating temperature of the mobile computing device and a critical temperature of the mobile computing device, the controlling the flow rate of the forced air based on the mobile computing device performance information comprising controlling the flow rate of the forced air based on a difference between the critical temperature of the mobile computing device and the operating temperature of the mobile computing device.

Example 33 comprises the method of Example 26, wherein the mobile computing device performance information comprises information indicating an operating temperature of a mobile computing device component and a critical temperature of the mobile computing device component, the controlling the flow rate of the forced air based on the mobile computing device performance information comprising controlling the flow rate of the forced air based on a difference between the critical temperature of the mobile computing device component and the operating temperature of the mobile computing device component.

Example 34 is a method comprising: providing forced air to a mobile computing device by an air mover via one or more cooling channels connected to the air mover and the mobile computing device; receiving, by the air mover, mobile computing device operational state information over one or more data wires carried by a cable connected to the mobile computing device; and controlling, by the air mover, a flow rate of the forced air based on the mobile computing device operational state information.

Example 35 is a method comprising: providing forced air to a mobile computing device by an air mover via one or more cooling channels connected to the air mover and the mobile computing device; receiving,
by the air mover, user presence information over one or more data wires carried by a cable connected to the mobile computing device; and controlling, by the air mover, a flow rate of the forced air based on the user presence information.

Example 36 comprises the method of Example 35, wherein controlling the flow rate of the forced air based on the user presence information comprises increasing the flow rate of the forced air if the user presence information indicates that no user is present at the mobile computing device.

Example 37 is a system comprising: a mobile computing device; a cable comprising a plurality of wires; one or more cooling channels external to the cable; an air mover connected to the one or more cooling channels, the air mover to generate forced air and provide the forced air to the cooling channels; and a connector located at an end of the cable to connect the wires and the cooling channels to a computing device.

Example 38 comprises the system of Example 37, wherein the system is passively cooled.

Example 39 is a system comprising: a mobile computing device; and a cooling means external to the mobile computing device to generate forced air and provide the forced air to the mobile computing device, a flow rate of the forced air based on performance or an operational state of the mobile computing device.

Example 40 comprises the system of Example 39, wherein the system is passively cooled.

Example 41 is one or more computer-readable storage media storing computer-executable instructions that, when executed, cause an air mover to perform any one of the methods of Examples 24-36.

Example 42 is an apparatus comprising a means to perform any one of the methods of Examples 24-36.
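The air mover controller of Examples 9-11 and 18 can be sketched in software to make the control relationships concrete. The sense-resistor value, flow limits, and the linear control laws below are assumptions for illustration only; the examples above do not fix any particular constants or control law.

```python
# Hedged sketch of an air mover controller per Examples 9-11 and 18.
# All constants and the linear control laws are illustrative assumptions.

SENSE_RESISTOR_OHMS = 0.01  # assumed in-line current sensing resistor (Example 11)
MAX_FLOW_CFM = 10.0         # assumed maximum air mover flow rate
MIN_FLOW_CFM = 0.0

def current_from_sense_voltage(v_sense):
    """Example 11: infer power-wire current from the voltage measured
    across the in-line current sensing resistor (I = V / R)."""
    return v_sense / SENSE_RESISTOR_OHMS

def flow_rate_from_current(current_amps, max_current=5.0):
    """Example 10: scale flow rate with the measured current draw,
    clamped at an assumed maximum expected current."""
    fraction = min(current_amps / max_current, 1.0)
    return MIN_FLOW_CFM + fraction * (MAX_FLOW_CFM - MIN_FLOW_CFM)

def flow_rate_from_headroom(operating_c, critical_c, full_flow_margin_c=10.0):
    """Example 18: drive flow from the difference between the critical
    and operating temperatures -- less thermal headroom means more air."""
    headroom = max(critical_c - operating_c, 0.0)
    fraction = max(1.0 - headroom / full_flow_margin_c, 0.0)
    return MIN_FLOW_CFM + fraction * (MAX_FLOW_CFM - MIN_FLOW_CFM)

# 25 mV across the 10 mOhm sense resistor corresponds to 2.5 A of draw,
# which this assumed control law maps to half of the maximum flow.
amps = current_from_sense_voltage(0.025)
print(round(amps, 3), round(flow_rate_from_current(amps), 3))
# 95 C operating vs. 100 C critical leaves 5 C of headroom -> half flow.
print(round(flow_rate_from_headroom(95.0, 100.0), 3))
```

A controller combining several inputs (current, temperature headroom, operational state, user presence per Examples 17, 20, and 21) might simply take the maximum of the flow rates each input demands.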